S3 Data Export

Service version: v1
Last edit: 2026.05.06

Important Note: Explore ready-to-use traffic reports and data visualizations immediately by signing up for a 30-day free trial on the MOVE Portal. Once registered, you'll receive an API key to start using the Traffic Analytics APIs right away. Alternatively, you may contact our Sales team for a tailored solution.

Purpose

The S3 Data Export service automatically delivers junction traffic statistics directly to your AWS S3 bucket. Instead of polling the API for data, this service pushes processed data files to your storage as soon as they become available.

The data flow is one-directional: TomTom exports data to your S3 bucket. No data is read from your bucket.

Important Note: This feature is available only for Enhanced Junction Analytics customers. Once access is granted, the S3 Data Export service can be configured for your account.

Data pipelines

The S3 Data Export service supports two distinct data pipelines. Depending on your use case, you can subscribe to one or both.

  • Live: Delivers near real-time junction traffic statistics. Data is exported every 15 minutes, with each file delivered approximately 30 minutes after the corresponding 15-minute interval ends. Use this pipeline when you need up-to-date traffic data with low delay.
  • Historical (Archive): Delivers consolidated, post-processed junction traffic statistics. Data is exported periodically after the consolidation of each 6-hour time window is complete. Use this pipeline when you need verified, post-processed historical data.

Both pipelines deliver data in the same file format and directory structure. The key difference lies in when and how the data is produced:

  • Live pipeline: data is collected in 15-minute intervals and exported to S3 approximately 30 minutes after each interval ends.
  • Historical pipeline: data is collected and processed in 6-hour cycles, and the export to S3 is triggered once processing for each cycle is complete.

Delivery schedule comparison

Aspect | Live pipeline | Historical (Archive) pipeline
Trigger | Scheduled, every 15 minutes | Event-driven, after data consolidation
Latency | ~30 minutes after each 15-minute interval ends | Periodic (every ~6 hours)
Processing cycles | Every 15 minutes (one 15-minute bucket per cycle) | Four times per day (~02:00, 08:00, 14:00, 20:00 UTC)

Prerequisites

To enable the S3 Data Export integration, you must provide the following information to your TomTom account representative.

Required information

bucketName (string)

The name of your AWS S3 bucket where data will be delivered.

regionName (string)

The AWS region where the bucket is hosted (e.g. us-east-1).

accessKeyId (string)

The AWS IAM Access Key ID with write permissions to the bucket.

secretAccessKey (string)

The corresponding AWS IAM Secret Access Key.

pipeline (string)

The data pipeline(s) you wish to subscribe to: Live, Historical (Archive), or both.

Optional information

prefixPath (string)

An optional directory prefix for organizing files within the bucket (e.g. live/, archive/). If not provided, files are placed at the bucket root. If you subscribe to both pipelines, you can specify a separate prefix for each to keep the data organized.

Required IAM permissions

The provided AWS credentials must have at minimum the s3:PutObject permission on the target bucket. We recommend creating a dedicated IAM user specifically for this integration.

Example minimal IAM policy:

Minimal IAM policy - JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
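
If you prefer to script this setup, the sketch below creates the dedicated user, attaches the policy, and generates the key pair with boto3. It assumes credentials with IAM administration rights; the user name, policy name, and bucket name are illustrative, not required by the service.

Provision the IAM user - Python

# Sketch: create a dedicated IAM user for the export integration.
# Assumes boto3 is installed and the caller has IAM admin rights.
# User, policy, and bucket names below are illustrative.
import json

import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*",
        }
    ],
}

iam.create_user(UserName="tomtom-s3-export")
iam.put_user_policy(
    UserName="tomtom-s3-export",
    PolicyName="tomtom-s3-export-put-only",
    PolicyDocument=json.dumps(policy),
)

# This key pair is what you share with your TomTom representative.
key = iam.create_access_key(UserName="tomtom-s3-export")["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])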

Security recommendations

  • Create a dedicated IAM user specifically for this integration.
  • Apply the principle of least privilege - grant only s3:PutObject permission.
  • Enable S3 bucket versioning if you need to retain previously overwritten data.

Delivered data

File format

Both pipelines deliver data in the same format: gzip-compressed Apache Avro Object Container files containing junction traffic statistics aggregated in 15-minute intervals. The Avro schema is embedded within each delivered file, allowing standard Avro tools and libraries to read the data without additional configuration.

Property | Value
File name | data.bin
Format | Apache Avro Object Container
Content-Type | application/octet-stream
Content-Encoding | gzip
Integrity check | Checksum verified on each upload
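
For example, a delivered file can be read with standard Avro tooling once downloaded. The sketch below assumes Python with the fastavro package and that the file is still gzip-compressed on disk, as the Content-Encoding above indicates; if your download client already decompressed it, drop the gzip step.

Read a delivered file - Python

# Sketch: read one delivered file with fastavro.
# Assumes the file was downloaded as-is (still gzip-compressed).
import gzip

from fastavro import reader

with gzip.open("data.bin", "rb") as f:
    for junction in reader(f):  # the Avro schema is embedded in the file
        print(junction["junctionId"], len(junction["approaches"]))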

Data schema

Each Avro file contains records with the following structure:

BatchJunctionTraceStats (root record)

Field | Type | Description
junctionId | string | Unique identifier of the junction.
approaches | array | List of approach trace data for this junction.

BatchApproachTraces (element of approaches)

Field | Type | Description
approachId | string | Unique identifier of the approach.
traces | array | List of individual trace statistics for this approach.

BatchApproachTraceStats (element of traces)

Field | Type | Description
entryTime | long | Timestamp (epoch milliseconds) when the trace entered the approach.
arrivals | array | List of arrival point observations along the approach.
hasStops | boolean (nullable) | Whether the trace had any stops along the approach.
numberOfStops | int (nullable) | Number of stops detected along the approach.
exitId | string (nullable) | Identifier of the exit used by the trace, if applicable.
averageSpeed | int (nullable) | Average speed of the trace through the approach.
travelTime | int (nullable) | Total travel time through the approach.
firstStopDistance | int (nullable) | Distance to the first stop from the approach entry point.
controlStats | object (nullable) | Additional control-related statistics, if available.

BatchArrival (element of arrivals)

Field | Type | Description
name | string (nullable) | Name of the arrival point.
point | object | Geographic coordinates (lat: double, lon: double).
time | long | Timestamp (epoch milliseconds) when the trace passed the arrival point.
speed | int (nullable) | Speed of the trace at the arrival point.

BatchControlStats (value of controlStats)

Field | Type | Description
controlDelay | int (nullable) | Delay caused by traffic control.
approachDelay | int (nullable) | Total delay on the approach.
decelerationPointSpeed | int (nullable) | Speed at the deceleration point.
accelerationEndpointSpeed | int (nullable) | Speed at the acceleration endpoint.
entryDistance | int (nullable) | Distance from the approach entry.
exitDistance | int (nullable) | Distance to the approach exit.
coveredLength | int (nullable) | Total length covered by the trace through the control area.
numberOfStops | int (nullable) | Number of stops within the control area.
firstStopDistance | int (nullable) | Distance to the first stop within the control area.
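
To illustrate how these records nest, the sketch below walks one junction record using the field names from the tables above; fields marked nullable may be None.

Traverse a junction record - Python

# Sketch: walk the nested BatchJunctionTraceStats structure.
# Field names come from the schema tables; nullable fields may be None.
from datetime import datetime, timezone

def summarize(junction: dict) -> None:
    for approach in junction["approaches"]:
        for trace in approach["traces"]:
            entered = datetime.fromtimestamp(trace["entryTime"] / 1000, tz=timezone.utc)
            print(
                junction["junctionId"],
                approach["approachId"],
                entered.isoformat(),
                trace["numberOfStops"],  # nullable
                trace["averageSpeed"],   # nullable
            )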

Directory structure

Files are organized in your S3 bucket by date and time in 15-minute intervals. The structure is identical for both pipelines:

<prefix>/<day>/<hour>/<quarter>/data.bin

Where:

  • <prefix> - your configured prefix path (if provided).
  • <day> - calendar date in YYYY-MM-DD format (e.g. 2026-04-30).
  • <hour> - hour of the day (0-23).
  • <quarter> - quarter-hour interval (0-3), corresponding to minutes 0-14, 15-29, 30-44, and 45-59 respectively.

Example file paths:

live/2026-04-30/14/2/data.bin
archive/2026-04-30/14/2/data.bin

The examples above show data for April 30, 2026, between 14:30 and 14:44 UTC, using live/ and archive/ prefixes respectively. If you subscribe to both pipelines, using separate prefixes like this is recommended to keep the data clearly separated.
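
As an illustration, the object key for a given 15-minute interval can be derived as follows. The sketch assumes <day> is formatted YYYY-MM-DD and <hour> is written without zero padding, matching the example paths above; confirm the exact formatting against the files delivered to your bucket.

Build an object key - Python

# Sketch: compute the S3 key for the interval starting at a given time.
# Assumes YYYY-MM-DD for <day> and an unpadded <hour>, per the examples.
from datetime import datetime, timezone

def object_key(prefix: str, interval_start: datetime) -> str:
    quarter = interval_start.minute // 15  # 0-3
    return f"{prefix}{interval_start:%Y-%m-%d}/{interval_start.hour}/{quarter}/data.bin"

# Prints live/2026-04-30/14/2/data.bin
print(object_key("live/", datetime(2026, 4, 30, 14, 30, tzinfo=timezone.utc)))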

Write semantics

  • Files are written using the S3 PutObject operation (full replace, not append).
  • If the same time interval is reprocessed, the corresponding file will be overwritten with updated data.
  • Each upload is idempotent - uploading the same data twice produces the same result.
  • If you need to preserve previous versions of files, enable S3 bucket versioning on your bucket.
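
Versioning can be enabled with a single call. A minimal boto3 sketch, with an illustrative bucket name:

Enable bucket versioning - Python

# Sketch: turn on versioning so reprocessed intervals do not
# permanently replace earlier deliveries. Bucket name is illustrative.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="your-bucket-name",
    VersioningConfiguration={"Status": "Enabled"},
)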

Onboarding checklist

Follow these steps to set up the S3 Data Export integration:

  1. Decide which pipeline(s) you need: Live, Historical (Archive), or both.
  2. Create an S3 bucket or designate an existing one for receiving data.
  3. If subscribing to both pipelines, decide on separate prefix paths to keep data organized (e.g. live/ and archive/).
  4. Create a dedicated IAM user with s3:PutObject permission on the bucket.
  5. Generate an Access Key ID and Secret Access Key for the IAM user.
  6. Send the required information to your TomTom account representative:
    • Bucket name
    • AWS region
    • Access Key ID
    • Secret Access Key
    • Selected pipeline(s)
    • Prefix path(s) (if desired)
  7. TomTom configures the export and confirms activation.
  8. Verify that data is arriving in your bucket:
    • Live pipeline: data should appear approximately 30 minutes after each 15-minute interval ends.
    • Historical pipeline: data should appear after the next processing cycle (~6 hours).
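
A quick way to perform the check in step 8 is to list the newest objects under your prefix. The sketch below uses boto3 and assumes credentials that are allowed to list the bucket (note that the export user's PutObject-only policy cannot); bucket name and prefix are illustrative.

Verify recent deliveries - Python

# Sketch: show the five most recently written objects under a prefix.
# Bucket name and prefix are illustrative.
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="your-bucket-name", Prefix="live/", MaxKeys=1000)
newest = sorted(resp.get("Contents", []), key=lambda o: o["LastModified"], reverse=True)
for obj in newest[:5]:
    print(obj["LastModified"], obj["Key"], obj["Size"])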

Troubleshooting

Issue: No files appearing in the bucket
Possible cause: The provided credentials may be invalid or the IAM user may lack s3:PutObject permission on the bucket. Verify the IAM policy and credentials.

Issue: Files missing for a specific time window
Possible cause: For the Live pipeline, this may indicate a temporary data collection gap. For the Historical pipeline, a processing delay may have occurred and files will arrive after the next processing cycle. If the issue persists, contact your TomTom account representative.

Issue: Unexpected file overwrites
Possible cause: This is normal behavior when time intervals are reprocessed. Enable S3 bucket versioning to retain a history of all file versions.

Issue: Data differs between Live and Historical pipelines
Possible cause: This is expected. The Live pipeline delivers data as it arrives, while the Historical pipeline delivers post-processed, consolidated data. The Historical pipeline may contain corrections or additional data that was not available at the time of live delivery.
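
For the first issue above, a direct write test with the shared credentials shows immediately whether s3:PutObject succeeds. A minimal sketch, with illustrative placeholder values:

Test write access - Python

# Sketch: attempt a PutObject with the credentials shared with TomTom.
# All values below are illustrative placeholders.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",   # shared Access Key ID
    aws_secret_access_key="...",   # shared Secret Access Key
    region_name="us-east-1",       # your bucket's region
)
s3.put_object(Bucket="your-bucket-name", Key="live/permission-test", Body=b"ok")
print("PutObject succeeded")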