
Best practices for annotating mapping data for ADAS and AV systems

Illustration of an autonomous car using its sensors to drive on a road.

Mapping data is essential to the automotive navigation ecosystem: it enables autonomous vehicles (AVs) and advanced driver assistance systems (ADAS) to understand four critical things: where the vehicle is, what surrounds it, where it’s headed and the best path to get there.

Mapping means creating digital representations of the road environment to support safe and efficient driving. Unlike traditional navigation maps, modern automotive maps feature precise lane-level detail and rich semantic information, such as lane boundaries, road markings, signs and intersection geometry. By providing centimeter-accurate digital representations of the road environment, these maps give vehicles a human-like context that enhances what their sensors perceive, essentially building a rich, 3D view of the world around them.

These maps are critical for a range of vehicle functions, including:

  • Augmented sensor capabilities: pre-mapped, recorded features let the vehicle “see” beyond what its perception systems detect in real time, especially in adverse weather and obstructed-visibility conditions.
  • Better localization through the use of onboard sensors combined with mapped features instead of relying on the global positioning system (GPS) alone.
  • Improved path planning with information on lane connectivity, traffic rules and road geometry to make safer decisions in complex scenarios like intersections or merges.

Three critical stages for mapping annotation

Since mapping plays a critical role in how vehicles understand and navigate their surroundings, building accurate context via annotations requires a structured, multi-step process. There are three key stages in this process that require careful attention and specialized expertise. Let’s walk through these steps on Ground Truth Studio (GTS), our proprietary annotation platform.

Step 1: Raw data transformation and pre-processing

Once raw data is collected, global navigation satellite system (GNSS) and inertial measurement unit (IMU) measurements are used to estimate the vehicle’s position and orientation over time. Using this data, sensor outputs such as lidar point clouds and images are transformed from the ego-vehicle coordinate frame into a common world reference frame to ensure spatial consistency.

  1. For lidar data captured as timestamped frames, each frame is aligned using ego-motion data to account for the vehicle’s movement.
  2. The processed point cloud and corresponding imagery are spatially and geometrically aligned, enabling accurate interpretation of map features. Multiple aligned consecutive frames are combined to generate an aggregated point cloud (APC), forming a unified spatial view of the environment.
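The ego-to-world transformation described above is, at its core, a rigid-body transform built from the estimated pose. A minimal NumPy sketch, using a hypothetical pose (90° yaw, 10 m offset), illustrates the idea:

```python
import numpy as np

def ego_to_world(points_ego, rotation, translation):
    """Transform N x 3 lidar points from the ego-vehicle frame into the
    world frame using a pose estimated from GNSS/IMU measurements."""
    return points_ego @ rotation.T + translation

# Hypothetical pose: vehicle rotated 90 degrees about Z, 10 m east of origin.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([10.0, 0.0, 0.0])

# A point 5 m directly ahead of the vehicle in the ego frame...
point_ego = np.array([[5.0, 0.0, 0.0]])
point_world = ego_to_world(point_ego, R, t)  # -> [[10.0, 5.0, 0.0]]
```

Applying this per-frame pose to every frame is what lets consecutive sweeps land in a consistent world frame before aggregation.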

The aggregated point clouds or keyframes are then enhanced using the following methods:

  • Environmental and sensor-induced noise is removed using in-house filtering algorithms to improve clarity and accuracy.
  • Dynamic objects such as vehicles and pedestrians are detected and removed, isolating the static features needed for mapping.
  • Motion blur introduced by vehicle movement is corrected with ego motion compensation (EMC) algorithms, ensuring sharper, more reliable point clouds.
Before and after EMC and alignment
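The noise-filtering step can be illustrated with a brute-force statistical outlier filter, a simplified stand-in for the in-house algorithms mentioned above (production filters would use a spatial index such as a k-d tree):

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than std_ratio standard deviations above the cloud average."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Mean distance to the k nearest neighbours (column 0 is the point itself).
    knn = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    threshold = knn.mean() + std_ratio * knn.std()
    return points[knn <= threshold]

rng = np.random.default_rng(0)
cloud = rng.normal(scale=0.5, size=(200, 3))        # dense static structure
noisy = np.vstack([cloud, [[50.0, 50.0, 50.0]]])    # one stray noise return
filtered = remove_statistical_outliers(noisy)       # stray point is dropped
```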

Step 2: Loading high-density point clouds

For annotation, editing and quality control (QC) workflows, Ground Truth Studio supports point clouds containing tens of millions of points, allowing teams to work with detailed spatial data without compromising performance.

Optionally, in-house smart downsampling algorithms can be applied to strategic frames, ensuring a smoother annotation experience for very large point clouds. These algorithms optimize point density by preserving high-resolution detail in critical regions, such as road markings, while downsampling less relevant areas such as vegetation.

High-density point cloud
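One simple way to picture density-aware downsampling is a voxel grid whose resolution depends on the region: fine voxels near the road surface (where lane markings live), coarse voxels elsewhere. The sketch below uses a height band as the region test; real pipelines would rely on learned segmentation instead.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the voxel centroid) per occupied voxel."""
    if len(points) == 0:
        return points
    keys = np.floor(points / voxel_size).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    sums = np.zeros((len(uniq), 3))
    np.add.at(sums, inverse, points)
    return sums / np.bincount(inverse)[:, None]

def density_aware_downsample(points, ground_z=0.0, band=0.3,
                             fine=0.05, coarse=1.0):
    """Fine resolution near the road surface, aggressive downsampling above it."""
    near_ground = np.abs(points[:, 2] - ground_z) < band
    return np.vstack([voxel_downsample(points[near_ground], fine),
                      voxel_downsample(points[~near_ground], coarse)])

rng = np.random.default_rng(1)
road = rng.uniform([0, 0, -0.02], [10, 10, 0.02], size=(5000, 3))  # markings
trees = rng.uniform([0, 0, 2.0], [10, 10, 8.0], size=(5000, 3))    # vegetation
reduced = density_aware_downsample(np.vstack([road, trees]))
```

With these hypothetical parameters, the vegetation collapses to a few hundred voxel centroids while the road surface keeps nearly all of its points.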

Step 3: Tile segmentation and annotation

Given the complexity and precision required in automotive maps, the annotation process is a vital step in ensuring the final output meets accuracy standards for autonomous or assisted driving applications. To manage the scale of mapping large geographic areas, data is segmented into smaller, more manageable units known as map tiles. Each tile contains:

  • A 3D point cloud (APC)
  • Associated imagery (from multiple RGB or grayscale cameras)
  • Calibration metadata for multi-sensor alignment
  • Pose and timestamp data to enable spatial-temporal validation
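A tile bundling these four ingredients might be modelled as below; the field names and values are illustrative, not Ground Truth Studio’s actual format:

```python
from dataclasses import dataclass, field

@dataclass
class MapTile:
    """Hypothetical schema for a self-contained map tile."""
    tile_id: str
    point_cloud: list                                 # aggregated point cloud (APC), N x 3
    images: dict = field(default_factory=dict)        # camera name -> image frames
    calibration: dict = field(default_factory=dict)   # per-sensor intrinsics/extrinsics
    poses: list = field(default_factory=list)         # (timestamp, pose) pairs

tile = MapTile(
    tile_id="tile_0042",                              # hypothetical key
    point_cloud=[(12.1, 4.0, 0.02)],
    images={"front_rgb": ["frame_000.jpg"]},
    calibration={"front_rgb": {"fx": 1000.0, "fy": 1000.0}},
    poses=[(1718000000.0, "pose_000")],
)
```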

Once the tile is set up, annotators create vector representations of real-world features, primarily using 3D polylines and 3D cuboids:

  • 3D polylines are used for road boundaries, lane lines, crosswalks, curb edges and stop lines
  • 3D cuboids may represent traffic signs, poles, signal lights or physical obstacles

Each geometric annotation can then be enriched with attributes, which help define the semantics of a feature, for instance, speed limits or the type of traffic sign. After all map tiles are fully annotated and validated, they are stitched together to form a seamless automotive map.

3D cuboids and 3D polylines
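In practice, each vector feature pairs geometry with semantic attributes. The records below sketch what that pairing might look like; the key names and values are hypothetical, not a published schema:

```python
# A dashed white lane line captured as a 3D polyline.
lane_line = {
    "type": "polyline3d",
    "points": [(12.1, 4.0, 0.02), (18.7, 4.1, 0.03), (25.3, 4.2, 0.05)],
    "attributes": {"line_type": "dashed", "color": "white"},
}

# A speed-limit sign captured as a 3D cuboid.
speed_sign = {
    "type": "cuboid3d",
    "center": (30.0, 6.5, 2.1),
    "size": (0.1, 0.6, 0.6),    # depth, width, height in metres
    "heading": 1.57,            # yaw in radians
    "attributes": {"sign_class": "speed_limit", "value_kph": 60},
}
```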

Key features for 3D polyline annotation on Ground Truth Studio

A full-featured annotation platform, together with expert annotators trained in mapping, plays a significant role in producing accurate mapping annotations. Smart platform features are a huge part of achieving this accuracy with operational efficiency.

Ground Truth Studio offers comprehensive support for automotive mapping, including advanced features that enhance productivity and quality. Using our tool, annotators can create precise 3D polylines and 3D cuboids directly on the point cloud. The platform supports automatic 2D projections onto camera images using calibration data, for both cuboid and polyline annotations. For dedicated 2D annotations, our sensor fusion tool enables 2D polyline and rectangle creation, as well as 3D-2D annotation linking across sensor modalities.

Attribute support for 3D polylines

Ground Truth Studio offers a wide range of attribute configurations, such as single-select, multi-select and text input, for accurate metadata annotation:

  • Linking attributes:
    • Parent-child linking (Hierarchical): Used when one annotation depends on another.
    • Two-way linking (Non-hierarchical): Used for representing peer-to-peer relationships where elements are connected but not strictly dependent in a hierarchical sense. (e.g., linking a virtual lane class (VLC) to associated lane lines).
  • Each link can include a property, such as specifying if a lane line is on the left or right of a VLC.
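The two link styles can be sketched as plain records; the structure below is illustrative, assuming a hypothetical `links` table and a `neighbours` lookup helper:

```python
annotations = {
    "vlc_1": {"type": "virtual_lane"},
    "line_left": {"type": "lane_line"},
    "line_right": {"type": "lane_line"},
    "sign_1": {"type": "sign_face"},
    "pole_1": {"type": "pole"},
}

links = [
    # Parent-child (hierarchical): the sign face depends on its pole.
    {"kind": "parent_child", "parent": "pole_1", "child": "sign_1"},
    # Two-way (non-hierarchical): peers linked with a per-link property,
    # e.g. which side of the VLC each lane line sits on.
    {"kind": "two_way", "a": "vlc_1", "b": "line_left", "property": {"side": "left"}},
    {"kind": "two_way", "a": "vlc_1", "b": "line_right", "property": {"side": "right"}},
]

def neighbours(annotation_id):
    """All annotations linked to the given one, regardless of link kind."""
    out = []
    for link in links:
        ids = {link.get("parent"), link.get("child"), link.get("a"), link.get("b")}
        if annotation_id in ids:
            out.extend(i for i in ids if i not in (annotation_id, None))
    return sorted(out)
```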

To significantly enhance the efficiency, accuracy and ease of use for annotators working with 3D data, our platform is built with a suite of productivity features specifically designed to streamline the 3D polyline annotation workflow.

Automated ground anchoring with ML-generated meshes

Manually setting the height (Z-value) for each point in a 3D polyline can be tedious and error-prone, particularly when working with complex terrain or large datasets. To simplify this, our platform uses machine learning-generated hybrid ground meshes as anchor surfaces. The first annotation made by an annotator automatically snaps to the ground, reducing the need for manual height adjustments and improving both speed and precision.
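The snapping behaviour can be approximated by looking up the height of the nearest ground-mesh vertex under the annotator’s click. Real meshes would interpolate across triangles; nearest-vertex lookup keeps this sketch short, and the incline values are hypothetical:

```python
import numpy as np

def snap_to_ground(xy, mesh_vertices):
    """Assign the Z-value of the nearest ground-mesh vertex to an (x, y) click."""
    d = np.linalg.norm(mesh_vertices[:, :2] - np.asarray(xy), axis=1)
    z = mesh_vertices[np.argmin(d), 2]
    return np.array([xy[0], xy[1], z])

# Hypothetical ground mesh sampled on a slight incline (z = 0.05 * x).
xs, ys = np.meshgrid(np.arange(0, 10.0), np.arange(0, 10.0))
mesh = np.column_stack([xs.ravel(), ys.ravel(), 0.05 * xs.ravel()])

# A click at (4.2, 7.9) snaps to the height of the nearest vertex (4, 8).
point = snap_to_ground((4.2, 7.9), mesh)  # -> [4.2, 7.9, 0.2]
```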

Axis locking for precise alignment

Our axis locking tools help maintain geometric accuracy by allowing annotators to lock lines to specific orientations, perfectly vertical (Z-axis lock) or horizontal (XY-plane lock). This feature prevents accidental deviations and ensures cleaner, more reliable annotations, eliminating the need for constant manual corrections.
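A minimal sketch of the two lock modes, constraining each newly placed vertex relative to the previous one (function and mode names are assumptions for illustration):

```python
def lock_axis(prev_point, new_point, mode):
    """Constrain a new polyline vertex: 'z' keeps the segment perfectly
    vertical, 'xy' keeps it perfectly horizontal."""
    px, py, pz = prev_point
    nx, ny, nz = new_point
    if mode == "z":    # vertical: x and y are carried over from the previous vertex
        return (px, py, nz)
    if mode == "xy":   # horizontal: height is carried over from the previous vertex
        return (nx, ny, pz)
    return new_point

# Drawing a pole: the second vertex snaps directly above the first.
top = lock_axis((3.0, 4.0, 0.0), (3.2, 4.1, 5.0), mode="z")  # -> (3.0, 4.0, 5.0)
```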

Polyline duplication and translation

Many real-world road environments contain repetitive structures: parking spaces, building edges and road markings are common examples. Instead of re-annotating from scratch, our platform allows annotators to duplicate existing polylines and make fine adjustments to individual points. This saves significant time while preserving accuracy.
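Duplication with a constant offset is the core operation; the bay spacing below is a hypothetical value:

```python
def duplicate_polyline(polyline, offset):
    """Copy a polyline and shift every vertex by a constant (dx, dy, dz)."""
    dx, dy, dz = offset
    return [(x + dx, y + dy, z + dz) for x, y, z in polyline]

# One painted parking-bay edge, duplicated 2.5 m sideways for the next bay.
bay_edge = [(0.0, 0.0, 0.0), (0.0, 5.0, 0.0)]
next_bay = duplicate_polyline(bay_edge, (2.5, 0.0, 0.0))
```

The annotator then nudges individual vertices of the copy rather than tracing the feature again.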

Line direction visualization and line connection

To provide immediate clarity on the flow or intended direction of an annotation, and to join two 3D polyline annotations into continuous paths or networks, our platform includes line direction visualization and line connection tools, respectively. This intuitive visual feedback enhances understanding and reduces errors in path-based annotations.
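Connecting two directed polylines amounts to joining them end-to-start when their endpoints coincide; a sketch under that assumption (tolerance value is hypothetical):

```python
def connect_polylines(a, b, tolerance=0.05):
    """Join two directed polylines end-to-start into one continuous path,
    provided the endpoints coincide within a tolerance (in metres)."""
    gap = sum((p - q) ** 2 for p, q in zip(a[-1], b[0])) ** 0.5
    if gap > tolerance:
        raise ValueError(f"endpoints are {gap:.2f} m apart")
    return a + b[1:]   # drop the duplicated shared vertex

lane_a = [(0.0, 0.0, 0.0), (10.0, 0.1, 0.0)]
lane_b = [(10.0, 0.1, 0.0), (20.0, 0.3, 0.1)]
joined = connect_polylines(lane_a, lane_b)   # one continuous 3-vertex path
```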

Scaling mapping annotations through partnership

Automotive maps, especially semantic maps enriched with geospatial data, provide the critical context autonomous systems need to make safe, confident decisions on the road. They enhance perception, enable smarter path planning and support scalable operations by helping autonomous systems anticipate road conditions and traffic scenarios.

In contrast, choosing a mapless approach shifts the entire burden of navigation and safety onto real-time systems. This creates several significant challenges:

  • Your vehicle must constantly process large volumes of raw sensor data, demanding costly and complex onboard hardware
  • Without prior map knowledge, rare or unpredictable scenarios become harder to handle safely
  • Proving the safety and reliability of a mapless system across an infinite variety of real-world scenarios is a significant regulatory and engineering challenge

Handling the operational demands of large-scale mapping projects is highly complex, and efficiency plays a crucial role.

At TELUS Digital, we make mapping scalable and cost-effective. Backed by our team of expert annotators, we combine advanced tools with deep domain knowledge to deliver exceptional accuracy at scale. With tile-based collaboration, smart productivity features and intuitive UX, Ground Truth Studio enables seamless, conflict-free contributions from multiple users working in parallel.

Where are you on your mapping journey? Connect with our experts at TELUS Digital today to review your current approach and consult on the best next steps to help you build safer, smarter autonomous vehicles.
