
Generating HD maps for autonomous driving systems
Mapping data is the unseen but essential infrastructure of the automotive navigation ecosystem. At its core, an autonomous vehicle must understand four critical things: where it is, what surrounds it, where it’s headed and the best path to get there. High-definition (HD) maps play a key role in enabling this understanding. By providing detailed, precise and centimeter-level accurate digital representations of the road environment, HD maps offer vehicles a human-like context that enhances what their sensors perceive, essentially building a rich, 3D view of the world around them.
What is HD mapping?
The core idea of HD maps originated from the need to localize vehicles as accurately as possible in order to ensure safety in autonomous mode. Unlike a traditional navigation map, an HD map contains lane-level precision and rich semantic information such as lane boundaries, centerlines, traffic signs, road markings and the location of key roadside infrastructure. It also captures key attributes of lane boundaries, such as type, color and width. This level of detail reflects the real-world driving environment more faithfully, which supports higher levels of autonomy for intelligent vehicles on the road.
HD map applications in autonomous vehicles
HD map applications span perception, localization and decision-making, forming a key layer of intelligence in self-driving systems. These include:
Extending sensor perception
HD maps serve as a predictive supplement to sensor data by providing detailed information about lane geometry, traffic signs and road features beyond the immediate sensor range. This helps the vehicle anticipate its environment, even in adverse conditions such as heavy rain, fog or when sensors are obstructed by large vehicles. By enabling the system to predict expected features and focus on anomalies, HD maps help reduce the computational load required to process real-time sensor inputs.
Enhancing vehicle localization
GPS on its own provides only meter-level accuracy and performs poorly in environments like tunnels or dense urban areas. HD maps enable highly accurate lateral and longitudinal positioning by correlating static map features with real-time sensor data. This allows vehicles to precisely determine their position in relation to their surroundings, improving safety and reliability of autonomous driving.
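To make the idea of map-relative localization concrete, here is a minimal sketch that aligns a few landmark observations (expressed in the vehicle frame) with their matched positions in an HD map using a standard least-squares rigid alignment (the Kabsch method). The function names and coordinates are purely illustrative; production localization stacks typically fuse many more cues, such as lidar scan matching and lane-marking detections, in a probabilistic filter.

```python
import numpy as np

def align_to_map(observed_xy: np.ndarray, map_xy: np.ndarray):
    """Estimate the rigid transform (R, t) that maps vehicle-frame landmark
    observations onto their matched HD-map positions.

    observed_xy, map_xy: (N, 2) arrays of matched landmark coordinates.
    Returns a 2x2 rotation matrix and a 2-vector translation.
    """
    obs_centroid = observed_xy.mean(axis=0)
    map_centroid = map_xy.mean(axis=0)

    # Cross-covariance of the centered point sets.
    H = (observed_xy - obs_centroid).T @ (map_xy - map_centroid)

    # SVD-based least-squares rotation (Kabsch algorithm).
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    t = map_centroid - R @ obs_centroid
    return R, t

# Toy example: three lane-marking corners seen by the vehicle vs. the map.
observed = np.array([[5.0, 1.0], [10.0, 3.0], [15.0, 0.0]])
map_pts  = np.array([[105.0, 51.0], [110.0, 53.0], [115.0, 50.0]])
R, t = align_to_map(observed, map_pts)
vehicle_position_in_map = t    # where the vehicle origin lands on the map
print(vehicle_position_in_map)  # approximately [100. 50.]
```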
Improving perception reliability
While onboard sensors provide real-time data about the surroundings, they can be limited by range, occlusions and environmental factors such as weather and lighting. HD maps help overcome these limitations by providing a pre-built, highly detailed model of the road environment, enabling enhanced perception even in challenging or low-visibility conditions.
Enabling safer path planning
With granular information about lane connections, road curvature, intersections and traffic rules, HD maps support both global and local path planning. Global path planning focuses on finding an optimal route based on a known map of the environment, while local path planning deals with dynamic obstacles and real-time adjustments. At decision points such as intersections or complex merges, HD maps provide the critical contextual detail that enables the local planner to navigate safely around nearby vehicles and comply with traffic rules, while staying aligned with the global plan.
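As a rough illustration of the global planning side, the sketch below runs a shortest-path search over a tiny, hand-built lane-connectivity graph of the kind an HD map's lane topology provides. The node names, edge lengths and use of networkx are assumptions for the example; a real planner would also weigh curvature, traffic rules and live conditions.

```python
import networkx as nx

# Toy lane-connectivity graph: nodes are lane segments from an HD map,
# edges mean "this lane connects to that lane", weighted by length in meters.
lane_graph = nx.DiGraph()
lane_graph.add_edge("lane_A", "lane_B", length=120.0)
lane_graph.add_edge("lane_A", "lane_C", length=80.0)
lane_graph.add_edge("lane_B", "lane_D", length=60.0)
lane_graph.add_edge("lane_C", "lane_D", length=150.0)
lane_graph.add_edge("lane_D", "lane_E", length=40.0)

# Global path planning: shortest lane-level route from start to goal.
route = nx.shortest_path(lane_graph, "lane_A", "lane_E", weight="length")
print(route)  # ['lane_A', 'lane_B', 'lane_D', 'lane_E']
```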
Considerations for HD map creation
There are three key stages in the HD map creation process that require careful attention and expertise.
1. Data acquisition with mobile mapping systems
The first step in HD map creation is data collection, typically performed using mobile mapping systems (MMS). These are specialized vehicles equipped with a range of high-precision sensors, including global navigation satellite system (GNSS) receivers, lidar, inertial measurement units (IMU), high-resolution cameras, wheel speed sensors and radar. These sensors work together to capture accurate spatial data and environmental features while the vehicle travels along designated routes.
Establishing an in-house fleet for HD map data collection is a complex and resource-intensive undertaking. It requires not only investment in specialized hardware such as lidar, high-resolution cameras and data storage systems, but also the operational infrastructure to manage calibration, data logging, fleet logistics and ongoing maintenance. Beyond the setup, organizations must also address challenges related to data quality assurance, sensor synchronization and compliance with regional data regulations.
As a result, many organizations, especially those concentrating on core autonomy development, choose to collaborate with external partners such as TELUS Digital, who provide comprehensive support across the HD mapping pipeline, from data acquisition to final map delivery. This approach allows them to access high-quality, multi-sensor datasets without diverting significant capital and engineering resources toward building and managing dedicated mapping fleets.
2. Raw data transformation and pre-processing
Once raw data is collected, GNSS and IMU measurements are used to estimate the vehicle’s position and orientation over time. Using this data, sensor outputs such as lidar point clouds and images are transformed from the ego-vehicle coordinate frame into a common world reference frame.
For lidar data captured as timestamped frames, each frame is aligned using ego-motion data to account for the vehicle’s movement. Multiple aligned frames are aggregated to generate an aggregated point cloud (APC), forming a unified 3D snapshot of the environment.
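A minimal sketch of this alignment step is shown below, assuming each lidar frame comes with a 4x4 ego-to-world pose matrix estimated from the GNSS/IMU trajectory. The frames and poses are placeholder values rather than real capture data.

```python
import numpy as np

def frame_to_world(points_ego: np.ndarray, pose_ego_to_world: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) lidar frame from the ego-vehicle frame into the
    world frame using a 4x4 homogeneous pose matrix from GNSS/IMU fusion."""
    ones = np.ones((points_ego.shape[0], 1))
    homogeneous = np.hstack([points_ego, ones])           # (N, 4)
    return (homogeneous @ pose_ego_to_world.T)[:, :3]     # back to (N, 3)

def aggregate_frames(frames, poses) -> np.ndarray:
    """Build an aggregated point cloud (APC) by aligning each timestamped
    frame with its ego-motion pose and stacking the results."""
    return np.vstack([frame_to_world(f, p) for f, p in zip(frames, poses)])

# Placeholder data: two tiny frames and two poses (vehicle moved 2 m forward).
frame_0 = np.array([[10.0, 0.0, 0.2], [12.0, 1.0, 0.2]])
frame_1 = np.array([[ 8.0, 0.0, 0.2], [10.0, 1.0, 0.2]])
pose_0 = np.eye(4)
pose_1 = np.eye(4)
pose_1[0, 3] = 2.0   # vehicle translated 2 m along x between frames

apc = aggregate_frames([frame_0, frame_1], [pose_0, pose_1])
print(apc.shape)  # (4, 3) -- the same static points now line up in world coords
```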
Pre-aggregated point clouds or selected keyframes undergo refinement to ensure mapping accuracy. This includes:
- Filtering out environmental and sensor-induced noise.
- Detecting and removing dynamic objects (for example, vehicles and pedestrians) to isolate static map features.
- Applying ego-motion compensation if motion distortion is present and correcting spatial inconsistencies caused by vehicle movement during capture.
- Optimizing point cloud density by preserving high-resolution details, such as lane markings and curbs, in critical regions while downsampling less relevant areas such as vegetation.
The processed point cloud and corresponding imagery are spatially and geometrically aligned, enabling accurate interpretation of map features.
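As a toy example of the density-optimization step described above, the snippet below applies a simple voxel-grid filter in NumPy, averaging all points that fall into the same voxel. The voxel size is an arbitrary example value, and real pipelines typically rely on dedicated point cloud libraries and combine this with noise filtering and dynamic-object removal.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Downsample an (N, 3) point cloud by averaging all points that fall
    into the same cubic voxel of side `voxel_size` (meters)."""
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by their voxel index and average each group.
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((inverse.max() + 1, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

# Example: a dense 1 m cube of points collapses to one point per 20 cm voxel.
dense = np.random.default_rng(0).uniform(0, 1, size=(10_000, 3))
sparse = voxel_downsample(dense, voxel_size=0.2)
print(dense.shape, "->", sparse.shape)   # (10000, 3) -> (125, 3)
```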
3. Tile segmentation and annotation
Given the complexity and precision required in HD maps, the annotation process is a vital step in ensuring the final output meets accuracy standards for autonomous or assisted driving applications. To manage the scale of mapping large geographic areas, data is segmented into smaller, more manageable units known as map tiles. Each tile contains:
- A 3D point cloud (APC).
- Associated imagery (from multiple RGB or grayscale cameras).
- Calibration metadata for multi-sensor alignment.
- Pose and timestamp data to enable spatial-temporal validation.
Once the tile is set up, annotators create vector representations of real-world features, primarily using 3D polylines and 3D cuboids:
- 3D polylines are used for road boundaries, lane lines, crosswalks, curb edges and stop lines.
- 3D cuboids may represent traffic signs, poles, signal lights or physical obstacles.
Each geometric annotation can then be enriched with attributes, which help define the semantics of a feature — for instance, speed limits or the type of traffic sign. After all map tiles are fully annotated and validated, they are stitched together to form a seamless HD map.
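To make the structure of a tile's vector annotations more tangible, here is a hypothetical, simplified data model using Python dataclasses. The field names and attribute keys are illustrative only and are not tied to any particular map specification such as OpenDRIVE or Lanelet2.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class Polyline3D:
    """Vector feature traced through the point cloud, e.g. a lane boundary."""
    points: List[Point3D]
    feature_type: str                      # "lane_boundary", "curb", "stop_line", ...
    attributes: Dict[str, str] = field(default_factory=dict)  # e.g. {"color": "white"}

@dataclass
class Cuboid3D:
    """Box-shaped feature such as a traffic sign, pole or signal light."""
    center: Point3D
    size: Point3D                          # (length, width, height) in meters
    yaw: float                             # heading in radians
    feature_type: str
    attributes: Dict[str, str] = field(default_factory=dict)

@dataclass
class MapTile:
    tile_id: str
    polylines: List[Polyline3D] = field(default_factory=list)
    cuboids: List[Cuboid3D] = field(default_factory=list)

# Example: one annotated lane boundary and one speed-limit sign in a tile.
tile = MapTile(tile_id="tile_042")
tile.polylines.append(Polyline3D(
    points=[(0.0, 0.0, 0.1), (5.0, 0.1, 0.1), (10.0, 0.2, 0.1)],
    feature_type="lane_boundary",
    attributes={"type": "dashed", "color": "white"},
))
tile.cuboids.append(Cuboid3D(
    center=(12.0, 3.5, 2.1), size=(0.1, 0.8, 0.8), yaw=1.57,
    feature_type="traffic_sign",
    attributes={"sign_type": "speed_limit", "value_kph": "50"},
))
print(len(tile.polylines), len(tile.cuboids))
```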
Creating HD maps that scale through partnership
HD maps, especially semantic maps enriched with geospatial data, provide the critical context autonomous systems need to make safe, confident decisions on the road. They enhance perception, enable smarter path planning and support scalable operations by helping autonomous systems anticipate road conditions and traffic scenarios.
In contrast, choosing a mapless approach shifts the entire burden of navigation and safety onto real-time systems. This creates several significant challenges:
- Your vehicle must constantly process large volumes of raw sensor data, demanding costly and complex onboard hardware.
- Without prior map knowledge, rare or unpredictable scenarios become harder to handle safely.
- Proving the safety and reliability of a mapless system across an infinite variety of real-world scenarios is a significant regulatory and engineering challenge.
Handling the operational demands of large-scale HD mapping projects is highly complex, and efficiency plays a crucial role in keeping them viable.
At TELUS Digital, we make HD mapping scalable and cost-effective. Using Ground Truth Studio, our proprietary annotation platform, we can generate high-quality maps from existing data sources like lidar top-down captures so you avoid redundant data collection and reduce operational costs. Backed by our team of expert annotators, we combine advanced tools with deep domain knowledge to deliver exceptional accuracy at scale. With tile-based collaboration, smart productivity features and intuitive UX, Ground Truth Studio enables seamless, conflict-free contributions from multiple users working in parallel.
Where are you on your mapping journey? Connect with our experts at TELUS Digital today to review your current approach and consult on the best next steps to help you build safer, smarter autonomous vehicles with HD maps.