The self-driving Fusion Hybrids rely on LiDAR-enabled 3D digital mapping to identify their precise position.
by Staff
March 16, 2016
To navigate snowy roads, Ford autonomous vehicles are equipped with high-resolution 3D maps – complete with information about the road and what’s above it, including road markings, signs, geography, landmarks and topography. Photo courtesy of Ford.
Snow-covered roads in Michigan have provided a rigorous testing ground for Ford Fusion Hybrid autonomous research vehicles this past winter, and the challenging road conditions have also helped Ford demonstrate the versatility of the 3D digital mapping capabilities made possible by recent advances in LiDAR (light detection and ranging) systems.
To operate in snow, Ford Fusion Hybrid autonomous vehicles first need to scan the environment to create high-resolution 3D digital maps. By driving the test route in ideal weather, an autonomous vehicle creates highly accurate digital models of the road and surrounding infrastructure using four LiDAR scanners. These scanners generate a total of 2.8 million laser points a second, according to Ford.
The resulting map then serves as a baseline that’s used to identify the car’s position when driving in autonomous mode. Using the LiDAR sensors to scan the environment in real time, the car can locate itself within the mapped area later, when the road is covered in snow.
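The matching step can be sketched in a few lines. Ford has not published its localization algorithm, so the toy 2D example below is purely illustrative: it estimates the car's position offset by aligning live LiDAR returns to surveyed landmark positions, and it assumes the point correspondences (which scan return came from which landmark) are already known, something a real system would establish with nearest-neighbor search.

```python
# Illustrative sketch only, not Ford's algorithm: estimate the car's
# 2D offset from the map origin by aligning a live scan to mapped
# landmarks. Correspondences are assumed known: scan point i was
# reflected by landmark i.

def estimate_offset(map_points, scan_points):
    """Return the (dx, dy) translation that best maps scan points
    onto map points, i.e. the least-squares shift: the mean of the
    per-point differences."""
    n = len(map_points)
    dx = sum(m[0] - s[0] for m, s in zip(map_points, scan_points)) / n
    dy = sum(m[1] - s[1] for m, s in zip(map_points, scan_points)) / n
    return (dx, dy)

# Landmarks surveyed in clear weather...
landmarks = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)]
# ...seen again from a car sitting 2 m east, 1 m north of the map origin,
# so every landmark appears shifted by (-2, -1) in the scan:
scan = [(-2.0, -1.0), (8.0, -1.0), (8.0, 4.0)]
print(estimate_offset(landmarks, scan))  # → (2.0, 1.0)
```

Real systems iterate this kind of alignment (e.g. ICP-style scan matching) over millions of points and also solve for rotation, but the principle is the same: the offset that lines the scan up with the map is the car's position.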
While mapping their environment, Ford autonomous vehicles collect and process a diverse set of data about the road and surrounding landmarks — signs, buildings, trees and other features. The car collects up to 600 gigabytes per hour, which it uses to create a high-resolution 3D map of the landscape, Ford said.
Ford’s autonomous vehicles generate so many laser points from the LiDAR sensors that some can even bounce off falling snowflakes or raindrops, returning the false impression that there’s an object in the way. Of course, there’s no need to steer around precipitation, so Ford — working with University of Michigan researchers — created an algorithm that recognizes snow and rain, filtering them out of the car’s vision so the vehicle can continue along its path.
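One simple way to reject precipitation returns, sketched below, exploits exactly the property the article describes: a wall or parked car reflects laser pulses in scan after scan, while a falling snowflake appears in one scan and is gone in the next. This is an assumption-laden toy version, not the algorithm Ford and the University of Michigan developed.

```python
# Illustrative sketch, not the published Ford/U-M algorithm: drop
# LiDAR points that do not persist across consecutive scans. Static
# obstacles recur near the same position; snowflakes and raindrops
# are one-off returns.

def filter_transients(scans, tol=0.5):
    """scans: list of point lists, oldest first. Keep only points of
    the latest scan that were seen within `tol` meters of the same
    spot in every earlier scan."""
    latest, earlier = scans[-1], scans[:-1]

    def seen_in(scan, p):
        return any(abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol
                   for q in scan)

    return [p for p in latest if all(seen_in(s, p) for s in earlier)]

wall = (5.0, 0.0)    # static obstacle, present in every scan
flake = (2.0, 1.0)   # snowflake, present in one scan only
scans = [[wall], [wall], [wall, flake]]
print(filter_transients(scans))  # → [(5.0, 0.0)]
```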
When the subject of vehicle navigation comes up, most people think of GPS. But current GPS is accurate only to about 10 yards, and autonomous operation requires far more precise vehicle location. By scanning their environment for landmarks, then comparing that information to the 3D digital maps stored in their databanks, Ford’s autonomous vehicles can locate themselves to within a centimeter, Ford said.
In addition to LiDAR sensors, Ford uses cameras and radar to monitor the environment around the vehicle, with the data generated from all of those sensors fused together in a process known as sensor fusion. This process results in 360-degree situational awareness, according to Ford.
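A minimal sketch of one common fusion idea, inverse-variance weighting, is shown below; Ford's actual fusion pipeline is not public. Each sensor reports a distance to the same obstacle along with a variance expressing how much it is trusted, and the fused estimate weights the more accurate sensors more heavily.

```python
# Illustrative sketch (inverse-variance weighting), not Ford's
# pipeline: combine range estimates from several sensors, trusting
# low-variance sensors more.

def fuse(measurements):
    """measurements: list of (value, variance) pairs.
    Returns the inverse-variance-weighted average."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    return sum(w * v for w, (v, _) in zip(weights, measurements)) / total

# Hypothetical readings of the distance (m) to one obstacle:
readings = [
    (10.2, 0.04),   # LiDAR: very precise range
    (10.8, 1.00),   # radar: noisier, but works through snow
    (9.9,  0.25),   # camera-derived depth estimate
]
print(round(fuse(readings), 2))  # → 10.18
```

Note how the noisy radar reading barely moves the fused estimate away from the precise LiDAR value, which is the point: fusion degrades gracefully when one sensor gets worse.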
Sensor fusion means that a single inactive sensor, perhaps blinded by ice, snow, grime or debris buildup on its lens, doesn’t necessarily hinder autonomous driving. Still, Ford autonomous vehicles monitor all LiDAR, camera and radar systems to detect deteriorating sensor performance. This helps keep sensors in ideal working order, Ford said. Eventually, the cars might be able to clear ice and grime buildup themselves through self-cleaning or defogging measures.
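Ford doesn't describe its monitoring logic in detail, but one hypothetical way to flag a deteriorating LiDAR is to watch the fraction of laser pulses that actually return: ice or grime on the lens shows up as a falling return rate. The sketch below is an assumption, offered only to illustrate the idea.

```python
# Hypothetical illustration (the production monitoring logic is not
# described): flag a sensor whose recent share of valid returns
# falls below an acceptable floor, the signature of an iced-over or
# grimy lens.

def degraded(return_rates, window=5, floor=0.6):
    """return_rates: fraction of pulses that came back, per scan,
    oldest first. Returns True if the average over the last `window`
    scans drops below `floor`."""
    recent = return_rates[-window:]
    return sum(recent) / len(recent) < floor

clean = [0.95, 0.94, 0.96, 0.95, 0.93]
iced = [0.95, 0.70, 0.45, 0.30, 0.20]
print(degraded(clean), degraded(iced))  # → False True
```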
The company's winter weather road testing takes place in Michigan, including at Mcity — a 32-acre, real-world driving environment at the University of Michigan. Ford’s testing on this full-scale simulated urban campus is aimed at supporting the company’s mission to advance autonomous driving.