The use of artificial intelligence is bringing important new value to video telematics, adding functions such as real-time alerts that help drivers avoid collisions and smart event analysis that saves fleet managers hours of time by reducing false-positive reports of risky driving behavior. But the widely touted use of “edge AI,” in which virtually all AI processing is performed on the camera, has several downsides that affect driver visibility as well as fleet budgets.
The first drawback of edge-centric AI deployments is that they increase dashcam size, consuming more real estate on the windshield and reducing drivers’ view of the road. That’s because of the extra space required to accommodate AI processors, graphics processing units (GPUs), and the heat sinks needed to prevent overheating during multiple demanding AI operations.
The more AI functions on the camera, the larger the device, because more horsepower is needed to execute the individual algorithms associated with each function. The extra components also double, triple, or even quadruple device costs, pushing price points to $1,800 or more.
These cameras also have a limited life span because replacements are needed to take advantage of new edge-based AI capabilities, increasing deployment expenses over the long term.
Newer options are overcoming these hurdles by limiting analysis of video events to critical issues requiring real-time safety alerts and moving all other AI processing to the cloud. This design approach not only saves hundreds of dollars per camera and lengthens replacement intervals, but also shrinks dimensions to a more manageable size.
One of the newest “high cloud/low edge” cameras, for example, has a 31% smaller windshield footprint and is just half the height of a comparable edge-focused product. This lower profile gives drivers a larger, less obstructed forward view, yielding significant safety benefits while also reducing driver anxiety.
The flaw in the almost-everything-on-the-edge design is that 90% of AI features don’t require real-time, camera-based processing and driver alerts to ensure safety. Yet most video telematics dashcams load as many AI features on the edge as possible, with the inevitable complications mentioned above.
Consider units that use edge-based AI to detect smoking or eating activity. If a driver lights a cigarette or bites into a sandwich, companies with policies about those issues will want that information for rule enforcement and scorecarding, but processing the video on the edge and alerting the driver to the rule violation is unlikely to prevent an accident. All that’s needed is to capture the video and send it to the cloud for later analysis using machine vision.
Similarly, AI functions such as recognition of speed limit signs, street lights and lane changes over dotted lines may be valuable for post-event driver evaluation and coaching purposes but are unlikely to avert a catastrophic incident. Nevertheless, they are built into many dashcams and require significant space allotments that contribute to hardware size inflation.
On the other hand, advanced driver assistance system (ADAS) functions such as detecting solid-line lane departures and imminent forward collisions need to be processed immediately on the camera so that drivers can be warned without the delay involved in sending data to the cloud for interpretation. That lag time – even though it’s typically not more than a few seconds – increases the risk of a serious or even fatal accident, not to mention a black mark on a fleet’s safety record and the associated impact on insurance premiums, litigation and/or court verdicts.
These differing needs can be balanced by processing only the “critical few” AI functions on the edge for real-time intervention and pushing the rest of the data to an AI engine in the cloud. The first dashcams designed with this strategy hit the market last year, downsizing both the physical footprint and the price tag without losing the ability of AI technology to perform needed forensics on driving events.
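The “critical few on the edge, everything else in the cloud” split described above can be sketched as a simple event router. This is a minimal illustration, not vendor code; the event names and function are hypothetical:

```python
# Hypothetical sketch of a hybrid edge/cloud routing policy: only
# safety-critical ADAS events are processed on the camera for an
# immediate driver alert; all other AI events are uploaded for
# cloud-side analysis and later review.

EDGE_CRITICAL = {
    "forward_collision_warning",   # imminent collision: alert now
    "solid_lane_departure",        # crossing a solid line: alert now
}

def route_event(event_type: str) -> str:
    """Return where an AI event should be processed."""
    if event_type in EDGE_CRITICAL:
        return "edge"   # real-time, on-camera alert
    return "cloud"      # capture video, analyze later

# Example routing decisions
print(route_event("forward_collision_warning"))  # edge
print(route_event("smoking_detected"))           # cloud
print(route_event("speed_limit_sign"))           # cloud
```

Keeping the edge set small is what allows the camera to shed processors and heat sinks; adding a feature to that set is a hardware decision, not just a software one.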
Reducing False Positives
Performing AI processing in the cloud also opens the door to another emerging use case for the technology: intelligent video event analysis that dramatically reduces the number of incidents flagged for review by fleet managers.
An early entry in this category is SmartWitness AIDE (Artificial Intelligence Driving Events), which combines machine learning, machine vision and artificial intelligence to differentiate between situations such as a pothole and a collision, a sharp curve and a collision avoidance maneuver, an expected acceleration and an unsafe speed increase, or braking required for downhill driving and harsh braking suggesting unsafe vehicle operation.
Solutions like these – which apply contextual factors such as road type, elevation, weather conditions and traffic patterns to interpret video events – not only increase fleet manager productivity by eliminating hours of unnecessary video review but also help overcome driver resistance to camera deployments by avoiding inaccurate driver scoring and unneeded coaching sessions. Equally significant, they can only be deployed in the cloud, because running them on the edge would require resources that balloon camera size beyond the practical.
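The kind of contextual interpretation described here can be illustrated with a toy rule: a harsh-braking trigger is reclassified using road grade, so firm braking on a steep descent is not flagged as risky driving. The function name and thresholds below are illustrative assumptions, not the actual SmartWitness AIDE logic:

```python
# Hypothetical sketch of context-aware event filtering. A raw
# deceleration trigger is labeled using one contextual factor
# (road grade); real systems combine many such factors.

def classify_braking(decel_g: float, road_grade_pct: float) -> str:
    """Label a braking event using simple contextual rules.

    decel_g        -- peak deceleration in g (assumed thresholds)
    road_grade_pct -- road grade, negative values mean downhill
    """
    if decel_g < 0.35:
        return "normal"
    # Steep downhill grades legitimately require firmer braking,
    # so a moderate trigger there is not scored against the driver.
    if road_grade_pct <= -6.0 and decel_g < 0.55:
        return "expected_downhill_braking"
    return "harsh_braking"

# The same 0.45 g trigger is scored differently depending on context:
print(classify_braking(0.45, -8.0))  # expected_downhill_braking
print(classify_braking(0.45, 0.0))   # harsh_braking
```

The point of the sketch is that identical sensor readings yield different labels once context is applied, which is what removes the false positives from a fleet manager’s review queue.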
The lesson is this: the edge has its place in AI-enabled video telematics deployments, but a hybrid edge/cloud approach with only the critical AI functions on the camera provides substantial benefits in size, price and versatility. Fully edge-based solutions may offer as many bells and whistles as a premium accessory package on a new car, but they’re not necessarily the right fit for bringing AI to your fleet operations.
Michael Bloom is Vice President of Product and Marketing for SmartWitness, a global provider of video telematics solutions that help fleets optimize operations, improve driving behavior and mitigate risk. SmartWitness was recently acquired by Sensata Technologies, a global industrial technology company striving to create a cleaner, more efficient, electrified, and connected world.