Research into autonomous cars is kicking into high gear. Companies like Google’s Waymo, Argo AI, Uber ATG, Tesla, GM Cruise and many others have made substantial progress, but significant problems remain before they reach the levels of safety that will pass regulatory muster and stimulate widespread adoption.
Quite a few of these companies rely on the same kind of advanced computation: recognizing objects from sensor signals, combining many disparate information sources to understand what those objects mean, and deciding on the best course of action.
The first challenge any autonomous car software must tackle is what to do when something unexpectedly enters (or looks like it’s about to enter) the vehicle’s path. In the real world, that something could be a pedestrian, a deer or a newspaper blown by the wind. Detecting the thing, identifying it and determining its projected path must all be completed in milliseconds in order to have time to choose the best course of action. At its core, this challenge is a probabilistic cost-estimation problem.
Let’s tackle the probabilistic part of that last statement first. Autonomous vehicles use a combination of multiple sensors to gather inputs, plus software to interpret those inputs and determine what they represent in the real world. In some autonomous vehicles, a forward-facing camera array that includes both near- and far-field cameras, a 360-degree radar and a 360-degree lidar all work together with GPS and map data. That’s a minimum of four different sensors that could detect something in the road.
The next challenge is that those sensors don’t always agree. But with additional context the system can determine which ones to trust and how to combine them to best effect. If it happens to be raining or snowing, for example, the radar may be far more trustworthy than the lidar. By bringing all the different inputs together, each with its own potential error rate, you get a probability that there is something in the car’s path that you need to avoid hitting.
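To make that idea concrete, here is a minimal sketch of that kind of fusion in Python. It is an illustration only, not any vendor’s actual pipeline: the sensor names, error rates and prior probability are assumptions, and real systems use far richer models than the conditional-independence shortcut taken here.

```python
# Minimal sketch: fusing independent sensor detections into a single
# probability that something is in the vehicle's path, via Bayes' rule with
# a conditional-independence shortcut. Sensor names, error rates and the
# prior are illustrative assumptions, not values from any real system.

PRIOR_OBJECT = 0.01  # assumed prior probability that something is in the path

def fuse_detections(detections, rates, prior=PRIOR_OBJECT):
    """detections: {sensor: bool}; rates: {sensor: (true_pos_rate, false_pos_rate)}.
    Returns P(object in path | readings), treating sensors as independent."""
    p_obj, p_clear = prior, 1.0 - prior
    for sensor, detected in detections.items():
        tpr, fpr = rates[sensor]
        p_obj   *= tpr if detected else (1.0 - tpr)   # likelihood given an object
        p_clear *= fpr if detected else (1.0 - fpr)   # likelihood given no object
    return p_obj / (p_obj + p_clear)

readings = {"camera": True, "radar": True, "lidar": False}

# In clear weather we trust the lidar, so its miss pulls the probability down.
clear_rates = {"camera": (0.95, 0.05), "radar": (0.90, 0.10), "lidar": (0.97, 0.03)}
print(fuse_detections(readings, clear_rates))

# In heavy rain the lidar is degraded, so its miss carries far less weight
# and the camera-plus-radar detections dominate.
rain_rates = {"camera": (0.85, 0.10), "radar": (0.90, 0.10), "lidar": (0.55, 0.30)}
print(fuse_detections(readings, rain_rates))
```

The same readings produce very different probabilities depending on context, which is exactly why the system needs to know which sensors to trust and when.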
So the next question is, “What is it?” – which is really just a way of asking, “What will it do?” Again, combining multiple sensors using some nifty math provides a range of options about where this thing is headed and what the optimal response is. Image recognition from the cameras, velocity data from radar calculations and more are all combined, but there is still no way of knowing for certain what this thing will do, so the system must act under uncertainty.
The goal is to minimize both the uncertainty and the ‘cost’ of the action. The vehicle could slam on its brakes, but that may impose a cost on a car that’s following too closely, and it will affect the passengers’ comfort and overall experience. The car could swerve, but which way avoids hitting something else? It could simply slow down enough that the thing clears the path, but can that be done in time to avoid a collision given the current speed and braking capability? Or does it need to do anything at all?
The probability distribution over what this thing may do, combined with the likely outcomes of each possible response, allows the system to take the action that best mitigates the risk.
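Here is a similarly simplified sketch of that decision step: a probability distribution over what the object might do, a table of assumed costs for each candidate response, and a function that picks the action with the lowest expected cost. The behaviors, actions and cost values are purely illustrative assumptions.

```python
# Minimal sketch: choosing the response with the lowest expected cost.
# Behaviors, actions and cost values are illustrative assumptions only.

# P(what the object will do), e.g. derived from the fused sensor estimate
behavior_probs = {"crosses_path": 0.6, "stops_short": 0.3, "clears_path": 0.1}

# cost[action][behavior]: assumed costs reflecting collision risk, risk to a
# trailing car, and passenger comfort (higher is worse)
cost = {
    "hard_brake": {"crosses_path": 10,  "stops_short": 10, "clears_path": 10},
    "slow_down":  {"crosses_path": 40,  "stops_short":  5, "clears_path":  2},
    "swerve":     {"crosses_path": 25,  "stops_short": 25, "clears_path": 25},
    "do_nothing": {"crosses_path": 100, "stops_short": 60, "clears_path":  0},
}

def best_action(probs, costs):
    """Return the action minimizing expected cost under the behavior distribution."""
    expected = {
        action: sum(probs[b] * c[b] for b in probs)
        for action, c in costs.items()
    }
    return min(expected, key=expected.get), expected

action, expected_costs = best_action(behavior_probs, cost)
print(action, expected_costs)
```

With the numbers assumed above, hard braking wins; shift enough probability toward the object clearing the path and slowing down, then doing nothing, become the cheaper choices.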
This is exactly how Haystax approaches insider risk in an enterprise. Almost every organization has implemented plenty of sensors – from data loss prevention (DLP) and endpoint detection and response (EDR) to employee performance management and travel and expense reporting. Each sensor collects data, and that data doesn’t always agree. But by combining the outputs of multiple sensors probabilistically using expert models, machine learning and other AI techniques, the Haystax for Insider Threat solution gives decision-makers the confidence to act under uncertainty.
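The same evidence-combination pattern can be sketched for insider risk. To be clear, this is not Haystax’s actual model – the event types, likelihood ratios and base rate below are assumptions chosen only to make the analogy concrete.

```python
# Minimal sketch of the same idea applied to insider risk: enterprise "sensors"
# contribute evidence that updates a prior belief about an individual's risk.
# Illustrative only; event names, likelihood ratios and the base rate are
# assumptions, not Haystax's actual model.

import math

PRIOR_RISK = 0.001  # assumed base rate of a genuine insider threat

# Assumed likelihood ratios: P(event | risky) / P(event | not risky)
EVIDENCE_LR = {
    "dlp_exfil_alert":      20.0,
    "edr_unusual_process":   5.0,
    "performance_decline":   2.0,
    "expense_anomaly":       3.0,
}

def posterior_risk(events, prior=PRIOR_RISK):
    """Combine observed events via Bayes' rule in log-odds space."""
    log_odds = math.log(prior / (1.0 - prior))
    for event in events:
        log_odds += math.log(EVIDENCE_LR[event])
    return 1.0 / (1.0 + math.exp(-log_odds))

print(posterior_risk(["dlp_exfil_alert", "expense_anomaly"]))
```

No single alert proves anything on its own; it is the combination of weak signals that moves the risk estimate enough to warrant a closer look.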
This approach is critical in protecting the enterprise. Without it, analysts wait for something bad to happen and then deal with the consequences. That’s like letting the self-driving car hit something because insurance will pay for the damages.
In the case of insider risk, the required sensors are already in place. The only missing piece is a model that knows how to bring the sensor outputs together as evidence, providing the context and insight required to take the most appropriate action.
# # #
Note: Check out our Resources page for our latest white papers and fact sheets on insider threat mitigation, security analytics and data science.