Bryan Ware, Chief Technology Officer at Haystax Technology
What if you knew…automatically and continuously…whom you could trust? Or when someone was engaged in behaviors that might lead to a lapse in judgment or discretion? I like to call this Dynamic Trust, and I believe it is deeply needed today, not just for national security but for many aspects of our personal and business lives. Dynamic Trust is a data-informed, analytical way to prioritize the risk that a person and their actions present.

To describe how Dynamic Trust might work, it's easier to consider the alternatives. Does it make sense to grant access to Top Secret information – data that presents, by definition, "exceptionally grave damage" to the United States if disclosed – based on a background review performed five years ago, with no continuous updating? Yet, as I'll lay out in a future post, this non-dynamic system is the current mechanism for establishing trust. Or similarly: what if your financial advisor was going through a personal bankruptcy and had his driver's license suspended after multiple DUIs? Would you trust that person's judgment with your life savings? Perhaps you'd look a little more closely at your statements. But how would you even know if he was engaged in high-risk behaviors in his personal life? A recent article in the Wall Street Journal identified 1,600 brokers who had bankruptcy filings or criminal charges that weren't publicly reported – and whose clients had no way of knowing.

I certainly don't want to suggest that we need a persistent surveillance state where anything and everything that anyone does is subject to collection and analysis. Rather, positions of trust should make some behaviors unallowable, and we should not rely solely on infrequent reviews or self-reporting. Those methods are clearly insufficient for important positions, and the technology is readily available to automate these reviews and allow them to be performed more frequently.
I'm a big fan of the Uber car service when I travel, especially in areas where it's difficult to get a cab in a timely way (like the Tenderloin in San Francisco at night). Uber has a remarkably easy-to-use app that not only automates the call for a car and the payment, but also keeps track of each driver's and customer's rating. Each time I take a ride, I rate the driver on a 5-star scale (and he similarly rates me as a customer). If his average rating falls below 4.6, he is deactivated from driving for Uber. Similarly, if a customer has a low rating, perhaps for being rude or drunk on a previous ride, that rating is presented to drivers, who can decide whether to pick him up, say, at midnight on a Friday. This dynamic rating system protects the integrity of the Uber service, driver and passenger safety, and the brand; it immediately weeds out a "bad" driver or customer.

What a remarkable contrast to the way taxi drivers are given a pre-employment screen and a driving test and then receive no real feedback or monitoring from that moment on. Once you receive your taxi license, you're effectively a 5-star driver forever, because there's no feedback system to monitor your performance. I've had a lot of bad taxi drivers over the years, but their employers and their next passengers had no way of knowing, because there is no mechanism for rating them.

So, in the era of ubiquitous data and cheap processing, we can automate the risk analysis of people the same way Uber has. Dynamic Trust will require more than the collection of more data. That data must be applied to trust models or threat models (depending on whether you're a glass-half-full or glass-half-empty kind of thinker) that can provide automated indications of increased risk. Why do we need analytics? Well, imagine in my Uber example if, instead of a 5-star rating in the app, Uber emailed you a survey after your ride. Would you bother to fill it out?
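The Uber mechanism above amounts to a rolling average with a cutoff. A minimal sketch of that idea in Python follows; the 4.6 threshold comes from the example, while the 100-ride window and the class itself are my own illustrative assumptions, not Uber's actual implementation.

```python
from collections import deque


class DynamicRating:
    """Rolling-average rating with a deactivation threshold,
    in the spirit of the Uber example. The 4.6 cutoff is from
    the example; the 100-ride window is an assumption."""

    def __init__(self, threshold=4.6, window=100):
        self.threshold = threshold
        self.ratings = deque(maxlen=window)  # only recent rides count

    def add_rating(self, stars):
        if not 1 <= stars <= 5:
            raise ValueError("rating must be 1-5 stars")
        self.ratings.append(stars)

    def average(self):
        # A new driver with no rides starts, effectively, at 5 stars.
        return sum(self.ratings) / len(self.ratings) if self.ratings else 5.0

    def is_active(self):
        # Active only while the rolling average holds above the cutoff.
        return self.average() >= self.threshold
```

The point of the sliding window is that trust stays dynamic: old good behavior eventually ages out, so the score always reflects recent conduct rather than a one-time screening.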
And what if a room full of people had to read the surveys that were emailed back? They would fall behind, and they would not make consistent judgments. Establishing Dynamic Trust requires an analytical model of what trustworthy behaviors and characteristics look like. Then, data relevant to that level of trust can be collected and applied to the model automatically, so that significant issues (like a felony arrest) are identified, but also so that deviations from a person's normal pattern of life may be detected. These deviations may allow for early warning and prevention, perhaps even allowing a business to provide help to an employee going through difficulties.

Some data, and some uses of it, are controversial. There's a lot of debate about the acceptable use of social media data in employment decisions, for example. Without getting into that debate yet: it is possible to know, within a day, whether someone has been arrested for a felony, convicted of a sex crime, filed for bankruptcy, and the like. All of this data is available for pennies, and it is often already purchased by a company or agency, but only in a form that can be read by humans, not applied to algorithms or models. And wouldn't you expect that the same technology a credit card company uses to immediately alert you to potential fraud could be used to alert a government agency when a person with a Top Secret clearance purchases a one-way ticket to China or Russia? Again, I don't believe that everyone's travel patterns should be monitored, but it seems reasonable to alert when someone with a Top Secret clearance travels to a country that requires advance disclosure.

So, establishing Dynamic Trust is about using algorithms and models tuned to the unique characteristics of a position of trust, and feeding them with appropriate data. School teachers would have different models than financial advisors, and their models would in turn differ from those of Top Secret-cleared analysts.
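One simple way to realize a role-specific model like this is a weighted-indicator scheme: each position of trust defines which reported events matter and how much, and an automated data feed scores people against it. The sketch below is purely illustrative; the indicator names, weights, and alert threshold are my assumptions, not any real adjudication standard.

```python
# Illustrative role-specific trust models. Each role weights the
# same underlying data feeds differently, as the post describes
# (a bankruptcy matters more for a fiduciary, for example).
# All weights and the threshold are assumptions for the sketch.
ROLE_MODELS = {
    "ts_cleared_analyst": {
        "felony_arrest": 1.0,
        "undisclosed_foreign_travel": 0.8,
        "bankruptcy_filing": 0.5,
    },
    "financial_advisor": {
        "felony_arrest": 1.0,
        "bankruptcy_filing": 0.9,  # weighted higher for fiduciaries
        "dui_conviction": 0.6,
    },
}
ALERT_THRESHOLD = 0.8


def risk_score(role, events):
    """Sum the weights of observed indicator events for a role,
    capped at 1.0. Events the role's model does not define are
    ignored rather than treated as risks."""
    model = ROLE_MODELS[role]
    return min(1.0, sum(model.get(event, 0.0) for event in events))


def needs_review(role, events):
    # An automated feed (court records, travel data, etc.) would
    # supply `events`; a score over threshold triggers human review,
    # not an automatic adverse action.
    return risk_score(role, events) >= ALERT_THRESHOLD
```

Note the design choice in `needs_review`: the model prioritizes who a human adjudicator should look at next, rather than making the trust decision itself, which keeps a person in the loop for consequential calls.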
These models describe the characteristics of trust and allow us to apply data and continuously evaluate an individual's risk. The kinds of data may also differ by the type of position someone holds, for example using increasingly "private" data for holders of Top Secret clearances than for Secret clearances. In practice you could use Dynamic Trust to grant a clearance, sure. But you'd also use it every day, as Uber does: perhaps to prevent an analyst whose pattern of life indicated a trend toward higher risk from exploring network drives containing sensitive data he wouldn't normally look at. Or a Dynamic Trust model might limit the trading authority of a trader who received three speeding tickets in the last two months, risk-seeking behavior and poor judgment in his personal life that may affect the decisions he makes at work.

It seems that every time there is a major event, we later learn that it was possible to see that a person was a high risk. From the Sandy Hook shooting to the Washington Navy Yard shooting to the WikiLeaks disclosures, each of the actors had left a long trail of information indicating that they posed a risk or were deviating from their pattern of life. Forensics is always easier than detection, but the time has come to pay serious attention to detecting and preventing events. And we must do this while protecting civil rights and liberties and avoiding indiscriminate surveillance of everyone. I believe we can strike a balance that analytically identifies people in positions of trust who may present a risk to their business economically, to the safety of those around them, or to national security. And I believe the data is inexpensively available to drive these Dynamic Trust models today.