The steady drumbeat of concern that artificially intelligent computers and bots will take over, superseding humanity, is a legitimate one. So much so that AI and robotics engineers like Anca Dragan of UC Berkeley’s Center for Human-Compatible AI are working every day to “ensure our interactions with robots are positive ones,” as noted in a recent Forbes article. Ultimately, in many ways, that comes down to how these technologies make decisions.
“[W]e use algorithms we believe are more accurate to make important decisions, but they turn out to be biased. Or, we train learning systems to care about accuracy, treating every mistake as just as important when, in fact, some mistakes have much bigger implications in practice than others,” said Dragan. “That’s why I believe as we build tools that can better optimize any objective, we need to also build tools that work with people to figure out what the right objectives are.”
In so many walks of life today, we are increasingly reliant on so-called “intelligent” systems to inform and assist tasks once handled solely by human cognition. While much of the discussion around AI typically involves still-emerging applications like robots and self-driving vehicles, systems incorporating AI and machine learning that are intended to inform our decisions are all around us today, in business and in our personal lives.
Incorporating Human Intuition into Data-Supported Decision Making
The thing is, most technologies geared toward using data and analytics to improve on human capabilities remove the human element entirely. Instead, they should incorporate human intuition into the decision support process. This way, these systems would complement the human approach to decisioning rather than making business or consumer decisions for us. We call this humanistic hybrid approach Decision Intelligence, or DI.
Many on the cutting edge of data-driven computing intelligence research agree. “We need to be much more human-centered,” said Fei-Fei Li, director of Stanford’s AI Lab and chief scientist of Google Cloud, speaking with MIT Technology Review. “If you look at where we are in AI, I would say it’s the great triumph of pattern recognition. It is very task-focused, it lacks contextual awareness, and it lacks the kind of flexible learning that humans have. We also want to make technology that makes humans’ lives better, our world safer, our lives more productive and better. All this requires a layer of human-level communication and collaboration.”
Li argued, “we’ve got to bring back the contextual understanding. We’ve got to bring knowledge abstraction and reasoning.”
When it comes to decision-making, that abstract thought so evident in human intelligence yet out of reach for computers comes through when we assess tradeoffs. When three research professors recently wrote about the importance of tradeoffs in decision-making for Harvard Business Review, they put it in the context of how computers predict and detect credit card fraud. “If such predictions were perfect, the network’s decision process is easy,” they wrote. “Decline if and only if fraud exists.”
But, data-assisted decision-making without the human element is prone to error. As the researchers note, “there is a trade-off between detecting every case of fraud and inconveniencing the user. (Have you ever had a card declined when you tried to use it while traveling?) And since convenience is the whole credit card business, that trade-off is not something to ignore.”
This process, like so many decisions in business and other contexts, requires a human touch. “Someone at the credit card association needs to assess how the entire organization is affected when a legitimate transaction is denied. They need to trade that off against the effects of allowing a transaction that is fraudulent. And that trade-off may be different for high net worth individuals than for casual card users. No AI can make that call. Humans need to do so,” wrote the researchers.
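The researchers’ point can be made concrete with a simple expected-cost rule: decline a transaction only when the expected loss from approving fraud outweighs the expected cost of inconveniencing a legitimate customer. The sketch below is illustrative only — the probabilities, dollar figures, and customer segments are hypothetical assumptions, not numbers from the article; in practice, setting those costs is exactly the judgment call the researchers say humans must make.

```python
def should_decline(fraud_prob: float, fraud_loss: float, decline_cost: float) -> bool:
    """Decline only when the expected loss from approving a possibly
    fraudulent charge exceeds the expected cost of declining a
    possibly legitimate one."""
    expected_loss_if_approved = fraud_prob * fraud_loss
    expected_cost_if_declined = (1 - fraud_prob) * decline_cost
    return expected_loss_if_approved > expected_cost_if_declined

# Hypothetical cost assumptions: a wrongly declined charge is assumed to be
# far more damaging to the relationship with a high-net-worth cardholder
# than with a casual user, while the direct fraud loss is the same.
COSTS = {
    "casual":         {"fraud_loss": 500.0, "decline_cost": 20.0},
    "high_net_worth": {"fraud_loss": 500.0, "decline_cost": 200.0},
}

def decide(fraud_prob: float, segment: str) -> str:
    c = COSTS[segment]
    if should_decline(fraud_prob, c["fraud_loss"], c["decline_cost"]):
        return "decline"
    return "approve"
```

Note that the same 10% fraud score can yield different decisions for different segments: the model supplies the probability, but humans supply the cost structure that turns that probability into a choice.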
It’s no surprise so many of the world’s AI researchers see the flaws in relying on systems that leave out the human element. At Element Data, we believe making confident decisions requires an approach that integrates the human thought process while assisting and refining it. We call it Decision Intelligence, and it is at the core of our Decision Cloud platform.