
Solving AI’s (over)confidence problem

Nisar Ahmed and Eric Frew

University of Colorado Boulder researchers are developing artificial intelligence systems so computers can recognize and explain their own limitations to users.

It takes on an important issue people face with each other every day.

“We all have different competencies and we know our own limitations. If I'm asked to complete a task, I generally know if I can do it. Machines aren't programmed like that,” said Nisar Ahmed, an assistant professor in the Ann and H.J. Smead Department of Aerospace Engineering Sciences at the University of Colorado Boulder.

Ahmed is serving as principal investigator at Boulder on a new, multi-university grant from the Defense Advanced Research Projects Agency.

The $3.9 million grant, which also includes the University of Texas at Austin, seeks to build "competency-aware machine learning": machine learning systems that, when given a task, can tell you whether they will be able to do it and explain why.

It is an area with broad and serious applications, according to Eric Frew, a Boulder aerospace professor serving as a co-investigator on the project.

“Do you trust this drone to deliver a package of medicine, or do you take it in your own car, which will take three times as long to get there? If you're a soldier, do you trust a drone to go over a hill and search for an enemy? Will it be thorough enough?” Frew said.

Ahmed notes that the engineers who design drones generally understand each of their capabilities and limitations, but end users naturally will not have the same level of knowledge. A drone that can tell you whether it is likely to succeed at a task should be more trustworthy to the operator.

The work is focused on unmanned aerial vehicles but has applications to ground robots and other AI systems.

“It's a combination of aerospace, computer science, and a little bit of psychology,” Ahmed said. “It's very interdisciplinary.”

The goal is not to pre-program drones with every possible mission or obstacle they could face, but rather to develop a learning-based AI that has a base level of knowledge and can reason abstractly in new situations and explain its decisions, just as people do.
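As a purely illustrative sketch (not the team's actual algorithms), one simple form of competency awareness is selective prediction: a system attaches a confidence estimate to a task, accepts it only above a threshold, and reports a plain-language reason either way. The function name, threshold, and example tasks below are hypothetical.

```python
# Illustrative sketch only: a toy "competency-aware" wrapper that accepts a
# task when confident and declines, with an explanation, when it is not.
# The threshold and task descriptions are made-up examples, not part of the
# DARPA project described in the article.

def assess_competency(confidence, threshold=0.8):
    """Return (accept, explanation) for a task given a confidence estimate in [0, 1]."""
    if confidence >= threshold:
        return True, (f"Confidence {confidence:.2f} meets the "
                      f"{threshold:.2f} bar; accepting the task.")
    return False, (f"Confidence {confidence:.2f} is below "
                   f"{threshold:.2f}; declining and flagging for a human.")

# Example: a drone estimates its chance of completing two missions.
for task, conf in [("deliver medicine on a clear day", 0.93),
                   ("search a hillside in heavy fog", 0.41)]:
    accept, reason = assess_competency(conf)
    print(f"{task}: {'ACCEPT' if accept else 'DECLINE'} - {reason}")
```

The point of the sketch is the interface, not the model: whatever produces the confidence estimate, the system's answer includes both a go/no-go decision and a reason a non-expert operator can act on.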

“Humans are generally better than machines at adapting to unknowns, taking an unforeseen problem they have never faced before and comparing it to past events to find solutions. Machines haven't been programmed like that up to now,” Ahmed said.

Frew compares it to a situation understood by nearly all American adults: getting a driver's license.

“We don't test you on every possible circumstance you could face as a driver. We give you a driving test that covers a handful of situations and a knowledge test and then trust you with a license and that you can use reasoning behind the wheel,” Frew said.

Over the course of the grant, they will develop new competency-awareness assessment algorithms for AI systems, and then put them to the test using drones.

“We’re working on a problem that has mostly gone unnoticed in the computing, machine learning, and AI world, but gets at questions a lot of people have about trust. Will this robot do what I tell it to? Can it?” Ahmed said. “By developing systems that are aware that they have lots of answers, but don't have all the answers all the time and can tell us that, it should make them easier to use. I'm very excited about the possibilities.”