AI mimics human decision-making for greater accuracy

Summary: Researchers developed a neural network that mimics human decision-making by incorporating uncertainty and evidence accumulation. Trained on handwritten digits, the model produces more human-like decisions than traditional neural networks.

It shows accuracy, response time, and confidence patterns similar to those of humans. This advancement could lead to more reliable AI systems and reduce the cognitive load of daily decision-making.

Key Facts:

  1. Human decisions: The neural network mimics human uncertainty and evidence gathering in decision-making.
  2. Performance comparison: The model shows similar accuracy and reliability patterns to humans when tested on a noisy dataset.
  3. Future potential: This approach can improve the reliability of AI and reduce the cognitive burden of everyday decisions.

Source: Georgia Institute of Technology

People make nearly 35,000 decisions every day, from whether it’s safe to cross the road to what to eat for lunch. Each decision involves weighing up options, remembering similar previous scenarios, and feeling reasonably confident about the right choice. What may seem like a snap decision is actually the result of gathering evidence from the environment. And often the same person makes different decisions in the same scenarios at different times.

Neural networks do the opposite, making the same decisions over and over again. Now, Georgia Tech researchers in the lab of associate professor Dobromir Rahnev are training them to make decisions like humans.

“If we try to bring our models closer to the human brain, that will show up in the behavior itself without fine-tuning,” he said. Credit: Neuroscience News

According to the researchers, the science of human decision-making has only recently been applied to machine learning. But a neural network designed to operate more like the real human brain can become more reliable.

In an article in Nature Human Behaviour, “The neural network RTNet exhibits the signatures of human perceptual decision-making,” a team from the School of Psychology unveils a new neural network trained to make decisions similarly to humans.

Decoding decision

“Neural networks make a decision without telling you whether they have confidence in it,” said Farshad Rafiei, who earned his Ph.D. in psychology from Georgia Tech. “This is one of the key differences from how humans make decisions.”

For example, large language models (LLMs) are prone to hallucinations. When an LLM is asked a question it does not know the answer to, it often makes something up without acknowledging the fabrication. In contrast, most people in the same situation will admit that they do not know the answer. Building a more human-like neural network can avoid this failure mode and lead to more accurate answers.

Making the model

The team trained their neural network on handwritten digits from a famous computer science dataset called MNIST, asking it to decipher each number. To determine the model’s accuracy, they ran it on the original dataset and then added noise to the digits to make them harder for humans to distinguish.

To compare the model’s performance with that of humans, they trained their model (and three other models: CNet, BLNet, and MSDNet) on the original MNIST dataset without noise. They then tested them on the noisy version used in the experiments and compared the results on the two datasets.
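The noise manipulation described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact procedure: the function name `add_noise` and the noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, noise_sd):
    """Corrupt a [0, 1]-valued grayscale image with Gaussian pixel
    noise, clipping back into the valid intensity range."""
    noisy = image + rng.normal(0.0, noise_sd, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# A stand-in 28x28 image (MNIST digits are 28x28 grayscale).
digit = rng.random((28, 28))
noisy_digit = add_noise(digit, noise_sd=0.5)
```

Raising `noise_sd` makes the digits progressively harder to identify, for humans and models alike.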

The researchers’ model was based on two key components: a Bayesian neural network (BNN), which uses probability to make decisions, and an evidence accumulation process that keeps track of the evidence for each choice. The BNN produces responses that are slightly different each time.

As it gathers more evidence, the accumulation process may sometimes favor one choice and sometimes another. Once there is enough evidence to decide, RTNet stops the accumulation process and makes a decision.
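The two-stage scheme can be sketched in a few lines. This is a toy illustration under stated assumptions: a random read-out stands in for the actual Bayesian neural network, and the threshold value is arbitrary, not RTNet's.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_readout(image):
    """Stand-in for the Bayesian neural network: each call draws a
    fresh sample, so the same image yields slightly different
    evidence. Here class 3 plays the role of the true digit."""
    evidence = rng.normal(0.0, 1.0, size=10)
    evidence[3] += 1.5  # signal favoring the correct class
    return evidence

def accumulate_to_bound(image, threshold=10.0, max_steps=1000):
    """Sum evidence across repeated stochastic reads; stop and
    decide as soon as one class's total crosses the threshold."""
    total = np.zeros(10)
    for step in range(1, max_steps + 1):
        total += stochastic_readout(image)
        if total.max() >= threshold:
            break
    return int(total.argmax()), step  # (decision, response time)

choice, rt = accumulate_to_bound(image=None)
```

Because the read-out is stochastic, repeated runs on the same input can return different decisions and different response times, which is exactly the human-like variability the researchers were after.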

The researchers also measured the model’s decision-making speed to see if it aligned with a psychological phenomenon called the “speed-accuracy tradeoff,” which dictates that people are less accurate when they have to make decisions quickly.
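The speed-accuracy tradeoff falls out of accumulation models naturally: lowering the decision bound produces faster but less accurate responses. A simple race simulation (not the paper's model; all parameters here are illustrative) makes this concrete.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_trial(threshold, drift=0.3):
    """Race between a correct and an incorrect accumulator. The
    correct one has a small positive drift; the first to reach
    the threshold determines the response and its time."""
    correct, incorrect, t = 0.0, 0.0, 0
    while max(correct, incorrect) < threshold:
        correct += drift + rng.normal()
        incorrect += rng.normal()
        t += 1
    return correct >= threshold, t  # (was the choice correct?, RT)

def run(threshold, n=2000):
    results = [simulate_trial(threshold) for _ in range(n)]
    accuracy = float(np.mean([r[0] for r in results]))
    mean_rt = float(np.mean([r[1] for r in results]))
    return accuracy, mean_rt

acc_fast, rt_fast = run(threshold=3.0)   # low bound: hasty decisions
acc_slow, rt_slow = run(threshold=12.0)  # high bound: careful decisions
```

With the low bound, noise dominates and errors are common; with the high bound, evidence has time to average out, so responses are slower but more often correct.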

Once they had the model’s results, they compared them to the results of humans. Sixty Georgia Tech students looked at the same dataset and rated their confidence in their decisions, and the researchers found that the accuracy rate, response time, and confidence patterns were similar between the humans and the neural network.

“In general, we don’t have enough human data in existing computer science literature, so we don’t know how people will behave when exposed to these images. This limitation hinders the development of models that accurately mimic human decision-making,” Rafiei said.

“This work provides one of the largest datasets of people responding to MNIST.”

The team’s model not only outperformed all rival deterministic models, it was also more accurate in high-speed scenarios. It also reproduced another fundamental element of human psychology: confidence. People feel more confident when they make correct decisions, and without being specifically trained for it, the model adopted this pattern automatically, Rafiei noted.

“If we try to bring our models closer to the human brain, that will become visible in the behavior itself, without any further adjustments being needed,” he said.

The research team hopes to train the neural network on more varied datasets to test its potential. They also expect to apply this BNN approach to other neural networks to enable them to reason more like humans.

Ultimately, algorithms could not only mimic our decision-making skills, but could even alleviate some of the cognitive burden of the 35,000 decisions we make every day.

About this artificial intelligence research news

Author: Tess Malone
Source: Georgia Institute of Technology
Contact: Tess Malone – Georgia Institute of Technology
Image: The image is attributed to Neuroscience News

Original research: Closed access.
“The neural network RTNet exhibits the signatures of human perceptual decision-making” by Dobromir Rahnev et al. Nature Human Behaviour


Abstract

The neural network RTNet exhibits the signatures of human perceptual decision-making

Convolutional neural networks show promise as models of biological vision. However, their decision-making behavior, including the fact that they are deterministic and use equal amounts of computation for easy and difficult stimuli, differs significantly from human decision-making, limiting their applicability as models of human perceptual behavior.

Here we develop a novel neural network, RTNet, that generates stochastic decisions and human-like response time (RT) distributions. We further performed extensive tests that showed that RTNet reproduces all fundamental features of human accuracy, RT, and confidence, outperforming all current alternatives.

To test RTNet’s ability to predict human behavior on novel images, we collected accuracy, RT, and confidence data from 60 human participants performing a digit discrimination task. We found that the accuracy, RT, and confidence produced by RTNet for individual novel images correlated with the same quantities produced by human participants.

Importantly, human participants whose performance was closer to the human average were also closer to RTNet’s predictions. This suggests that RTNet successfully captured average human behavior.

Overall, RTNet is a promising model of human RTs that reveals the critical features of perceptual decision making.
