Georgia Tech’s New Neural Network Mimics Human Decision-Making

A groundbreaking study by Georgia Tech researchers has unveiled a neural network that mimics human decision-making processes. This advancement could innovate how AI systems function, making them more reliable and accurate.

Humans make nearly 35,000 decisions a day, gathering evidence from their environment and sometimes responding differently even in near-identical situations. Traditional neural networks, by contrast, produce the same decision every time they see the same input, which can limit their practical usefulness. Associate Professor Dobromir Rahnev’s lab at Georgia Tech, however, is working to change that.

In their study, published in Nature Human Behaviour, researchers from the School of Psychology at Georgia Tech introduced RTNet, a neural network designed to match human decision-making patterns.

“Neural networks make a decision without telling you whether or not they are confident about their decision,” Farshad Rafiei, who earned his doctoral degree in psychology from Georgia Tech, said in a news release.

This stands in stark contrast to humans, who typically acknowledge uncertainty about their choices.

To develop and test this innovative model, the team employed the renowned MNIST dataset, asking their neural network to decipher handwritten digits. They added noise to the dataset, making it difficult for both humans and machines to identify the digits accurately. The researchers then compared the neural network’s performance to that of 60 Georgia Tech students. The results were strikingly similar in terms of accuracy, response time and confidence levels, indicating the network’s human-like behavior.
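The study’s exact noise procedure is not described here, but adding pixel-level Gaussian noise is a common way to make MNIST digits harder to read for both humans and machines. The sketch below illustrates the idea; the `add_noise` helper and the random array standing in for a real digit are both invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, noise_sd=0.5):
    """Corrupt a normalized image (pixel values in [0, 1]) with Gaussian noise."""
    noisy = image + rng.normal(0.0, noise_sd, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixels in the valid display range

# A random 28x28 array stands in for a normalized MNIST digit here.
digit = rng.random((28, 28))
noisy_digit = add_noise(digit, noise_sd=0.5)
```

Turning up `noise_sd` makes the digit progressively harder to identify, which is what lets researchers probe accuracy and confidence under difficulty.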

The research team’s neural network combines two components: a Bayesian neural network (BNN), which represents its weights as probability distributions rather than fixed values, and an evidence-accumulation process. Because the BNN’s outputs vary from one forward pass to the next, the model’s responses can differ from decision to decision, just as human decisions differ depending on the evidence accumulated.
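RTNet’s actual architecture is not reproduced here, but the general mechanism the article describes can be sketched in a few lines: stochastic predictions (standing in for a BNN’s sampled weights) are summed until the evidence for one class crosses a threshold. All function names, class counts, and parameter values below are illustrative, not the authors’.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_prediction(true_class=3, n_classes=10, signal=0.4):
    """Stand-in for one stochastic forward pass of a Bayesian network:
    each call returns a slightly different probability vector because
    the randomness is re-sampled every time."""
    logits = rng.normal(0.0, 1.0, n_classes)
    logits[true_class] += signal           # weak evidence for the true digit
    probs = np.exp(logits - logits.max())  # softmax
    return probs / probs.sum()

def decide(threshold=5.0, max_steps=1000):
    """Accumulate the sampled probabilities until one class's running
    total crosses the threshold, then commit to that class."""
    evidence = np.zeros(10)
    for step in range(1, max_steps + 1):
        evidence += sample_prediction()
        if evidence.max() >= threshold:
            break
    choice = int(evidence.argmax())
    confidence = float(evidence.max() / evidence.sum())  # crude confidence proxy
    return choice, step, confidence

choice, steps, confidence = decide()
```

Because the samples differ on every run, repeated calls to `decide()` can return different choices, response times, and confidence levels for the same underlying stimulus, which is the human-like variability the passage describes.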

As Rafiei noted, the model also intrinsically followed the “speed-accuracy trade-off,” a psychological phenomenon where faster decisions often result in reduced accuracy.
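That trade-off falls out of any threshold-based accumulator: lowering the decision threshold forces fast guesses, while raising it buys accuracy at the cost of time. The minimal drift-diffusion-style simulation below demonstrates this; every parameter is invented for illustration and is not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_trial(threshold, true_class=3, n_classes=10, signal=0.3, max_steps=2000):
    """One toy evidence-accumulation trial: noisy evidence samples that
    slightly favor the true class are summed until some class's running
    total crosses the decision threshold."""
    evidence = np.zeros(n_classes)
    for step in range(1, max_steps + 1):
        sample = rng.normal(0.0, 1.0, n_classes)
        sample[true_class] += signal  # small drift toward the right answer
        evidence += sample
        if evidence.max() >= threshold:
            break
    return int(evidence.argmax()) == true_class, step

# A low threshold forces fast guesses; a high one trades time for accuracy.
for threshold in (2.0, 10.0):
    trials = [simulate_trial(threshold) for _ in range(500)]
    accuracy = np.mean([correct for correct, _ in trials])
    mean_rt = np.mean([steps for _, steps in trials])
    print(f"threshold={threshold}: accuracy={accuracy:.2f}, mean steps={mean_rt:.1f}")
```

Running this shows the higher threshold yielding slower but markedly more accurate decisions, the same pattern the researchers observed emerging naturally from their model.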

“Generally speaking, we don’t have enough human data in existing computer science literature, so we don’t know how people will behave when they are exposed to these images,” added Rafiei. “This work provides one of the biggest datasets of humans responding to MNIST.”

The neural network not only outperformed deterministic models but also fared better in scenarios requiring fast decisions, thanks to its human-like use of confidence.

“If we try to make our models closer to the human brain, it will show in the behavior itself without fine-tuning,” Rafiei added.

Looking ahead, the researchers aim to train RTNet on more diverse datasets and incorporate this BNN framework into other neural networks. Their ultimate goal is to create algorithms that can lessen the cognitive load of the 35,000 decisions we make each day, improving both efficiency and accuracy.

This advancement underscores the potential for neural networks to evolve beyond their current constraints, paving the way for AI systems that think and react more like humans.