Guidance based on artificial intelligence (AI) may be uniquely placed to foster biases in humans and lead to less effective decision making, say researchers, who found that people with a positive view of AI may be at greater risk of being misled by AI tools. The study, titled “Examining Human Reliance on Artificial Intelligence in Decision Making,” is published in Scientific Reports.
Lead author Dr. Sophie Nightingale of Lancaster University said, “Understanding human reliance on AI is critical given controversial reports of AI inaccuracy and bias. Furthermore, the erroneous belief that using technology removes biases may lead to overreliance on AI.”
The research team also included Joe Pearson, formerly of Lancaster University, Itiel Dror from Cognitive Consultants International (CCI-HQ), and Emma Jayes, Georgina Mason, and Grace-Rose Whordley from the Defence Science and Technology Laboratory.