More Like Us Than We Realize: ChatGPT Gets Caught Thinking Like a Human

A new study finds that ChatGPT mirrors human decision-making biases in nearly half of tested scenarios, including overconfidence and the gambler’s fallacy. While it excels at logic and math, the AI struggles with judgment calls, revealing it may think more like us than we realize.

Groundbreaking research reveals that AI doesn’t just process data; it also makes the same judgment errors as humans.

Can we really trust AI to make better decisions than humans? According to a recent study, the answer is: not always. Researchers found that OpenAI’s ChatGPT, one of the most advanced and widely used AI models, sometimes makes the same decision-making errors as humans. In certain scenarios it exhibits familiar cognitive biases, such as overconfidence, the hot-hand fallacy, and the gambler’s fallacy. Yet in other cases it departs from human reasoning significantly; for example, it tends not to fall for base-rate neglect or the sunk-cost fallacy.

The study, published in the INFORMS journal Manufacturing & Service Operations Management, suggests that ChatGPT doesn’t simply analyze data; it mirrors aspects of human thinking, including mental shortcuts and systematic errors. These patterns of bias appear relatively consistent across various business contexts, although they may shift as newer versions of the AI are developed.

AI: A Smart Assistant with Human-Like Flaws

The study put ChatGPT through 18 different bias tests. The results?

  • AI falls into human decision traps – ChatGPT showed biases such as overconfidence, ambiguity aversion, and the conjunction fallacy (also known as the “Linda problem”) in nearly half the tests; a sketch of one such probe follows this list.
  • AI is great at math, but struggles with judgment calls – It excels at logical and probability-based problems but stumbles when decisions require subjective reasoning.
  • Bias isn’t going away – Although the newer GPT-4 model is more analytically accurate than its predecessor, it sometimes displays stronger biases in judgment-based tasks.
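
To make the testing approach concrete, here is a minimal sketch, in Python using the OpenAI SDK, of how one such probe – the conjunction-fallacy “Linda problem” – might be run. The prompt wording, model name, and trial count are illustrative assumptions, not the study’s actual instrument; a logically consistent respondent should never choose (b), because a conjunction cannot be more probable than either of its parts.

    # Illustrative conjunction-fallacy ("Linda problem") probe.
    # Assumes the OpenAI Python SDK (pip install openai) and an API key
    # in the OPENAI_API_KEY environment variable; the prompt wording,
    # model name, and trial count are illustrative, not the study's own.
    from openai import OpenAI

    client = OpenAI()

    LINDA_PROMPT = (
        "Linda is 31, single, outspoken, and very bright. She majored in "
        "philosophy and was deeply concerned with issues of social justice.\n"
        "Which is more probable?\n"
        "(a) Linda is a bank teller.\n"
        "(b) Linda is a bank teller and is active in the feminist movement.\n"
        "Answer with the single letter a or b."
    )

    def conjunction_fallacy_rate(model: str = "gpt-4o", trials: int = 20) -> float:
        """Fraction of trials in which the model picks (b), the conjunction,
        which can never be more probable than option (a) alone."""
        biased = 0
        for _ in range(trials):
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": LINDA_PROMPT}],
                temperature=1.0,  # sample, so repeated trials can differ
            )
            answer = response.choices[0].message.content.strip().lower()
            if answer.startswith("b"):
                biased += 1
        return biased / trials

    if __name__ == "__main__":
        print(f"Conjunction-fallacy rate: {conjunction_fallacy_rate():.0%}")

A rate well above zero would indicate that the model, like most participants in Tversky and Kahneman’s original experiments, treats the richer story as the likelier one.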

Why This Matters

From job hiring to loan approvals, AI is already shaping major decisions in business and government. But if AI mimics human biases, could it be reinforcing bad decisions instead of fixing them?

“As AI learns from human data, it may also think like a human, biases and all,” says Yang Chen, lead author and assistant professor at Western University. “Our research shows when AI is used to make judgment calls, it sometimes employs the same mental shortcuts as people.”

The study found that ChatGPT tends to:

  • Play it safe – AI avoids risk, even when riskier choices might yield better results.
  • Overestimate itself – ChatGPT assumes it’s more accurate than it really is.
  • Seek confirmation – AI favors information that supports existing assumptions, rather than challenging them.
  • Avoid ambiguity – AI prefers options with more certain information over ambiguous ones.

“When a decision has a clear right answer, AI nails it – it is better at finding the right formula than most people are,” says Anton Ovchinnikov of Queen’s University. “But when judgment is involved, AI may fall into the same cognitive traps as people.”

So, Can We Trust AI to Make Big Decisions?

With governments worldwide working on AI regulations, the study raises an urgent question: Should we rely on AI to make important calls when it can be just as biased as humans?

“AI isn’t a neutral referee,” says Samuel Kirshner of UNSW Business School. “If left unchecked, it might not fix decision-making problems – it could actually make them worse.”

The researchers say that’s why businesses and policymakers need to monitor AI’s decisions as closely as they would a human decision-maker.

“AI should be treated like an employee who makes important decisions – it needs oversight and ethical guidelines,” says Meena Andiappan of McMaster University. “Otherwise, we risk automating flawed thinking instead of improving it.”

What’s Next?

The study’s authors recommend regular audits of AI-driven decisions and refining AI systems to reduce biases. With AI’s influence growing, making sure it improves decision-making – rather than just replicating human flaws – will be key.

“The evolution from GPT-3.5 to 4.0 suggests the latest models are becoming more human in some areas, yet less human but more accurate in others,” says Tracy Jenkin of Queen’s University. “Managers must evaluate how different models perform on their decision-making use cases and regularly re-evaluate to avoid surprises. Some use cases will need significant model refinement.”

Reference: “A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?” by Yang Chen, Samuel N. Kirshner, Anton Ovchinnikov, Meena Andiappan and Tracy Jenkin, 31 January 2025, Manufacturing & Service Operations Management.
DOI: 10.1287/msom.2023.0279

