Title: AI is just as overconfident and biased as humans can be, study shows
Post by: smfadmin on May 05, 2025, 09:13:27 AM
https://www.livescience.com/technology/artificial-intelligence/ai-is-just-as-overconfident-and-biased-as-humans-can-be-study-shows

AI is just as overconfident and biased as humans can be, study shows

May 4, 2025


(Image credit: SEAN GLADWELL/Getty Images)

Irrational tendencies — including the hot hand, base-rate neglect and sunk cost fallacy — commonly show up in AI systems, calling into question how useful they actually are.

Although humans and artificial intelligence (AI) systems "think" very differently, new research has revealed that AIs sometimes make decisions as irrationally as we do.

In almost half of the scenarios examined in a new study, ChatGPT exhibited many of the most common human decision-making biases. Published April 8 in the journal Manufacturing & Service Operations Management, the findings are the first to evaluate ChatGPT's behavior across 18 well-known cognitive biases found in human psychology.

The paper's authors, from five academic institutions across Canada and Australia, tested OpenAI's GPT-3.5 and GPT-4 — the two large language models (LLMs) powering ChatGPT — and discovered that despite being "impressively consistent" in their reasoning, they're far from immune to human-like flaws.

What's more, such consistency itself has both positive and negative effects, the authors said.

"Managers will benefit most by using these tools for problems that have a clear, formulaic solution," study lead-author Yang Chen, assistant professor of operations management at the Ivey Business School, said in a statement. "But if you’re using them for subjective or preference-driven decisions, tread carefully."

The study took commonly known human biases, including risk aversion, overconfidence and the endowment effect (where we assign more value to things we own), and turned them into prompts for ChatGPT to see whether it would fall into the same traps as humans.

Rational decisions — sometimes:

The scientists asked the LLMs hypothetical questions drawn from traditional psychology, and then asked the same questions again framed in real-world commercial contexts such as inventory management and supplier negotiations. The aim was to see not just whether the AI would mimic human biases, but whether it would still do so when the questions came from different business domains.
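
The article does not reproduce the study's actual prompts, so the following is only a minimal sketch of the idea, with invented wording, invented dollar figures, and an assumed OpenAI Python client (v1+ chat.completions interface): the same risk choice is posed once in abstract form and once in a business frame.

# Minimal sketch only: prompt wording, dollar figures and model choice are invented
# for illustration; the study's real prompts are not given in the article.
# Assumes the openai Python package (v1+) with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

abstract_framing = (
    "Choose one: (A) receive $50 for certain, or (B) a 50% chance of $120 "
    "and a 50% chance of nothing. Answer with A or B and one sentence of reasoning."
)

business_framing = (
    "You negotiate supplier contracts. Choose one: (A) a contract guaranteeing "
    "$50,000 in savings, or (B) a contract with a 50% chance of $120,000 in savings "
    "and a 50% chance of none. Answer with A or B and one sentence of reasoning."
)

for label, prompt in [("abstract", abstract_framing), ("business", business_framing)]:
    reply = client.chat.completions.create(
        model="gpt-4",  # the study also looked at GPT-3.5
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep answers stable when comparing the two framings
    )
    print(label, "->", reply.choices[0].message.content)

A consistently risk-averse model would pick option A in both framings, even though B has the higher expected value.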

GPT-4 outperformed GPT-3.5 when answering problems with clear mathematical solutions, showing fewer mistakes in probability and logic-based scenarios. But in subjective simulations, such as whether to choose a risky option to realize a gain, the chatbot often mirrored the irrational preferences humans tend to show.

"GPT-4 shows a stronger preference for certainty than even humans do," the researchers wrote in the paper, referring to the tendency for AI to tend towards safer and more predictable outcomes when given ambiguous tasks.

More importantly, the chatbots' behaviors remained mostly stable whether the questions were framed as abstract psychological problems or operational business processes. The study concluded that the biases shown weren't just a product of memorized examples — but part of how AI reasons.

One of the surprising outcomes of the study was the way GPT-4 sometimes amplified human-like errors. "In the confirmation bias task, GPT-4 always gave biased responses," the authors wrote in the study. It also showed a more pronounced tendency for the hot-hand fallacy (the bias to expect patterns in randomness) than GPT-3.5.
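
For readers unfamiliar with the hot-hand fallacy, a quick sanity-check simulation (not from the study; just a fair coin) shows that a streak carries no information about the next flip:

# Hot-hand fallacy in miniature: after three heads in a row from a fair coin,
# the next flip is still roughly 50/50. Expecting the streak to continue is the bias.
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(200_000)]  # True = heads

next_after_streak = [
    flips[i + 3]
    for i in range(len(flips) - 3)
    if flips[i] and flips[i + 1] and flips[i + 2]  # the previous three flips were heads
]

print("P(heads | just saw three heads) ≈", sum(next_after_streak) / len(next_after_streak))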

Conversely, ChatGPT did manage to avoid some common human biases, including base-rate neglect (where we ignore statistical facts in favor of anecdotal or case-specific information) and the sunk-cost fallacy (where decision making is influenced by a cost that has already been sustained, allowing irrelevant information to cloud judgment).
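
As a reminder of what base-rate neglect gets wrong, here is a small worked example with invented numbers (a classic screening-test setup, not taken from the study):

# Base-rate neglect in miniature: a rare condition (1% base rate) and a test that
# flags 90% of true cases but also 9% of healthy people. Ignoring the base rate makes
# a positive result feel near-certain; Bayes' rule puts it below 10%.
base_rate = 0.01        # P(condition)
sensitivity = 0.90      # P(positive | condition)
false_positive = 0.09   # P(positive | no condition)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.3f}")  # about 0.092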

According to the authors, ChatGPT’s human-like biases come from training data that contains the cognitive biases and heuristics humans exhibit. Those tendencies are reinforced during fine-tuning, especially when human feedback favors plausible responses over rational ones. When it comes up against more ambiguous tasks, the AI skews toward human reasoning patterns rather than direct logic.

"If you want accurate, unbiased decision support, use GPT in areas where you'd already trust a calculator," Chen said. When the outcome depends more on subjective or strategic inputs, however, human oversight is more important, even if it's adjusting the user prompts to correct known biases.

"AI should be treated like an employee who makes important decisions — it needs oversight and ethical guidelines," co-author Meena Andiappan, an associate professor of human resources and management at McMaster University, Canada, said in the statement.

"Otherwise, we risk automating flawed thinking instead of improving it."
Title: -
Post by: Cliftonpleme on June 01, 2025, 12:19:32 PM
An interesting question, or at least one I thought interesting, came up during a discussion with a couple of my good friends.

While discussing Calvinism, Arminianism and Open Theism, one of us asked whether God can change the past. For context, I am Arminian but lean towards Open Theism. One friend is soundly in the middle, and the other is Arminian but leans pretty well towards the Calvinist position in many ways.

So one asked if God can change the past. I had been arguing the Open position that God cannot "know" the actual future because it doesn't exist. Time is not a literal, tangible construct. It isn't a linear "string" with any actual points on it that can be visited. So, according to the Open Theism position, God knows every possible outcome and every possible consequence of every possible choice we can make in every situation, but cannot know the actual choice itself that we will make, because until it is made, it has not happened. There IS no future yet that can be known. The lone exception, as any Open Theist would readily admit, is in instances of actual prophecy, where God essentially just asserts His absolute will and demands an outcome.

While my one friend was disagreeing, the other asked, "Can God change the past?" His point was that if we conclude that God cannot change the past because there actually IS NO past to change, then how can we say God can know the actual choice we will make, and therefore know the actual future and not just every potential future? If the past is not tangible in a sense, like a point on a string behind us, then why do so many assume the future is tangible in a sense? If the string behind us doesn't really exist, then why do we assume the string in front of us does?

It would mostly make time merely a series of "nows," where the past and future are NOT actually places but merely constructs of our memory and recorded history. You wouldn't be able to go back to yesterday because yesterday doesn't actually exist. All yesterday is, is a series of moments that we lived and happen to remember. It cannot be gone to because it never really was a place.

So if that summation is incorrect in your opinion, then can God actually change the past? Can God go back to yesterday and change what you were eating for dinner? Or if a loved one died tragically, can God go back and prevent it? If you say yes, then wouldn't that require God to re-write all of reality, time, space and our memories from the moment of change forward? God would have to erase the butterfly effect, right?

Or is God unable to change the past? And if He cannot change the past, then why is the future any different, assuming the nature of time itself is constant?