Embracing AI's imperfections: Why 'Less Wrong' is the new 'Always Right'

Prefer listening? I transformed this article into a podcast using Google’s NotebookLM. It’s surprisingly accurate and even expands on some of the ideas. Give it a listen!

In my presentations about artificial intelligence (AI), the most frequent comment I receive is:

“Yes, but AI makes mistakes.”

Okay, but it’s also true that even the best human experts make mistakes.

This skepticism is understandable, given our historical expectation that computers deliver flawless results. However, as AI becomes more integrated into critical fields like healthcare, it’s essential to recalibrate our understanding: while AI isn’t infallible, it often makes fewer mistakes than even the most seasoned professionals.

Human error vs. AI accuracy

Human experts, despite their extensive training and experience, are not immune to errors. In medical diagnostics, for instance, misdiagnoses can occur due to factors like fatigue, cognitive biases, or limited exposure to rare conditions. AI systems, on the other hand, can analyze vast datasets rapidly, identifying patterns that might elude human observers. Recent advancements have led to AI models that, in certain scenarios, outperform human counterparts in accuracy.

It’s true that AI doesn’t always provide the correct answer. However, the latest models have demonstrated a reduced error rate compared to humans, and this trend is expected to continue. For example, a study published in November 2024 revealed that large language models surpassed physicians in diagnostic accuracy, highlighting AI’s potential to enhance clinical decision-making when integrated strategically alongside human expertise.

🎲 The Dice Analogy: Why we’ll bet on AI (almost) every time

Let’s talk probabilities — and dice.

Imagine a simple game: you roll a fair six-sided die, and you have to bet your money on what number will come up. But you’re not choosing a specific number. Instead, you have two betting options:

  • Option A: The die will land on 1 through 5
  • Option B: The die will land on 6

Easy choice, right? You’d bet on Option A every time — because it has a five out of six chance of being correct. It’s not guaranteed, of course. There’s always that 1-in-6 chance that a 6 shows up, and you lose your bet. But still, you’ll go with the odds.
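A quick simulation makes the odds concrete. The sketch below (illustrative only; the trial count and seed are arbitrary choices) rolls a fair die many times and measures how often Option A would have won the bet:

```python
import random

def simulate_option_a(trials: int = 100_000, seed: int = 42) -> float:
    """Roll a fair six-sided die `trials` times and return the
    fraction of rolls where Option A (a result of 1 through 5) wins."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(trials) if rng.randint(1, 6) <= 5)
    return wins / trials

print(simulate_option_a())
```

The printed win rate hovers around 5/6 ≈ 0.833: Option A loses some individual bets, but over many rolls it is clearly the rational choice.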

Now, replace the die with AI, and the concept becomes clear:

AI won’t always be right — but it will be more likely to be right than any human.

That’s what matters. You’ll bet on AI not because it’s perfect, but because it’s statistically the better bet. As models improve, and especially as we approach Artificial General Intelligence (AGI), that bet becomes even safer — with AI outperforming humans across an increasing number of domains.

Of course, just like you still keep in mind that a 6 can come up, you’ll always remember that AI can be wrong. And that’s healthy. It’s not about blind trust — it’s about informed trust based on probability.

In the future, the smartest thing we can do won’t be to expect perfection from AI, but to accept the odds and bet accordingly.

Cultural Shift: Accepting AI’s Fallibility

Our longstanding interaction with computers has conditioned us to expect perfection—calculations in spreadsheets, for instance, are invariably precise. However, AI operates differently. It offers probabilistic assessments based on patterns and data, which means occasional errors are inherent. Recognizing that AI’s outputs are often our best approximation of accuracy, even if not flawless, requires a cultural shift in our perception of technology.

One challenge with AI is its tendency to present information with unwavering confidence, regardless of the underlying uncertainty. This can be problematic, especially when users are unaware of the potential for error.

Developing AI systems that can express degrees of confidence or uncertainty will be crucial in fostering trust and appropriate reliance on AI-generated insights.
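One common pattern for "appropriate reliance" is a confidence threshold: accept the AI's answer when its reported confidence is high, and route low-confidence cases to a human reviewer. The sketch below is a minimal illustration of that idea; the function name, the 0.90 threshold, and the confidence scores are hypothetical, not from any specific system.

```python
def triage(prediction: str, confidence: float, threshold: float = 0.90):
    """Route an AI prediction based on its reported confidence.

    Illustrative sketch: the 0.90 threshold is an arbitrary example value;
    a real deployment would calibrate it against measured error rates.
    """
    if confidence >= threshold:
        return ("ai", prediction)          # confident enough to use directly
    return ("human_review", prediction)    # defer to a human expert

print(triage("benign", 0.97))  # ('ai', 'benign')
print(triage("benign", 0.62))  # ('human_review', 'benign')
```

The design choice here mirrors the dice analogy: you don't demand certainty from the model, you just decide in advance which odds you're willing to act on, and escalate everything else.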


As AI continues to permeate various sectors, it’s imperative to balance our expectations. While AI may not always be right, its capacity to be “less wrong” than humans positions it as a valuable tool in our decision-making arsenal. By embracing AI’s imperfections and understanding its strengths, we can harness its potential to complement human expertise, leading to more accurate and efficient outcomes across diverse fields.