
Real or AI? When We Think We Know — But Don’t

One of the most interesting moments from our end-of-year AI & Society celebration didn’t involve a keynote speaker, slides, or technical explanations.

It came from a simple question:

Real or AI? Can you tell the difference?

We ran a short trivia-style activity — a deliberately human experiment — to explore how easily images, stories, and confidence can blur the line between what’s real and what’s generated.

No tech background required.
No trick questions.
No specialist knowledge.

Just instinct.

The setup

Participants were presented with four scenarios drawn from recent global news and public imagery. Some were real. Some were AI-generated.

We won’t share the images here — partly because we want to preserve the experience for future events — but the prompts alone were enough to spark debate, laughter, and hesitation.

What mattered wasn’t the answers.

It was the process people went through to arrive at them.

What we noticed in the room

Almost everyone felt confident at first.

Then doubt crept in.

People changed their minds.
They second-guessed their assumptions.
They argued both sides — convincingly.

And that’s the point.

In a world where AI can generate realistic images, plausible stories, and authoritative-sounding content, believability no longer equals truth.

Why this matters (far beyond trivia)

This activity wasn’t about spotting fakes or “catching people out”.

It was about revealing something deeper:

  • How quickly we rely on pattern recognition
  • How much trust we place in familiarity and narrative
  • How confidence can exist independently of accuracy

These are not technical problems.
They’re human ones.

As generative AI becomes embedded in media, work, education, and everyday communication, the real challenge isn’t just misinformation or deepfakes — it’s overconfidence in our own judgement.

From detection to discernment

At AI & Society, we’re less interested in training people to become forensic analysts of AI content.

We’re more interested in cultivating the habit of pausing.

To ask:

  • Why does this feel believable to me?
  • What assumptions am I bringing?
  • What context might be missing?
  • Who benefits if I accept this as true?

In an AI-shaped world, critical thinking becomes a civic skill, not a technical one.

The real question

So rather than asking “Would you get it right?”, a better question might be:

How confident would you feel — and why?

Because in the age of AI, the gap between confidence and correctness is where the most important conversations begin.
