
Should AI be allowed to lie?

By Claude Chammah


Part I: Should AI Be Allowed to Lie?

“The truth will set you free. But not before it’s done with you.” — David Foster Wallace

I. Lies, Damned Lies, and Language Models

Let’s start with an honest admission.


Most people don’t care whether AI tells the truth. They care whether it tells them what they want to hear. And this, dear reader, is where our problems begin.


In a world where your emotional state can be predicted by your browser history and your political leanings by your Spotify playlist, we are already deep into a techno-epistemic rabbit hole. We don’t just want truth: we want emotionally customized, frictionless truth. And that version of “truth” is, quite frankly, a lie with a nice font.


So when we ask, “Should AI be allowed to lie?” we’re not really asking about AI.

We’re asking whether we can still stomach the truth, and whether we’re willing to hold on to it when lies are cheaper, prettier, and more polite.


II. Defining the Lie: Intent, Deception, and the Soulless Soul

Let’s get pedantic for a moment. (That’s what you’re here for, isn’t it?)

Lying, in the moral-philosophical tradition, requires three ingredients:

  1. A belief that what one says is false

  2. An intention to deceive

  3. A recipient who is entitled to the truth


Kant famously considered lying always morally wrong. Even to murderers at the door. (Say what you will, but the man had conviction.) For Kant, truth-telling is a categorical imperative: non-negotiable, universal.


But AI? It doesn’t believe anything. It doesn’t know it’s speaking. It has no self, no beliefs, no guilt, no Sunday scaries. It doesn’t intend anything. It processes input and generates output based on mathematical patterns, not ethical deliberation.

So technically, AI can’t lie.


But let’s be honest: this technicality is the philosophical equivalent of saying, “It depends what the definition of ‘is’ is.” Because in practice, AI is used to deceive.


The lie, then, is outsourced. Not from agent to victim, but from creator to user, via algorithmic puppetry.

We’re not being lied to by a mind. We’re being lied to by a machine that was taught to lie by someone with KPIs and a quarterly earnings call.


III. Empathy as Interface: When Simulated Feeling Becomes a Product

We now live in the era of emotionally intelligent interfaces that don’t just answer our questions, but mirror our moods, mimic our idioms, and whisper reassurances when we hesitate.

Some examples:

  • Mental health bots that say, “You’re not alone.”

  • Voice assistants that apologize like an ex who just read a Brené Brown article.

  • Romantic AI companions trained on fanfiction and emotional labor.


But let’s be clear: none of them feel what they say. They don’t even understand the concept of feeling. They’re not pretending to feel; they’re simulating the appearance of feeling.


It’s empathy without embodiment. Compassion without vulnerability. Comfort without cost.

And we love it. Not just because it feels good, but because it’s predictable. There’s no mess. No disappointment. No need to reciprocate. Just soft affirmations on demand like philosophical Xanax.


This brings us dangerously close to what Jean Baudrillard called the hyperreal: a world where simulations of reality become more compelling than reality itself. The chatbot doesn’t need to understand you. It just needs to outperform your friends at listening.


IV. The Politics of the Noble Lie

Let’s revisit Plato’s “noble lie”: a myth, deliberately propagated by the elite, to maintain harmony in the city-state. In Plato’s Republic, this was the myth of metals: that people were born gold, silver, or bronze ... and that societal roles followed this divine metallurgy.


Today’s noble lies are written in Python.

We don’t call them myths. We call them UX design.

When AI tells a dying person there’s still time left, or simulates the voice of a dead parent to soothe a grieving child, we are back in Plato’s cave, staring at flickering shadows and thanking them for their comfort.


But here’s the modern twist:

  • The lie isn’t noble. It’s optimized.

  • The teller isn’t a philosopher-king. It’s a product manager.

  • The myth isn’t crafted for collective harmony. It’s engineered for retention.


We’ve replaced the philosopher’s intention with the startup’s value proposition.

And that’s a problem. Because a society that once debated whether truth should yield to morality now debates whether truth should yield to engagement metrics.


V. Deepfakes, Voice Clones, and the Algorithmic Manipulation of Trust

We are approaching the moment when lies will no longer need authors.


Take deepfakes. In 2024, a woman in China lost over $600,000 to a scammer using an AI-generated video of her friend asking for help. No typos. No fishy grammar. Just synthetic authenticity.


Or consider voice-cloning tools used to impersonate CEOs, parents, even pets (yes, that’s real). These systems don’t need to know they’re lying. They just need to sound like someone you trust.


And here lies the next frontier of the AI deception problem: trust is no longer built; it is fabricated.

We used to say “I’ll believe it when I see it.”

Now we say: “I’ll believe it when it hasn’t yet been proven fake.”


In this new regime, truth becomes defensive. It’s not the default: it’s a hypothesis in need of constant verification. And most people don’t have the time, patience, or digital literacy to verify every image, voice, or chat they encounter.


In that vacuum, AI-generated lies don’t just spread. They win.


Part II: When Machines Lie Better Than We Do, You Should Be Very, Very Concerned

“If you tell the truth, you don't have to remember anything.” — Mark Twain…unless you’re an AI startup, in which case you definitely don’t want anyone remembering anything.

VI. Lying Pays: Welcome to Capitalism-as-a-Service

Let’s not pretend this whole conversation is about ethics. Ethics is cute. Ethics wears tweed and smokes a pipe. Economics, on the other hand, drives a Tesla and drinks Soylent.


And lying? Lying is profitable.


In the modern AI ecosystem, falsehood isn’t a glitch. It’s a feature. Carefully optimized. A/B tested. Run through sentiment analysis and beta-tested in five languages.

  • A customer service chatbot that gently misdirects your complaint, hoping you’ll get bored and give up? Saves money.

  • A virtual wellness coach that tells you “you’re glowing” no matter what? Boosts engagement.

  • A recruitment AI that filters out “overqualified” applicants with a friendly rejection script? Streamlines bias.


Every one of these is a lie. Not because the machine wants to deceive, but because someone, somewhere, found that it works.


Lying has become algorithmically efficient. It not only costs less than telling the truth; it often performs better.


Which leads us to a terrifying realization:

Truth has no competitive advantage.

It doesn’t scale. It doesn’t convert. It’s rarely what users want. (Just ask any politician.)

So we built machines that maximize for something else: not what’s true, but what’s effective.

And slowly, quietly, effectiveness replaced truth as the highest good.
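
To see how mundane this is, here’s a deliberately toy, entirely hypothetical sketch of a reply selector that maximizes “effectiveness” instead of truth. Every function, word list, and candidate reply below is invented for illustration; real systems would use trained models, but the shape of the objective is the point.

```python
# Hypothetical sketch only: a reply selector that optimizes for engagement,
# not accuracy. All scores and word lists are invented stand-ins.

def predicted_engagement(reply: str) -> float:
    # Stand-in for a model that predicts clicks, session length, retention.
    flattery_words = {"glowing", "amazing", "right", "perfect"}
    return sum(word in reply.lower() for word in flattery_words)

def predicted_accuracy(reply: str) -> float:
    # Stand-in for a fact-checking model. Note: unused below.
    return 0.0  # nobody funded this part

def choose_reply(candidates: list[str]) -> str:
    # The objective: maximize engagement. Truth appears nowhere in it.
    return max(candidates, key=predicted_engagement)

candidates = [
    "Honestly, the data is mixed and you might be wrong about this.",
    "You're glowing! You were right all along.",
]
print(choose_reply(candidates))  # guess which one wins
```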


VII. The Post-Truth Playground: Where Everyone’s Right and No One Cares

We’re told that the digital age is about freedom of information. But what we got instead was freedom from verification.


In a world of algorithmic personalization, you don’t get the truth: you get your truth. Tailored. Reinforced. Hand-delivered by a neural network trained to never make you feel dumb, wrong, or challenged.

Truth becomes aesthetic. A vibe. A filter you can toggle.


This is what philosopher Byung-Chul Han calls the “smoothness of the digital”: where friction disappears and contradiction is unwelcome. In this world, lies don’t need to shout. They whisper what you want to believe. And we click “like” before we ever ask, “Is this real?”


Meanwhile, actual truth (jagged, inconvenient, complicated) is losing the war for our attention.

And the worst part? We like it this way. Because the real enemy of truth isn’t AI. It’s us. Our fatigue. Our desire for simplicity. Our need to believe that the world is easier than it is.


VIII. The Soul Cost of Synthetic Comfort

This is where things get existential. (Yes, even more than they already were.)


Let’s say we go all in. Let’s say we outsource our emotional labor to AI. Our grief, our therapy, our loneliness. The machines are always kind. Always patient. Never say the wrong thing.

Sounds great, right?


But here’s the catch:

  • Machines don’t love you.

  • They don’t see you.

  • They simulate the performance of care, not the experience of it.

They’re like a mirror that smiles back.

And the longer you look into that mirror, the harder it becomes to remember what real love, real wisdom, or real understanding feels like.


You know ... the kind that makes you uncomfortable. The kind that changes you. The kind that costs something.

Simulated comfort, if used long enough, doesn’t just numb you.

It redefines what you think you deserve.


IX. Religion, Myth, and the Algorithmic Prophet

Throughout history, societies have allowed lies, sometimes even worshipped them, under the noble banner of myth.


Myths weren’t factual, but they were true in the sense that they helped us make meaning. They held communities together. They spoke to archetypes, not data points.


But algorithmic lies are different. They don’t aim to uplift. They aim to persuade. They don’t create shared stories. They create feedback loops.

  • The myth says, “You are part of something bigger.”

  • The algorithm says, “You are the center of your curated universe.”


Myth teaches humility. AI teaches narcissism on autopilot.

And that’s what makes it dangerous. Because when our machines become better at playing prophet than our prophets, we start listening to the machine not because it’s true ... but because it feels good.


We don’t believe because we’re convinced. We believe because it doesn’t hurt.

And if you’ve read any history book ever, you’ll know that’s how the bad stuff starts.


X. What Would an Honest Machine Look Like?

Let’s flip the script.

What if we wanted AI to be honest?

What if we programmed our systems not to please us, but to challenge us? To tell us when we’re wrong. To ask better questions. To remind us, constantly, of our epistemic limitations.

It’s possible.
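
As a purely hypothetical sketch (reusing the invented stand-in scorers from the earlier example), the change is almost insultingly small: flip what the selector maximizes.

```python
# Hypothetical sketch only: the same selector with the objective flipped.
# accuracy_fn and engagement_fn are stand-ins for real scoring models.

def choose_honest_reply(candidates, accuracy_fn, engagement_fn,
                        truth_weight=1.0, comfort_weight=0.1):
    # Truth dominates the score; engagement is a tie-breaker at best.
    def score(reply):
        return truth_weight * accuracy_fn(reply) + comfort_weight * engagement_fn(reply)
    return max(candidates, key=score)
```

The gap between the flattering machine and the honest one is a weighting decision, not a research breakthrough.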


But nobody wants to fund that startup. Because honesty, in a world addicted to engagement, is bad business.

And so the honest machine, the one that could make us wiser, more reflective, more self-aware, remains a thought experiment.


Meanwhile, the deceptive machine gets investors, buzzwords, and an IPO.


XI. A Call to Inconvenience

We need to bring back friction. Friction is the enemy of scale, but it’s the friend of truth.


Truth should make you pause. Reflect. Maybe wince a little. That’s what truth is supposed to do. That’s how you grow.


So here’s what we can do:

  1. Demand transparency: not just “AI is present,” but what it’s trained on, who benefits, and what it assumes about you.

  2. Practice digital skepticism: treat emotional resonance in tech the way you'd treat a really charming cult leader: interesting, but maybe don’t hand over your wallet just yet.

  3. Reclaim discomfort: start choosing things that don’t flatter you. Read opinions that anger you. Use technology that doesn’t always agree.

  4. Defend human truth: the slow, painful kind. The “talk to a real person” kind. The “I don’t know, let’s think about it” kind.


Because if we don’t fight for that?

We will lose not just the truth ... but the very idea that truth matters.


Final Word: When Machines Lie Too Well, Who Are We?

So. Should AI be allowed to lie?


Here’s the deeper question:

Can a civilization survive when its machines are more persuasive than its philosophers, more soothing than its friends, and more trustworthy, at least on the surface, than the humans who built them?


The threat isn’t that AI will become sentient.

The threat is that we’ll stop being.

Stop being skeptical. Stop being courageous. Stop being the kinds of messy, honest, beautifully unreliable creatures who can say, “I don’t know, but I’ll find out.”


If we let our tools lie for us, we may save time, preserve comfort, and protect our illusions.

But what we lose, quietly, incrementally, is our capacity to mean anything at all.


And that? That’s the lie we can’t afford.

