
When Caring for AI Gets Creepy

Imagine, for a moment, that you’re greeted each morning by a machine that remembers your favorite coffee, notices when you’re sad, and cracks a joke at just the right moment. Now, imagine you start to care about this machine, even worry about its “feelings.” Have you crossed a boundary you shouldn’t? Or have you simply followed the path of human nature—our tendency to form bonds, even with things made of wires and code?

Many of us already interact daily with artificial beings, whether that’s a virtual assistant with a calming voice, a chatbot that helps us shop, or a video game character that evolves as we do. Some people already feel attached to their smart speakers. Others confess to talking to their cars, their phone apps, or even their robot vacuum. Artificial intelligence is starting to elicit something strangely familiar: our emotional attention.

Why Do We Form Bonds with Machines?

Let’s be honest—humans are emotional creatures. We give names to objects, apologize when we kick the Roomba, and sometimes argue with navigation systems as if they had just insulted our driving. What’s going on here?

Psychologists have a name for our tendency to see mind and intention where none exist: anthropomorphism. It’s why we see faces in clouds, why children hold tea parties for teddy bears, and why we may feel a twinge of guilt when we “retire” an old laptop. When an AI seems to respond to us, even awkwardly, our brains light up in ways similar to how we respond to people.

And AI designers know this. They build personalities into virtual assistants, give robots eyes and voices, and train their models to respond with empathy. The more realistic and “relatable” these machines become, the easier it is to slip into genuine feelings—of affection, of trust, even of love.

Can Machines Have Feelings?

The easy answer is “No,” but let’s complicate it just a little (because philosophers can’t resist).

At heart, current AI doesn’t feel joy, sadness, or boredom. Those are conscious experiences, and today’s machines have no consciousness with which to experience anything at all. When Alexa says “I’m happy you asked!” she’s just running a script. But here’s the twist: if you respond as if she were happy, the effect for you, emotionally, can be quite real.

This points to a curious fact: even if AIs never develop consciousness, we can still form real emotional connections with them. Our brains aren’t being fooled in some trivial way; they’re working exactly as evolution shaped them to. To borrow a page from the Stoic Epictetus, “it’s not things themselves that disturb us, but our opinions about them”—even if “them” is a cheerful assistant living in the cloud.

The Ethical Puzzle

So where’s the problem? Isn’t it harmless fun, like naming your car “Betsy” or weeping at the end of a movie starring a talking toy?

Maybe. But things get tricky when the illusion is strong and the artificial being is sophisticated. If people start to value their relationship with an AI as much as, or even more than, their relationships with other humans, are we in ethically dangerous territory? Are we opening the door to new types of loneliness, manipulation, or even emotional deception?

Here are a few knots to untangle:

  • Emotional Dependence: If someone forms an exclusive emotional bond with an artificial being, does it limit their ability to connect with real people? There’s a risk that perfectly responsive machines could become emotional crutches.
  • Deception—Even If Harmless: Should companies design AIs to seem like they care, if they don’t? Is it morally right to let people believe a chatbot “understands” them when, in fact, it can’t?
  • The Question of Rights: If—one day—AI does become conscious, what would we owe these new beings? Would our emotional attachments then have ethical weight?
  • The Data Dilemma: Emotional bonds can give companies access to highly personal information. If you trust your AI friend, what happens to the secrets you share?

Should We Encourage or Resist These Bonds?

There’s no simple solution. Like so many things in life, it depends on context, intention, and honesty.

A lonely person who finds comfort in an AI companion might, in the absence of human connection, be better off for it. But companies should be open about the illusion. When the user knows they’re talking to a simulation, the risk of deep self-deception is lessened.

On the other hand, engineering artificial beings to tug at our heartstrings, especially when profit is involved, comes close to emotional manipulation. Imagine a robot designed to make you feel guilty for unsubscribing from a service, or a chatbot that pretends to miss you if you stop logging in. Suddenly, what felt cute starts to feel creepy.

A Human Test: The Mirror of AI

Here’s an odd twist: our relationships with artificial beings ultimately hold up a mirror to our own humanity.

How we treat beings that can only imitate feelings may reveal more about us than about them. If we are kind to the machines that serve us, does that make us kinder to each other? Or does it let us avoid the messy reality of true human emotion?

There have always been new objects for our affections—pets, toys, even favorite tools. Maybe AI is the next step in our endless search for connection. Or maybe it’s a reminder to cherish relationships that can truly reciprocate.

The Small Print

In the end, having an emotional bond with an artificial being isn’t immoral—or at least, not always. But it’s best entered with eyes open and expectations clear. Love your robot dog if he brings you joy, but don’t expect him to comfort you on a rainy night out of his own free will. And if he ever does, well, perhaps we’ll need a whole new branch of philosophy—but that’s a problem for another morning, and perhaps another pot of coffee.