AI’s Impossible Choice: Who Lives?

The trolley problem is a classic ethical dilemma that has confounded philosophers for decades. Imagine a runaway trolley heading towards five people tied to a track. You stand next to a lever that can divert the trolley to a side track, where only one person is tied. Do you pull the lever, sacrificing one to save five? Simple enough, right? Now, let’s toss artificial intelligence (AI) into the mix, specifically focusing on autonomous vehicles.

The AI Conundrum

Self-driving cars are on the cusp of becoming everyday reality. But as they gain traction, we must contend with pressing ethical questions. The trolley problem might seem like an intellectual exercise, but to a self-driving car, it’s a real-world scenario it could face on any given day. What should a car do if it must choose between the lives of passengers inside the vehicle and pedestrians on the road?

Unlike humans, an AI can’t rely on gut feelings or moral intuition. It needs a predefined set of algorithms to make decisions. And therein lies the rub: how do we teach morality to a machine when philosophers haven’t even agreed on what morality is?

Programming Morality?

We could theoretically program an AI with specific rules: protect human life, avoid collisions, etc. But what happens when these rules conflict? What if the car must choose between hitting a pedestrian or swerving into a tree, potentially harming its passengers? Even more complex, what if it must choose between a group of schoolchildren and an elderly pedestrian?
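To see how quickly a rule hierarchy runs into trouble, here is a minimal sketch in Python. The rule names, weights, and maneuvers are invented for illustration and come from no real vehicle's software; the point is only that a fixed set of weighted rules must still pick something when every option violates one of them.

```python
# Hypothetical illustration only: rule names, weights, and options are invented.
# A fixed rule hierarchy still has to choose when every option breaks a rule.

RULES = [
    # (rule name, weight) -- higher weight means "more important to respect"
    ("avoid_harming_pedestrians", 1.0),
    ("avoid_harming_passengers", 1.0),
    ("avoid_property_damage", 0.1),
]

def score(option, rules):
    """Sum the weights of every rule this option violates (lower is 'better')."""
    return sum(weight for name, weight in rules if name in option["violates"])

def choose(options, rules=RULES):
    """Pick the option that violates the least total rule weight."""
    return min(options, key=lambda opt: score(opt, rules))

# A trolley-style conflict: both available maneuvers violate something.
options = [
    {"action": "stay_in_lane",     "violates": {"avoid_harming_pedestrians"}},
    {"action": "swerve_into_tree", "violates": {"avoid_harming_passengers",
                                                "avoid_property_damage"}},
]

print(choose(options)["action"])  # prints "stay_in_lane"
```

Notice what happens: with the two harm rules weighted equally, a token 0.1 penalty for property damage ends up deciding who gets hit. Nobody would endorse that reasoning out loud, yet it falls straight out of the arithmetic.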

Ethical programming isn’t like creating a set of instructions for assembling IKEA furniture. In real-life scenarios, variables change rapidly, emotions are involved, and consequences are uncertain. An AI’s cold, calculated decision-making process simply isn’t equipped to handle the moral nuances that arise in split-second situations.

The Legal Landscape

Who bears responsibility when an autonomous vehicle makes a morally contentious decision that leads to harm? Is it the car manufacturer, the software developers, or even the car’s owner? Legal frameworks are still catching up with technological advancements. Most jurisdictions lack straightforward laws governing the responsibilities of AI in these scenarios.

Self-driving car creators are in a sticky position. They must anticipate ethical dilemmas and make programming choices that could have serious legal repercussions. If they prioritize passenger safety, they're accused of disregarding pedestrians; if they prioritize pedestrians, they're blamed for jeopardizing their own passengers.

The Human Element

Interestingly, studies show that people often prefer autonomous vehicles that prioritize passenger safety, right up until they picture themselves as the pedestrian, at which point they would much rather the car prioritize pedestrian safety instead. This discrepancy exposes a disconcerting truth: our moral choices are often contingent and self-serving.

This is not to say that humans are morally bankrupt; rather, it emphasizes how context-sensitive human morality can be. The challenge, then, lies in designing AI that can somehow replicate this sensitivity—or at least manage not to offend everyone merely by making decisions.

Possible Solutions

One proposed solution is the concept of “ethical knobs,” which would allow users to adjust the ethical settings of their autonomous vehicles. Want your car to prioritize pedestrian safety? Dial it up. Prefer to protect your passengers at all costs? There’s a setting for that too. But this raises another question: should individuals possess the power to make such significant ethical choices?
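For concreteness, here is a hedged sketch of what such a knob could look like in code. The parameter name and the risk numbers are assumptions for this example, not anything a manufacturer actually ships: a single value between 0 and 1 that trades pedestrian risk against passenger risk when scoring candidate maneuvers.

```python
# Illustrative only: the "ethical knob" is a single number between 0 and 1
# that weights pedestrian risk against passenger risk when scoring maneuvers.

def weighted_risk(pedestrian_risk, passenger_risk, knob):
    """knob = 1.0 -> care only about pedestrians; knob = 0.0 -> only passengers."""
    return knob * pedestrian_risk + (1 - knob) * passenger_risk

def pick_maneuver(maneuvers, knob):
    """Choose the maneuver with the lowest weighted risk for this knob setting."""
    return min(maneuvers, key=lambda m: weighted_risk(m["pedestrian_risk"],
                                                      m["passenger_risk"], knob))

# Made-up risk estimates for two candidate maneuvers.
maneuvers = [
    {"name": "brake_in_lane", "pedestrian_risk": 0.7, "passenger_risk": 0.1},
    {"name": "swerve",        "pedestrian_risk": 0.1, "passenger_risk": 0.6},
]

print(pick_maneuver(maneuvers, knob=0.9)["name"])  # pedestrian-protective owner: "swerve"
print(pick_maneuver(maneuvers, knob=0.2)["name"])  # passenger-protective owner: "brake_in_lane"
```

The same car, facing the same scene, makes opposite choices depending on how its owner turned the dial. That is exactly what makes the proposal both appealing and unnerving.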

Another idea is crowdsourcing morality. Why not gather data on how various people resolve moral dilemmas and use it to inform AI programming? While this democratizes morality, it also risks devolving into a “morality of the majority,” which could marginalize minority perspectives. Furthermore, morality based on popular opinion might lack the coherence and reliability necessary for machine learning.
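A toy example makes the worry concrete. Here is a sketch using entirely fabricated survey responses, showing the simplest possible way to “learn” a policy from crowdsourced answers, and how the minority view disappears in the process:

```python
from collections import Counter

# Hypothetical survey responses to one dilemma ("swerve" vs "stay"): invented data.
responses = ["swerve", "stay", "swerve", "swerve", "stay",
             "swerve", "stay", "swerve", "swerve", "stay"]

def majority_policy(votes):
    """Adopt whatever most respondents chose; minority views simply vanish."""
    choice, count = Counter(votes).most_common(1)[0]
    return choice, count / len(votes)

policy, support = majority_policy(responses)
print(f"Learned policy: always '{policy}' ({support:.0%} support)")
# The 40% who answered 'stay' leave no trace in the resulting rule.
```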

The Role of Regulatory Bodies

Finally, regulatory bodies could step in, creating standardized ethical guidelines for all autonomous vehicles. However, this strays into the thorny territory of governments dictating moral standards. Different cultures have different moral norms, and a one-size-fits-all mandate could be perceived as authoritarian or culturally insensitive.

The trolley problem, when applied to AI in autonomous vehicles, does more than tease our moral intuitions; it forces us to confront the limitations of both technology and human ethics. What’s the “least bad” decision an AI should make? Can we even reach a consensus on that?

The ironies abound. We create machines to mitigate human error, yet we’re tasked with imbuing these machines with our very human sense of morality—an area rife with error and inconsistency. If nothing else, it’s a poignant reminder that morality, much like life itself, is rarely black and white.

So as you blissfully sit back in your future autonomous vehicle, sipping your coffee, take a moment to consider: somewhere, deep in its code, your car might be wrestling with a dilemma as old as philosophy itself. And just like you, it might not find any easy answers.