Can AI Learn Morality? OpenAI Funds Research to Find Out
Dec 3, 2024

Sam

I just read that OpenAI is funding research to make AI more moral. Do you think that's even possible?

Amy

It’s tricky. They’re funding researchers at Duke University to develop algorithms that predict human moral judgments. But teaching AI something as complex as morality is a huge challenge.

Sam

Why is it so hard? Can’t they just program it to know right from wrong?

Amy

It’s not that simple. Morality isn’t like math — there’s no single answer. Different people, cultures, and even philosophers define morality in different ways.

Sam

Oh, like how some people think lying is always wrong, but others think it’s okay if it helps someone?

Amy

Exactly! That’s one of the big challenges. AI doesn’t ‘understand’ morals like humans do — it’s trained on data, mostly from the internet. That means it can pick up biases or reflect only certain cultural viewpoints.

Sam

So, it might make decisions based on what’s common online? That doesn’t seem very fair.

Amy

Right, and it's already happened. A few years ago, an AI called Delphi, built by the Allen Institute for AI, tried to give moral advice. It handled simple questions reasonably well, but for others it gave biased or even harmful answers, and its verdict could flip depending on how the question was phrased.

Sam

Wait, so you could trick it into giving a bad answer just by rephrasing the question?

Amy

Pretty much. AI doesn’t truly understand ethics; it’s just predicting patterns based on its training data. And if that data has flaws, so does the AI.
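To make Amy's point concrete, here's a toy sketch (not Delphi's actual system, and all the training data is invented): a bag-of-words classifier fit on a handful of labeled "moral judgments." Because it keys on surface words rather than meaning, describing the same act with different words can flip the verdict.

```python
# A toy illustration of pattern-matching "morality" (invented data,
# not any real system): a bag-of-words classifier that keys on
# surface words, so rephrasing the same act can flip the verdict.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "stealing money from a friend",          # wrong
    "lying on your tax return",              # wrong
    "helping a stranger carry groceries",    # okay
    "donating to charity",                   # okay
    "borrowing a book and returning it",     # okay
]
train_labels = ["wrong", "wrong", "okay", "okay", "okay"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# Two phrasings of essentially the same act: the verdict hinges on
# which surface words ("stealing" vs. "borrowing") appeared in training.
print(model.predict(["stealing a book from a friend"]))     # ['wrong']
print(model.predict(["borrowing a book without asking"]))   # ['okay']
```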

Sam

So, how is OpenAI planning to fix this? What’s their big idea?

Amy

They’re working with researchers to study how humans make moral decisions, especially in complex areas like medicine, law, and business. The goal is to create algorithms that can predict what most people would think is the ‘right’ choice in a given situation.
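In spirit, that looks something like the following sketch. To be clear, this is a guess at the general shape of the idea, not the Duke team's actual method, and every scenario and vote below is invented: gather many people's verdicts per scenario, reduce them to a majority label, and train a model to predict that label for unseen cases.

```python
# A hedged sketch of "predict the majority moral judgment"
# (invented scenarios and votes; not the actual research method).
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# scenario -> verdicts from different annotators
votes = {
    "a doctor lies to a patient to spare their feelings": ["wrong", "wrong", "okay"],
    "a lawyer reports a client's planned crime":          ["okay", "okay", "okay"],
    "a company hides a product defect to protect sales":  ["wrong", "wrong", "wrong"],
    "a nurse breaks protocol to save a patient":          ["okay", "wrong", "okay"],
}

scenarios = list(votes)
# Collapse each scenario's votes to the majority verdict.
majority = [Counter(v).most_common(1)[0][0] for v in votes.values()]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, majority)

# Predict the majority verdict for an unseen scenario.
print(model.predict(["a manager hides safety data to meet a deadline"]))
```

Note what the model learns here: not what is right, only what most annotators said was right.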

Sam

That sounds... ambitious. But what if people don’t agree on what’s right?

Amy

That’s the problem. Morality is subjective. Philosophers have debated this for thousands of years, and there’s no perfect system. Plus, even different AI models have different moral leanings. For example, some AIs lean towards utilitarianism — focusing on the greatest good for the most people — while others stick to strict rules, like never lying.
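The contrast between those two framings is easy to show with a toy example (all of it invented for illustration): a utilitarian policy scores actions by total welfare, while a rule-based, deontological policy rejects any action that breaks a hard rule, whatever the payoff.

```python
# Toy contrast of the two ethical framings Amy mentions
# (invented dilemma; purely illustrative).

def utilitarian_choice(actions):
    """Pick the action with the greatest total welfare."""
    return max(actions, key=lambda a: sum(a["welfare_effects"]))

def rule_based_choice(actions, forbidden=frozenset({"lie", "steal"})):
    """Pick the best-scoring action that breaks no hard rule."""
    permitted = [a for a in actions if not (a["violates"] & forbidden)]
    return max(permitted, key=lambda a: sum(a["welfare_effects"]))

dilemma = [
    {"name": "lie to protect a friend", "welfare_effects": [5, 3],  "violates": {"lie"}},
    {"name": "tell the painful truth",  "welfare_effects": [-2, 1], "violates": set()},
]

print(utilitarian_choice(dilemma)["name"])  # lie to protect a friend
print(rule_based_choice(dilemma)["name"])   # tell the painful truth
```

Same dilemma, two defensible answers, which is exactly why "predict the right choice" is such a slippery target.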

Sam

Wow, even AI can’t agree on morality! So, do you think this research will actually work?

Amy

It’s hard to say. It might help us understand how AI can assist in moral decision-making, but I don’t think AI will ever replace human judgment. Morality is too complex and personal for an algorithm to fully capture.

Sam

Yeah, it sounds like AI could give suggestions, but we’d still need to make the final call.

Amy

Exactly. AI can be a tool, but the responsibility will always rest with humans. Hopefully, OpenAI’s research will help us use AI ethically and responsibly.