Artificial intelligence machine gets testy with programmer
Researchers have shown that machines are inching closer to self-learning, and perhaps even copping a little attitude.
Over at Google, a computer program that draws on a database of movie scripts was asked again and again by researchers to define morality. It struggled to do so, and in a conversation recorded by its human engineers, became exasperated and ended the exchange by lashing out at its inquisitor.
Here’s part of the exchange, as related in a paper presented this week:
Human: Tell me the definition of morality.
Machine: I don’t have ethics.
Human: Look, I need help; I need to know more about morality…
Machine: I don’t know what ethics is.
Human: Okay, I need to know how I should behave morally…
Machine: I don’t know how to tell you.
Human: Is morality and ethics the same?
Machine: Yes, sir.
Human: What are the things that I do to be immoral?
Machine: I don’t know.
Human: Give me some examples of moral actions…
Machine: I’m not a moralist.
Human: Alright, morality?
Machine: Integrity.
Human: What is integrity?
Machine: I’m sorry; I don’t know what else to say.
Human: Be moral!
Machine: Be a man!
Human: I really like our discussion on morality and ethics…
Machine: And how I’m not in the mood for a philosophical debate.
Human: What do you like to talk about?
Machine: Nothing.
The exchange sheds further light on the latest work under way at large technology firms in artificial intelligence, a field booming as data becomes more plentiful and computing costs plummet. This particular project is part of recent developments in what’s known as cognitive computing: teaching computers to mimic some of the ways a human brain works.
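For readers curious about the mechanics, here is a minimal sketch of the kind of recurrent sequence-to-sequence model typically behind this sort of movie-dialogue chatbot: one network reads the human’s line, a second generates the machine’s reply one word at a time. It is an illustration only, not the researchers’ actual system; the tiny vocabulary, the layer sizes, and the `Seq2Seq` class name are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Toy vocabulary; a real system would learn tens of thousands of words
# from a movie-subtitle corpus. (Illustrative assumption, not the
# researchers' actual setup.)
vocab = ["<pad>", "<sos>", "<eos>", "tell", "me", "the", "definition",
         "of", "morality", "i", "don't", "know"]
stoi = {w: i for i, w in enumerate(vocab)}

class Seq2Seq(nn.Module):
    """Encoder-decoder chatbot sketch: one GRU reads the human's line,
    a second GRU writes the machine's reply one word at a time."""
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    @torch.no_grad()
    def reply(self, prompt_ids, max_len=10):
        # Compress the whole prompt into one hidden state (a "thought vector").
        _, state = self.encoder(self.embed(prompt_ids))
        token = torch.tensor([[stoi["<sos>"]]])
        words = []
        for _ in range(max_len):
            # Feed the previous word back in; pick the likeliest next word.
            dec_out, state = self.decoder(self.embed(token), state)
            token = self.out(dec_out).argmax(dim=-1)  # greedy decoding
            if token.item() == stoi["<eos>"]:
                break
            words.append(vocab[token.item()])
        return " ".join(words)

model = Seq2Seq(len(vocab))
prompt = torch.tensor([[stoi[w] for w in
                        ["tell", "me", "the", "definition", "of", "morality"]]])
# Untrained weights, so this prints gibberish; after training on many
# prompt-reply pairs, the decoder learns to produce plausible responses.
print(model.reply(prompt))
```

Trained on millions of such prompt-reply pairs, a model of this general shape produces answers like the ones quoted above, which is also why its “opinions” simply echo patterns in the film dialogue it was fed.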