Are robots destined to be EVIL? Can we program machines to know right from wrong?

‘Lethal autonomous weapons promise to revolutionise warfare - and raise a multitude of ethical and legal questions,’ the researchers wrote.

WHAT IS THE TROLLEY PROBLEM?
Although there are several versions, the best-known form of the problem involves a runaway trolley heading towards a group of five people, whom it will fatally injure if it is allowed to keep running.

A person standing by a switch has the option to divert the trolley onto another track, where it will hit just a single person.
Which course of action is right is a moral dilemma, and because there is no absolute, correct solution, it poses a problem for future artificial intelligence.
Namely, if a machine cannot work out the right thing to do, it may simply choose, seemingly at random.
While it has been suggested that robots could be employed with a ‘moral compass’, such as the Geneva Conventions, the researchers said this will not be sufficient.
This is due to something known as the halting problem, which states that it is not possible, in general, to tell whether an arbitrary computer program will eventually finish running or continue forever.
Michael Byrne, writing for Vice, explained how this relates to artificial intelligence: ‘algorithms do unexpected things; software has bugs.’
He continued: ‘An algorithm programmed to do the right thing might do the wrong thing.’
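A minimal Python sketch illustrates the point (the `bounded_halts` helper and the two toy programs below are illustrative examples, not from the paper): the best a checker can do is simulate a program for a fixed number of steps, and if the program has not finished by then, the honest answer is ‘unknown’ - it could halt later, or never.

```python
def bounded_halts(program, max_steps):
    """Return True if the program halts within max_steps, else None (unknown)."""
    gen = program()  # the toy 'program' is a generator; each yield is one step
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True  # the program finished within the step budget
    return None  # could halt later, or run forever - we cannot tell which

def short_program():
    yield
    yield  # halts after two steps

def looping_program():
    while True:
        yield  # never halts

print(bounded_halts(short_program, 10))    # True
print(bounded_halts(looping_program, 10))  # None (unknown)
```

No matter how large the step budget, the checker can never turn that ‘unknown’ into a definite ‘runs forever’ - which is why the researchers argue a pre-programmed moral compass cannot guarantee correct behaviour in every case.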
In the paper, the researchers give the example of the ‘trolley problem’.
Because there is no principled way to resolve such dilemmas, artificial intelligence will always have an air of unpredictability to it, no matter how it is programmed.
And even with the strictest and safest level of morality, there are some scenarios it simply can't handle.
The scenario is reminiscent of the film I, Robot, based on Isaac Asimov's stories, in which a robot goes rogue and is accused of killing Dr. Alfred Lanning, the co-founder of the story's US Robotics (USR).
To deal with the potential problem, the researchers list a number of suggestions that all future robots should be programmed with.
One is that no robot should be designed with the sole or primary task of killing or harming humans.
The manufacturer of a robot should also be held accountable for any actions it takes, ‘to comply with existing laws and fundamental rights and freedoms.’
Ultimately, though, future AI may require even more checks and balances - or self-imposed limitations - to prevent robots having to deal with moral dilemmas.

