

Are robots destined to be EVIL? Moral uncertainty means experts will never be able to program machines to know right from wrong

  • Researchers in Germany say it is impossible for robots to ever have morals
  • In a study, they claim the famous 'trolley problem' shows such behaviour cannot be programmed
  • This thought experiment involves using a switch to save the lives of many at the expense of a few
  • But there is no 'correct' answer - so robots will always be unpredictable
  • Follows warnings from prominent scientists that AI is a threat to humanity

The likes of Stephen Hawking and Elon Musk have warned about the potential threats posed by artificial intelligence.

And now a group of researchers has fuelled this debate, saying no matter what we teach a robot, it will never know right from wrong.

Ultimately they conclude robots can't determine the 'right' thing to do in certain situations, and instead will be unpredictable in their responses to uncomputable moral dilemmas.

Researchers at Darmstadt University of Technology in Germany claim robots (illustration shown) can never be moral. In a study, they said the famous moral dilemma called the 'trolley problem' prevents such action being programmed. As there is no 'correct' answer, robots will always be unpredictable, they warn

In their paper ‘Logical limitations to machine ethics with consequences to lethal autonomous weapons,’ the group from Darmstadt University of Technology in Germany discussed the impending use of autonomous weapons.

‘Lethal autonomous weapons promise to revolutionise warfare - and raise a multitude of ethical and legal questions,’ they wrote.

While it has been suggested that robots could be employed with a ‘moral compass’, such as the Geneva Conventions, the researchers said this will not be sufficient.

This is due to something known as the halting problem, which states that it is impossible, in general, to tell whether an arbitrary computer program will eventually finish running or run forever.
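
For readers who want to see why the halting problem bites, the classic argument can be sketched in a few lines of Python. This is an illustrative sketch only; the function names halts and paradox are hypothetical and do not appear in the researchers' paper.

    # Hypothetical sketch of the halting problem argument (not from the paper).
    # Suppose a perfect checker existed that could decide, for any program and
    # input, whether running it would ever finish.

    def halts(program, argument):
        """Imaginary oracle: True if program(argument) eventually stops."""
        ...  # no general-purpose implementation of this can exist

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about a program
        # examining its own source.
        if halts(program, program):
            while True:   # loop forever if the oracle says "it halts"
                pass
        else:
            return        # stop immediately if the oracle says "it loops"

    # Whatever halts(paradox, paradox) answers, paradox(paradox) does the
    # opposite, so no universal halts checker can be written. The researchers
    # argue that a universal "is this action moral?" checker hits the same wall.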

Michael Byrne, writing for Vice, explained how this relates to artificial intelligence: ‘algorithms do unexpected things; software has bugs.’

He continued: ‘An algorithm programmed to do the right thing might do the wrong thing.’

In the paper, the researchers give the example of the ‘trolley problem’.

Although there are several versions, one aspect of the problem involves a trolley running towards a group of five people, whom it will fatally injure if it is allowed to keep running. 

 

Will humans and robots ever be able to co-exist? Teaching a robot right from wrong might turn out to be more difficult than we think, even if we were to program them with sets of pre-determined ethical instructions such as the Geneva Convention. Shown is humanoid robot Roboy shaking hands in June 2013

Recently Elon Musk and Stephen Hawking (shown) have both warned of the rise of artificial intelligence. The latter said that humanity faces an uncertain future as technology learns to think for itself and adapt to its environment. 'The development of full artificial intelligence could spell the end of the human race,' he said.

A person standing near a switch is given the option to divert the trolley towards just a single person.

The right course of action in this scenario is a moral dilemma, and because there is no absolute, correct solution, it poses a problem for future artificial intelligence.

Namely, if a machine cannot work out the right thing to do, it will simply choose, seemingly at random.

This means that artificial intelligence will always have an air of unpredictability to it, no matter how it is programmed.

And even with the strictest and safest level of morality, there are some scenarios it simply can't handle. 
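
To make that concrete, here is a toy sketch of a decision routine facing the trolley dilemma with no principled way to rank the two outcomes. It is hypothetical and not taken from the paper; the names OUTCOMES, moral_score and decide are illustrative only.

    import random

    # Toy illustration only: the two trolley-problem outcomes, with no agreed
    # moral score available to rank them.
    OUTCOMES = {
        "do_nothing":  {"deaths": 5},  # trolley continues towards the five
        "pull_switch": {"deaths": 1},  # trolley is diverted towards the one
    }

    def moral_score(outcome):
        # There is no universally 'correct' function to put here: utilitarian
        # counting favours pulling the switch, while a strict "never kill"
        # rule forbids it. That absence of a settled answer is the paper's point.
        return None

    def decide():
        scores = {name: moral_score(data) for name, data in OUTCOMES.items()}
        if all(score is None for score in scores.values()):
            # With no defensible ranking, the machine's choice is effectively
            # arbitrary, which is the unpredictability the authors warn about.
            return random.choice(list(OUTCOMES))
        return max(scores, key=scores.get)

    print(decide())  # sometimes 'do_nothing', sometimes 'pull_switch'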

The scenario is reminiscent of Isaac Asimov's I, Robot, in which a robot goes rogue and is accused of killing Dr Alfred Lanning, the co-founder of the story's US Robotics (USR).

To deal with the potential problem, the researchers list a number of suggestions for how all future robots should be programmed.

One is that no robot should be designed with the sole or primary task of killing or harming humans.

The manufacturer of a robot should also be held accountable for any actions it takes, ‘to comply with existing laws and fundamental rights and freedoms.’

Ultimately, though, future AI may require even more checks and balances - or self-imposed limitations - to prevent robots having to deal with moral dilemmas.

The scenario is reminiscent of Isaac Asimov's I, Robot (film adaptation starring Bridget Moynahan, left, and Will Smith, right), in which a robot (centre) goes rogue and is accused of killing Dr Alfred Lanning, the co-founder of the story's US Robotics (USR)