
AI and weapons might not be the smartest move

China Daily | Updated: 2018-06-12 07:24
The Google logo is seen at the Young Entrepreneurs fair in Paris, France, February 7, 2018. [Photo/VCG]

ON THURSDAY, Google released its manifesto of principles guiding its work on artificial intelligence, stating that it will not support the use of AI in weapons. Thepaper.cn commented on Monday:

Google released the manifesto under pressure from its staff and a public outcry, after it was reported that the company had signed a contract with the US military to provide it with its TensorFlow machine learning interfaces.

That sparked criticism both within and outside Google, with many people worried that the technology might be used to threaten human lives. Reports say about 4,000 staff members signed letters opposing the contract.

Google's release of its AI principles may have pacified the public. However, it raises the question: How do we prevent AI from posing a threat to humans? Science fiction writers have long asked this question in their works, and many have expressed worries about robots killing people.

Reports in 2012 that US troops were using intelligent unmanned aerial vehicles on the battlefield deepened those worries. Some said that once a UAV has intelligence, a machine has the power to decide to kill a human.

Many have offered solutions to this problem. In his collection I, Robot, Isaac Asimov proposed the Three Laws of Robotics, which were meant to ensure that intelligent machines could not harm humans.

However, such principles rely on the AI itself, not on humans, to uphold them. A more effective way is to prevent robots and AI from controlling weapons in the first place, so that they never get the chance to kill humans.

As deep learning algorithms progress, AI will become increasingly independent of humans. If AI systems are given control over weapons, the day may come when they decide to kill humans. It is better to prevent that from the very beginning by strictly barring AI from controlling weapons.
