Google says it will no longer permit its artificial intelligence, or AI, technology to be used in any activities involving weapons.
The company’s CEO, Sundar Pichai, announced the decision in an internet post. He wrote that the new policy was one of several newly launched “principles” aimed at guiding the company’s AI work in the future.
Google says it will no longer design or launch AI for weapons or other technologies whose main purpose is to cause harm to people.
“We believe these principles are the right foundation for our company and the future development of AI,” Pichai wrote.
The principles were announced after more than 4,000 Google employees signed a document calling for the company to cancel an AI agreement with the U.S. Department of Defense. That agreement, known as Project Maven, involves the use of Google’s AI technology to examine drone images for the U.S. military.
Kirk Hanson is the director of the Markkula Center for Applied Ethics at Santa Clara University in California. He told VOA the opposition by Google employees to the U.S. military agreement was based on fears that AI technology could lead to the creation of “autonomous weapons.”
Hanson said other companies could also face pressure from employees or the public if their AI technology is used to develop autonomous weapons. Just as with driverless vehicles, autonomous weapon systems may not be as safe as their supporters promise.