WASHINGTON - The Pentagon is developing a secret program in which the United States (US) military will use artificial intelligence (AI) to predict and detect enemy missile launches. The program, still in the study stage, is a sign of increased American interest in AI.
A Pentagon source revealed the program to Reuters. According to the source, several programs are in progress, all intended to apply artificial intelligence to anticipating and warning of enemy missile launches.
Computer systems will sift through large amounts of data, such as drone footage or satellite imagery, much faster and more accurately than humans can. In a pilot program focused on North Korea, AI is used to locate and track mobile missiles that can be hidden in tunnels, forests, and caves. The AI then assesses whether the activity is a direct threat and, if so, warns commanders.
As soon as signs of a missile launch are detected, the US government would have time to pursue a diplomatic option or a military one: destroying the missiles, ideally before they even leave the ground.
The Trump Administration has proposed tripling funding for one AI-driven missile program next year, to USD 83 million. That budget, according to a Reuters report, may look like a modest sum, but it funds only one of many "hush-hush" programs and represents Washington's growing interest in AI technology for the military.
However, not everyone is gung-ho about military AI development. Earlier this week, Google canceled a controversial AI contract with the Pentagon after a backlash from its employees. In a letter to management, 3,000 Google staff said the company "should not be in the business of war". Employees argued that working with the military ran against the internet giant's ethos, "Don't be evil".
Under the contract, Google and the Department of Defense worked together on "Project Maven", an AI program intended to improve the targeting of unmanned drone strikes. The program would analyze video footage from drones, track objects on the ground, and study their movements, applying machine learning techniques.
Anti-drone campaigners and human rights activists complain that "Project Maven" would pave the way for AI to select its own targets, removing humans from the "kill chain" entirely.
There are other risks as well. Developing AI technology could provoke an arms race with Russia or China. The technology is also still in its early stages and can make mistakes. US Air Force General John Hyten, the top commander of US nuclear forces, said that once such systems are operational, human safeguards would still be needed to control the "escalation ladder", the process by which nuclear missiles are launched.
"(Artificial intelligence) could force you up that ladder if you do not put the safeguards in place," Hyten said in an interview. "Once you do that, then everything starts to move."
The dangers inherent in allowing AI to make life-or-death decisions were highlighted by an MIT study, which found that an AI neural network can easily be fooled into thinking that a plastic turtle is a rifle. Hackers could theoretically exploit this vulnerability and force AI-driven missile systems to attack the wrong targets.
Despite the potential cost of such errors, the Pentagon is pressing forward with its research. Some officials interviewed by Reuters believe that elements of the AI missile program could be operational in the early 2020s. Others believe the government is not investing enough.
"The Russians and the Chinese are certainly pursuing these sorts of things," Mac Thornberry, the Republican chairman of the House Armed Services Committee, told Reuters on Wednesday (6/6/2018). "Probably with greater effort in some ways than we have."