![Google Drone AI military DARPA](https://cdn.statically.io/img/d.newsweek.com/en/full/834240/google-drone-ai-military-darpa.jpg)
Revelations that Google has been quietly working with the U.S. military to develop technology for analyzing drone footage have reportedly provoked outrage among the tech giant's employees.
A report from Gizmodo revealed details of Google's partnership with the Department of Defense on Project Maven, an initiative that uses artificial intelligence in surveillance operations.
The project was not a secret, but it had not previously been reported. It came to public attention only after Google employees circulated details from an internal mailing list discussing the work.
![Predator Drone](https://cdn.statically.io/img/d.newsweek.com/en/full/537401/predator-drone.jpg?w=1200&f=b756560762b5d7736f1709493cb7d67c)
The Department of Defense gave details of Project Maven when it was first announced last year, though Google's direct involvement was not mentioned.
"People and computers will work symbiotically to increase the ability of weapon systems to detect objects," Marine Intelligence Officer Drew Cukor said in a Department of Defense press release last year.
"Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they're doing now. That's our goal," he added.
Google acknowledged its work with the Department of Defense, adding that it is holding internal discussions about how its machine learning technologies are used.
"We have long worked with government agencies to provide technology solutions," a Google spokesperson said. "This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data."
The spokesperson added that the technology was for "non-offensive uses only" and acknowledged that the use of machine learning for military applications "naturally raises valid concerns" and that the company was developing policies and safeguards around its development.
Several Google employees expressed concern that the project presents ethical questions about the "development and use of machine learning," Gizmodo added. Furthermore, hundreds of artificial intelligence experts have previously warned of the dangers posed by the technology within a military context.
![Russia Fedor robot](https://cdn.statically.io/img/d.newsweek.com/en/full/795513/russiafedorrobot.png?w=1200&f=94ad599900a33c76a058fb36f6b4362a)
Open letters published in parallel last year were sent to the prime ministers of Australia and Canada, highlighting the "spectacular advances" in AI and machine learning in recent years.
"Lethal autonomous weapons systems that remove meaningful human control from determining the legitimacy of targets and deploying lethal force sit on the wrong side of a clear moral line," the open letter to Canadian Prime Minister Justin Trudeau stated.
"Canada's AI research community is calling on you and your government to make Canada the 20th country in the world to take a firm global stand against weaponizing AI," the letter continued.
About the writer
Anthony Cuthbertson is a staff writer at Newsweek, based in London.