Google’s Fight to be “Right” 🎖️

Photo: Minh Uong/The New York Times.

Project Maven and killer drones

The US Department of Defense (DoD) established Project Maven in 2017 to use machine learning and artificial intelligence to analyze the vast amounts of footage shot by US drones. The DoD and Google signed a contract to use Google’s TensorFlow AI systems to analyze drone data, detect objects of interest, and flag them for a human analyst to review.
  
“Don’t Be Evil”


Earlier this year, 3,000 Google employees signed a petition demanding that Google commit to never building warfare technology. This ethos derives from Google’s longtime “Don’t be evil” motto, which parent company Alphabet dropped from its code of conduct in late 2015. Google acquiesced and will not seek to renew the Maven contract when it expires in March 2019. The internal backlash has ignited a firestorm over the role tech behemoths should play in national defense.



The right took a more fact-based approach with less selective rhetoric. The WSJ acknowledged the internal employee backlash and the steps Google is taking to revise its internal ethical guidelines, while also airing more moderate views. The right generally emphasized the business and national security case for Google to work with the DoD. One publication on the right referred to the internal ambivalence within Google as “baggage” that made the company a less attractive partner.
 

The left focused on the negative potential of AI military technology and interlaced its stories with incendiary quotes, such as this one from the Chief Scientist for AI at Google Cloud: “Avoid at ALL COSTS any mention or implication of AI ... this is red meat to the media to find all ways to damage Google.” On one hand, the left blamed Google for “weaponizing AI”; on the other, it selectively highlighted positive employee reactions to the news that Google will back away from Maven.


Idealism vs Duty

The world is split on the cost/benefit tradeoff of AI. Naysayers, such as Elon Musk, take a Terminator view in which machines, once enlisted in military projects, end up ruling over humans. Proponents, like Mark Zuckerberg, believe that AI can serve humankind by one day removing the need for the vast majority of humans to work. Both views are extremes, and meanwhile the US faces more immediate national security challenges. Does it make sense that some of the most innovative American companies cannot find a way to contribute to the betterment of humankind while also safeguarding our nation?

When your drone decides it doesn't like you

