Google released a set of principles about artificial intelligence that it says will guide its work going forward.
In a blog post, the company's CEO, Sundar Pichai, listed seven guiding objectives for AI, which include that it be socially beneficial, incorporate privacy design principles and uphold high standards of scientific excellence.
Pichai's post also listed four application areas in which Google will not design or deploy AI. Those include "technologies that cause or are likely to cause overall harm," "technologies that gather or use information for surveillance violating internationally accepted norms" and technologies that conflict with "widely accepted principles of international law and human rights." The fourth area Google says it won't design or deploy AI in is technologies whose main purpose is to directly injure people.
While Google says it's not developing AI for use in weapons, it will continue to work with governments and the military in areas such as cybersecurity and veterans' health care.