Google promises AI that will not harm humans
Company sets out AI principles and promises technology will not be used for weapons or surveillance
Google has promised that it will not use its AI technology to harm people.
In a company blog post, CEO Sundar Pichai laid out new principles for Google's AI work and promised the technology will not be used to harm people, conduct surveillance outside accepted norms or breach human rights.
Last week Google said it was not renewing a contract with the US Department of Defence's Project Maven, which was using AI to analyse footage from military drones.
While the program was relatively small, it had attracted negative publicity, including objections and resignations from Google staffers.
In the new announcement, Pichai pledges to go further, promising that Google will not design or deploy AI in technologies that cause or are likely to cause harm; in weapons or other technologies intended to harm people; in surveillance technologies that violate internationally accepted norms; or in technologies whose purpose contravenes widely accepted principles of international law and human rights.
The blog post also sets out seven principles for AI: it should be socially beneficial; avoid creating or reinforcing bias, such as cultural bias becoming coded into AI; be built and tested for safety; be accountable to people, with appropriate opportunities for feedback, relevant explanations and appeal, subject to human direction; incorporate privacy by design; uphold high standards of scientific excellence, including rigour and integrity; and be made available only for uses that accord with these principles.
Pichai wrote: "We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we're announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions."
Google is already a member of the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI for short), a non-profit industry body launched in September 2016 to address policy and ethics around AI. Other companies, such as Microsoft, have already set out their own AI codes of ethics.