Facebook has launched initiatives in Germany to defend election integrity and examine the ethics of artificial intelligence (AI), with the aim of ensuring AI treats people fairly, protects their safety, respects their privacy, and works for them.
The world’s largest social network had a tough 2018 as it was buffeted by revelations that UK consultancy Cambridge Analytica had improperly acquired data on millions of its U.S. users to target election advertising.
The U.S. Federal Trade Commission (FTC) is also reportedly considering fining Facebook for violating a binding agreement to protect the privacy of its users.
“We are not the same company that we were in 2016 or even a year ago,” Chief Operating Officer Sheryl Sandberg told the DLD Munich technology conference.
“We have a fundamentally different approach to how we run our company today.”
Facebook announced a partnership with the Technical University of Munich (TUM) to support the creation of an independent AI ethics research center. The Institute for Ethics in Artificial Intelligence, supported by an initial funding grant from Facebook of $7.5 million over five years, will help advance the field of ethical research on new technology and will explore fundamental issues affecting the use and impact of AI.
As AI technology increasingly affects people and society, the academics, industry stakeholders and developers driving these advances need to do so responsibly, ensuring AI treats people fairly, protects their safety, respects their privacy, and works for them.
Drawing on expertise across academia and industry, the Institute will conduct independent, evidence-based research to provide insight and guidance for society, industry, legislators and decision-makers across the private and public sectors. The Institute will address issues that affect the use and impact of artificial intelligence, such as safety, privacy, fairness and transparency.
Through its work, the Institute will seek to contribute to the broader conversation surrounding ethics and AI, pursuing research that can help provide tangible frameworks, methodologies and algorithmic approaches to advise AI developers and practitioners on ethical best practices to address real world challenges.
Facebook uses AI to spot and remove terrorist content and hate speech before it is reported to its 30,000 moderators, Sandberg said, adding that it is also important to ensure the technology is managed to prevent bias.