Facebook Inc said on Tuesday it was tightening rules around its livestreaming feature ahead of a meeting of world leaders aimed at curbing online violence in the aftermath of a massacre in New Zealand.
Starting today, people who have broken certain rules on Facebook — including Facebook's Dangerous Organizations and Individuals policy — will be restricted from using Facebook Live.
Before today, if someone posted content that violated Facebook's Community Standards — on Live or elsewhere — Facebook took down their post. If they kept posting violating content Facebook blocked them from using Facebook for a certain period of time, which also removed their ability to broadcast Live. And in some cases, Facebook banned them from its services altogether, either because of repeated low-level violations, or, in rare cases, because of a single egregious violation (for instance, using terror propaganda in a profile picture or sharing images of child exploitation).
As part of the new rules, which apply specifically to Live, Facebook will now enforce a "one strike" policy in connection with a broader range of offenses. "From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time," Facebook said.
The company plans on extending these restrictions to other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook.
Facebook Live has been at the heart of the backlash against social media content for two reasons. First, the livestreaming and subsequent sharing of the mosque attacks in New Zealand triggered the current wave of regulation hitting platforms around the world. And second, the real-time, large-scale nature of live video makes it by far the hardest content Facebook has to police; in the wake of Christchurch, there were many calls for the service to be pulled for that reason.
Facebook also announced new partnerships with the University of Maryland, Cornell University and the University of California, Berkeley, to research AI techniques to "detect manipulated media across images, video and audio, and to distinguish between unwitting posters and adversaries who intentionally manipulate videos and photographs."
Facebook has said it removed 1.5 million videos globally that contained footage of the attack in the first 24 hours after it occurred. It said in a blog post in late March that it had identified more than 900 different versions of the video.