Google will implement additional measures to identify and remove terrorist or violent extremist content on its video-sharing platform YouTube, the company said.
Google said it would increase the use of technology to help identify extremist and terrorism-related videos. The company will devote more engineering resources to apply advanced machine learning research to train new "content classifiers" in order to identify and remove extremist and terrorism-related content.
Because technology alone is not a silver bullet, Google will also increase the number of independent experts in YouTube's Trusted Flagger programme. Human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. Google says Trusted Flagger reports are accurate over 90 per cent of the time. The company will expand this programme by adding 50 expert NGOs to the 63 organisations that are already part of it, and will support them with operational grants.
Third, Google will take a tougher stance on videos that do not clearly violate its policies - for example, videos that contain inflammatory religious or supremacist content. In future, these will appear behind an interstitial warning and will not be monetised, recommended or eligible for comments or user endorsements.
Finally, YouTube will expand its role in counter-radicalisation efforts. Building on the Creators for Change programme promoting YouTube voices against hate and radicalisation, Google is working with Jigsaw to implement the "Redirect Method" more broadly across Europe. This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.
Facebook on Thursday offered additional insight into its efforts to remove terrorism content, a response to political pressure in Europe over militant groups' use of the social network for propaganda and recruiting.
Facebook has ramped up use of artificial intelligence such as image matching and language understanding to identify and remove content quickly.