Facebook is building new AI to take terrorist content offline
Facebook has begun to deploy its artificial intelligence capabilities to help combat the use of its service by terrorists.
Company officials said Facebook would use AI, in concert with human reviewers, to find and immediately remove "terrorist content" before other users see it. Similar technology is already used to block child pornography on Facebook and other services such as YouTube, but Facebook has been reluctant to apply it to other, potentially less clear-cut material. In most cases, Facebook removes objectionable material only after users report it.

Facebook and other internet companies face growing government pressure to identify and prevent the spread of terrorist propaganda and recruitment messages on their services. Earlier this month, British Prime Minister Theresa May called on governments to reach international agreements to prevent the spread of extremism online. Some proposed measures would hold companies legally responsible for the material posted on their sites.

Monika Bickert, Facebook's director of global policy management, and Brian Fishman, its counterterrorism policy manager, did not specifically mention May's calls. But they acknowledged that "in the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online." "We want to answer those questions head on. We agree with those who say that social media should not be a place where terrorists have a voice," they wrote.

Among the techniques used in this effort is image matching, which compares photos and videos that people upload to Facebook against "known" terrorism images or videos. A match means either that Facebook has already removed that material, or that it has ended up in a database of such images that Facebook shares with Microsoft, Twitter and YouTube. Facebook is also developing "text-based signals" from previously removed posts that praised or supported terrorist organizations.
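The image-matching step described above can be sketched as a lookup against a database of fingerprints of previously removed material. This is a simplified illustration, not Facebook's implementation: production systems use perceptual hashes (which survive re-encoding and cropping), whereas the sketch below uses a plain SHA-256 digest, and the hash database is hypothetical.

```python
import hashlib

# Hypothetical stand-in for the shared industry database of fingerprints
# of known terrorist images (the real database uses perceptual hashes).
KNOWN_HASHES = {
    # SHA-256 digest of the example payload b"known-bad-image"
    hashlib.sha256(b"known-bad-image").hexdigest(),
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint of an uploaded image.

    Real image-matching systems use perceptual hashing so that minor
    edits still match; a cryptographic hash is used here only to keep
    the sketch self-contained.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_match(image_bytes: bytes) -> bool:
    """True if the upload matches previously removed material."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

A matching upload can then be blocked before any other user sees it, which is the "immediate removal" behavior the article describes.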
It will feed those signals into a machine-learning system that, over time, will learn to detect similar posts. Bickert and Fishman said that when Facebook receives reports of potential "terrorism posts," it reviews those reports urgently. In addition, they said, in the rare cases where it uncovers evidence of imminent harm, it promptly informs the authorities.

But AI is only part of the process. The technology is not yet able to understand the nuances of language and context, so human beings remain in the loop. Facebook says it employs more than 150 people who are "exclusively or primarily focused on countering terrorism as their core responsibility," including academic experts on counterterrorism, former prosecutors, former law enforcement officials, analysts and engineers.
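The idea of learning "text-based signals" from previously removed posts can be illustrated with a toy bag-of-words scorer. Everything here is hypothetical: the training examples are invented, and Facebook's actual system is far more sophisticated than this word-frequency sketch, which merely shows how labeled past removals can teach a model to flag similar new posts.

```python
import math
from collections import Counter

# Hypothetical labeled examples: posts previously removed for praising
# or supporting terrorist organizations, vs. benign posts left up.
removed = ["join the glorious fight", "support the fighters abroad"]
benign = ["join us for coffee", "support your local team"]

def train(docs):
    """Count word occurrences across a set of example posts."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

def score(post, removed_counts, benign_counts):
    """Crude log-odds score: positive means the post resembles
    previously removed material more than benign material."""
    total = 0.0
    for word in post.lower().split():
        # Add-one smoothing so unseen words contribute zero.
        total += math.log((removed_counts[word] + 1) /
                          (benign_counts[word] + 1))
    return total

removed_counts = train(removed)
benign_counts = train(benign)
```

High-scoring posts would then be routed to human reviewers rather than removed automatically, matching the article's point that people remain in the loop because the model cannot judge nuance and context.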