Facebook’s AI detects 97% of all prohibited content but there’s a lot more to do

Facebook has released its third Community Standards Enforcement Report.

For the first time, the report includes data on appeals and content restored. In addition to the eight policy areas covered in the second edition of the report, it also includes data on illicit sales of regulated goods (specifically, firearm and drug sales). The eight existing areas are adult nudity and sexual activity, bullying and harassment, child nudity and sexual exploitation of children, fake accounts, hate speech, spam, global terrorist propaganda, and violence and graphic content.

“We have a responsibility to protect people’s freedom of expression in everything we do. But at the same time, we also have a responsibility to keep people safe on Facebook and prevent harm from playing out across our services,” Facebook CEO Mark Zuckerberg said shortly after the Community Standards update was published, according to BuzzFeed News.

Essentially, the report highlights that Facebook is catching huge volumes of harmful content. In six of the nine areas tracked in the report, Facebook says its AI proactively detected 96.8 percent of the violating content before a user reported it, up from 96.2 percent in Q4 2018.

Stickers bearing the Facebook logo are pictured at Facebook’s F8 developers conference. Image: Reuters

For hate speech, Facebook says its systems now proactively identify 65 percent of the more than four million hate speech posts removed from the platform each quarter, up from 24 percent just over a year ago and 59 percent in Q4 2018.

Facebook also said that it is using AI to identify ads, pictures, and videos that violate its regulations.

In the first quarter of this year, Facebook says it took action on about 900,000 pieces of drug sale content, of which 83.3 percent were detected proactively by its AI models.

For terrorist propaganda, child nudity, and child sexual exploitation, the prevalence of violating content was far lower: Facebook says that in Q1 2019, fewer than three out of every 10,000 content views on the social network contained material that violated each of these policies.

Additionally, Facebook said that it removed 3.4 billion fake accounts in the last six months. Of these, 1.2 billion were removed during the fourth quarter of 2018 and 2.2 billion during the first quarter of this year. More than 99 percent were disabled before anyone reported them to the company, and Facebook says most of the fake accounts were blocked “within minutes” of their creation. In the April-September period last year, Facebook blocked 1.5 billion accounts.

The increase in removals shows the challenge Facebook faces in taking down accounts created automatically to spread spam, fake news, and other objectionable material. Even as Facebook’s detection tools improve, so do the evasion tactics of the people behind these fake accounts.
