Despite Facebook’s push, AI isn’t ready to moderate disturbing content

Although these guys probably aren’t the ones doing the struggling.

Image: Justin Sullivan/Getty Images

By Jack Morse

Artificial intelligence is the way of the future. When it comes to Facebook’s content moderation, however, we are still very much in the present.

The nearly 2-billion-user social media giant is working at a furious pace to change that.

As a series of leaked internal slides revealing how the company decides what content violates its community standards shows, the Menlo Park-based company still depends largely on its flesh-and-blood Community Operations team to decide which posts stay and which go.

In other words, despite Facebook CEO Mark Zuckerberg’s efforts to push AI in all things, humans still rule the content-moderation roost. But that is changing. 

Facebook announced in March that it would roll out an AI system designed to identify users contemplating suicide or self-harm. Its AI uses pattern recognition to determine if a post — and the comments surrounding it — resemble previous posts identified as indicating a risk of suicide. If so, Facebook users who see the post in question will also more prominently see options for reporting self-harm. 

Importantly, in the end, the determination as to whether or not to actually make the report is left up to people. The AI just can’t cut it alone, but it’s helping. 
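
Facebook hasn't said exactly how that system works under the hood, but the high-level description maps onto a familiar text-classification setup. The sketch below is a minimal, hypothetical illustration of the idea using scikit-learn; the training examples, model choice, and threshold are invented for illustration and are not Facebook's.

```python
# A minimal, hypothetical sketch of the pattern-recognition idea: score a post
# (together with its comments) against examples previously labeled as
# indicating self-harm risk. Facebook has not published its model; everything
# here is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = previously flagged as at-risk, 0 = not).
train_texts = [
    "I can't do this anymore, nobody would miss me",
    "so excited for the concert this weekend",
    "I just want the pain to stop for good",
    "look at the amazing pasta I made tonight",
]
train_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def risk_score(post: str, comments: list[str]) -> float:
    """Score a post and the comments surrounding it, as the article describes."""
    combined = " ".join([post, *comments])
    return model.predict_proba([combined])[0][1]

# Above some threshold, reporting options are surfaced more prominently;
# the decision to actually file a report stays with people.
THRESHOLD = 0.5  # illustrative, not Facebook's
score = risk_score("I don't see the point anymore", ["please talk to me, are you ok?"])
if score > THRESHOLD:
    print(f"risk score {score:.2f}: surface self-harm reporting options")
```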

Facebook’s artificial intelligence is also getting better at determining what’s in the photos we post — even if there’s no text or tagging accompanying them. The company noted in February that its AI-based image-recognition tools allow users to search photos for things like whether or not someone is wearing a black shirt.
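
Again, Facebook's production system is proprietary, but the underlying idea, predicting labels for untagged photos and then searching over those predictions, can be sketched with off-the-shelf tools. The version below uses a pretrained torchvision classifier; the file names, the query, and the label vocabulary are stand-ins for whatever the real system uses.

```python
# A hypothetical sketch of tag-free photo search: run each image through a
# pretrained classifier and keep its top predicted labels as searchable tags.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]  # the classifier's own label vocabulary

def tag_image(path: str, top_k: int = 5) -> list[str]:
    """Return the top-k predicted labels for one photo, with no user tags needed."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    return [categories[i] for i in probs.topk(top_k).indices.tolist()]

# Build a tiny searchable index: predicted label -> photos containing it.
photos = ["beach.jpg", "party.jpg"]  # hypothetical files
index: dict[str, list[str]] = {}
for p in photos:
    for label in tag_image(p):
        index.setdefault(label, []).append(p)

query = "jersey"  # query terms must come from the model's label vocabulary
print(index.get(query, []))
```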

These advancements — identifying what’s in photos and making informed guesses as to the intent behind certain types of posts — are huge assets when it comes to helping humans wade through the violence, abuse, and threats posted to Facebook on a daily basis. As a Facebook spokesperson confirmed to Mashable, assisting human moderators is at present the primary purpose of the company’s automated systems. 

But if those systems can determine what’s in photos, as well as the possible intention driving some posts, can they also be used to flag and remove the troubling content posted to the site? Because there’s a lot of it: according to The Guardian, Facebook saw 54,000 potential cases of sexually related extortion and revenge porn in a single month.

At present, the answer appears to be no. There are several reasons why this may be the case, but a Facebook spokesperson confirmed that the company’s automated system isn’t enough when it comes to the holy grail of content moderation: understanding context.

And is still going wrong.

Image: Justin Sullivan/Getty

For Facebook, the context of a post is vital in determining whether or not it should be removed from the site. But it is perhaps in part because context is so, well, contextual, that its AI has trouble making the call — especially when the guidelines themselves are so murky. 

Take the rules on sexual and violent content. According to the leaked Facebook documents obtained by The Guardian, two very disturbing posts are to be treated differently by the Community Operations team. It is not OK to post “#stab and become the fear of the Zionist,” but it is OK to post “Little girl needs to keep to herself before daddy breaks her face.”

If the online reaction is any indication, understanding why one of these is OK and the other is not is difficult for humans. Artificial intelligence apparently isn’t faring much better.

Facebook’s desire to change this is clear, and the company is already making strides to bring AI to the monitoring of visual content on the site.

“We are researching systems that can look at photos and videos to flag content our team should review,” explained Zuckerberg in a February open letter. “This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community.”
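
The internals of that system aren't public either, but the workflow Zuckerberg describes, automated flagging that feeds a human review queue rather than removing anything on its own, is simple to sketch. Everything below (the scorer, the threshold, the report format) is hypothetical.

```python
# A hypothetical sketch of the flag-for-review flow: an automated scorer
# generates reports, but removal decisions stay with human reviewers.
from collections import deque
from dataclasses import dataclass

@dataclass
class Report:
    post_id: str
    reason: str
    score: float
    source: str  # "automated" or "user"

review_queue: deque[Report] = deque()

def toxicity_score(text: str) -> float:
    """Placeholder scorer; a real system would use a trained model here."""
    flagged_terms = ("threat", "attack")
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, float(hits))

def automated_scan(post_id: str, text: str) -> None:
    """Flag content for human review; never remove it automatically."""
    score = toxicity_score(text)
    if score >= 0.8:  # illustrative threshold
        review_queue.append(Report(post_id, "possible policy violation", score, "automated"))

automated_scan("post-123", "this is a direct threat")
while review_queue:
    report = review_queue.popleft()
    # A human moderator applies the community standards, weighing the context
    # the model cannot judge, before anything comes down.
    print(f"{report.post_id}: {report.reason} "
          f"(score {report.score:.2f}, source: {report.source})")
```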

Whether this same level of AI automation can ever be applied to monitoring the words we post and the context we post them in remains to be seen. We do know one thing for sure: Facebook is working hard to make it so. 

In the meantime, even for one of the world’s most influential tech companies, it’s still up to the humans. 
