Social Media Technology and Innovation

Artificial Intelligence and moderation on Social Networks: where are we now?

A little glimpse into the background of Artificial Intelligence


The 1950s were the start of something big. Something REALLY big!

Did you know that Artificial Intelligence emerged with Alan Turing’s work in the 1950s? He simply asked himself whether a machine could think like a human being.

Over the years, computer programs have surpassed humans at several cognitive tasks, such as chess in 1997, when Garry Kasparov, the best chess player in the world, played against Deep Blue (a supercomputer built by IBM).

After six games, Kasparov had to bow to Deep Blue’s success. If you would like to see how Deep Blue beat Kasparov, watch this video of the game!


A.I. and moderation on social media: what’s up?

Since then, A.I. has risen in many sectors and now occupies a very large place in our world. However, one specific area of A.I. lags far behind: sorting out acceptable content on social media. Let’s look at two major actors directly concerned by this matter: YouTube and Facebook.

When we talk about moderation, we mean keeping excess and extremes in check, especially in behavior or political opinions. Both Facebook and YouTube deal with this issue constantly. This is due to the way they work: people can post anything on social media, and only afterwards might it be deleted.

Facebook

When you read the news, you often see Facebook apologizing.

The main reason for these apologies is uneven content moderation. On December 29th, Facebook apologized for allowing hate speech to fester. Another moderation problem? During the American presidential election, it let Russia publish ads aiming to influence the choice of the next American president.

All of this news shows there is clearly a gap in this area. Facebook already uses A.I. on some publications: for instance, it has image-detection software and a newer A.I. system that can understand words in context. But Facebook recognizes that its A.I. is not yet sophisticated enough to cover all moderation needs.
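
To make the idea concrete, here is a minimal sketch of what automated text moderation can look like in practice. It uses the open-source Hugging Face transformers library and the publicly available "unitary/toxic-bert" classifier; the model choice, the threshold, and the helper function are illustrative assumptions, not Facebook’s actual system.

```python
# Minimal sketch of automated text moderation (illustrative, not Facebook's system).
# Assumes the open-source `transformers` library and the public "unitary/toxic-bert"
# model; the 0.8 threshold is an arbitrary example value.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def needs_human_review(post: str, threshold: float = 0.8) -> bool:
    """Flag a post for human moderators when the toxicity score is high."""
    result = classifier(post)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
    return result["label"] == "toxic" and result["score"] >= threshold

print(needs_human_review("Have a great day, everyone!"))  # expected: False
```

A model like this can triage obvious cases, but anything ambiguous (sarcasm, context-dependent slurs, satire) still ends up in front of a human, which is exactly the limit Facebook acknowledges.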


YouTube is facing similar issues.

Advertisers fear seeing their ads placed next to inappropriate content, and parents are anxious about violent content apparently aimed at children. The overall concern is that YouTube’s A.I. is not effective enough. Since June 2017, the platform has removed 150,000 videos for violent extremism; 98% of those were also flagged by the A.I.

But so many subjects are still missing from its A.I. systems that moderation on YouTube is far from perfect. Today, the A.I. is only capable of limiting the risks that come with free content posting.

As a consequence, thousands of people still spend their days watching and blocking millions of inappropriate posts. By the end of 2018, 30,000 people will have been hired to do the job (which is quite horrible, let’s say it: acts of war, violence, brutality, suicide, abuse of minors… nothing fun).

But both YouTube and Facebook stress that machine learning keeps moving forward in order to automate the management of some of this content. Today, A.I. helps dedicated operators remove five times more inappropriate videos from these platforms.
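
One common way A.I. assists human operators is triage by confidence score: near-certain violations are removed automatically, uncertain cases go to a human queue, and everything else stays up. The sketch below illustrates that idea; the thresholds, function, and video IDs are assumptions for illustration, not YouTube’s or Facebook’s actual pipeline.

```python
# Generic triage sketch: route content based on the model's confidence that it
# violates policy. Thresholds and categories are illustrative assumptions.
AUTO_REMOVE = 0.98   # nearly certain violations are removed automatically
HUMAN_REVIEW = 0.60  # uncertain cases are queued for a human moderator

def triage(video_id: str, violation_score: float) -> str:
    """Decide what happens to a video given its policy-violation score."""
    if violation_score >= AUTO_REMOVE:
        return f"{video_id}: removed automatically"
    if violation_score >= HUMAN_REVIEW:
        return f"{video_id}: sent to the human review queue"
    return f"{video_id}: left online"

# Hypothetical scores for three uploads
for vid, score in [("abc123", 0.99), ("def456", 0.75), ("ghi789", 0.05)]:
    print(triage(vid, score))
```

The design goal is simple: let the machine absorb the obvious cases so the limited pool of human reviewers can concentrate on the ambiguous ones.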

But let’s remain positive: Mark Zuckerberg just announced his resolutions for 2018 (click here to see his full post). Moderation is clearly part of Facebook’s plans for the year ahead!

As he wrote in a Facebook post:
“The world feels anxious and divided, and Facebook has a lot of work to do — whether it’s protecting our community from abuse and hate” or “defending against interference by nation states.”

About the author

Anne-Sophie Minart