Platforms struggle to keep up with content moderation amid COVID-19

TORONTO –
While hundreds of thousands of businesses across the country have seen their work dry up in the midst of COVID-19, Chris Priebe is experiencing the opposite. The owner of Two Hat, an artificial intelligence content moderation company based in Kelowna, British Columbia, has never been busier helping customers, including gaming brands Nintendo Switch, Habbo, Rovio and Supercell, sift through billions of comments and conversations and quickly identify and remove material that is dangerous for users.

“We dealt with 60 billion last month. Previously, it was 30 billion. That is how bad the coronavirus is. That is at least double the normal volume,” said Priebe in April, before monthly volumes hit 90 billion.

“(The platforms) are faced, in some cases, with 15 times the volume. How can they possibly take care of their audience? Because that does not mean that revenues have increased 15 times or that they can afford to hire that many more people.”

Priebe is not alone in fighting to keep online gaming and social media platforms safe amid COVID-19. Companies like Facebook, Instagram, Twitter, YouTube and Google have all been warning users since at least April that they are experiencing a shortage of content moderators, resulting in delays in removing harmful posts.

The stakes are high. A record number of people around the world are spending more and more time at home on their favorite platforms, straining servers and turning messaging apps, social media and comment sections into a Wild West.

The situation has heightened concerns among privacy experts about the spread of misinformation and the likelihood that users will come across hate speech, pornography, violence and other harmful content.

“A number of people are quite dissatisfied with the content moderation process as it is … and then you add this pandemic on top of it … You see a huge increase in harassing behavior and problem behavior, and then the content stays up longer,” said Suzie Dunn, a professor at the University of Ottawa who specializes in the intersection of technology, equality and law.

“This is a real challenge because content moderators are a bit like front-line workers. This is an essential service that we need to have at a time like this, so we hope to see more content moderators working.”

However, unlike workers in other industries who have shifted to working from home since the onset of the COVID-19 pandemic, such a change is difficult for many content moderators, because their work involves images and language that you would not want children or other family members to catch a glimpse of.

“Some of them may not be able to work on certain things they would be working on in the office,” Kevin Chan, chief public policy officer for Facebook Canada, told The Canadian Press.

“They’re looking at potentially private and sensitive things that have been brought to their attention, and we need to make sure … that these things can be handled in the secure and private way they deserve.”

Full-time Facebook workers have stepped up and are taking on some of the moderation work, including work from contractors who can’t have proprietary and sensitive content at home. These workers deal with content related to “real-world harms” such as child safety, suicide and self-harm.

“There is no doubt that this will pose challenges for how responsive we can be,” said Chan.

To deal with the situation, Facebook has put measures in place to curb the flow of COVID-19 misinformation and is focused on finding and removing content related to terrorism and anything that incites violence or links to “dangerous” individuals and organizations.

On Twitter, machine learning and automation are being used to help the company review the reports most likely to cause harm first, and to automatically triage content or “challenge” accounts.

“While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, which can lead us to make mistakes,” Twitter said in a blog post. “As a result, we will not permanently suspend any accounts based solely on our automated enforcement systems.”
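To illustrate the kind of automated triage being described, here is a minimal sketch in Python of ranking user reports by a predicted harm score and routing only the highest-scoring cases to automated action. The keyword scorer, thresholds and field names are hypothetical stand-ins for illustration, not Twitter's actual pipeline.

HIGH_RISK_TERMS = {"kill", "attack", "shoot"}  # toy stand-in for a trained model

def harm_score(text):
    # Stand-in for a machine-learning classifier: the fraction of words in
    # the reported post that match a list of high-risk terms.
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(word in HIGH_RISK_TERMS for word in words) / len(words)

def triage(reports, auto_threshold=0.2):
    # Sort reports so the items most likely to cause harm are handled first.
    ranked = sorted(reports, key=lambda r: harm_score(r["text"]), reverse=True)
    decisions = []
    for report in ranked:
        if harm_score(report["text"]) >= auto_threshold:
            # High-confidence items can be limited or labelled automatically,
            # but no account is permanently suspended by automation alone.
            decisions.append((report["id"], "automated_action"))
        else:
            decisions.append((report["id"], "human_review_queue"))
    return decisions

print(triage([{"id": 1, "text": "I will attack you"},
              {"id": 2, "text": "nice photo of your dog"}]))

The design choice mirrors the statement above: automation decides the ordering and the easy calls, while the consequential decisions stay with human reviewers.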

Google has also increased its reliance on machine-based systems to reduce the need for people to work from its offices, and has said that leaning more heavily on automation has drawbacks, including a potential increase in content flagged for removal and slower turnaround times for appeals.

“They’re not always as accurate or as precise in their analysis of content as human reviewers,” a Google blog post published in March said.

It is a sentiment Priebe has encountered many times, but he has a counter-argument: “AI is not perfect, but … humans are not perfect either.”

He gave the example of a child playing a game at home during the pandemic, at a time when pedophiles could be more active online and trying to contact young people.

“You can have three different humans looking at the same conversation and they will not give you the same answer. Some of them will call it grooming and some won’t,” said Priebe.

Priebe believes an ideal system mixes humans and AI, because the AI handles the obvious cases well: for example, when a user’s content is reported nearly a dozen times in a short span, or when someone receives a message that simply says hello and hits report just to see what the button does.

“You don’t need a human staring at their screen, looking at this absolutely sexual content, potentially in front of their children who have sneaked up behind them, because artificial intelligence will beat it every time,” he said.

“Let humans do what humans do well, which is dealing with that middle category of subjective, nuanced or hard-to-read things that the AI is not sure about.”
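As a rough illustration of that split, the sketch below routes the obvious cases, such as mass-reported content or a reported “hello,” to automated handling and sends the uncertain middle to human moderators. The class, thresholds and confidence rules are hypothetical stand-ins, not Two Hat’s actual product.

from dataclasses import dataclass

@dataclass
class ReportedMessage:
    text: str
    report_count: int      # how many users reported it
    window_minutes: int    # time span over which those reports arrived

def model_confidence(msg):
    # Stand-in for an AI model's confidence that the content is harmful.
    if msg.report_count >= 10 and msg.window_minutes <= 30:
        return 0.99   # mass-reported in a short window: obviously actionable
    if msg.text.strip().lower() in {"hi", "hello"}:
        return 0.01   # a reported "hello" sent to test the button: obviously fine
    return 0.50       # the ambiguous middle ground

def route(msg):
    confidence = model_confidence(msg)
    if confidence >= 0.95:
        return "automated_removal"
    if confidence <= 0.05:
        return "automated_dismissal"
    # Subjective, hard-to-read cases (possible grooming, for instance)
    # go to human moderators.
    return "human_review"

print(route(ReportedMessage("hello", report_count=1, window_minutes=5)))
print(route(ReportedMessage("you seem mature for your age", report_count=1, window_minutes=5)))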

No matter how moderation is done, there are always things that will fall through the cracks, especially during a pandemic, said Dunn.

“No system is perfect.”

This report from The Canadian Press was first published on June 7, 2020.
