
Facebook’s Lack Of Moderators Is Hurting Its Fight Against Misinformation


Attendees walk past a Facebook logo during Facebook Inc's F8 developers conference in San Jose, California, U.S., April 30, 2019. REUTERS/Stephen Lam

Facebook has taken a number of steps meant to curb the spread of coronavirus misinformation, yet the company’s shortage of human moderators has left some major gaps in its ability to enforce its policies.

Consumer Reports successfully bought a number of ads that blatantly violated the company’s rules, including one claiming that people should drink “small daily doses” of bleach to stave off the virus. Consumer Reports pulled the ads before they ran, and Facebook disabled the accounts associated with them after they were flagged by the publication.

Facebook has previously warned that the coronavirus pandemic has left it unable to rely on many of its human content moderators, most of whom are contractors who cannot work from home. Instead, the company has been relying heavily on its automated systems, which use artificial intelligence to flag potential violations.

Several weeks into this new arrangement, though, it appears that Facebook’s automated tools are coming up short. As CR points out, the ad recommending drinking bleach is especially egregious considering Facebook executives, including Mark Zuckerberg, have regularly cited the claim as one that would be barred under Facebook’s rules.

Other ads called the coronavirus a “hoax” and discouraged social distancing efforts — both of which violate Facebook’s policies prohibiting posts that discourage treatment or “taking appropriate precautions.”

A Facebook spokesperson noted that the company’s automated system can continue to flag ads after they’ve been purchased and even after they’ve started running on the service.

There were other potential “red flags” that Facebook’s system could have used as a signal to give Consumer Reports’ ads extra scrutiny.


The posts were linked to a nonexistent organization whose Facebook accounts had been created only days before the ads were bought, a technique often used by spammers. But if Facebook’s ad approval process takes any of these signals into consideration, it didn’t raise any alarms in this instance.

In a statement, a Facebook spokesperson told Engadget the company has taken down “millions of ads” that violate its rules.

“While we’ve removed millions of ads and commerce listings for violating our policies related to COVID-19, we’re always working to improve our enforcement systems to prevent harmful misinformation related to this emergency from spreading on our services,” the spokesperson said.

Critics have long claimed that Facebook’s ad policies are enforced unevenly. The company has been accused of allowing ads that spread medical misinformation about HIV prevention medication, and a recent paper from New York University concluded that the company’s transparency rules, meant to make political ads easier to track, are “easy to evade.”
