Facebook is reportedly expanding the use of its offensive content-detecting AI to cover Live streams
Mark Zuckerberg will no doubt be pleased that his social media behemoth is closing in on 2 billion monthly active users, but policing all that content isn't easy. To help identify the offensive material and fake news that appears on the site, the company is increasingly turning to artificial intelligence. Now, it plans to do the same thing with Facebook Live.
Facebook has in the past relied on its users to flag inappropriate posts, which are then checked by company employees to see if they violate its rules. But artificial intelligence can detect this sort of content on its own. It's "an algorithm that detects nudity, violence, or any of the things that are not according to our policies," Joaquin Candela, the company's director of applied machine learning, told Reuters.
Using the same technology on live video streams is considerably trickier, which is why it's still at the research stage. There are two major challenges, according to Candela: "One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down."
Facebook laid off its entire Trending Topics editorial team back in August, following accusations that it was routinely suppressing conservative news stories. But the algorithmically driven process that replaced the staff continues to surface fake items. Mark Zuckerberg said last month that the company is introducing better technical systems to address the problem, and has reached out to third-party fact-checking organizations.
In July, two graphic scenes of violence involving members of law enforcement were streamed on Facebook Live. The company was accused of censorship after it removed (but later restored) one of them.