Dive Brief:
- Reuters reports that Facebook is developing a new machine learning program to help monitor feeds on Facebook Live for offensive material, part of an ongoing effort to use AI technology to weed out bad content from the site.
- The new algorithm will look for “nudity, violence or any of the things that are not according to our policies,” according to Joaquin Candela, the director of applied machine learning at Facebook.
- Two major hurdles for the program at the moment are flagging content quickly enough and prioritizing flagged items correctly so that human reviewers can remove them afterward, per Candela.
Dive Insight:
Many brands are now showing a more public aversion toward inflammatory sites and content that doesn't necessarily align with their “values,” as Kellogg’s recent ban of the alt-right publisher Breitbart News demonstrates. Extreme or inappropriate video on Facebook is another area most marketers might wish to avoid, but there are few solid safeguards in place to stop a Facebook ad from appearing adjacent to graphic user or publisher content in News Feed or on Live.
Facebook’s new AI program can potentially pick out unsavory posts more efficiently than human policing efforts or community flagging. The company already boasts world-renowned facial recognition software, and the increased monitoring efforts might further build out its impressive automated image-analysis capabilities.
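Neither Reuters nor Candela describes the system's internals, but the two hurdles he names, speed and prioritization, map onto a familiar moderation pattern: a model scores content against policy categories, and anything clearing a threshold lands in a severity-ordered queue for human reviewers. Below is a minimal sketch of that pattern in Python; the labels, threshold, and class names are hypothetical, not Facebook's actual design.

```python
from __future__ import annotations

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedClip:
    sort_key: float                       # negative score, so worst pops first
    stream_id: str = field(compare=False)
    label: str = field(compare=False)     # hypothetical policy category
    score: float = field(compare=False)

class ReviewQueue:
    """Severity-ordered queue of flagged clips awaiting human review."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold  # illustrative cutoff, not a known value
        self._heap: list[FlaggedClip] = []

    def ingest(self, stream_id: str, scores: dict[str, float]) -> None:
        """Flag any policy category whose model score clears the threshold."""
        for label, score in scores.items():
            if score >= self.threshold:
                heapq.heappush(self._heap, FlaggedClip(-score, stream_id, label, score))

    def next_for_review(self) -> FlaggedClip | None:
        """Hand the most severe outstanding flag to a human moderator."""
        return heapq.heappop(self._heap) if self._heap else None

# Example: in practice the scores would come from a video classifier.
queue = ReviewQueue()
queue.ingest("stream_42", {"nudity": 0.10, "violence": 0.93})
queue.ingest("stream_17", {"nudity": 0.85, "violence": 0.05})
clip = queue.next_for_review()
print(clip.stream_id, clip.label, clip.score)  # stream_42 violence 0.93
```

The threshold is where the trade-off discussed below shows up: set it low and human reviewers drown in false positives; set it high and genuinely violating streams slip through unflagged.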
One interesting example Reuters points to of Facebook falling into hot water over content policing is the recent removal of an iconic photo of a naked child during the Vietnam War. The new technology might flag that image merely because it displays "nudity," without fully understanding its historical context. While AI can work faster than humans at repetitive tasks, it isn't necessarily capable of making those types of judgment calls.
In the past, Facebook has been largely hands-off in terms of weeding out bad material, often letting content policing fall into the lap of its users in a bid to retain political neutrality. “We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties,” CEO Mark Zuckerberg said in a recent statement.
However, as research emerges suggesting that up to 44% of U.S. adults get their news from Facebook, and as more estimates point to much of that news being fake, the platform is likely feeling increased pressure to take on more active responsibilities as a de facto publisher and, subsequently, an arbiter of appropriate content.
This isn’t a purely moral play either, as the continued hosting of disreputable or unpalatable material may start to turn off audiences and, more importantly to Facebook, advertisers, who continue to supply the vast majority of the site’s revenue.
Facebook has recently made a strong push to turn Live into a hit with media buyers and publishers, establishing streaming APIs and allowing live video to be cast directly onto TVs. Cleaning up Live feeds will likely make the feature even more appealing, but the technology still has a ways to go before it operates at its full potential, per Reuters.
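As a concrete example of what those streaming APIs involve: publishers request an ingest point from the Graph API's live_videos edge and push video to the returned RTMP URL from their own broadcast software. The sketch below uses Python's requests library; the Page ID, access token, and API version are placeholders, and a real integration is subject to Facebook's permission and review requirements.

```python
import requests

GRAPH_API = "https://graph.facebook.com/v2.8"   # version current as of late 2016
PAGE_ID = "YOUR_PAGE_ID"                        # placeholder
ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"         # placeholder; needs publish rights

def create_live_video(title: str, description: str) -> dict:
    """Request a new live video and its RTMP ingest URL from the Graph API."""
    resp = requests.post(
        f"{GRAPH_API}/{PAGE_ID}/live_videos",
        data={
            "title": title,
            "description": description,
            "access_token": ACCESS_TOKEN,
        },
    )
    resp.raise_for_status()
    # The response carries a stream_url that an RTMP encoder pushes the feed to.
    return resp.json()

if __name__ == "__main__":
    video = create_live_video("Fall launch event", "Streaming live from the studio")
    print(video.get("id"), video.get("stream_url"))
```

Anything pushed to that ingest URL is exactly the kind of third-party content the new monitoring program would need to score in near real time, which is why the speed hurdle Candela cites matters for the Live sales pitch.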