Dive Brief:
- A video that inaccurately identified a student survivor of last week's school shooting in Parkland, FL, as a paid actor reached the top of YouTube's Trending page, according to Bloomberg.
- The video repurposed a Los Angeles TV news clip from last summer featuring student David Hogg, who has been outspoken in favor of gun control, under the description "David Hogg the Actor." It sat at the top of Trending for several hours on Feb. 21 before YouTube removed it.
- The news comes as calls for tighter regulation of social media platforms like YouTube, Twitter and Facebook continue to grow despite those companies' internal efforts to improve transparency and content quality. All three recently testified before Congress to explain how Russian actors and bots used their platforms to spread fake news and misleading, divisive messages around the 2016 presidential election.
Dive Insight:
This isn't the first time that fake news stories have been promoted by digital platforms in the wake of a major, tragic news event. Following the Las Vegas shooting last fall, Google's top news stories section prominently showed a thread from the anonymous message board 4chan that misidentified the perpetrator, and Facebook's Crisis Response hub showed an article from the far-right publisher Gateway Pundit spreading the same false information.
For marketers, the news is additionally worrying because YouTube has recently made considerable efforts to prevent exactly this type of slip-up. Its parent company Google has made a lot of noise about plans to hire 10,000 employees to more carefully monitor videos and keep ads from appearing next to controversial content. That the Parkland conspiracy video not only reached No. 1 on Trending but remained there for hours without being flagged by either the platform's machine learning algorithms or its human reviewers could come as another blow to YouTube's credibility as a brand-safe site.
To date, the digital media space has been more open and self-regulating than traditional channels, but the continuing spread of misinformation and offensive or inappropriate content has marketers and the public wondering whether social media companies and the major online platforms can do enough without outside oversight. In a worrying trend, the same artificial intelligence many of these companies use to identify misleading content is also being used more frequently to create it: Bloomberg cited research showing that AI is getting better at producing fake videos and audio that look and sound realistic but have been entirely fabricated.