Dive Brief:
- Bing has introduced a “Fact Check” label on search results for news, major stories and webpages to give people additional information about which online sources are providing trustworthy content, per a Bing blog post.
- The post stated Bing may apply the label to any page that includes schema.org ClaimReview markup, but pages that don’t meet the criteria for the tag might not get the label in search results (a sketch of the markup follows this list). The criteria include: the analysis must be transparent about sources and methods, with citations and references to primary sources; claims and claim checks must be easily identified within the body of the fact-check content; and the page hosting the ClaimReview markup must include a brief summary of the fact check and the evaluation.
- Bing is joining Google and Facebook in fighting fake news with the new label, as reported by International Business Times and others.
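As a rough illustration of what the ClaimReview markup Bing looks for can resemble, the sketch below builds a minimal JSON-LD object and wraps it in the script tag a publisher would embed in the page. The property names follow schema.org/ClaimReview, but the URLs, organization name, claim text and rating values are hypothetical placeholders, not examples from Bing's post.

```python
import json

# Minimal, illustrative ClaimReview object. Property names follow
# schema.org/ClaimReview; the URL, author, claim and rating are made up.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/checks/example-claim",  # page hosting the fact check
    "datePublished": "2017-09-15",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "claimReviewed": "Example claim being evaluated",  # brief summary of the claim
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,          # position on the scale defined below
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # human-readable verdict surfaced with the label
    },
}

# Emit the JSON-LD script tag as it would appear in the page's HTML.
print('<script type="application/ld+json">')
print(json.dumps(claim_review, indent=2))
print("</script>")
```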
Dive Insight:
The issues of fake news and brand safety have both made headlines this year. Bing’s new Fact Check label falls squarely into the category of fighting fake news rather than protecting brand safety. It is geared toward people searching for information on breaking news and noteworthy stories, and is designed to steer them toward credible sources rather than websites presenting blatantly false narratives.
While Bing's role in search and news is dwarfed by Google's, the announcement underscores how a growing number of digital platforms are embracing strategies for fighting fake news, an important step toward rebuilding trust with users. That trust is necessary to support brands' efforts to engage consumers in a positive way on digital channels.
Facebook has also been taking steps in the same direction after being hit with charges that its platform was used to propagate fake news throughout the 2016 presidential election and that it profited from ads running around those fake news items. The issue became a big enough headache for the social media giant that CEO Mark Zuckerberg released a manifesto last November on how the company would handle the challenge. This year, Facebook has taken a number of steps to improve its ability to uncover what it considers “bad” content, including a third-party fact-checking process and tools like artificial intelligence that add an automated layer to identifying content miscreants.