A number of advertisers have pulled ad campaigns from the Google Display Network after brand messages were associated with extremist content. Those brands include (at the time of writing) Marks & Spencer, McDonald's, RBS, Lloyds Bank, Sainsbury's, Audi, Argos, HSBC, The Guardian and HM Government.

The problems relate primarily to ads across the YouTube platform, with ads appearing alongside content that promoted racist and discriminatory views, sexual violence and other forms of extremely graphic content. The Guardian newspaper pulled its advertising after its branded message appeared alongside content published by the far-right group Britain First.

The problem with exclusions


Google has since issued an apology to its advertisers, who naturally don’t want to be associated with the sort of content that their brands were appearing alongside.

Speaking at Advertising Week Europe on Monday, president of Google EMEA Matt Brittin said that Google was taking steps to raise standards and working hard to clean up its network.

“Within YouTube and the Google display network we need to consider, what do we categorise as being safe for advertising? So we’re reviewing those policies and looking at how we define hate speech and inflammatory content,” said Brittin.

“We also found in some cases advertisers had the controls, but weren’t fully using them as they are quite granular. If the controls are there, but they’re too complex that’s our problem. So we’re simplifying the controls and looking to set the defaults to a higher level of safety.”

The ‘controls’ Brittin refers to are settings within the Google Display Network that allow advertisers to determine where their ads appear and around what content. These controls, including domain and app exclusions and negative targeting, have been heavily promoted by activist groups such as ‘Stop Funding Hate’, which is encouraging brands to pull advertising from certain areas of the mainstream media, but they are far from flawless.

Cleaning up its act


Google has insisted that it is doing what it can to address the issue of ads appearing alongside inappropriate content.

“Within 24 hours, 98% of flagged content is reviewed, but we can go further and faster, and expedite more in that respect,” Brittin stated.

“On YouTube we took 300 million videos out of ad monetisation because they weren’t appropriate for advertising last year, and 100,000s of sites out of the AdSense network. So we’re always looking at how we can make that safer and clearly we need to do more.”

But how and when Google will clean up its ad network to a level that appeases advertisers remains to be seen. Google, like social ad platforms such as Facebook, sees itself as a technology provider rather than a media organisation, and this definition is being repeatedly tested as governments look to further regulate digital advertising content and placement. These platforms want to avoid the levels of regulation that govern media companies, but they also need to secure the trust of their advertisers.