
Google has been explaining how it is tackling problems with advertising and content. But the lines are not always clear

Last month, Google called together 18 journalists from across Europe and the Middle East to its European headquarters in Dublin, Ireland.

The official reason for the congregation was to give the press some background on Google’s ad offerings, with a specific focus on brand safety. Back in March, the search giant had come in for sharp criticism when it emerged that the ad-placement algorithms on its YouTube video streaming platform had been juxtaposing wholesome family brands with terrorist recruitment videos. The company is keen to show how it has tackled the many issues this raised.

So the meeting was also effectively a war briefing to give an update on the ongoing struggle between Google and ‘bad actors’, as its management likes to call those seeking to misuse its services.

However, the company doesn’t just struggle with ad fraudsters, copyright thieves, sellers of illegal or immoral services, posters of criminal and upsetting content and their ilk. It also has to deal with more innocent offenders: ads and content that don’t fit within the guidelines, accidental clicks, badly written programs that generate false traffic without malice, and so on.

Andres Ferrate, Google’s chief AdSpam advocate, says: “Google is investing in defending our ad systems against invalid traffic because we are at this interesting intersection of advertisers, publishers and users. And being at the intersection of all of those stakeholders, we believe that trust is the standard type of currency with which we are transacting.”

Google has numerous platforms and vehicles through which it interacts with those three parties. The three it focused on most at the Dublin meeting were its search advertising services, its Google Display Network and YouTube.

There is often overlap between users, advertisers and publishers. Indeed, the three can easily be the same person: a marketing manager, say, who puts a video of his product on YouTube, searches Google for rival brands and buys space through the DoubleClick ad exchange.

While Google’s response to bad actors needs to be firm enough to discourage them, the war it is fighting needn’t be to the death. The platform’s responses need to be measured and proportionate, generally stopping short of drastic measures such as criminal proceedings or exclusion from further use of its services.

Although the team in Dublin admit that a largely automated system overseeing an almost inconceivable amount of data will never be perfect, they are determined that brands’ ads will not end up next to terror videos again.

Measures to increase security include more sophisticated labelling of content. With 400 hours of YouTube video uploaded every minute (more than 65 years of footage a day), Google handles far too much material for humans to oversee everything. But for 15 years it has been using machine learning and artificial intelligence to monitor the internet, including its own properties.

Not only is that AI constantly improving, but Google is also continuously re-evaluating its policies on what is acceptable content. Jessica Stansfield, EMEA head of global product policy, is keen to emphasise that constant evaluation doesn’t necessarily mean constant change; Google doesn’t keep shifting the goalposts, but it is always asking itself whether the goalposts are in the right place.

It has tightened the default settings governing what an ad can be served against when a video is uploaded. Advertisers are free to include or exclude categories they would like to associate with or avoid (think sports, news, graphic violence, religion…), but if a video is posted unlabelled, the default is that no ads will run against it. Nor will ads now be served on YouTube channels with fewer than 10,000 views, meaning that video posters must have at least a bit of a track record before they can start monetising their content.
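To make those default rules concrete, here is a minimal Python sketch of the decision logic described above. Every name, variable and data structure in it is a hypothetical illustration, not Google’s actual implementation; only the broad behaviour (no labels means no ads, small channels are not monetised, advertisers can exclude categories) comes from what the company described.

```python
# Illustrative sketch only: hypothetical names, not Google's real system.
MIN_CHANNEL_VIEWS = 10_000  # monetisation threshold mentioned above

def ads_may_run(video_labels: set, channel_views: int,
                advertiser_exclusions: set) -> bool:
    """Decide whether ads can be served against a video."""
    if channel_views < MIN_CHANNEL_VIEWS:
        return False  # channel has no track record yet
    if not video_labels:
        return False  # unlabelled videos default to carrying no ads
    if video_labels & advertiser_exclusions:
        return False  # advertiser has excluded one of these categories
    return True

# A labelled sports video on an established channel, with an advertiser
# that excludes graphic violence:
print(ads_may_run({"sports"}, 250_000, {"graphic violence"}))  # True
print(ads_may_run(set(), 250_000, {"graphic violence"}))       # False: unlabelled
```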

It can be difficult to categorise videos, as the same footage can be interpreted in many different ways. While a brand might be happy to associate itself with wholesome videos of football, motor racing and other sports, it might be less happy to find itself next to the same footage edited to show only the most violent tackles and dangerous crashes. Conversely, even the most unsavoury content might be acceptable in the context of a snippet in a news bulletin.

It pays, of course, for brands to go into Google’s dashboards and fine-tune their settings to make sure they appear next to just the right videos, and not the wrong ones, although Google has also tightened the defaults. For people posting videos, it is worth adding tags and descriptions to give footage more context, so that it is not accidentally tarred with the wrong brush for want of a better description.

Among the bad actors Google targets are those selling morally dubious goods and services. It has banned payday loans and pharmaceutical advertising, for example.

And at the same time it is tackling ad fraud. Stephan Loerke, CEO of the World Federation of Advertisers, the trade body for marketers, who had been flown in from Brussels by Google, says the industry has only started to pay attention to the problem comparatively recently.

“If you just look at ad fraud, how come no one has ever talked about ad fraud until one year ago?” he asks. “Ad fraud is anything between 10 and 30 per cent of client spend, and funnily it just didn’t make headlines. And why? Because in fact there is only one loser in the ecosystem of ad fraud, and that’s the client. If the client hasn’t the necessary set up, hasn’t the necessary knowledge, no one – and certainly not partners in the ecosystem – will be raising this. That’s the simple reality.”

He adds: “What’s particularly worrying is, given the way ecosystem partners are remunerated, ad fraud actually benefits more legitimate ecosystem partners than criminals. That’s simply the arithmetic of it.”
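To put hypothetical round numbers on that arithmetic: if a client spends €1m on digital media and 20 per cent of it buys fraudulent traffic, €200,000 delivers nothing, yet every intermediary paid as a percentage of media spend still collects its fee on that €200,000. The client alone absorbs the loss, which is why, on Loerke’s account, no one else in the chain was motivated to raise the alarm.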

March’s YouTube scare proved to be a wake-up call, he said, and marketers are now looking more closely at what happens to their advertising online.

However, chasing ad fraudsters can be “like having a game of chess against an opponent who is constantly changing the rules”, says Ferrate, who describes himself and his team as “warrior scientists”.

Bad actors are always testing what they can get away with, says Stansfield.

“People really do like to play the line, or figure out the line of what our policy is,” she says. “So we are always trying to make sure that we are aware of that and that when people get too close to the line or go over the line we are taking consistent action there. That is difficult. It’s not an easy discussion always.”

Google is also giving more control to the users themselves. In some ways, opening up control of how our data is used is preparation for the European Union’s General Data Protection Regulation (GDPR), which comes into force in May.

The platform lets users fine-tune their settings through “My Account”, which it introduced in 2015. This gives powerful and flexible control over what data is gathered by Google and how it is used. About half of all users who visit it do make tweaks, says Matt Brittin, president of EMEA business and operations at Google. He doesn’t say what percentage of users visit it in the first place.

Google also tries to explain how our data is being used, and what the benefits can be. That seems to be the company’s new mantra: explain, explain, explain. The Dublin war briefing was to explain to journalists, and therefore to the wider marketing community, what steps it has been taking. It wants to explain to users what data it would like to use, and why. And even bad actors deserve a chance to rectify their wrongdoings, since their failings stem not always from malice but from ignorance.

The WFA’s Loerke says: “We think that the consequences need to be proportionate for websites. If a website has an ad format that, for some reason, is in breach, it can’t be that this website gets blacklisted and therefore is potentially put at risk in terms of economic viability. So we think there need to be thresholds, there needs to be the ability to give time to correct, and ideally we think that it should be only the ad formats that are not in compliance that are filtered out, rather than the entire website.”

Stansfield echoes this sentiment. She says: “This is something we recognised in the last year, especially with our publishing clients. There was kind of a black-or-white decision, which we don’t always like to have, of what we could do for their sites. If there was any kind of hateful content or derogatory content we can only really issue them a warning or take down their site. So we decided that we need more flexibility in our enforcement as well, where we rolled out page-level policy actions that enabled us to remove individual violating pages. We could demonetise those pages only but allow the rest of the site to be maintained.”
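A small sketch may help picture what those page-level actions change in practice. This is a hypothetical Python illustration of the granularity Stansfield describes (demonetising individual violating pages rather than taking down the whole site), not Google’s actual enforcement code.

```python
# Illustrative only: hypothetical names, not Google's enforcement system.
def apply_page_level_actions(site_pages, violates_policy):
    """Split a site's pages into demonetised and still-monetised sets."""
    demonetised = {url for url in site_pages if violates_policy(url)}
    return demonetised, set(site_pages) - demonetised

# One violating page loses its ads; the rest of the site keeps earning.
pages = ["/home", "/reviews", "/forum/thread-123"]
bad, ok = apply_page_level_actions(pages, lambda url: url == "/forum/thread-123")
print(bad)  # {'/forum/thread-123'}
print(ok)   # {'/home', '/reviews'} (in some order)
```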

There is a constant ballet going on between users, advertisers and publishers, with Google trying to make sure no one treads on anyone else’s toes.

Google must keep evaluating, keep explaining and keep evolving. Digital isn’t standing still, so approaches to bad content, advertising and data shouldn’t either.