Humans are the biggest problem with digital advertising

The digital advertising industry has a big problem, one that creates a perverse incentive to churn out articles at a ridiculous rate, promoting quantity at the expense of quality. That needs to change.

That one big problem is really a thick knot of smaller problems, all entwined, which makes the whole thing harder to unravel. Over on the Monday Note, Frederic Filloux is in the middle of grappling with one of those issues – that digital ads are too often sold at an undifferentiated price, regardless of the quality of the articles against which they’re sold. It’s a topic he takes extremely seriously, stating:

“The more I get into these issues, the more I’m convinced that betting on quality, finding ways to assign a higher economic value to good journalism is the only way to save it.”

In the first part of his examination of how the industry can untangle that knot, he looks at the factors that led publishers to neglect tagging their content accurately to reflect the quality of each individual article, listing ten ‘stated signals’ that could be managed from a publisher’s CMS to do just that. The second part looks instead at ‘inferred signals’ that could be examined by a third party like Google to determine the quality of that content.
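To make the idea concrete, here is a minimal sketch of what attaching stated signals to an article in a CMS might look like. The field names and values below are illustrative guesses for the sake of the example, not Filloux’s actual list of ten:

```python
# A hypothetical sketch of "stated signals" as CMS metadata attached to
# an article at publish time. Field names and values are illustrative
# guesses, not Filloux's actual list.

article_metadata = {
    "headline": "Example investigation headline",
    "author_id": "jane-doe",
    "original_reporting": True,    # first-hand reporting vs. aggregation
    "named_sources": 6,            # number of sources cited by name
    "editing_level": "desk+legal", # depth of editorial review
    "reporting_hours": 40,         # effort invested in the piece
    "story_type": "investigation", # vs. commodity/wire rewrite
}

# A CMS could expose fields like these alongside the ad call, letting
# buyers differentiate this article from a five-minute wire rewrite.
```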

He argues that such a step is necessary to counter the “absurdity” of modern digital advertising – and he’s absolutely right. The issues facing publishers are existential, and by this point the inevitable effects of that perverse advertising model (fake news, increased uptake of adblocking software, whatever-we’re-calling-clickbait-this-week) are so well known that publishers are setting up entire teams to deal with each of those new heads of the advertising hydra.

Filloux argues that many of those problems could be solved if publishers and advertisers alike could reward ‘quality’ content, and his examination of what criteria can be used to measure ‘quality’ is ambitious, laudable and very smart. But, cynical as it sounds, there’s one huge impediment to any implementation of those criteria: publishers can’t play nice with one another.

Humans all the way down

There have been some huge strides made in algorithmically determining the ‘quality’ of news content. Over the weekend, a London hackathon produced the fantastically named Not Impressed, a tool designed to automatically measure the trustworthiness of news sites in a way that manually updated blacklists could never hope to match. TechCrunch’s Josh Constine explains:

“If you see a low Alexa rank, negative sentiment analysis, high bounce rate, domain in the middle of nowhere, and a clickbait alert, you could infer that a link contains fake news.

“Notim.press/ed’s team is also working on a Chrome plugin that puts a truthfulness score right on Google search results so you know if something’s likely fake before you click.”
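As a rough illustration of the approach Constine describes, here is a minimal sketch of how several weak signals, none conclusive on its own, might be combined into a single verdict. The thresholds and field names are hypothetical assumptions, not the actual Not Impressed implementation:

```python
# A minimal sketch of a multi-signal fake-news heuristic. Thresholds and
# field names are hypothetical, not Not Impressed's actual logic.

def likely_fake(signals: dict) -> bool:
    """Flag a link as probably fake when a majority of signals agree."""
    flags = [
        signals["alexa_rank"] > 100_000,   # obscure, low-traffic site
        signals["sentiment"] < -0.5,       # strongly negative tone
        signals["bounce_rate"] > 0.8,      # readers leave immediately
        signals["domain_age_days"] < 90,   # freshly registered domain
        signals["clickbait_score"] > 0.7,  # headline flagged as clickbait
    ]
    return sum(flags) >= 3  # require agreement, not any single signal

example = {
    "alexa_rank": 450_000,
    "sentiment": -0.7,
    "bounce_rate": 0.9,
    "domain_age_days": 30,
    "clickbait_score": 0.9,
}
print(likely_fake(example))  # True – all five signals fire
```

The design point is the one Constine makes: no single measure is conclusive, but agreement across several is suggestive.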

But, as we’ve seen with Facebook’s very own fake-news problem, algorithms are far from infallible. And publishers have historically had a hard time accepting qualitative measurements from third-party algorithms – just look at how up in arms we get when Google doesn’t sort our own content onto the first page of its search results.

But there’s a much bigger problem. Algorithms are ultimately fallible because they are devised by people, who are unfortunately fallible when acting with good intentions but often scarily competent when behaving badly. And historically, publishers have absolutely not accepted anyone but themselves making qualitative assessments of their articles, through a combination of pride, a desire to remain independent, and fear of losing cachet and revenue as a result of any ensuing censure. In the UK some papers – rightly or wrongly – won’t sign up to joint press standards boards for those reasons.

In the second part of his examination, that of the ‘inferred signals’, Filloux acknowledges that some of the ex-post criteria from which a standardised measure of ‘quality’ could be derived depend on human judgement, noting that:

“Public Interest Level can only be evaluated by a human. It is an important element, a key differentiator for a story to emerge from the static of commodity news”

and

“The Subjective Signals bucket involves editorial judgements in the literal sense. Again, I don’t see algorithms able to make a cold-blooded evaluation of how well a story is written, balanced, etc.”

Additionally, some of the earlier criteria, such as the Publication- and Author-Quality-Score measurements, rely on other human judgements like Pulitzer awards. Those are absolutely useful for highlighting the prestige of a piece of content, and Filloux is open about the fact that the strength of the criteria lies in their combination – no one score on its own is especially useful – but ultimately all the criteria come down to human judgements.

There are ways to ameliorate those problems: getting enough publishers, advertisers and third parties involved would reduce the risk of anyone gaming the system, smooth out the overall quality curve and provide a more accurate mean of article quality.
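As a rough sketch of that point, a composite score might average each signal across many raters before weighting and combining them, so that no single participant’s judgement dominates. The signal names, weights and 0-to-1 scale here are assumptions for the sake of illustration, not a proposed industry standard:

```python
# A sketch of combining many noisy, partly human-judged signals into one
# composite quality score. Names, weights and scale are assumptions.

from statistics import mean

def composite_quality(signal_scores: dict, weights: dict) -> float:
    """Weighted mean of per-signal scores, each averaged across raters."""
    total = sum(weights.values())
    return sum(
        weights[name] * mean(scores)  # many raters smooth individual bias
        for name, scores in signal_scores.items()
    ) / total

scores = {
    "public_interest": [0.9, 0.7, 0.8],  # human judgements, three raters
    "author_quality":  [0.6, 0.6, 0.7],
    "writing_quality": [0.8, 0.5, 0.9],
}
weights = {"public_interest": 0.5, "author_quality": 0.2, "writing_quality": 0.3}

print(round(composite_quality(scores, weights), 2))  # 0.75
```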

But I honestly can’t see a notoriously fractured sector agreeing en masse to a third-party qualitative assessment, even one in which each publisher had a representative involved in devising and updating the criteria. Any overall qualitative assessment will, quite by design, reveal the quality of a publisher’s content relative to its own other articles and very possibly to its competitors’. That’s not a zero-sum game and, naming no names, some of the bigger publishers rely on churning out articles of dubious veracity and quality in order to sell ads against the huge audiences that strategy attracts.

Also, if only a majority of publishers and advertisers sign up to any qualitative measurement scheme, there will still be those who choose to work around it and continue to create and fund fake news. The perverse incentive to pump out low-quality news would remain, and while an as-close-to-objective-as-possible measurement would certainly help publishers internally – letting them fund the truly valuable pieces of journalism they produce by valuing them appropriately – that huge overall problem would persist.

It’s very likely that Filloux has some great ideas in mind for how those issues could be addressed, and I should stress that at no point does he claim this thought experiment is anything other than a work in progress, or that it is intended to do anything beyond rewarding good-quality journalism. Here at TheMediaBriefing we’ll be reading the rest of his entries on this topic with great interest, and suggest you do too.

Cynically, though, I can’t help but worry that it’s just not in human nature to vote for something that’s for the good of the majority if it means incurring personal expense. The digital advertising industry needs fixing – it just seems unlikely that publishers and advertisers will ever unite to do something about it. Such a change will have to be forced upon them, and there will be as many losers as winners if that happens.
