Is Technology Smart Enough To Fix The Fake News Frenzy?
By John Naughton

The debate about “fake news” and the “post-truth” society we now supposedly inhabit has become the epistemological version of a feeding frenzy: so much heat, so little light. Two things about it are particularly infuriating. The first is the implicit assumption that “truth” is somehow a straightforward thing and our problem is that we just can’t be bothered any more to find it. The second is the failure to appreciate that the profitability, if not the entire business model, of both Google and Facebook depends critically on them not taking responsibility for what passes through their servers. So hoping that these companies will somehow fix the problem is like persuading turkeys to look forward to Christmas.

What we learned in 2016 was the depth of the hole that digital technology has enabled us to dig for ourselves. We’re now in so deep that we can barely see out of it. Liberal democracy could be facing an existential threat, for it’s not clear that it can endure if its public sphere becomes completely polluted by falsehoods, misapprehensions, ignorance, prejudice, conspiracy theories and hatred.

In that sense, we are confronted by the question that obsessed the young Walter Lippmann in the early decades of the 20th century: was it possible for a complex, industrialised society to remain a democracy when the vast mass of its citizens were unable to comprehend the decisions that had to be made by government in their name?

Rereading Lippmann, particularly his Public Opinion and The Phantom Public, is a sobering experience at the moment. If he was deeply pessimistic about the prospects for democracy a century ago, imagine what he would have thought about our current condition. For Lippmann, the problem was just that the average citizen couldn’t comprehend the complexities of public policy. The events of 2016 taught us that many citizens have no inclination even to make the effort to understand, while the internet has enabled them to crowdsource their indignation and incomprehension, with impressive political effects. See, for instance, the “Pizzagate” conspiracy theory, which falsely connected some Democratic party members with a child-sex ring. Despite being widely debunked and described by the police as “fictitious”, it was still believed by 9% of registered voters.

But if the technology got us into this hole, might it not help us climb out of it? Could there be a technical fix for, say, the fake news problem? The prospects aren’t great. The basic reason is that dubious online content is produced, ultimately, by people, and human beings are devious and creative, so detecting false or misleading posts would require smarter AI systems than currently exist. (For an illustration of the scale of the challenge, see Donald Trump’s campaign tweets.)

That’s not to say that attempts at algorithmic quality control aren’t worth trying – just that they might produce counterintuitive results. In the 1990s, when Google’s co-founders came up with the PageRank algorithm for determining how web pages should be ranked, they created an automated version of human peer review: the value of a page was determined by the number of other pages that linked to it. But if many of those web pages are meretricious, then the quality of the assessment is flawed, which is how Google got into the mess so graphically chronicled by my Observer colleague Carole Cadwalladr in these pages a few weeks ago.
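For readers who want to see the idea in miniature, here is a sketch in Python. The five-page link graph and the page names are my own invention, and the 0.85 damping factor is simply the figure usually cited for the original algorithm; none of this is Google’s actual code. It shows how a handful of meretricious pages, all linking to a worthless one, can push it above an honest page:

# A minimal, illustrative PageRank in the power-iteration style.
# Everything here is a toy, not Google's implementation.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # A dead-end page shares its rank with everyone.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Three meretricious pages all pointing at "spam" are enough to make
# it outrank "honest", even though no human vouched for it.
links = {
    "honest": ["spam"],
    "spam": [],
    "b": ["spam"],
    "c": ["spam"],
    "d": ["spam"],
}
print(pagerank(links))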

To illustrate how difficult getting at the “truth” can be, consider science, which, after all, represents the most serious attempt our culture has made to achieve accuracy and dependable knowledge. Peer review is one of the central pillars of this enterprise, but it turns out that it has weaknesses. “Statistical mistakes are widespread”, says one survey, and peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate.

This gloomy verdict is confirmed by Statcheck, a program written by a Dutch researcher and employed to examine the statistical inferences drawn in scientific papers. In 2015, the program took less than two hours to read through more than 30,000 papers published in eight respected journals between 1985 and 2013. Its conclusion: about half of those papers contained a statistical error.
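The trick Statcheck exploits is that a conventionally reported result is internally redundant: given the test statistic and the degrees of freedom, the p-value can be recomputed and compared with the one the authors printed. The short Python sketch below illustrates that idea; the regular expression, tolerance and function name are my simplifications, not the program’s actual code:

import re
from scipy import stats

# Illustrative Statcheck-style check: parse an APA-style result such
# as "t(28) = 2.20, p = .04", recompute the two-tailed p-value from
# the t statistic and degrees of freedom, and flag any mismatch.
REPORTED_T = re.compile(r"t\((\d+)\)\s*=\s*([\d.]+),\s*p\s*=\s*(\.\d+)")

def check(sentence, tolerance=0.005):
    match = REPORTED_T.search(sentence)
    if match is None:
        return None
    df = int(match.group(1))
    t_value = float(match.group(2))
    reported_p = float(match.group(3))
    recomputed_p = 2 * stats.t.sf(t_value, df)  # two-tailed p-value
    return {"reported": reported_p,
            "recomputed": round(recomputed_p, 4),
            "consistent": abs(recomputed_p - reported_p) <= tolerance}

print(check("The effect was significant, t(28) = 2.20, p = .04"))
# t(28) = 2.20 actually gives p of about .036, so .04 is consistent
# within rounding; a reported "p = .01" would be flagged.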

What this suggests is that even in an area of human activity that is professionally committed to getting things right, accuracy can be hard to achieve and truth even more elusive.

The most one can claim for scientific knowledge is that it is provisional and subject to revision, but at least it is supported by evidence that can be sceptically assessed by others. Not much of that applies in public policy and none of it at all in politics. Which is why we’re in this mess.

The Guardian
