According to the Washington Post, by July of this year the President of the United States had made 20,000 false or misleading claims during his four years in the Oval Office, described by the reporters as a “tsunami of untruths”1. In the Trump era, fact-checking has gone mainstream, becoming a central part of how people consume the news: the number of sites offering this service worldwide has grown by 200% since he took office.
Meanwhile, increasingly accessible AI and machine learning technologies are introducing new ways of manipulating reality. Deepfake video and audio present distortions of the truth, in which people can be shown saying and doing things that never took place. In September, a Guardian op-ed written by an AI language generator demonstrated how indistinguishable automatically generated content will soon be from human prose.
As these tools spread and improve, the public will have to contend with a world where it is increasingly difficult to tell the difference between AI and human-generated content, between the words said by a person and those digitally put into their mouth. Getting to the “truth” will require the ability, whether from man or machine, to identify misinformation, disinformation or obfuscation.
And yet even before these more futuristic tools can make their impact felt in enabling large-scale deception, we already live in a world where fiction travels fast.
Even without the role of bots, fake news stories spread six times faster on Twitter than real news, indicating an element of human nature that is drawn to the novel, the controversial. According to the Center for Humane Technology, the propensity for misleading or sensationalist news to be retweeted and spread online is exacerbated by the business model of technology platforms, in which advertisers pay for our attention, captured most abundantly when we are depressed, shocked and polarised. The result is that as we have moved our lives increasingly online, we have created a world where groups of people experience different universes of truth and information, with increasingly little shared reality.
Some early steps have been taken to address this. In January, Facebook announced a ban on manipulated photos and videos, and has looked to accelerate the development of tools to spot them. Meanwhile, Twitter introduced labels for misleading, disputed or unverified information relating to Covid-19, even flagging a tweet from Donald Trump for the first time in May, and is trialling crowdsourcing for rooting out propaganda and misinformation.
However, this crisis of truth is not the concern of technology and media companies alone; it has consequences for all aspects of society.
In 2020, the devastating wildfire seasons in Australia and California, and warnings about accelerated ice sheet loss, have given us some of the clearest signals yet of the disruption already being caused to the planet’s climate system. And yet, the most engaged-with climate change content of the year is a lie from a known conspiracy site.
When Facebook launched its Climate Science Information Center in September, a space that aims to provide access to high-quality, factual content on climate change, it was criticised by groups including Greenpeace and Friends of the Earth for doing nothing to address the mass spread of climate disinformation on its platform.
Business has a role here. InfluenceMap reports that a substantial 30% of influential companies are still directly undermining climate policy through lobbying activities, while 90% of the world’s largest industrial companies retain links to trade groups that oppose it. Despite a series of fanfared announcements during 2020, closer examination suggests that none of the plans announced by the oil majors comes close to aligning their actions with the urgent 1.5°C global warming limit outlined in the Paris Agreement.
If companies are to be believed on their sustainability and low-carbon-future credentials, they need to demonstrate consistency between their words and actions, as well as an understanding of the aggregate impact of their activities that extends beyond individual gestures. Nothing undermines confidence and credibility more than conflicting messages.
ESG is fundamentally based on the need for transparency around the non-financial risks, opportunities and impacts of companies, seeking to uncover the truth about these aspects in a way that enables comparison and decision-making by investors and other stakeholders. However, it is valid to question how effectively inch-thick CSR reports, simplified sustainability metrics and black box ratings achieve this.
Analysing a database of 2019 corporate reports, Datamaran finds that on topics such as climate change and human rights, generic descriptions – “fluff” – overwhelmingly outnumber specific disclosures of the actions being taken to manage these risks. 2020 has given us several examples of ESG dissonance, in which there seems to be a reality gap between the on-paper performance of a company and the truth about its practices (see Boohoo.com). Despite ever-increasing data availability, trust in company sustainability reporting remains low.
Reporting frameworks are not an exact science, and while shading mainly between the lines, a company may be able to paint a range of different self-portraits, some more revealing or flattering than others. Despite the boom in data vendors and tech-savvy ratings agencies, there remain substantial areas of inconsistency between competing approaches, and abundant warnings of green, SDG, social and other “washing”.
In response, there is industry momentum for greater standardisation and simplification of reporting methodologies, and emphasis on metrics that capture the materiality of environmental and social impacts in comparable financial terms. For companies, the extra effort involved in measuring the true impact of activities underlines the importance of distinguishing and prioritising the most material aspects, while independent verification may lend necessary weight to the veracity of non-financial data.
While deepfakes may not yet have convinced us that Boris really wishes you’d voted for Jeremy, and AI copywriters still have a limited, surrealist grasp of reality, the signs are that, even without new technology, trends towards polarisation, the predominance of personal belief over expert fact, and distrust of traditional sources of information will continue. If companies are to establish a deeper truth, one that is more resilient to potential misinformation risks and cuts through stakeholder cynicism, they must focus on the aspects they can control: understanding the real impact of their activities, being transparent about the right things, and matching their words with consistent action.
1 For balance, a 2017 New York Times study found that in his first eight months in office, Trump told blatant lies (a narrower definition that excludes claims that are merely misleading) at more than 50 times the rate of his predecessor, Barack Obama, who was guilty of a total of 18 over his eight-year tenure.