Ethics in tech

Jan 21, 2019 | Monthly Briefing

Ethics in tech remains a massive global topic, with every business now arguably a technology company. The challenge is how companies can benefit from technologies such as Big Data, AI and blockchain while applying them ethically to maximise their positive value for society.

“Have you tried switching it off and on again?” – so goes the common refrain for solving a technology problem. You could be forgiven for hoping this would fix the massive social challenges that have emerged from the digital revolution. If only it were that simple. Technological development continues at a rapid pace, and problems ranging from data privacy abuses and fake-news misinformation to mental health concerns and AI-induced job losses remain unresolved. In 2019, ethics in tech will remain a massive global topic.

Media coverage may have focused on the tech giants of Silicon Valley, but companies in every sector should take note. The same systems and platforms are becoming mainstream in how any business operates. Big Data, AI and blockchain are just a few examples of the tech developments companies are looking towards to drive efficiency and growth. Arguably, every company is now a technology company.

The challenge is how companies can benefit from such technologies and apply them ethically to maximise their positive value for society.

One approach has been to develop guiding principles for the ethical development and use of tech. Microsoft’s AI Principles, Accenture’s data-focused Principles for Digital Responsibility and the Contract for the Web from Tim Berners-Lee, inventor of the World Wide Web, are just a few examples.

Ethical frameworks around tech are welcome, but they are problematic. The speed of technological development has outpaced our understanding of its impact, so a company looking to use such principles as an ethical guide may find that they do not cover the full suite of technological systems or the full range of implications. What’s more, some impacts of tech require value judgements that are not universally agreed. Self-driving cars, for instance, will require programming that determines whose welfare matters more in the event of a collision – a dilemma that draws parallels to the classic trolley problem. To such ethical conundrums there are no definitive answers, which makes it difficult to agree a defined set of moral principles to guide all companies and technologies.

We believe that every company needs to do some more fundamental thinking to define an approach that works for them.

It starts with agreeing the business’s role and place in society. At Corporate Citizenship, we work with the lens of rights, responsibilities and aspirations. In the context of ethics in tech, this means thinking about:

  • What are your aspirations as a company for your role in the world?
  • What are your rights when it comes to achieving that aspiration through technologies?
  • What responsibilities do you have with the technologies you are using to achieve these aims?

Answers to these questions are unique to each organisation we work with. They are the starting point for a tech-focused materiality exercise: mapping the technology in development or use across different departments to understand its positive and negative impacts. This requires bringing together different voices – Chief Technology Officers and specialists in business ethics, for example – to explore and prioritise the issues raised by the technologies in your business.

Demonstrating to stakeholders that you have thought about the impacts of your tech use is becoming increasingly vital. Corporates should be confident in their right to innovate and use technologies to drive growth. However, that right is balanced with responsibilities. Using a tech materiality map can inform the practical action required for companies to become the corporate citizen they aspire to be.