Log 1: What Are Digital Ethics?
09 Dec 2021

Designing ethical digital solutions is about a lot more than data ethics. It is also important to consider ethics when working with other digital concepts such as algorithms, nudging, and A/B testing.
By Peter Svarre
This article is published in connection with The Digital Ethics Compass project.
When it was revealed in 2018 that the company Cambridge Analytica had harvested personal data from millions of Facebook users, it marked the end of 20 years of technology optimism and the start of a new conversation in which digital technology is no longer unconditionally good; its dark side had been revealed as well.
The conversation has largely been about the use and misuse of personal data, and real progress has been made on both legislation and ethics when it comes to data. In Europe, we now have the GDPR; in Denmark, we have a data ethics council; and virtually all companies today know that handling personal data properly is just as important as their other CSR policies.
When talking about digital ethics, you will therefore quickly find yourself dealing with data ethics as well. But digital ethics is about more than people’s personal data. In a broad sense, it is also about how the digital revolution has created new tools, interfaces, and business models that change the relationship between people, companies, and public authorities.
It can easily become quite overwhelming, which is why, in The Digital Ethics Compass project, we have chosen to divide the concept of digital ethics into four areas:
- Data ethics is about proper (and legal) conduct when collecting and using personal data. Behaving in accordance with data ethics means protecting the human need for privacy in a digital world where data is becoming increasingly valuable. Poor data ethics can, in the worst case, leave people feeling monitored and deprived of control over their own lives. And from the companies’ perspective, the long-term consequence of poor data ethics is that customers lose faith in companies and digital services in general.
- Algorithm ethics is about algorithms increasingly being used to make decisions that have consequences for people’s lives, whether a loan application rejected by an algorithm or a self-driving car that gets into an accident. Poor algorithm ethics can, in the worst case, create a world where humans are subject to unfair, inhumane, or incomprehensible decisions for which they cannot subsequently get an explanation. It can also shift the balance of power between companies and consumers to the companies’ advantage, which can result in inefficient markets, monopolies, and dissatisfied consumers.
- Nudging ethics is about digital designers becoming better and better at using behavior-regulating methods to manipulate people’s minds. If you combine knowledge of behavior with algorithms and enormous amounts of data, you can become so good at nudging users that it turns into hypernudging, where you can in effect remote-control consumers to do certain things and think certain thoughts. Nudging is often highlighted as a technique that helps people make better, healthier, and wiser choices, such as a sign by an elevator reminding people that they can take the stairs when there is a wait. But nudging can also be used solely to promote a company’s narrow objectives. In the worst case, poor nudging ethics can lead to digital addiction and the manipulation of users, which in turn can produce passivity or digital acts of rebellion where people disconnect from the digital world entirely.
- Testing ethics is about how most digital designers and product developers today work with A/B testing or other types of tests, constantly optimizing the digital interface by live-testing different solutions on the users (a minimal sketch of how such a test works follows after this list). These kinds of tests can be perfectly fine if you just test different colors, fonts, or images on a website, but once you begin testing things that affect users’ moods, health, finances, or anything else important, you are moving into an ethical grey zone. The most familiar example is when Facebook changed the newsfeed for hundreds of thousands of people so that some saw more negative updates and others saw more positive ones, and then observed how it affected people’s moods by analyzing their own posts. The conclusion, unsurprisingly, was that people’s moods were in fact affected by their newsfeeds. In the tech world, millions of such tests are carried out every day without anyone asking for permission. Poor testing ethics can, in the worst case, result in human catastrophes in the form of people losing their pension savings, self-driving cars getting into accidents, or thousands of people feeling tempted to commit suicide.
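To make the mechanics concrete, here is a minimal sketch of what an A/B test typically looks like in code: users are deterministically bucketed into variants, and the service measures which variant performs better. Everything here is illustrative rather than taken from any real service; the experiment name, the helper `assign_variant`, and the conversion rates are all hypothetical assumptions.

```python
import hashlib
import random

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user by hashing experiment + user id,
    so each user always sees the same variant on every visit."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Simulate a hypothetical experiment where variant B converts slightly
# better. The rates below are made up purely for illustration.
random.seed(42)
TRUE_RATE = {"A": 0.10, "B": 0.12}
counts = {"A": [0, 0], "B": [0, 0]}  # [conversions, impressions]

for i in range(10_000):
    variant = assign_variant(f"user-{i}", "checkout-button-color")
    counts[variant][1] += 1
    if random.random() < TRUE_RATE[variant]:  # did this user convert?
        counts[variant][0] += 1

for variant, (conversions, impressions) in counts.items():
    print(f"Variant {variant}: {conversions}/{impressions} "
          f"converted ({conversions / impressions:.1%})")
```

Notice that nothing in the mechanism itself asks users for consent: they are enrolled silently the moment they are hashed into a bucket. Whether such a test is harmless or harmful depends entirely on what is being varied, which is exactly the ethical grey zone described above.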
As mentioned, there is already a strong focus on data ethics, where quite a bit of legislation and many ethical guidelines have been put in place. There is also a growing focus on algorithm ethics, but all the same, many companies still do not consider the use of advanced algorithms a potential ethical problem. And many companies can get away with some pretty problematic uses of algorithms without attracting attention from authorities, which lack either the capacity or the qualifications to pursue such cases. Nudging is often considered an unconditionally positive science, and very few digital designers take ethics into account when reaching for the behavioral toolbox. And A/B testing is today used by practically all digital companies without the slightest consideration of the ethical challenges.
In The Digital Ethics Compass project, we strive to put the focus on all four of these areas so that the conversation is not exclusively about data ethics.