Why Ethical Design Should Always Be Your Standard Code of Conduct

Remember: You are responsible for what you put into this world

Ill: Soulpancake

What’s the use of making a lot of money if the success and riches are built on manipulating people and making them unhappy? Is it possible to thrive in a company that lives off design that makes users spend more money than necessary on their purchases? Or do you believe we have a moral obligation, as humans, to always create good for the world? If you think we do, can we work together to establish Ethical Design as the new normal for the industry?

Our generation could be remembered as conscious designers with high integrity and strong values, who believe that product design should always be ethically evaluated before it is finalized and launched. Or we could go down in design history as people who were always willing to compromise, do what it takes to secure the profit, and keep the job. Which one do we prefer?

These questions have served me well as reminders, nudging me into deeper reflection on our work. They have helped me realize that the end result of our theoretical design processes will always be not just a product, but something that is supposed to be used, for a purpose. And if that purpose, or the consequences of using the product, could be detrimental to the ecosystem, to animals, to human beings, to the planet, to democracy, free speech, or the integrity of individuals, we as designers should know where to draw the line, and feel confident enough about our personal code of conduct to say: “No, I will not do this work.”

Stop! This is my Design limit. Photo: Nadine Shaabana / Unsplash

When the Ethics class started, I began reading the texts provided by our teacher, Sonja Rattay. I quickly realized that I had blinded myself to the fact that design tries daily to trick me into buying things that I do not want and do not need, using various behavior-influencing designs and technologies. I started to take extra notice of the annoying “dark design patterns” on the net, and the further we got into the readings, the more frustrated I became, asking myself: “How can professional designers accept being part of these shady practices?”

Screenshot from “The Dark (Patterns) Side of UX Design”.

I started to realize that the answer is most likely that sites operating within these frameworks are not primarily built by designers, but by marketing professionals who know which buttons to put where, and in what color, to make users click on them and behave in a way that is good for the business. Just look at Amazon’s site: what looks like a complete disaster from a designer’s perspective turns out to be a wet dream for the business manager.

This also made me reflect on why the industry felt it had to invent CXers, the customer experience specialists who are part of the marketing teams. Isn’t this, more than anything else, a reaction to the fact that UXers, who are taught to defend and empathize with the user, tend to oppose design patterns that do not serve the user?

Screenshot from a subscription service of a large Swedish media company, which annoyingly forced me to click five times on small grey rows of text below flashy green CTA buttons to cancel my subscription. Shady.
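As a hypothetical markup sketch (the button text, colors, and URL are made up, not taken from the actual site), the pattern described above looks roughly like this: the action the business wants gets a large, saturated CTA button, while the action the user actually came for is a tiny, low-contrast text link that only leads to the next of several near-identical screens.

```html
<!-- Hypothetical sketch of a confirm-shaming cancellation screen; not the real site. -->

<!-- The business-preferred action: a big, flashy green CTA button. -->
<button style="background: #2ecc40; color: #fff; font-size: 1.2em; padding: 12px 32px;">
  Stay and get 3 months at half price!
</button>

<!-- The action the user came for: a small grey row of text below the button,
     which only takes you to the next of several similar screens. -->
<a href="/cancel/step-2" style="color: #999; font-size: 0.75em;">
  No thanks, continue cancelling my subscription
</a>
```

An ethical version of the same screen would simply invert the visual hierarchy: the cancel action gets the same size and contrast as the retention offer.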

So how do we keep our standards high as UXers and designers? Or at least high enough to refuse unethical choices when designing? Can we afford to walk out of companies that will not allow ethical reviews of the products we are asked to design? Can we mobilize support from peers and industry leaders to give us the strength to make those choices? And in that case, how do we go about defining our own, personal, ethical codes of conduct?

Or as Mike Monteiro puts it in his book “Ruined by Design”:

“Design is a discipline of action. You are responsible for what you put into the world. It has your name on it. And while it is certainly impossible to predict how any of your work may be used, it shouldn’t be a surprise when work that is meant to hurt someone fulfills its mission. We cannot be surprised when a gun we designed kills someone. We cannot be surprised when a database we designed to catalog immigrants gets those immigrants deported. When we knowingly produce work that is intended to harm, we are abdicating our responsibility. When we ignorantly produce work that harms others because we didn’t consider the full ramifications of that work, we are doubly guilty.

The work you bring into the world is your legacy. It will outlive you. And it will speak for you.”

Mike Monteiro

Never doubt that you can make a difference with your design decisions, no matter how small. Photo: Tran Mau Tri Tam

Two interesting small-format Swedish publications on ethics from the last few years are “Digital Care” by Per Axbom (designer and expert in accessibility and usability), and “Humans and AI” by Daniel Akenine (CTO with Microsoft) and Jonas Stier (professor at Dalarna University). Both books deal with ethical and legal questions, and the authors reflect on how our society will be impacted by the fact that some people can easily buy and use advanced tech while others cannot.

Per Axbom also speaks about inclusion, and how important it is to realize that this is yet another level of design, beyond accessibility and usability.

Screenshot from another lecture on design with Hyper Island.

The books reflect on how we, from our privileged position as designers, can decide to stop taking part in creating services where people are subjected to injustice or exclusion, or are nudged through “dark design patterns” to buy things they do not need. Per Axbom gives practical advice and puts emphasis on “cognitive bias” and “confirmation errors”, our tendency to see, confirm, and reproduce what we already know and believe, as one of the most important reasons for design gone wrong.

One example of a design that has not been made inclusive enough is the digital Bank-ID, currently up for debate in Sweden. Certain parts of the population, mainly people with disabilities and the elderly, have been excluded from services they need because they could not get or use a Bank-ID, for instance to book vaccinations against Covid-19.

Books by Daniel Akenine, Jonas Stier, and Per Axbom. Photo: Aminata

Another very interesting publication is the yearly “Design in Tech Report” by John Maeda, which describes the trends that will have the biggest impact on design in the near future, as well as the most critical challenges facing design. At present, the biggest challenges are diversity in both design and tech, and design influenced by our increased use of artificial intelligence.

You will find more interesting links and facts, plus suggestions on how to work with sustainable Ethical Design, on the website designethically.com.

Screenshot from the Design in Tech Report 2018, by John Maeda.
Video presentation of the “Design in Tech Report 2018”.
Video presentation (the short version) from the Design in Tech Report 2020, by John Maeda.

Big data is another topic that we have only talked about superficially. Most companies never made an ethical analysis of how to handle user data from their services, websites, and apps when they first started gathering it with the help of Google Analytics. So what about today’s situation? Do we really comply with rules, regulations, and our ethical promises to users if, and when, we transfer their data to large third-party organizations? Or do we continuously expose them to the risk of their data being used for marketing or other purposes? Since a majority of companies and businesses in the world today use either Google Analytics or Adobe Analytics, often in combination with other services like Adjust, Branch, or AppsFlyer (built to take users directly to your app from social media), the answer is, unfortunately, that we don’t really know. We are in the hands of the providers and have to trust that they act ethically.

But there are alternatives. One piece of software that has been developed as an open-source project and has recently become much more competitive, with many new functions, is Matomo, which allows you to save all the analytics data on your own servers, so that no one else has access to your users’ data. As a result, many public authorities in Sweden use Matomo today to make sure they comply with GDPR, the General Data Protection Regulation of the EU.
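As a sketch of what self-hosting looks like in practice, this is the shape of Matomo’s standard JavaScript tracking snippet. Here `analytics.example.org` and the site ID `1` are placeholders, not a real instance: you would substitute the address of your own self-hosted Matomo server, so that tracking requests never leave your infrastructure.

```javascript
// Shape of the standard Matomo tracking snippet (placeholder values throughout):
// all tracking requests go to your own server instead of a third party.
var _paq = window._paq = window._paq || [];
_paq.push(['trackPageView']);      // record a page view
_paq.push(['enableLinkTracking']); // record outbound link clicks
(function () {
  var u = "https://analytics.example.org/"; // placeholder for your self-hosted Matomo
  _paq.push(['setTrackerUrl', u + 'matomo.php']); // endpoint receiving the data
  _paq.push(['setSiteId', '1']);   // the site's ID inside your Matomo instance
  // Load the tracker script asynchronously, also from your own server.
  var d = document, g = d.createElement('script'),
      s = d.getElementsByTagName('script')[0];
  g.async = true;
  g.src = u + 'matomo.js';
  s.parentNode.insertBefore(g, s);
})();
```

The design point is in the two configuration lines: because both the tracker endpoint and the script itself are served from your own domain, the user’s data never touches an external analytics provider.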

Now, how do we navigate all this? It’s easy to get overwhelmed, but in short: we need tools that help us make conscious and informed decisions, both when navigating the web and when working with our design models, so that we stay on the right course with our products. We also need some knowledge of ourselves and our personal values and ideas.

Business explanatory models often present us with an overlay of values, where laws, rules, industry guidelines, and standards are shown to interact and interweave with employers’ codes of conduct and our personal beliefs and morals. But what these models often forget to tell us is that there are actually new, smart, and modern methods that we can use for analyzing and defining the ethical stack and impact of our products.

We all carry multiple identities with us through the world, as well as a multitude of systems of standards, morals, and ethics, and these keep playing into our every action and decision. The differences and borders are not always easy to define: personal morals are not necessarily applicable when deciding what is ethical, and vice versa.

Venn diagram of the influences on decisions in business and design.
This classic Venn diagram shows the values that go into business decisions, but forgets to tell you that there are now several modern and simple methods to evaluate the complex consequences and ethical impact that our products may have on the world. Ill: Aminata / Miró

Ethics are already embedded in most of our decisions; as the Venn diagram above shows, our decisions are made from our core values. But a conscious discussion on ethics, where we get to think clearly about rights, obligations, and responsibilities, takes more. To succeed with ethics, we need to agree on using specific frameworks in our design processes that can help us evaluate and shape our services in a truly ethical manner.

A new, very interesting, and easy-to-use toolkit, primarily the Ethical Stack of the VIRT-EU ethical framework, was presented to us during one of the lectures by Anneli Berner, principal investigator at the Copenhagen Institute of Interaction Design (CIID). The framework and the tools in the kit have been translated from theoretical models into practical tools, primarily to help teams “navigate the ethical landscape of designing and building IoT products and services”. But this framework, with ready-made models and sheets to fill in online, is definitely helpful for anyone who aims to design and build products or services while reflecting on their impact and ethics.

The team started their discussions on a future product, using the Ethical Stack of VIRT-EU, by layering the product on five levels. Ill: The VIRT-EU project.

Traditionally, we design to fulfill practical and legal demands (duty ethics) and to address the possible and foreseeable consequences of using our product (consequential ethics). The VIRT-EU project proposes another, more complex method, to really confront the ethical dilemmas that our product, and the use of it, could contain.

The Ethical Stack and the other VIRT-EU tools propose a method in which we think of ethics as “values in action”. By melding different perspectives, or “lenses”, the tool helps us take informed looks at our products, and at ourselves as individuals. Doing the work, we ask ourselves questions from three basic perspectives: virtue ethics, the capabilities approach, and care ethics. “Is this a good product, does it do good?” (virtue ethics). “What technical and societal limitations exist, and how could they influence both production and use?” (capabilities approach). And finally, listening to the users’ experiences and needs: “What should we care about?” and “How do we [even] begin to care?” (care ethics).

A great deal of philosophy has gone into the design of the evaluation models, and one of the websites the creators point out as very interesting is the Stanford Encyclopedia of Philosophy. A round of analysis will take you around 2.5–3 hours, or more if you want to dig deeper.

Finally, what did I conclude from all this? What purpose do I want to serve? And how? The easy answer would be that I am, of course, one of those who want to do good in the world, and that I would rather work with public service and empathy for the user than design shady patterns purely for profit.

But my biggest takeaway from our ethics module is the realization that it’s not enough to say “I am conscious of accessibility, inclusion, and ethical principles”. It takes much more.

To really and genuinely look past internalized bias and individual ideas about “good design”, and to make a difference when participating in the design of new products, we need to cooperate as a team. Only when we take an organized and methodical look together at the product or service we are designing, through lenses other than the ones we normally use, will we be able to say that we are actually trying our best to do good in the world.

UX Designer & UX Writer, with some Frontend skills. Journalist. IP/Author's Rights. Tech. New Digital Media. Networking across the Globe. Ex board mbr RSF/SE.