Designing for Trust: Why UX Ethics Matter

When business goals set the pace, ethics in design often gets left behind. Most designers start out ready to solve problems, but few are taught how to push back when something feels off. If we want design to build trust instead of just chasing clicks, ethics needs to be part of the conversation from the very beginning.

After all my years in design, first in architecture and then in UX, I am still amazed by how differently people define “good design” – and how those definitions shift depending on one’s vantage point. To most people, good design is simply about things looking great. To users, it’s also about how well something works. To managers, good design is whatever delivers results and meets business goals. And to designers… well, that is a bit of a complicated story.

Designers themselves, one might think, would have the broadest and most nuanced understanding. After all, we are trained to balance aesthetics, usability, and business needs. Yet even within our own ranks, there is a persistent blind spot: the ethical dimension of design. Too often, design ethics is reduced to questions of professional loyalty – protecting client secrets, honoring NDAs, or avoiding plagiarism – while the deeper ethical questions, those that concern how our work shapes users’ autonomy, wellbeing, and trust, are seldom given the seriousness they deserve. Sometimes this happens because, as humans, we shy away from difficult conversations or ethical disputes. Sometimes it’s due to a misplaced sense of “professional loyalty” that discourages us from questioning our bosses’ or clients’ priorities. Sometimes it’s because we don’t think it matters – after all, there are more than enough designers on the market to replace anyone who objects, so pushing back would change nothing except costing us our job. And sometimes, quite simply, it’s because we were never taught to think about these issues in the first place.

Design is not just what it looks like and feels like. Design is how it works.

Design is not just about how things look or how well they function – helping us work more efficiently, travel more comfortably, or even make better coffee.

Design is also about how products impact users, shape their behavior, direct their choices, and encode values – often invisibly. All too often, ethical questions get lost beneath the surface of usability, desirability, and business metrics. When success is measured by clicks, time spent, and money earned, the ethical dimension is easy to overlook or rationalize away. So when we talk about “how it works,” we should also be asking: on whom, and to what end?

The original ethos of UX, rooted in Don Norman’s human-centered design, was about putting people first – not just as users, but as individuals with their own needs, vulnerabilities, and rights. Norman’s vision called for designers to solve real problems, think in systems, and make sure that products serve the broader good, not just business or technological progress. UX was meant to bridge the gap between user needs and business goals, but as digital products evolved into gazillion-dollar ecosystems obsessed with growth-at-all-costs, the balance somewhat shifted.

The greatest danger of manipulation is that it can become invisible, normalized, and woven into the fabric of everyday life.

In a world where digital products have grown more sophisticated and business models more aggressive, the focus has shifted from serving users to serving metrics. The original human-centered spirit of UX is now often overshadowed by commercial imperatives. Study after study finds that UX designers feel pressured to prioritize “business impact” over user wellbeing. The tools of persuasion – once used to gently guide users toward value – have morphed into instruments of manipulation. Dark patterns, those “design choices that trick users into doing things they didn’t mean to do,” now generate billions in unintended subscriptions and purchases. The very skills that were supposed to make technology humane are now being used to exploit.

What makes this ethical erosion even more troubling is that it isn’t incidental, but systemic. In most organizations, product roadmaps rarely reference ethical design principles, while KPIs for user engagement and monetization are prioritized on a regular basis. We have developed a professional environment where designers are highly skilled at optimizing user behavior for business goals, but rarely equipped – or empowered – to recognize and address the moral consequences of those optimizations. When success is measured by how effectively interfaces extract attention, data, and dollars, even well-intentioned designers can find themselves complicit in coercion. The “triple constraint” of product development – scope, time, and cost – rarely includes ethics as a fourth pillar, and so the cycle continues.

The consequences of this metric-driven myopia are no longer abstract. When Amazon’s 2023 Prime cancellation flow required users to navigate 17 screens – a digital obstacle course the FTC later deemed “designed to frustrate escape” – it wasn’t an anomaly but a blueprint for how far companies will go to retain users, regardless of the ethical cost. Amazon’s internal code name for the flow, “Iliad,” was telling: an allusion to a long, grinding epic, and a clear signal that the friction was by design. The process played on loss aversion, distraction, and cognitive overload, using every psychological lever to keep users from leaving, and stood in stark contrast to Amazon’s one-click checkout, so celebrated for its frictionless efficiency.

Europe’s Digital Services Act now categorizes such designs as “illegal dark patterns,” punishable by fines up to 6% of global annual turnover. These new regulations expose a painful paradox: the very psychological insights that once made UX a respected discipline – Fogg’s behavior model, Hick’s Law, cognitive load theory – are now being weaponized against users. The DSA’s explicit prohibition of dark patterns, and the legal actions already underway against major platforms, signal a growing recognition that manipulative design is not just a business tactic but a societal problem. The message is clear: platforms should be held accountable not only for what their users do, but for how their design choices shape their users’ actions.

Ethics is knowing the difference between what you have a right to do and what is right to do.

The ethical aspect of design shouldn’t be about what we can get away with, but about what is right. That line isn’t always clear, especially in a world that rewards quick wins over long-term trust. It is very easy to justify manipulative patterns by pointing to positive business metrics, but we have to ask ourselves: when we design for the metrics, are we genuinely helping users, or just squeezing value out of them? The consequences of neglecting our responsibility for users’ wellbeing are visible everywhere, and they are symptoms of a broader system that prioritizes engagement and revenue over user wellbeing. When companies deliberately complicate the process of cancelling a subscription, when interfaces are engineered to keep people engaged far beyond their intentions, when users have to enter billing information just to start a free trial – these are all design choices that may deliver business results in the short term, but lead to a gradual erosion of trust. These are not isolated lapses, but signs of a broader pattern in which business goals are routinely placed ahead of user interests, normalizing practices that ultimately undermine the very relationships companies depend on.

The psychological mechanics behind these patterns are well understood: reciprocity, scarcity, social proof, loss aversion. What began as benign nudges – like a thank-you message after a user completes an action – has metastasized into “confirmshaming” pop-ups that exploit social compliance instincts. Casino-inspired mechanics like variable reward schedules – once confined to slot machines – now dictate when dating apps display potential matches or e-commerce sites flash “limited stock” alerts. The human toll is increasingly recognized: numerous studies have found that problematic or excessive social media use is strongly correlated with higher rates of anxiety, depression, and other psychological distress. We have learned to mint money from compulsion, and too often, we choose to do exactly that.

Technology challenges us to assert our human values, which means that first of all, we have to figure out what they are.

This is not simply the fault of individual designers; the problem is systemic. Product roadmaps are filled with KPIs that reward attention extraction and conversion, while ethical considerations are rarely even mentioned. Most organizations have no process for evaluating the moral impact of design decisions, and few designers are given the authority to push back when lines are crossed. Even when designers sense that something is wrong, they often lack the support or the language to make their case.

One of the most overlooked roots of this problem is education. Most UX bootcamps and degree programs focus on usability, research, and aesthetics. Ethics, if it appears at all, is treated as a side note – a single lecture or a vague admonition to “do no harm.” The messy, real-world dilemmas – navigating business pressure, resisting manipulative design, advocating for user dignity – are rarely discussed in depth. As a result, new designers enter the field with strong technical skills but little preparation for the ethical challenges they will face. They may recognize when something feels off, but without frameworks, vocabulary, or institutional support, it’s difficult to resist the pressure to conform.

The consequences of this gap in education are very real. New designers enter the field without the tools that would help them recognize when their work crosses a line. Without the vocabulary or confidence to push back, they may find themselves pressured to implement dark patterns, or to optimize for engagement at the expense of user wellbeing. The result is a profession that too often confuses compliance with ethics, and business loyalty with moral responsibility.

Meanwhile, the tools at our disposal are growing more powerful – and more dangerous. Artificial intelligence can now personalize nudges, test hundreds of variants, and optimize for engagement with ruthless efficiency. The same technology could be used to detect and flag manipulative patterns, to enforce transparency, or to measure the ethical impact of our work. But unless organizations choose to set those boundaries, the default will always be to optimize for what’s easy to measure: engagement, clicks, revenue.

The real question is not whether machines think but whether men do.

The arrival of AI in design is a double-edged sword. On one hand, it allows for unprecedented personalization and efficiency. On the other, it can scale manipulation to levels never before possible. AI can identify moments of vulnerability, tailor messages to exploit them, and do so invisibly, at scale. The European AI Act’s prohibition on “subliminal manipulative techniques” is a recognition of just how urgent and complex these questions have become. But regulation alone cannot solve the problem. The real work must happen within the profession itself.

What would it take to make ethics as real and as natural a part of our daily decision-making as any business KPI? First, we need to embed ethical reasoning into every stage of design education, hiring, and practice. That means case studies, open debate, and real-world dilemmas – not just slogans or checklists. Second, we need to pair every business metric with a human one: not just “Did users convert?” but “Did they feel respected, informed, and in control?” Third, we need to empower designers to speak up and to give them institutional backing when they do. And finally, we need to recognize that the real impact of our work is not just what users do, but who they become.

Not everything that counts can be counted, and not everything that can be counted counts.

Some problems can’t be solved with an algorithm or a checklist. Design is not neutral; it shapes habits, beliefs, and social norms. It can reinforce power imbalances or foster inclusion, erode trust or build it. As technology becomes more pervasive and persuasive, the stakes only rise. If we want to build a future where people trust the products they use – and the people who make them – we need to treat ethics not as an afterthought, but as a central measure of our success. The challenge is not technical, but moral. It is about having the courage to ask, at every stage: Who benefits? Who is at risk? And what kind of world are we designing?

The line between persuasion and manipulation in UX is rarely clear, and the pressure to deliver business value often pushes designers into ethical grey zones – sometimes knowingly, sometimes simply because nobody is asking the right questions. As long as metrics are rewarded over meaning, and as long as ethical questions are treated as optional rather than essential, these patterns will keep repeating themselves.

But there is nothing inevitable about this. We do have the ability to challenge business-as-usual, to push back when asked to cross a line, and to insist that ethical considerations are built into both our process and our definition of success. This is not about grand gestures or heroics; it’s about making ethics what it should be: a normal and expected part of the job, just like usability or accessibility.

If we want our field to be respected – and if we want to respect ourselves as professionals – we need to start treating ethical choices as seriously as business ones. And if we expect things to improve, we cannot wait for change to come from elsewhere. It begins with each of us, in the moment we choose not to look away from the next ethical dilemma we face.