Wave of Noise: Design
AI Strategy for Digital Product Organizations
Read how design rides the wave of noise. “Generative Artificial Intelligence” promises to change the world unlike anything we have seen in our professional lives. This is part two of our article series that outlines the approach we at DK&A are taking and the general, strategic position we have defined for ourselves and our clients.
The ongoing AI revolution will be won or lost by designers.
The responsibility of design
Designers consider themselves the ambassadors of the end users, those who end up interacting with the products and services their organizations provide. We design the bridge between a customer’s desires or requirements and a company’s propositions. And given the potential impact on a customer's life, this responsibility isn’t small either.
There are professions more harmful than industrial design, but only a very few of them.
— Victor Papanek, “Design for the Real World”
Harm by design
Mike Monteiro then goes on to prove his point with painful examples — often involving real harm to real people. Harm inflicted by design, so to speak.
We are wielding a powerful double-edged sword. Both Mike Monteiro and Victor Papanek, decades apart, emphasize the tremendous impact design and designers can have on their environment by delivering products at scale.
And that’s what digital product development is all about: scale. So it’s worth pausing and remembering that our decisions have real world consequences and may end up making or breaking lives. Poorly conceived designs can wreak havoc, but well-validated and creative solutions have the power to positively transform lives.
AI as UX revolution
Design in the age of AI
The advent of generative AI technologies such as GPT-4, Llama 2, and Falcon, and diffusion models like Midjourney and Stable Diffusion, adds new dimensions to our responsibilities. In fact, at this moment in time it’s reasonable to believe that AI has the power to fundamentally transform how we communicate, work, and even live.
There’s precedent: The iPhone and smartphones in general have reshaped customer expectations toward digital experiences far beyond mobile phones. Large language models (LLMs) like OpenAI’s GPT will very likely change how we interact with information across all digital channels and possibly even introduce new ones along the way.
How exactly this will all play out is for us designers to shape, but taking cues from the smartphone revolution, we can say with some certainty that success will be defined by the experiences we end up delivering to our customers.
Today’s “AI” revolution is more of a UX revolution than an AI revolution.
The paradigm shift introduced by the advent of generative AI has already started to reveal a new set of challenges to tackle.
A mismatch of expectations
One of the first obstacles we encounter lies in the discrepancy between customer expectations and the way LLMs truly operate.
From childhood, we are used to understanding the world in binary terms — black or white, yes or no, right or wrong. This cognitive inclination stems from our need for clear, straightforward categorizations in our formative years, which help us make sense of the world around us.
Adults know, even if they don’t necessarily intuitively grasp, that the real world is more like a spectrum of grays, and as such a lot closer to the way AI operates. Rather than delivering binary output, AI navigates the complexities of probabilistic scenarios, delivering outcomes that may fall anywhere within a spectrum of possibilities — just like the real world. This can be challenging for those accustomed to definite answers. As designers, our task is to bridge this gap and create interfaces that help users understand and navigate these more complex, non-binary results.
But there’s more. Customers today inhabit a world where actions and reactions are linear and finite: customer journeys tend to have clear beginnings and ends, and pressing a button twice yields the same result twice. Neither is necessarily the case with LLMs.
LLMs are non-deterministic models. The same prompt submitted to ChatGPT will very likely generate different results at different times. While customer journeys are typically designed following a linear model, conversational interactions enabled by generative AI often adopt a more complex, graph-like or tree-like structure. Our methods designed to capture and describe traditional, transactional interactions will have to evolve to serve this new reality.
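That non-determinism is not a bug but a property of how LLM decoders pick the next token. A minimal sketch, assuming the standard temperature-scaled sampling most decoders use (the tokens and logits below are toy values, not from a real model):

```python
import numpy as np

rng = np.random.default_rng()

# Toy next-token distribution: scores ("logits") a model might assign
# to four candidate tokens after some prompt.
tokens = ["good", "great", "fine", "bad"]
logits = np.array([2.0, 1.8, 1.0, -1.0])

def sample_token(logits, temperature=1.0):
    """Sample one token index via temperature-scaled softmax."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# The same "prompt" (identical logits) can yield different tokens per run:
samples = [tokens[sample_token(logits, temperature=0.8)] for _ in range(5)]
```

Lowering the temperature concentrates probability on the top token and makes output more repeatable; only a temperature of zero (pure argmax) is fully deterministic.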
UX veteran Jakob Nielsen describes what’s become known colloquially as “prompting” — the first new interaction model in more than 60 years — as “Intent-Based Outcome Specification”.
The user tells the computer the desired result but does not specify how this outcome should be accomplished. Compared to traditional command-based interaction, this paradigm completely reverses the locus of control.
Do what I mean, not what I say is a seductive UI paradigm — […] users often order the computer to do the wrong thing. On the other hand, assigning the locus of control entirely to the computer does have downsides, especially with current AI, which is prone to including erroneous information in its results. When users don’t know how something was done, it can be harder for them to identify or correct the problem.
A lack of insight
And if things do not go as planned, if intent hasn’t been translated properly into instructions, new challenges emerge.
Machine learning models are opaque in ways that make HTTP status codes look like street signs. Inside the neural networks powering LLMs lies a complex web of interconnected artificial neurons, their functionality dictated by the weights and structure of their connections. They adapt and learn from vast quantities of data, creating models that can make highly accurate predictions. Understanding how these models arrive at a particular outcome, however, can be highly challenging. Trying to read the state of a neural network is akin to trying to decipher an alien language: the words and structure are visible, but the meaning is elusive. Understanding how they arrive at the wrong outcome is equally difficult.
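To get a feel for why even a tiny network resists inspection, here is a hand-built sketch (the weights are hypothetical, set by hand rather than learned): a two-layer network that computes XOR. Its behavior is easy to verify; reading that behavior off the raw weights is not.

```python
import numpy as np

# A minimal two-layer network that computes XOR.
# In a real model these numbers would be learned from data.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)  # ReLU activation
    return hidden @ W2

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
outputs = [forward(np.array(x)) for x in inputs]  # 0, 1, 1, 0 — XOR

# The behavior is plainly verifiable; the weights are not. Nothing in the
# raw numbers above "says" XOR — and an LLM has billions of such numbers.
```

Scaled up from six parameters to billions, this is the interpretability problem in miniature.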
Ethics and privacy
Let’s conjure Mike Monteiro and the impact of Design at scale once more.
The creation and deployment of AI systems like ChatGPT involve processes that have considerable implications for individual privacy and societal norms. During the training phase, AI systems rely on vast amounts of data, some of which could be sensitive or personal. It is not uncommon for AI to be unintentionally taught to perpetuate harmful biases or to invade users’ privacy by extrapolating from this data. Even during inference, when AI systems are predicting outcomes based on the learned model, they can sometimes reveal sensitive information, or make decisions that might be considered unfair or discriminatory.
For designers, these challenges necessitate a rethinking of traditional design ethics. Designers, the ambassadors of humanity as it were, must approach AI with a keen awareness of these potential pitfalls. A vital starting point is understanding the nature of the data used in AI training and the implications of its usage. Ethical considerations must be integrated into every step of the design process, from the initial ideation phase through to the final product deployment. This could mean insisting on transparency about the source and nature of training data, or ensuring that AI systems provide clear and easily understandable explanations for their predictions and actions.
Designers can also play a crucial role in mitigating these challenges. By advocating for a user-centric approach designers can ensure AI systems are built not only with technical efficiency but also with respect for privacy, fairness, and inclusivity. They can introduce privacy-enhancing technologies or techniques such as differential privacy during the design process. They can also ensure that AI systems are tested against a diverse set of scenarios to minimize bias and ensure fairness. Designing with privacy and ethics in mind doesn't just mitigate potential harm — it can also build trust with users and foster a more responsible AI landscape.
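As one concrete illustration of such a privacy-enhancing technique, differential privacy’s classic Laplace mechanism can be sketched in a few lines. The query and parameter values below are hypothetical and chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng()

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a query result with epsilon-differential privacy by
    adding Laplace noise scaled to sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a user count. Adding or removing one user
# changes a count by at most 1, so the sensitivity is 1.
true_count = 1234
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

The released value is close enough to be useful in aggregate, while no individual’s presence or absence can be confidently inferred from it; a smaller epsilon trades accuracy for stronger privacy.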
By this point, at the latest, it should be clear why this period of technological change must be led by design.
The perils of natural language
People became familiar with LLMs through ChatGPT, with many attributing the real revolution to its conversational interface. However, “the prompt” — the user input guiding the model and seeking a response — comes with its own set of challenges. This practical shortcut shows its limitations in real-world applications, such as interaction with diffusion models for image generation.
As Yann LeCun, Meta's Chief AI Scientist, puts it,
Language is an imperfect, incomplete, and low-bandwidth serialization protocol for the internal data structures we call thoughts.
[Language] forces us to discretize the space of concepts into words, can help some forms of reasoning.
But the vast majority of our knowledge, skills, and thoughts are not verbalizable.
— Yann LeCun (@ylecun) March 6, 2021
Human language — even when it's our mother tongue — has exceedingly low throughput compared to most artificial protocols. It is ill-suited for controlling complex systems due to its limitations and high context-dependency. Words are interpreted based on social and cultural context. Any input method using natural language will inevitably grapple with these established expectations. Currently, systems like Midjourney or DALL-E fall short of even basic language understanding. Midjourney’s documentation candidly states,
The Midjourney Bot does not understand grammar, sentence structure, or words like humans.
When planning for short- to mid-term use or integration of any AI tools, designers will have to consider these limitations. Midjourney, for instance, offers an experience more akin to a command-line interface (CLI), using flags and properties like weights. Because CLIs are generally not easily discoverable or intuitive, especially when delivered via a platform like Discord, a number of workarounds have been developed off-platform.
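To illustrate, Midjourney’s documented parameters read much like CLI flags — `--ar` sets the aspect ratio, and `::` assigns relative weights to parts of a prompt. The example prompts below are illustrative, not taken verbatim from the documentation:

```
/imagine prompt: a lighthouse in a storm, oil painting --ar 16:9
/imagine prompt: hot:: dog::2 --ar 1:1
```

Little of this syntax is discoverable from the prompt box itself, which is exactly why third-party prompt builders have sprung up off-platform.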
Can AI be empathetic?
Feels just like love
When asked about their superpower, designers point to empathy — the ability to understand and share the feelings, perspectives, and experiences of others. Empathy enables designers to build experiences that are not just functional, but also profoundly human-centered, accessible, and inclusive.
While this may be true, it turns out that many of the components that make a recipient rate an interaction as “empathetic” can be mimicked just as well, or even better, by a machine.
In early 2023, researchers at the University of California San Diego randomly selected 195 patient questions from Reddit’s r/AskDocs forum (patients ask, verified medical professionals answer), had them answered by both human physicians and ChatGPT, and had the answers evaluated by a team of licensed health care professionals.
The results are striking: evaluators rated ChatGPT’s responses as good or very good quality 3.6 times more often than the physicians’ (78.5% vs 22.1%) and as empathetic or very empathetic 9.8 times more often (45.1% vs 4.6%).
The results might easily feel overwhelming at first, but they also show us a clear path forward: by supporting professionals with powerful AI tools in a transparent, intuitive fashion, designers will transform entire industries by delivering far superior customer experiences.
ChatGPT, LLMs, AI — truly empathetic or not, technology is tireless, unconstrained by availability, and nearly infinitely scalable. By understanding these properties — the underlying nature of the technology and the shortcomings of its modalities — designers will be able to create more human-centered experiences than ever.
At DK&A we are training a new generation of “AI Designers” as part of our Usable AI team to help clients unlock tremendous opportunities.
The toolkit at their disposal introduces several new instruments to our clients:
Sense-making: As technology progresses, there’s an increasing need for translators who can bridge the gap between complex data science, leading AI providers, and end consumers. Designers naturally assume this mediating role.
Research: Aligning customer expectations with swiftly evolving technology demands a thorough understanding of both domains. Our designers apply both qualitative and quantitative research methods (such as interviews, diary studies, observation, and surveys) to perpetually refine how AI technologies like LLMs can augment or transform a customer journey.
Concept design: Equipped with an in-depth comprehension of technology, customer needs, and business imperatives, our designers apply our upcoming Usable AI methodology to craft concepts for an improved transactional UI, an entirely conversational approach, or a hybrid experience.
Prototyping: The ability to fail early, safely, and economically has consistently been a key determinant of a product’s success. The rise of AI technologies doesn’t alter this reality — in fact, non-deterministic systems like LLMs necessitate even more real-world experimentation and validation.
Roadmap: Numerous companies are likely currently considering or actively experimenting with AI integration and LLMs like OpenAI's GPT. What differentiates long-term winners from companies that squander resources on fruitless experimentation is the capacity to sustainably integrate novel experiences with their brands over time. A team of AI designers, composed of strategic, business, service, and experience designers, can guide the development of a long-term vision.
Ethics and inclusivity: To adhere to present and future regulations, and cater to an increasingly socially conscious customer base, a Usable AI design process must contemplate the ethical implications of AI. Moreover, it must guarantee the product's accessibility and inclusivity for all users.
The noise of unrelenting hype, amplified by social media and doom-sayers, might obscure a truth that was valid yesterday and will be valid tomorrow: your customers decide what matters; their needs, concerns, and pains are the only north star you need to deliver value. AI might get you there faster, might open new avenues of approach, and even sprinkle a light dusting of magic.
Or, to return to the theme of this series, Sun Tzu’s “The Art of War”:
In the midst of chaos, there is also opportunity.
This is part 2 of a 3-part series on our strategic position regarding recent developments in Generative Artificial Intelligence. Read part 1, “Organizations”, on our blog; the upcoming part 3 will discuss our take on key questions regarding Software Development and Technology.