On Trust: Why AI Needs Data Visualization
To most of us, data seems like fact.
Whether presented through spreadsheets, databases, dashboards, or interactive visualizations, data comes in the guise of objectivity and truthfulness.
In reality, it is anything but.
Data is not fact. Rather, it is a reflection of human perception, human understanding, and human agency. Every dataset carries the remnants of countless decisions: what to measure, when to measure it, how to frame it, what to include or exclude. These decisions are made according to particular perspectives and goals.
Since AI is the product of data, it too is a mirror of human perception, understanding, and agency. The models we train, the outputs they generate, the recommendations they make—all of these inherit the biases, assumptions, and limitations embedded in their training data.
AI therefore plays by the same rules as data. And this realization brings a critical insight: in designing for AI, we need to apply an approach derived from data visualization in order to create reliable interfaces and outcomes we can trust.
Data Collection
Let’s start with an analogy to photography. A photo is seemingly objective. The camera captures light as it exists in the world. However, the more we study a photo, the more we realize that the photographer has made choices at every step of the way. From the choice of subject, to the time a photo is taken, to the way it is composed, deciding what is in the frame and what isn’t, to the technique used—which lens or filters are employed, whether it’s color or black and white—each decision shapes the final image.
Consider how we might capture an idyllic nature scene, deliberately omitting the factory just out of frame. Or how we photograph a beautiful sunset over a city skyline, waiting for the moment just after the clouds part following a day of heavy rain. Or how we might shoot an apartment for rent with a wide-angle lens to make it appear larger than it actually is.
Likewise, how data is collected matters profoundly. We introduce bias in the way we measure or quantify things and turn them into data. In the data world, we talk about volume, velocity, variety, and veracity—the four V’s that characterize any dataset. But beneath these technical attributes lie human choices.
What we decide to measure—why this metric and not something else?—already carries an inherent bias, much like a photographer’s choice of subject. Bias also comes from the moment or time interval selected to capture data, just as a photographer chooses the decisive moment to press the shutter. Bias is introduced by what types of data we decide to include and what we leave out, mirroring how a photographer frames a scene. And bias can be introduced in the process of data collection itself, through the tools and methods we employ, much as different lenses or filters alter the captured image.
Data Interpretation
Next, there is the process of data interpretation. Here, too, there are choices made all along the way. This is analogous to an art critic analyzing a photograph—going beyond what was captured to understand what it means.
We start by looking at what we can see in the photo. What elements are visible in the image itself? Perhaps we see two people sitting at a table in a room, one of them holding a document. These are the raw data, the observable elements.
Then, we examine how the visible elements relate to each other. By studying the expressions on their faces, the tension in their postures, we might deduce that the two subjects are having an argument. That is pattern recognition, finding meaning in relationships.
Next, we try to understand the photographer’s intent. What goal did they have in mind when they took the photo? In our example, perhaps they wanted to show a negotiation, to capture a moment of human conflict or resolution.
Finally, we think about how well the photo captures the intended outcome. What could have been done differently or better? This critical analysis helps us understand both the strengths and limitations of what we’re seeing.
Analyzing a dataset is remarkably similar. We start by examining what is in the data—studying the fields or column headers, counting rows, understanding linked tables and relationships. Next, we look for patterns through sorting, grouping, searching for specific occurrences or outliers. Finally, we infer insights or stories that tell us more about what the data represents.
Throughout this process, we’re making choices about what we choose to focus on, what type of analysis to conduct, and what conclusions to draw. Just as two critics might see different stories in the same photograph, two analysts might extract different insights from the same dataset.
Data Communication
Lastly, there is how we choose to represent a dataset and communicate the stories within it. This is analogous to a curator exhibiting a collection of photographs. The curator, aware of the content and interpretation of the photos in the collection, displays them in a certain way to guide, inform, and inspire the viewer.
Data representation includes making data tangible to an audience through writing, verbal description, visualization, or other means. The choices here are as consequential as those made in collection and interpretation. How we depict data, what stories we surface, what visual choices we make to highlight certain aspects—all of these change the way data is perceived by the audience.
A simple line chart versus a complex network diagram tells different stories about the same underlying data. Color choices can emphasize or diminish certain patterns. Scale can make differences appear dramatic or negligible. Even the choice of whether to show data as absolute numbers or percentages fundamentally alters how an audience understands the information.
The Architecture of Trust
It is clear, then, that there is an element of agency—human or otherwise—all the way up the stack, from Data Collection to Data Interpretation to Data Communication. Because of this, there is space for an agent to steer the outcome according to their motives. This raises a central question: How can we trust any outcome we are presented with?
The answer is authority.
To build trust, we need an authoritative perspective—whether it is the photographer, the critic, or the curator. The authoritative perspective is one that we hand power and agency to, not blindly, but based on demonstrated competence and alignment with our values.
This is no different than elsewhere in life. We trust a person because of our positive interactions with them over time. We trust a brand because its message appeals to us and we have had consistently positive experiences with it. We trust a news publication because its content resonates with us and we find its reporting reliable.
However, trust is fragile. It is hard won and easily lost. Trust has to be earned—it is never a given from the outset. Trust is built over time—it must be continuously present across every touchpoint or interaction. And trust can be lost in an instant, though it takes a long time to regain.
For digital product experiences, designers are the authority when it comes to building trust. This is both a privilege and a responsibility. Building trust is the one thing we as designers have fully in our control, regardless of the underlying technology or business constraints.
As designers, our highest leverage tool is craft. When crafting digital product experiences, we are trying to win over an audience. Winning over an audience begins with first impressions and extends to an entire user journey, where trust is built or lost every step of the way. Trust is therefore all about consistency. Trust equals consistency over time.
The AI Authentication Problem
AI, like data, has an authentication problem.
When it comes to diffusion models, it is already nearly impossible to tell whether a photo or video is a deepfake or AI-generated. The technology has advanced so rapidly that our traditional markers of authenticity are no longer reliable indicators.
Language models hallucinate—generating plausible-sounding information that has no basis in reality—or produce false answers because they’re optimized to be helpful, presenting everything with the same confidence as verified facts.
And more generally, generative AI outputs are by definition nondeterministic. The same prompt can produce different results, making consistency—that fundamental building block of trust—inherently challenging.
How can we trust the veracity of AI-generated content? How can we trust the provenance of images, video, and other media? How can we trust AI agents to produce the right output consistently over time?
Data is the fuel for AI, so we can learn from the field of data visualization, which has grappled with these questions of trust and authenticity for decades. The principles that guide effective data visualization can serve as a foundation for building trustworthy AI interfaces.
Principles of Trustworthy AI
Below are eight principles that data visualization designers have been using for decades, applied to designing AI-native experiences that build and maintain trust.
1. Define Clear Purpose
Every effective data visualization serves a clear, specific purpose—whether it’s revealing trends, comparing categories, or highlighting outliers. Applied to AI, this means designers must first define what the AI system is meant to accomplish and what value it should provide to users. For instance, a medical diagnosis AI should clearly indicate it can identify patterns in health data but cannot replace professional medical judgment, while a writing assistant should show it excels at grammar and structure but may need human oversight for factual accuracy. Every element of a visualization should be designed to help people understand or act on information. Similarly, every element of an AI interface should be designed to help users understand the AI’s capabilities, limitations, and current state.
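To make this concrete, here is a minimal sketch, in TypeScript, of how an interface might declare an AI feature’s purpose and boundaries up front so they can be surfaced to users. The shape and names (AICapabilityManifest and its fields) are hypothetical illustrations, not an established API.

```typescript
// A hypothetical shape for declaring an AI feature's purpose and limits,
// so the interface can surface them instead of leaving users to guess.
interface AICapabilityManifest {
  purpose: string;              // what the system is meant to accomplish
  strengths: string[];          // tasks it handles well
  limitations: string[];        // tasks that still need human judgment
  requiresHumanReview: boolean; // whether outputs should be checked
}

const writingAssistant: AICapabilityManifest = {
  purpose: "Improve grammar, structure, and clarity of drafts",
  strengths: ["grammar", "sentence structure", "tone suggestions"],
  limitations: ["does not verify factual accuracy"],
  requiresHumanReview: true,
};
```

Declaring these boundaries as data, rather than burying them in documentation, lets the interface show them wherever the AI acts.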
2. Show the Data
Let the data itself stand out prominently to show patterns, trends, and outliers. Every element in the visualization should earn its place by contributing to understanding the data. Applied to AI, this means that we need to make the AI’s reasoning visible, not only its outputs. Users should see how the AI arrived at its conclusions—what data it considered, what patterns it recognized, what assumptions it made. An effective visualization focuses the viewer’s attention on the data itself, so that people can form their own conclusions from the evidence presented. An effective AI interface exposes enough of the underlying reasoning that people can form their own assessment of the AI’s reliability and relevance.
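One way to sketch this is a response shape that carries the reasoning alongside the answer. This is a minimal illustration under assumed names (ReasonedOutput, renderWithReasoning), not a prescribed implementation.

```typescript
// A hypothetical response shape that carries the reasoning alongside the
// answer, so the interface can make the AI's process inspectable.
interface ReasonedOutput {
  answer: string;
  reasoningSteps: string[];   // what the model considered, in order
  sourcesConsulted: string[]; // data or documents behind the answer
  assumptions: string[];      // assumptions made along the way
}

// Render the answer prominently and the derivation beneath it, rather
// than presenting a bare conclusion.
function renderWithReasoning(output: ReasonedOutput): string {
  const trace = output.reasoningSteps
    .map((step, i) => `${i + 1}. ${step}`)
    .join("\n");
  return `${output.answer}\n\nHow this was derived:\n${trace}`;
}
```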
3. Maximize Data Density
Present the maximum amount of relevant data at a time without overwhelming the viewer. The best visualizations maximize the amount of data while maintaining clarity. Applied to AI, this means showing as much information about the AI and its operations as possible without overwhelming the user. This may include confidence levels, data sources, potential biases, and alternative interpretations. The most effective AI interfaces will demonstrate transparency in every state and interaction.
4. Facilitate Comparison
Comparison is fundamental to interpreting data—whether it’s comparing data across categories, time, or conditions. AI interfaces should allow users to compare AI suggestions against their own judgment, human-generated alternatives, or historical results. Provide side-by-side views of different outputs, show how they change with different prompts, and make it easy to assess AI performance over time. These comparisons let users quickly evaluate whether the AI’s output aligns with their expectations and needs.
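A simple sketch of such a comparison view follows. The structure and labels are hypothetical, assuming an interface that pairs an AI suggestion with a human baseline.

```typescript
// A hypothetical structure for presenting alternatives side by side:
// the AI's suggestion next to a human-authored or historical baseline.
interface Candidate {
  label: string;     // e.g. "AI suggestion", "Previous human draft"
  content: string;
  generatedAt: Date;
}

interface ComparisonView {
  prompt: string;          // the request both candidates respond to
  candidates: Candidate[];
}

// Pairing outputs this way lets users judge the AI against a known
// reference instead of evaluating it in isolation.
const review: ComparisonView = {
  prompt: "Summarize the quarterly results",
  candidates: [
    { label: "AI suggestion", content: "…", generatedAt: new Date() },
    { label: "Previous human draft", content: "…", generatedAt: new Date() },
  ],
};
```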
5. Enable Macro/Micro Exploration
Any visualization should facilitate multi-level exploration. Start with a broad overview of the entire dataset, giving users a sense of overall shape, scale, and patterns. Then, let users zoom in on items or ranges of interest and filter out uninteresting data. Finally, present detailed information for specific items of interest. The same principle applies to AI interfaces. Start with clear, actionable AI outputs. Then, allow users to dive deeper into the AI’s reasoning, explore edge cases, and understand limitations. Finally, provide access to underlying data sources and detailed explanations. This approach ensures that the right level of information is presented at the right time.
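The three levels map naturally onto a layered data structure. This is a minimal sketch, assuming three disclosure levels and accumulating detail; the names (DisclosureLevel, LayeredExplanation) are illustrative.

```typescript
// A hypothetical three-level disclosure model: a headline result first,
// the reasoning on request, and raw sources at the deepest level.
type DisclosureLevel = "overview" | "reasoning" | "sources";

interface LayeredExplanation {
  overview: string;    // the actionable output, always visible
  reasoning: string[]; // shown when the user drills in
  sources: string[];   // underlying data, for full inspection
}

// Each level includes everything above it, so detail accumulates as
// the user chooses to go deeper.
function present(explanation: LayeredExplanation, level: DisclosureLevel): string[] {
  const layers = [explanation.overview];
  if (level === "reasoning" || level === "sources") {
    layers.push(...explanation.reasoning);
  }
  if (level === "sources") {
    layers.push(...explanation.sources);
  }
  return layers;
}
```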
6. Ensure Data Integrity
Represent data truthfully and proportionally. Avoid deceptive visuals, so that what a viewer sees reflects what the data actually means. Choose appropriate visual encodings that maximize accuracy. This also applies to AI. Never imply certainty where none exists. Show confidence intervals and highlight potential biases. Make data provenance clear and cite sources. When the AI makes mistakes or produces unexpected results, these should be visible and correctable.
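As one illustration of never implying certainty, an interface might translate raw confidence scores into hedged language and always attach provenance. The thresholds and names here are assumptions for the sketch, not established conventions.

```typescript
// A hypothetical helper that maps a model confidence score to hedged
// language, so the interface never implies certainty where none exists.
function describeConfidence(score: number): string {
  if (score >= 0.9) return "High confidence; still verify against the cited sources";
  if (score >= 0.6) return "Moderate confidence; treat as a starting point";
  return "Low confidence; human review recommended";
}

// Pair every output with its provenance so claims remain checkable.
interface VerifiableOutput {
  content: string;
  confidence: number; // e.g. 0.72, reported by the model or an evaluator
  sources: string[];  // citations that let users check the claim
}

function renderVerifiable(o: VerifiableOutput): string {
  const label = describeConfidence(o.confidence);
  return `${o.content}\n(${label}. Sources: ${o.sources.join(", ")})`;
}
```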
7. Use Human-Centered Metaphors
Metaphors can reinforce the meaning of a visualization, guide user expectations, and foster emotional or narrative connection. When chosen thoughtfully, they make abstract concepts more tangible and memorable. Similarly, employ metaphors that help people intuitively understand what the AI is doing and why. These metaphors should set appropriate expectations, while avoiding anthropomorphism that overpromises. Human-centered metaphors make AI behaviors predictable and help users develop accurate mental models.
8. Maintain Simplicity
Ensure that every element serves a clear purpose and that the visualization is as minimal as it can be without sacrificing clarity or completeness. AI interfaces follow the same principle—they should be as self-explanatory as possible. This doesn’t mean hiding complexity—you can show complexity in a simple way if done well. It means showing AI capabilities in the most intuitive way possible. Controls should be obvious, and feedback should be immediate.
These principles directly address the AI authentication problem by providing a framework for transparency, verification, and human oversight. Just as data visualization solved the challenge of making complex data trustworthy and actionable, these principles ensure that AI serves human needs while maintaining the transparency and accountability necessary for building trust. They position designers as the mediators between machine intelligence and human agency.
Trust is the Currency
Applying a data visualization approach to human-AI collaboration is firmly rooted in the belief that machine intelligence is there to serve humans, not the other way around. This human-centered perspective shapes every design decision, from the smallest interaction to the overall system architecture.
Using these principles ensures that designers maintain the authority in crafting AI-native user experiences that put people back in charge. We become the translators between machine intelligence and human needs, the architects of trust in an uncertain landscape.
In the era of AI, trust is the currency. Without it, even the most sophisticated AI systems will ultimately fail to create lasting value. With it, we can build experiences that augment human capability while at the same time respecting human agency.
And with an understanding of the principles underlying effective data visualization, designers have both the tools and the authority to build trust. We’ve solved these problems before in the realm of data. Now we must apply these hard-won lessons to the frontier of artificial intelligence.
The path forward isn’t about creating AI that perfectly mimics human activity and output. It’s about building systems that acknowledge their constructed nature, reveal their workings, and empower humans to make informed decisions.
This is our charge as designers in the age of AI: to be the photographers who frame the scene honestly, the critics who interpret with real human insight and judgment, and the curators who present with purpose and conviction. In doing so, we will create the foundation of trust that will enable true human-AI collaboration.
Thanks to Albert Shum for his perspective on trust as a design principle, and to Jude Sue whose inspiring Config presentation reinforced the importance of this theme.