Designing for the Feeling of Intelligence: What AIX Can Teach Us About Trust, Presence, and Design in Healthcare

When we talk about AI in healthcare, we usually focus on performance. How accurate is the system? How fast? But we often skip over a more subtle question: how does it feel to use?
Artificial Intelligence Experience, or AIX, is about how people experience AI systems. It’s the emotional, cognitive, and behavioral layer that sits between the algorithm and the human. In healthcare, that layer matters more than we might think. This post looks at how design can shape not only what AI does, but how it feels to interact with, and why that feeling matters.
Introduction: Intelligence Has a Feeling
Interacting with AI in healthcare isn’t always what people expect. Maybe it’s a chatbot that gives vague advice. Maybe it’s a risk score that appears next to a patient’s name. Maybe it’s a tool that helps write clinical notes. These aren’t just technical systems. They create experiences, ones that come with emotion, judgment, and interpretation.
This is what AIX is about.
Not the system itself, but the experience of using it.
Not the output, but the feeling it leaves behind.
And in a field like healthcare, where trust is essential, that feeling carries weight.
What Is AIX?
AIX stands for Artificial Intelligence Experience. It describes how a person experiences a system powered by AI, including:
- How much they understand or trust it
- What they assume about its limits
- Whether they feel supported or dismissed by it
AIX is not just about interface or aesthetics. It’s about how design decisions shape perception. It’s about how tone, timing, and context influence trust.
In healthcare, this can affect how a patient responds to advice, how a clinician integrates an AI tool into their decision-making, or how a family interprets a recommendation. The experience is never neutral.
Healthcare Examples Are Everywhere
Some AI tools are obvious, like virtual assistants or symptom checkers. Others are more subtle, like risk prediction scores that are embedded in the workflow or algorithms that suggest diagnoses.
Each of these tools creates a different kind of relationship with the person using it. A chatbot might feel overly friendly. A dashboard might feel cold or confusing. A scan result might feel final, even if the AI is unsure. In each case, the design shapes how people feel about the tool and whether they want to rely on it.
Trust is a Design Decision
We often talk about trust as if it’s earned through accuracy, and that’s partly true. But trust also comes from how something is presented: whether it admits uncertainty, whether it speaks in a tone that fits the moment.
People don’t just evaluate the information. They evaluate how it was delivered. Was it too confident? Too vague? Too fast?
These moments shape the user’s belief in the system. In healthcare, that belief is tied to safety. It can influence whether a patient follows up or whether a clinician chooses to rely on the tool in a tight moment.
Behavioral science tells us that small design cues (like offering explanations, or showing that an answer could be wrong) can go a long way in building trust. That’s part of AIX too.
What Does AI Look Like to a Human?
Every AI system has a personality: it might act like a helpful assistant, a cold expert, or a neutral observer. That personality shapes how people interpret the system, and in a clinical setting, this matters. If the AI feels like a black box, users might feel powerless. If it’s too friendly, it might not be taken seriously. If it speaks like a human, but can’t understand human needs, it creates a disconnect.
Designers have to decide what kind of presence the AI should have, and that means making the interaction feel coherent, respectful, and understandable.
Responsibility, Blame, and the Emotional Weight of AI
AI can shift responsibility. If something goes wrong, who gets blamed? The clinician who trusted the tool? The patient who followed it? The developer who built it? These questions are ethical, but they’re also emotional. When people don’t understand how a system works, they carry stress, hesitation, or guilt.
Good AIX makes accountability visible. It makes it clear what the AI is doing, what it isn’t, and what kind of decision is being asked of the human. That clarity builds both confidence and safety.
Designing AIX with Care
So what does good AIX look like in healthcare? It’s not about fancy animations or natural language, but about presence. It’s about designing the AI in a way that feels responsible, understandable, and human-aware.
Here are a few simple principles:
- Let people see how the system reached a conclusion
- Show uncertainty when it exists
- Match tone to the seriousness of the moment
- Make it easy for the user to interrupt, override, or ask questions
- Don’t hide the AI. Don’t perform magic. Invite people in.
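These principles can even be sketched in code. Here is a minimal illustration of what surfacing reasoning, uncertainty, and human control might look like in a clinical decision-support UI; the `Recommendation` type, the 0.7 confidence threshold, and the wording are all hypothetical, not drawn from any real system:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A hypothetical AI output, carrying the context AIX asks us to surface."""
    summary: str        # what the model suggests
    confidence: float   # 0.0-1.0, the model's own certainty estimate
    evidence: list[str] = field(default_factory=list)  # why it reached this conclusion

def present(rec: Recommendation) -> str:
    """Render a recommendation so the reasoning, the uncertainty,
    and the human's role are all visible rather than hidden."""
    lines = [f"Suggestion: {rec.summary}"]
    # Let people see how the system reached a conclusion.
    for item in rec.evidence:
        lines.append(f"  based on: {item}")
    # Show uncertainty when it exists, in plain language.
    if rec.confidence < 0.7:  # threshold chosen for illustration only
        lines.append("Note: the system is not confident here; please review.")
    else:
        lines.append(f"Confidence: {rec.confidence:.0%}")
    # Make it easy for the user to interrupt, override, or ask questions.
    lines.append("You can accept, edit, or dismiss this suggestion.")
    return "\n".join(lines)
```

The point of the sketch is that every line the user sees maps to a design decision about trust: the evidence lines invite people in, the low-confidence note admits uncertainty, and the closing line makes clear that the human, not the AI, is making the decision.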
Conclusion
Most people won’t remember the name of the algorithm that served them, but they’ll remember how it made them feel.
Did they feel respected? Rushed? Reassured? Overwhelmed?
In healthcare, those feelings matter because they shape choices, behaviors, and relationships. That’s why we need to stop treating AI like a machine to be optimized and start treating it like a relationship to be designed. In the end, what AI does will always matter, but how it feels will determine whether people trust it at all.