Fieldnote from HTAi 2025: What AI Won’t Solve in HTA

AI is quickly becoming a staple in health technology assessment. Forecasting, evidence synthesis, signal detection: all are being optimized by algorithms. But the more we automate, the more we notice what can't be automated: trust, participation, lived experience. This fieldnote captures my reflections from the AI and Patient Experience Data (PED) workshops at HTAi 2025, and explores a simple truth: in health systems, what matters most isn't always what machines are best at.


HTAi 2025 was filled with tech optimism. You could feel it in the air: AI models for literature screening, generative tools to support appraisal, algorithms promising speed, scale, and reduced bias. These are real advancements.

But as I sat through the workshop on Patient Experience Data (PED), co-led by Hayley Chapman (PFMD) and Neil Bertelsen (PCIG), I noticed something else.

We still don’t have models that can answer questions like:

  • Why didn’t this patient feel heard?
  • Why did they drop out of care?
  • What was it like to live with this decision?

That’s not because AI isn’t powerful.

It’s because some parts of evidence are still human.


In the AI-focused session, speakers from Latin America and Europe shared how local HTA bodies are adopting machine learning. The use cases were impressive: NLP for early-stage assessments, predictive scoring for risk-benefit analysis.

But I kept thinking: where do values enter the equation? And who sets the boundaries for what’s “relevant data”?

In contrast, the PED workshop was slower. Patient advocates spoke about emotional burden, narrative evidence, and the lack of integration between experience and evaluation. Their presence was quiet but firm. You couldn’t automate what they were bringing to the table.

It struck me that the future of HTA might not be one future. It might be two realities, running in parallel:

  • One that’s optimizing what’s measurable
  • Another that’s still fighting to be seen

If there’s one thing AI can’t solve, it’s meaning. Meaning comes from context, conversation, and interpretation. And in HTA, meaning is what gives evidence its weight.

So yes, let’s use AI where it fits, but let’s not mistake fit for fullness.

The work of making healthcare more human still belongs to us.

Footnotes

  1. HTAi 2025 Conference Agenda. Artificial Intelligence and HTA: Challenges and Perspectives. https://htai.org/annual-meeting/2025-program
  2. PFMD. (2024). Patient Experience Data: A Global Guide. https://patientfocusedmedicine.org/ped-guide
  3. Gerke, S., Minssen, T., & Cohen, I. G. (2020). Ethical and legal challenges of AI in health care. https://www.healthaffairs.org/doi/10.1377/hlthaff.2020.00144
  4. Blease, C., Kaptchuk, T. J., Bernstein, M. H., & Mandl, K. D. (2019). Artificial Intelligence and the Future of Psychiatry: Insights From a Global Physician Survey. https://doi.org/10.1016/j.jpsychores.2019.05.008