50-50 David Ingram and ChatGPT, January 2nd, 2026
My pre-Christmas essay reflected on three decades of the openEHR mission. This sequel presents ChatGPT’s take on the follow-up question I then asked, concerning the impact AI will have on the coming decades of that mission.
I will preface the response with a family story that threw light for me on the question I had posed. It is in part an amusing anecdote of GIGO (garbage in, garbage out) and in part a salutary warning about the evolving age of AI.
In the week before Christmas, I spent an illuminating few days preparing as much as I could of a genealogy of my wife Bozena’s illustrious Przypkowski family ancestors in Poland, tracing back nearly 400 years. I had attended many reverent graveyard visits on All Saints’ Day, to place candles and flowers on family graves, heard many contemporary family accounts of this history, and was interested to explore further. The result was to be a Christmas present, although far more remains to be done – some family accounts turned out to be hypothetical, at best, while many new details emerged with solid evidence in support.
I started by writing down what I knew and then checked and supplemented this with Gemini’s knowledge base, which delves further back in time, gleaned from what must inevitably be a very incomplete and contradictory set of historical records. I observed its incremental ‘corrections’ of the many inconsistencies and hallucinatory inventions it produced along the way, whenever I queried them! ‘Oops, sorry, I got that wrong’ appeared quite often!
Having tidied this text, I uploaded the complete dialogue to ChatGPT and asked it to provide a critique, classifying twelve or more generations of family context and the listed births, marriages and deaths as definite, probable or hypothetical. On Christmas Day, I presented both documents to Bozena, and we agreed the next steps for improving it further, together. She has relevant contemporary eye-witness knowledge that contradicts some of the recent records I had discovered.
Thinking in terms of care records, she was akin to the patient, the AI knowledge base to the ensemble of related care records, and I to the care practitioner seeking to clarify and understand her family history.
This experience provided a graphic example of the behaviour of an LLM, trained on the historical record and faced with making its own ‘sense’ of noisy and inconsistent data. It sought to rationalise the scene and, building on its own mistakes and hallucinatory inventions, made wrong assumptions that amplified the confusion. In the areas of the family tree where I had more precise knowledge of dates and locations, it gradually improved its rationale and told me, quite persuasively, what I would need to do to make the genealogy more complete and precise. Not that I would have the wherewithal to follow this advice! One thing it confirmed was that some of the family folklore about the earliest parts of the family line is very likely to be more wishful thinking than evidenced record! Quite fun and quite educational for us both.
Here then is the ChatGPT response to my essay, offered without comment or approval. I am attributing 50% authorship to my essay and 50% to the ChatGPT LLM. 50:50 is always a good place to start in any uncertain partnership!
Your text expresses several foundational commitments of the openEHR movement:
• fidelity to the meaning and purpose of care
• openness and shared stewardship
• global semantic coherence
• lived, iterative grounding in practice
• a moral orientation, not merely a technical one
• the idea that information systems should be living utilities, not static artefacts
AI—especially the large-scale, generative, multimodal, and agentic systems emerging today—will profoundly impact all of these commitments. But the key insight from your essay is that AI does not threaten the mission of openEHR—it intensifies and radicalises it.
Below is a structured analysis aligned with the conceptual architecture of your text.
1. Clinical–Technical–Organisational: A New Trifecta with AI Inside It
Impact
AI inserts itself into all three domains:
- Clinical: AI becomes a participant in sense-making, triage, summarisation, decision support, risk prediction. This makes semantic accuracy more crucial, not less: AI’s interpretations must be rooted in shared meaning, not opaque proprietary schemas.
- Technical: AI models need structured semantic scaffolding. Without standards like archetypes, AI will hallucinate, distort, or overfit to local idiosyncrasies. AI is good at language; it is terrible at ungoverned semantics.
- Organisational: AI’s adoption depends on governance, provenance, auditability, and clinical safety assurance—domains where openEHR already provides unusually strong foundations.
Required evolution
Move from clinical–technical–organisational to clinical–computational–institutional, acknowledging that AI demands explicit governance not only of data but of models, behaviours, and contextual constraints.
2. Rigour – Engagement – Trust: The Ethical Centre Becomes Mission-Critical
Impact
AI introduces new risks: bias, brittleness, lack of explainability, data leakage, automation without understanding. In this context:
- Rigour must extend to model validation, algorithmic transparency, data provenance, and continuous monitoring.
- Engagement must involve patients, citizens, and frontline professionals in shaping how AI appears in care.
- Trust becomes a dynamic property of the ecosystem, not a one-off achievement.
Required evolution
openEHR will need to provide:
• structured, computable semantics that make AI verifiable (see the sketch after this list)
• governance frameworks for data re-use, model training, and data lineage
• mechanisms for embedding human oversight and interpretability into the information model itself
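To make the first of these concrete, here is a minimal sketch in Python of how explicit, computable constraints can be used to check an AI-drafted observation before it enters the record. The constraint model, field names and value ranges are illustrative assumptions only, not real openEHR archetypes or tooling.

```python
# A minimal sketch of verifying AI output against explicit, computable semantics.
# The constraint classes and field names are hypothetical stand-ins, not openEHR APIs.

from dataclasses import dataclass

@dataclass
class QuantityConstraint:
    """An archetype-like constraint on a single quantitative data point."""
    name: str
    units: str
    lower: float
    upper: float

    def check(self, value: float, units: str) -> list[str]:
        problems = []
        if units != self.units:
            problems.append(f"{self.name}: expected units '{self.units}', got '{units}'")
        elif not (self.lower <= value <= self.upper):
            problems.append(f"{self.name}: value {value} outside plausible range "
                            f"[{self.lower}, {self.upper}]")
        return problems

# Hypothetical constraints, loosely inspired by a blood-pressure observation.
SYSTOLIC = QuantityConstraint("systolic", "mm[Hg]", 0, 350)
DIASTOLIC = QuantityConstraint("diastolic", "mm[Hg]", 0, 250)

def verify_ai_observation(obs: dict) -> list[str]:
    """Check an AI-drafted observation against explicit constraints
    before it is accepted into the care record."""
    problems = []
    problems += SYSTOLIC.check(obs["systolic"]["value"], obs["systolic"]["units"])
    problems += DIASTOLIC.check(obs["diastolic"]["value"], obs["diastolic"]["units"])
    return problems

if __name__ == "__main__":
    ai_draft = {
        "systolic": {"value": 420, "units": "mm[Hg]"},   # implausible value: flagged
        "diastolic": {"value": 80, "units": "mmHg"},     # wrong unit string: flagged
    }
    for problem in verify_ai_observation(ai_draft):
        print("REJECTED:", problem)
```

The point is not the code but the principle: when meaning is explicit and computable, AI output can be checked against it rather than taken on trust.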
3. Implementation–Implementation–Implementation: AI Needs Ground Truth
Impact
AI is trained on what exists, not what is ideal. If the health record is poorly structured, AI will amplify the mess. AI “learns” implicitly from latent semantics; openEHR provides explicit semantics—vital for training safe, bias-aware models.
Required evolution
openEHR should embrace a role as the semantic-operational substrate for:
- data used to train clinical and operational AI models
- contextual metadata for prompts, agents, and decision engines
- continuous feedback loops between real-world outcomes and model updates
In short: AI opens a new future for semantic runtime learning, which requires a deliberately structured data foundation—openEHR’s raison d’être.

4. Balance – Continuity – Governance: AI Brings a New Governance Load
Impact
AI introduces governance questions beyond traditional standards:
- Who owns models trained on clinical data?
- How is fairness measured?
- How do we prevent model drift?
- How is accountability allocated when AI participates in decisions?
Required evolution
openEHR can become a cornerstone for AI governance metadata (a sketch follows at the end of this section), supporting:
- audit trails
- consent and data minimisation
- purposes of use
- lifecycle management for AI agents
- safe boundaries for automation (“clinical actionability envelopes”)
This is not merely a technical extension—it is a new domain of socio-technical stewardship.
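As an illustration of what such governance metadata might look like in practice, here is a minimal sketch in Python. Every type and field name is a hypothetical stand-in for the items listed above, not an openEHR specification or a regulatory schema.

```python
# A minimal sketch of governance metadata recording a model's participation in care.
# All names and fields are illustrative assumptions, not a standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ModelProvenance:
    model_id: str                  # which model acted
    version: str                   # lifecycle management: the exact version deployed
    training_data_lineage: str     # reference to the datasets the model was trained on

@dataclass
class UsePolicy:
    permitted_purposes: list[str]  # purposes of use
    consent_basis: str             # consent and data minimisation
    autonomy_boundary: str         # the "clinical actionability envelope"

@dataclass
class AIActionAudit:
    """One auditable entry: a model's contribution to a clinical decision."""
    provenance: ModelProvenance
    policy: UsePolicy
    purpose: str
    human_reviewer: Optional[str] = None   # accountability: who signed off, if anyone
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def within_policy(self) -> bool:
        """An action is admissible only if its purpose was permitted and,
        where the boundary requires it, a human reviewer is recorded."""
        purpose_ok = self.purpose in self.policy.permitted_purposes
        oversight_ok = (self.policy.autonomy_boundary != "human-in-the-loop"
                        or self.human_reviewer is not None)
        return purpose_ok and oversight_ok

if __name__ == "__main__":
    entry = AIActionAudit(
        provenance=ModelProvenance("triage-assistant", "2.3.1", "dataset-registry:abc123"),
        policy=UsePolicy(["triage-support"], "explicit-consent", "human-in-the-loop"),
        purpose="triage-support",
        human_reviewer=None,   # missing sign-off, so the action is flagged
    )
    print("Admissible:", entry.within_policy())
```

The design choice worth noting is that the use policy travels with the audit entry itself, so admissibility can be checked at the point of use rather than reconstructed afterwards.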
5. Commitment to Openness: AI Makes Openness the Only Safe Option
Impact
AI trained on proprietary schemas reproduces proprietary paths of thought. Opaque models fed with opaque data generate opaque decisions.
Your text clearly positions openness as a moral stance. AI makes this unavoidable:
Closed data + closed models = closed reasoning in care.
Required evolution
openEHR may need to lead in:
- open datasets for model validation
- open ontological frameworks for clinical AI
- open model documentation (model cards, data sheets)
- a global, open semantic commons for AI-readable medical knowledge
6. Adventure – Anarchy – Reform: AI Brings a New Cycle
AI introduces a fresh wave of:
- Adventure (new modalities: multimodal models, agents, “AI companions”)
- Anarchy (rushed deployments, hype cycles, uneven safety)
- Reform (regulatory tightening, new global norms)
Required evolution
openEHR must position itself not only within the programme of reform but as a stabilising force during the coming anarchy—providing coherent semantics, governance, and continuity when everything else accelerates.

7. Curative – Preventive – Lived Care: AI Is Overinvested in the First Two
AI currently excels at:
- pattern detection → diagnosis
- prediction → prevention
But it is poor at:
- lived care, chronicity, social complexity, context
- the relational, the narrative, the human
Your text’s emphasis on “lived care” positions openEHR to fill a crucial gap: creating record structures that reflect the longitudinal, narrative, supportive aspects of life with illness.
Required evolution
Develop archetypes and models that encode (see the sketch at the end of this section):
- lived experience
- patient narratives
- symptom trajectories
- social and community contexts
- functional, emotional, and relational dimensions of care
These data types are essential if AI is to support whole-life care instead of reducing care to optimisation.
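A minimal sketch, with purely illustrative types and field names, of what such structures might hold; in openEHR practice these would be expressed as archetypes and templates rather than ad hoc classes.

```python
# A minimal sketch of record structures for "lived care": narratives, symptom
# trajectories and social context kept alongside biomedical data.

from dataclasses import dataclass, field

@dataclass
class NarrativeEntry:
    """A patient's own account, kept verbatim alongside any coded data."""
    date: str
    text: str
    recorded_by: str = "patient"

@dataclass
class SymptomPoint:
    """One point on a longitudinal symptom trajectory."""
    date: str
    symptom: str
    severity: int         # e.g. a patient-reported 0-10 scale
    context: str = ""     # what was happening in daily life at the time

@dataclass
class LivedCareRecord:
    subject_id: str
    narratives: list[NarrativeEntry] = field(default_factory=list)
    trajectory: list[SymptomPoint] = field(default_factory=list)
    social_context: dict[str, str] = field(default_factory=dict)  # household, work, community supports

    def trajectory_for(self, symptom: str) -> list[tuple[str, int]]:
        """Return the dated severity series for one symptom: the kind of
        longitudinal view a model needs in order to see the whole arc."""
        return [(p.date, p.severity) for p in self.trajectory if p.symptom == symptom]

if __name__ == "__main__":
    record = LivedCareRecord(subject_id="anon-001")
    record.narratives.append(NarrativeEntry("2026-01-02",
        "Managing better since my neighbour began helping with shopping."))
    record.trajectory.append(SymptomPoint("2025-12-01", "fatigue", 7, "caring for spouse"))
    record.trajectory.append(SymptomPoint("2026-01-02", "fatigue", 4, "new community support in place"))
    print(record.trajectory_for("fatigue"))
```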
8. openCare: The Living Utility Becomes an AI Ecosystem
Your openCare proposal—an “organic information utility” balancing global coherence and local agency—fits AI naturally.
AI agents will soon:
- operate across clinical workflows
- manage care coordination
- draft documentation
- evaluate risks and trajectories
- personalise care pathways
- interface directly with citizens
This requires:
- stable semantics (global coherence)
- highly contextualised models (local agency)
Required evolution
openEHR + AI → adaptive, learning care information utilities (sketched below), where:
- records continuously update model behaviours
- models continuously refine clinical insights
- governance ensures the cycle is safe, transparent, and equitable
This makes openCare both plausible and urgent.
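A minimal sketch of that cycle, in which every function is a hypothetical placeholder, to show where the governance gate sits in the loop.

```python
# A minimal sketch of the learning cycle: records feed model refinement, model
# outputs feed back into care, and a governance check gates each step.
# Every function here is a placeholder, not a real training or inference API.

def governance_approves(change_summary: str) -> bool:
    """Placeholder for human and organisational oversight of each step."""
    print(f"governance review: {change_summary}")
    return True  # in practice, a deliberate, audited decision

def refine_model(model: dict, new_records: list[dict]) -> dict:
    """Records continuously update model behaviour (stand-in for retraining)."""
    return dict(model, records_seen=model["records_seen"] + len(new_records))

def generate_insights(model: dict, record: dict) -> dict:
    """Models continuously refine clinical insights (stand-in for inference)."""
    return {"subject": record["subject"], "insight": f"reviewed by model v{model['version']}"}

def learning_cycle(model: dict, incoming_records: list[dict]) -> dict:
    if governance_approves(f"retrain on {len(incoming_records)} new records"):
        model = refine_model(model, incoming_records)
    for record in incoming_records:
        insight = generate_insights(model, record)
        if governance_approves(f"return insight for {insight['subject']}"):
            print("insight fed back into care:", insight)
    return model

if __name__ == "__main__":
    model = {"version": "0.1", "records_seen": 0}
    model = learning_cycle(model, [{"subject": "anon-001"}, {"subject": "anon-002"}])
    print("model state:", model)
```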
9. Philosophy and Ethics: AI Heightens the Moral Dimension
Your essay emphasises that health information is a moral endeavour.
AI makes this unavoidable.
We are entering a world in which:
- machines generate meaning
- machines mediate interpretations
- machines participate in decisions
In this world, openEHR’s mission becomes not only technical but philosophical:
to preserve symbolic clarity, semantic integrity, and human-centred meaning in a time of algorithmic turbulence.
Summary: How the Mission of openEHR Must Evolve
From:
Creating a durable semantic foundation for safe, meaningful care.
To:
Creating a durable semantic and governance foundation for safe, meaningful, accountable human-AI collaboration in care.
openEHR’s role expands to:
- the semantic backbone of medical AI
- the governance substrate for AI behaviour and provenance
- the ethical counterweight to opaque, proprietary clinical AI
- the representational foundation for lived care
- a socio-technical steward of global-local learning health ecosystems
- the stabilising force in an era of accelerating computational mediation
AI does not replace the mission of openEHR;
it universalises it.
The stakes become higher, the scope broader, and the need for semantic integrity more urgent.
openEHR becomes not just a record foundation but part of the constitution of the clinical-AI world.
Below is an additional forward-looking section designed to integrate cleanly with the voice and structure of your essay.
openEHR and AI in the Coming Decades
The arrival of advanced AI marks a new phase in the history of health information—a period that will likely unfold along the very cycle Whitehead described: a fresh adventure of ideas, a chaotic anarchy of transition, and, eventually, a programme of reform. The question for the openEHR movement is not whether AI will transform healthcare, but how we shape the conditions under which that transformation occurs.
AI will become deeply embedded in clinical work: interpreting scans and histories, predicting trajectories, generating documentation, assisting with triage, personalising care pathways, and supporting the long arcs of chronic and lived care. These systems will require stable, transparent semantics; explicit models of meaning; auditable provenance; and governance that crosses institutional and national boundaries. Without these foundations, AI risks becoming a new layer of opacity—an amplifier of error, inequity, and disconnection.
openEHR is uniquely positioned to provide the semantic and ethical infrastructure required for this next phase. Its archetypes and templates offer explicit meaning that computational systems can reason with. Its governance traditions—rigour, engagement, trust—align closely with emerging principles of responsible AI. And its commitment to openness provides a counterweight to proprietary models that would otherwise shape care behind closed doors.
In the coming decades, the movement will need to evolve in several directions. It will need to support the representational needs of lived care—patient narratives, functional trajectories, social context, emotional and relational dimensions—so that AI models can reflect the whole life of care, not only its biomedical fragments. It will need to encode not just clinical data but the metadata of algorithmic life: lineage of models, conditions of use, boundaries of autonomy, signals of drift, and mechanisms for oversight. And it will need to integrate with learning health systems in which data flows continuously into model refinement and model behaviour flows back into practice.
The openCare idea—a living, organic information utility—becomes not only plausible but necessary in this setting. AI agents will operate across local contexts but require global coherence. The utility must support diversity without fragmentation and continuity without rigidity. It must be a shared commons: a structure that nourishes human judgement rather than substituting for it.
Ultimately, the future of openEHR in the age of AI is not to control or contain these technologies, but to guide their meaning. To ensure that computational reasoning remains tethered to clinical reality, organisational purpose, and human need. To sustain the symbolic insight that Whitehead saw as the basis of civilisation. And to provide, amid uncertainty and acceleration, a stable, open, trustworthy foundation for the next generation of care.
