I have just had the privilege of joining a delegation of Digital Leaders AI Experts for a two-day visit to Lithuania, led by Lord Ranger of Northwood and kindly sponsored by InSoft and Informed Solutions.
The programme brought together leaders from government, industry and academia. It included a reception at the British Ambassador’s residence and a full day of discussions and presentations at the Martynas Mažvydas National Library of Lithuania.
There were some fantastic talks and frank conversations. But what struck me most wasn’t a particular technology or breakthrough. It was something far more fundamental: trust, not technology, will define AI’s future.
Trust is the transformation
Across conversations, panels and presentations, a consistent theme kept surfacing. AI success in the public sector is not about models or tools. It’s about trust, governance, leadership and people.
We often talk about adoption as the goal. But adoption is the easy part. Legitimacy is where things get difficult.
If systems can’t withstand scrutiny, if they aren’t transparent, explainable and properly governed, they simply won’t scale. In that sense, trust isn’t a by-product of transformation. It is the transformation.
Lithuania’s pragmatic approach
One of the most impressive aspects of the visit was Lithuania’s maturity in how it’s approaching AI, especially given that its population is just 2.8 million. There’s a clear focus on building capability, not chasing hype:
- €130 million committed through LitAI – a state-of-the-art centre dedicated to developing and applying AI technologies.
- 120 terabytes of clean, structured government data.
- Investment in sovereign compute.
- Regulatory sandboxes operating alongside live delivery.
What stood out to me was the alignment. Policy, infrastructure and implementation are being developed in parallel, not in isolation. That kind of coordination is still relatively rare, and it’s what enables progress beyond experimentation.
Augment before you automate
Another theme that came through was the principle that AI should augment humans before it automates systems.
This is an important counterbalance to much of the current narrative around AI replacing jobs.
In practice, the most effective applications we discussed were those that enhanced human decision-making, by improving speed, accuracy and insight, rather than removing people from the process altogether. In this way, AI is amplifying intelligence, not replacing it.
Why good AI still fails
One of the more candid observations during the visit was that many AI initiatives don’t fail because of technology. They fail because no one plans for scrutiny.
In the public sector, especially, innovation often gets stuck at the governance stage. Something that works in a controlled environment can quickly unravel when exposed to real-world complexity, regulation and accountability.
I also came across an interesting testing philosophy: deliberately training AI on bad data. Not just teaching systems what “good” looks like but helping them recognise what bad looks like too. It’s a simple idea, but one with real implications for robustness.
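The idea can be made concrete with a toy sketch. The example below is purely illustrative (not any system we discussed): a tiny token-frequency classifier is trained on deliberately bad records alongside good ones, so it learns to recognise what “bad” looks like rather than only what “good” looks like.

```python
from collections import Counter
from math import log

def train(records):
    """records: list of (text, label) pairs, with label in {"good", "bad"}.
    Returns per-label token counts and totals."""
    counts = {"good": Counter(), "bad": Counter()}
    totals = {"good": 0, "bad": 0}
    for text, label in records:
        for token in text.lower().split():
            counts[label][token] += 1
            totals[label] += 1
    return counts, totals

def score(model, text):
    """Label new text by comparing add-one-smoothed token odds."""
    counts, totals = model
    log_odds = 0.0
    for token in text.lower().split():
        p_good = (counts["good"][token] + 1) / (totals["good"] + 2)
        p_bad = (counts["bad"][token] + 1) / (totals["bad"] + 2)
        log_odds += log(p_good / p_bad)
    return "good" if log_odds >= 0 else "bad"

# Deliberately include labelled bad records, not just clean ones.
training = [
    ("date of birth 1984-03-12", "good"),
    ("national id LT123456789", "good"),
    ("date of birth 31/02/9999", "bad"),       # impossible date
    ("national id UNKNOWN null null", "bad"),  # placeholder junk
]
model = train(training)
print(score(model, "date of birth 1990-07-01"))   # → good
print(score(model, "national id UNKNOWN null null"))  # → bad
```

A real system would use far richer models, but the principle is the same: robustness comes from showing the system both sides of the distribution.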
Context matters more than content
A key technical insight that stayed with me is this: context matters more than content.
Whether we’re dealing with regulation, policy or public services, the challenge isn’t just the information itself; it’s the metadata around it.
Timing, jurisdiction, interpretation. These are the factors that determine whether a response is useful. AI systems that ignore context risk being technically correct, but practically wrong.
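A small sketch makes the point. In the hypothetical rule set below (the rates and dates are illustrative), two rules have near-identical content, and only the metadata of timing and jurisdiction determines which answer is actually correct.

```python
from datetime import date

# Hypothetical rule set: the content is nearly identical; the metadata
# (jurisdiction, effective dates) decides which rule applies.
rules = [
    {"text": "VAT rate is 21%", "jurisdiction": "LT",
     "effective_from": date(2009, 9, 1), "effective_to": None},
    {"text": "VAT rate is 19%", "jurisdiction": "LT",
     "effective_from": date(2009, 1, 1), "effective_to": date(2009, 8, 31)},
]

def applicable(rules, jurisdiction, on):
    """Filter by metadata first; the content alone can't tell these apart."""
    def in_force(r):
        return (r["jurisdiction"] == jurisdiction
                and r["effective_from"] <= on
                and (r["effective_to"] is None or on <= r["effective_to"]))
    return [r["text"] for r in rules if in_force(r)]

print(applicable(rules, "LT", date(2009, 6, 1)))   # the earlier rule applies
print(applicable(rules, "LT", date(2024, 1, 1)))   # the later rule applies
```

A system that answered from content alone would be “technically correct” about one of those rates while being practically wrong for the date in question.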
Bringing society with us
Finally, it’s clear that AI transformation is as much social as it is technical.
Emotional context plays a significant role. Fear, trust and confidence all shape how AI is perceived and adopted. There are also generational differences at play. Many older generations view AI through a science-fiction lens. Younger generations are rapidly adopting it, but often without the guidance or training needed to use it responsibly.
If there’s one takeaway here, it’s that society needs to be taken with AI, not dragged along behind it.
Final thought
For me, the Lithuania visit reinforced a simple but important point:
The future of AI won’t be defined by the systems we build — but by the trust we create around them.
And that requires more than technical excellence. It requires leadership, governance and a deep understanding of the human context in which these systems operate.
If you have a question for Ant or the Triad team, please get in touch.

