Legitimacy is the real moat in the public sector
The frameworks that organise commercial AI strategy do not translate cleanly to government.
The same forces restructuring commercial markets are restructuring the public sector. The engineering moat is gone for government too. AI has democratised capability inside and outside the agency. Citizens expect services that respond the way their favourite app responds. Ministers expect briefing packs that read like the strategy decks of the best consulting firms. Internal staff can increasingly do more alone, with AI assistance, than small teams could accomplish a year ago.
The framework that organises how to think about AI in commercial markets translates to public institutions, but it does not translate cleanly. Several things shift in important ways. Ignoring those shifts produces strategy that sounds reasonable but does not survive contact with the realities of public administration.
The first shift is what counts as the moat.
In a commercial setting, the strongest defences are regulatory insulation, proprietary data entrenchment, distribution and workflow breadth. In a public institution, those moats exist, but they sit underneath a more fundamental one: legitimacy, the accumulated public trust that citizens place in the institution's right to act. It comes from statutory basis, from perceived fairness, from accountability mechanisms that work, from the institution's behaviour under pressure and from the quality of the people who lead and serve in it over time. It cannot be automated, procured or bought.
Every other moat the institution has, including its data holdings, its regulatory authority, its specialist expertise and its citizen relationships, is valuable only to the extent that legitimacy holds. A tax office with legitimacy can enforce compliance. A tax office without legitimacy produces a tax strike. A police service with legitimacy can investigate. A police service without legitimacy creates enforcement gaps. A policy department with legitimacy can advise ministers candidly. A policy department without legitimacy gets politicised and hollowed out.
The practical implication is that every AI decision either reinforces legitimacy or erodes it. An AI system that makes citizens feel seen, understood and fairly treated reinforces legitimacy. An AI system that feels opaque, alienating or unfair erodes it. The difference is rarely in the AI model itself. It is in how the institution deploys it, explains it, governs it and owns accountability for its outcomes. The question for every deployment is whether it strengthens or weakens the public's trust in the institution's right to act.
The second shift is how the institution hears its market.
Commercial firms get clean signals from customers through pricing behaviour and repeat purchase patterns. Public institutions do not. There is no price. Voting happens too rarely and bundles too many issues. Surveys can be gamed. Consultations skew toward the articulate and organised. The feedback environment is harder to read than any commercial market, and AI is making it harder still.
Citizens are increasingly comparing their experience with government to their experience with commercial services. The citizen who uses an AI-powered banking app, a conversational health service or a smart government portal in another jurisdiction comes to their own government with implicit expectations. When those expectations are not met, the gap becomes visible. The risk is legitimacy erosion. Citizens who experience frictionless service everywhere else and confusing service from their government draw conclusions about whether the government can deliver. Those conclusions translate into political pressure, workforce departures and eventually mandate loss.
The third shift is what shipping fast actually means.
The commercial principle "move fast and break things" is malpractice in a child protection agency or a payments system. Public sector failure is more visible, more consequential and more legitimacy-damaging than commercial failure. The response is not to abandon velocity, because slow public sector AI adoption has its own failure mode. Citizens lose trust that the institution can adapt. Political pressure to "do something" increases, often resulting in rushed adoption of the wrong technology for the wrong reasons. Adjacent institutions develop capability that makes the public institution look obsolete.
The right framing is shipping real capability within credible governance. Something real every quarter. Not theatre, not demonstrations, but actual capability that changes how the institution works or delivers services. Iterate on the capability and the governance together. The institutions getting this right in 2026 are not the ones with the most sophisticated AI strategies. They are the ones with the most frequent live AI deployments, because those deployments teach the institution things that cannot be learned from strategy alone.
Deliberate pace is not cautious pace. An institution that spends three years designing a perfect AI governance framework before deploying anything has spent three years letting citizens conclude that it cannot adapt, letting vendors lock it into outdated procurement arrangements and letting adjacent institutions build capability that makes its own look obsolete. That is not risk minimisation. That is risk maximisation with extra steps.
The fourth shift is data sovereignty.
Commercial AI frameworks treat data governance as a compliance consideration. For public institutions it is more fundamental than that. A commercial firm that loses its data loses its business. A public institution that compromises its data holdings compromises the foundation of citizenship records, tax obligations, benefit entitlements, criminal records, health histories and the evidentiary basis for legal proceedings. The consequences of a serious breach are multi-generational.
This means AI deployment that involves public sector data needs to navigate sovereignty questions with specificity. Where the data is processed. Where it is stored. Who has access under what legal framework. Whether the AI model learns from the data in ways that create dependency. Whether the AI outputs could be subpoenaed in one jurisdiction and affect citizens in another. Institutions that understand their data holdings deeply and can negotiate AI contracts with specificity about data handling are in a stronger position than institutions that treat sovereignty as a legal team problem.
The fifth shift is the work that should not be automated.
In commercial markets, judgement work is the work AI is not yet ready to do. In public institutions, some judgement work should not be automated even when it can be. A decision about whether to remove a child from their family is not improved by being made faster. A decision about whether to prosecute a corporation is not improved by being algorithmic. A decision about whether to deploy force is not improved by being efficient. A welfare interview where the citizen needs to feel heard. A parole assessment where the case officer's judgement is the point. The teacher who notices the student is struggling. The nurse who catches the symptom that the algorithm missed.
In each case, the human is not an inefficient delivery mechanism for a service. The human is the service. Public institutions are more concentrated in this category of work than commercial firms recognise. The frontline social worker, the case officer, the community health nurse and the policy adviser whose value is the institutional memory she carries are the relational core of public service work. As AI commoditises everything that can be commoditised, the relational core becomes more valuable rather than less, because it is the thing citizens actually trust the institution to deliver.
The sixth shift is what the second business looks like.
Commercial firms talk about second businesses as additional revenue streams that monetise the installed base. Public institutions are not trying to monetise. They are trying to deliver public value. The equivalent is the value an institution can create beyond its core statutory mandate, using the data, insights and capability it has accumulated. The tax office's core function is to collect tax. Its second business is the economic data that can inform policy across government. The health system's core function is to deliver health services. Its second business is the epidemiological intelligence that can inform public health policy, urban planning, education and social services.
Most public institutions under-invest in their second business because the incentives are not aligned. Data that could inform policy across government is siloed within single agencies. Insights that could improve public services beyond the originating agency are locked in operational reporting that nobody else reads. AI has removed most of the technical friction; what remains is political, cultural and organisational. The institutions that surface this friction and resolve it will be more useful to the citizens they serve than the institutions that do not.
These shifts are not abstractions. They are the difference between a public sector AI strategy that works and one that produces compliance theatre. The commercial principles are real, but they need to be reframed honestly for the realities of public administration. The right framing is defence first, transformation second. Defence because public institutions have obligations to legitimacy and accountability that private firms do not face. Transformation because an institution that defends without adapting becomes irrelevant to the citizens it exists to serve.
The choices made by current leaders about how their institutions adapt to AI will shape whether those institutions remain trusted for the generation that follows. This is an institutional moment, not a technology one.