What remains defensible in an AI-restructured market
Four moats remain, and they are not equal. A practical guide to defensibility in an AI-restructured market.
The engineering moat is gone.
For twenty years, vertical software companies defended themselves through technical complexity. Deep engineering talent, long roadmaps, compounding codebases that were genuinely hard to replicate. AI has dissolved that moat. A team of six with current AI tools can rebuild most functional products in nine months. The thing your competitors used to need years to copy can now be reproduced inside a year.
This is the structural shift most leaders have only partially absorbed. Two others matter as much.
The services layer is up for grabs. Your customers spend a small fraction of their budget on software and a much larger fraction on the work that software supports. A company might spend ten thousand dollars a year on accounting software and one hundred and twenty thousand on the accountant who closes the books. AI-native challengers are not building better accounting software. They are closing the books. They are capturing the larger budget by selling outcomes rather than tools.
The relational layer is growing. As commodity production gets cheaper, spending reallocates toward goods and services where the human is part of the value rather than an inefficiency to be automated away. Care, hospitality, education, craft, therapy, advisory work and personal services. This layer grows precisely as commodity work gets cheaper, because here the human involvement is the product.
These three forces operate at once. Most strategy frameworks address one. The work of building or defending a business in 2026 is integrating all three.
The first integration question is what defends a business when software itself is no longer defensible. Four moats remain, and they are not equal.
Regulatory insulation is the strongest. Compliance workflows create switching costs that have nothing to do with product quality. Once a vendor is embedded in a regulated process, ripping them out means re-attesting, re-auditing and re-certifying downstream processes. The buyer is not paying for software. The buyer is paying for the accumulated paper trail. This moat compounds because rules get more complex rather than less, and it is the hardest moat for an AI-native challenger to replicate because regulatory surface area is administrative rather than technical.
Proprietary data entrenchment is the second strongest. Once a vendor sits between the customer and the customer's own data, ripping them out means rebuilding the data model. The product is not the moat. The data structure is. This moat is strongest when the data is customer-specific, when accumulation compounds over time and when the vendor's schema becomes how the customer thinks about their own business.
Distribution is third. Channel relationships, partner networks, captive audiences and content authority. Distribution takes years to build and is hard to displace once established. AI does not replicate distribution overnight because it rests on human relationships and accumulated trust, not on code.
Workflow breadth is the weakest of the four. Integration across a customer's processes erodes fastest as AI enables targeted competition. A workflow-breadth-only play is a melting ice cube, because the breadth is exactly what an AI-native challenger can slice into with a focused, better-executed alternative. Workflow breadth only works when combined with one of the other three moats.
Every product needs at least one moat besides workflow breadth. The preference order when designing or deepening moats is regulatory first, data second, distribution third and workflow fourth. And moats buy time, not permanence. An incumbent with moats but no shipped AI is a dead incumbent walking. The moat holds the revenue for a few quarters or years. Eventually the AI-native challenger ships enough functional AI that the buyer crosses the switching-cost gap, and the moat does not save you. The winning formula is moat plus functional AI shipped at pace.
The next test is what to build. The principle is simple. Sell the work, not the tool.
Every product decision should be tested against one question: are we selling an outcome the customer would otherwise pay a human or a service to produce, or are we selling access to software the customer uses themselves? Selling the work captures the work budget. Selling the tool captures only the tool budget. The autopilot position sells the work: the customer buys an outcome, pricing anchors to the displaced cost rather than to software benchmarks, and the comparison class is consulting, outsourcing or a service provider rather than other software vendors. The default position should be autopilot. Every product should be able to describe its deliverable in a single sentence. If the answer is "access to the platform", that is a copilot position, and it needs justifying.
Within the work to be sold, intelligence work and judgement work behave differently. Intelligence work looks like writing code, drafting standard documents, synthesising data, generating reports, producing calculations from rules and processing applications against defined criteria. The rules are complex but they are rules. AI does this well now. Judgement work looks like deciding what to build, when to ship, what to invest in, who to hire and what the board should care about. This requires experience, taste and instinct. AI does not do this autonomously yet, and at the senior tier may never do it well enough to displace humans entirely.
The autopilot sweet spot is intelligence-heavy work that is already outsourced. If a task is outsourced, the buyer has accepted it can be done externally. There is an existing budget line. The scope is defined. Substituting a human service with an AI service is a vendor swap rather than a reorganisation. Outsourced legal work, tax advisory, healthcare revenue cycle, accounting and audit, claims adjusting, procurement support, market research and technical writing all fit. Insourced work is harder because it requires organisational change. Judgement-heavy work is harder because AI is not ready. Outsourced-plus-intelligence gives both momentum and defensibility.
The third test is pace. Speed matters more than unit economics in this cycle. Customers switch on features and AI-readiness rather than on margins. AI-native competitors often have worse gross margins than software incumbents, not better. Inference costs have not collapsed. Burning venture cash to subsidise unit economics is a bridge, not a business model. Incumbents should be winning on P&L. They are losing on product velocity. If you have moats, your job is velocity. Do not optimise for margin at the expense of shipping pace. If you are AI-native, your job is to ship fast enough to accumulate the moat that makes your unit economics eventually work. If you have neither moats nor velocity, you lose.
The buyer is telling you what to do. In most regulated and enterprise contexts, buyers do not want to replace the incumbent. They want the incumbent to ship AI. Recent research on enterprise CFO preferences found that more than three-quarters of finance leaders want to layer AI from new vendors onto existing systems rather than replace the system of record entirely. The incumbent wins if they ship AI. The challenger wins only if the incumbent does not.
The fourth test is what to leave alone. Some work is relational. Automating it does not improve it. It destroys what made it valuable. The patient who needs to feel heard. The student who needs to feel seen. The diner who came for the chef's presence. The advisory client who needs judgement from someone they trust. In each case, automation reduces the cost without preserving the value, and the buyer responds by paying less or going elsewhere to a competitor who keeps the human in.
A major coffee retailer recently tested aggressive automation in its stores. Customers responded to the dehumanisation by staying for less time, returning less often and spending less per visit. The retailer reversed course and hired more baristas, because the relational element, including the handwritten note on the cup and the person who remembers your order, turned out to be part of what it was selling. This is not nostalgia. It is the economic structure that emerges when basic commodity production becomes effectively free.
The test for whether work is relational is whether automating it would destroy the value or just reduce the cost. If automation destroys the value, the work is relational and should not be automated even when it can be. If automation reduces the cost without destroying the value, the work is commodity work. Automate it.
The integrated position pulls all of this together. Ship AI fast enough to defend the moat. Target the work layer rather than the tool layer. Preserve or charge premium for the relational layer where the human is part of the value. An AI-native challenger with no moat gets commoditised. An incumbent with moats who does not ship AI gets disrupted. A pure-autopilot play in a relational market gets out-priced by a human who charges more because the buyer actually prefers the human.
The leaders who will look smart in five years are the ones who can hold all three of these tensions at once. They are shipping AI faster than their incumbent peers. They are targeting outcomes rather than tools. They are protecting the human element where the human element is the product. The question worth asking, before any AI investment or product decision, is which of those three positions the move strengthens, and what it would take to strengthen the other two at the same time.