Investing with Discipline in the Age of AI

Nikhil Punwaney
Franco Danesi

In early February, nearly a trillion dollars was wiped from global software and services stocks in a single week. The immediate cause was a set of AI plugins covering contract review, compliance tracking, and routine professional workflows. But that was just the match. The kindling had been building for months: early evidence that simply building bigger models was delivering diminishing returns, and a growing unease that the businesses most exposed were ones whose core product was a workflow AI could now replicate for a fraction of the cost. The market had been pricing in a clean story, and suddenly the story looked complicated.

The selloff wasn’t irrational, but it wasn’t particularly precise either. What got sold was "software." What actually warranted scrutiny was a specific kind of software: businesses where the core product is a workflow that AI can now approximate at a fraction of the cost. That threat is real and important, but it is not the same for every company.

The companies least exposed share a common trait: their value is not in the workflow itself. It is in the data they have accumulated, the regulatory relationships they hold, and the technical complexity they took years to build; in short, in how deeply embedded they are in the processes that enterprises cannot afford to disrupt. If your product touches financial flows, sits inside a compliance framework, or has become the system of record for something critical, the switching cost isn’t a feature; it’s the business.

AI either shrinks a market, expands one, or redistributes value within it

Sometimes it shrinks. Efficiency gains reduce the need for human labour, and overall spending contracts. Legal research is a clear example: AI does not grow the market for legal research; it compresses the billable hours inside it. Good for clients, but challenging for anyone who built a business around charging for those hours.

Sometimes it expands. AI unlocks latent demand that could never be served economically before. SimScale, a cloud-native engineering simulation platform in our portfolio, is a live example of this. Sophisticated physics simulations were previously affordable only to the largest engineering firms; AI-assisted simulation is now making them accessible to mid-market manufacturers who could never have justified the cost before. These markets were always large in theory; AI is making them large in practice.

And sometimes it redistributes. The overall market stays roughly the same size, but who captures value changes. Value migrates from human labour to software, and then from software to whoever sits closest to the underlying data. BeZero, which has built the world's leading dataset for carbon credit quality ratings, is well-positioned here. As climate markets mature and AI tools make ratings analysis more accessible, the defensibility lies in the proprietary data and the trust institutions place in it, not in any single workflow. The businesses that built pricing models around counting employees are under pressure. The ones charging for outcomes delivered rather than seats occupied tend to look more resilient.

There is no demand for average; the better product wins. In an environment where anyone can build, winner-take-all dynamics intensify.

We apply this framework to every business we look at. "AI is a tailwind for this market" is not an analysis; it’s a starting point. The question is which of the three AI is doing here (shrinking the market, expanding it, or redistributing who captures it), and who benefits.

The AI stack has three rough layers, and they are not created equal

At the top are the foundation models: the large AI systems being built by a handful of well-capitalised organisations, spending at a scale that has consistently surprised even the most optimistic forecasters. This is a genuine arms race, and the capital required to stay competitive at that layer grows every year.

At the application layer, a few businesses will prove truly durable, but most will not. As the cost of replicating application logic continues to fall, defensibility at this layer becomes harder to maintain. In most categories, the best solution eventually dominates the market. Lower building costs invite more competitors and create more options for users, but that abundance doesn’t spread demand evenly; it concentrates it. People have always gravitated toward the best choice; what’s new is that they can now identify it instantly. As a result, winners capture a larger share, and second place matters less than ever.

The layer that interests us most at Molten is the connective infrastructure in between. When any organisation deploys AI agents at scale, a predictable set of problems emerges: How do you control what the AI is permitted to do? How do you verify it behaved correctly? How do you manage the flow of sensitive data through these systems without creating security or compliance exposure? How do you defend against a vastly expanded attack surface, where AI agents can be manipulated, impersonated, or exploited in ways traditional security tooling was never designed to handle? How do payments and transactions work when the counterparty is an autonomous system rather than a person?
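
To make the first two of those questions concrete, here is a minimal, purely illustrative sketch of what a permission gate for agent actions can look like: every proposed action is checked against an allow-list policy, and every decision is written to an append-only audit log before anything executes. The action names, thresholds, and `AuditLog` structure below are invented for illustration, not drawn from any particular product or vendor API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical action types an AI agent might request.
ALLOWED_ACTIONS = {
    "read_document",     # low risk: always permitted
    "draft_email",       # low risk: always permitted
    "initiate_payment",  # high risk: permitted only under a per-transaction limit
}

@dataclass
class AgentAction:
    agent_id: str
    action: str
    params: dict

@dataclass
class AuditLog:
    """Append-only record of every decision, for after-the-fact verification."""
    entries: list = field(default_factory=list)

    def record(self, action: AgentAction, allowed: bool, reason: str) -> None:
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": action.agent_id,
            "action": action.action,
            "params": action.params,
            "allowed": allowed,
            "reason": reason,
        })

def authorise(action: AgentAction, log: AuditLog) -> bool:
    """Decide whether an agent's proposed action may run, and log the decision."""
    if action.action not in ALLOWED_ACTIONS:
        log.record(action, False, "action not on allow-list")
        return False
    # Illustrative high-risk rule: large payments need a human approver.
    if action.action == "initiate_payment" and action.params.get("amount", 0) > 1_000:
        log.record(action, False, "payment exceeds autonomous limit; escalate to human")
        return False
    log.record(action, True, "within policy")
    return True

if __name__ == "__main__":
    log = AuditLog()
    authorise(AgentAction("agent-7", "draft_email", {"to": "client@example.com"}), log)
    authorise(AgentAction("agent-7", "initiate_payment", {"amount": 25_000}), log)
    for entry in log.entries:
        print(entry)
```

The point is not these particular rules but the pattern: policy enforcement and verification sit in a layer between the agent and the systems it touches, which is exactly the connective infrastructure described above.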

These are not hypothetical future problems. They are the practical bottlenecks that engineering teams across industries are encountering as they move past proof of concept. Binalyze, which enables organisations to identify and contain cyber breaches in hours rather than the weeks that are typical of the current industry standard, is one example of the kind of infrastructure that enterprises are reaching for as their AI deployments expand the attack surface. Form3, which provides payments infrastructure for some of Europe's largest banks, is another: as AI agents begin to initiate transactions autonomously, the reliability and auditability of the payments layer underneath become more critical, not less.

The companies solving these problems are building into the current, urgent demand.

A tailwind hiding in the economics

Early in the current wave, most of the expense was in training. Building a capable AI model required enormous upfront compute, but once built, serving responses to users was relatively cheap. That assumption has since broken down. Inference, the compute consumed every time someone actually uses a model, now accounts for somewhere between 80 and 90 per cent of total compute costs over a model’s lifetime. Unlike traditional software, where you build once and scale cheaply, AI costs grow with every query. More sophisticated models burn more compute per interaction.
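
A rough and deliberately hypothetical back-of-envelope shows why that share keeps climbing: the training bill is paid once, while the inference bill grows with every query, so inference’s share of lifetime compute cost approaches 100 per cent as usage scales. The figures below are invented solely to show the shape of the curve, not to estimate any real model’s economics.

```python
# Illustrative only: invented figures, not estimates for any real model.
TRAINING_COST = 100_000_000   # one-off cost to train the model, in dollars
COST_PER_QUERY = 0.002        # serving cost per query, in dollars

for queries in (1e9, 100e9, 1e12):
    inference_cost = COST_PER_QUERY * queries
    share = inference_cost / (TRAINING_COST + inference_cost)
    print(f"{queries:>16,.0f} queries -> inference is {share:.0%} of lifetime compute cost")
```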

This creates a durable structural opportunity. Any company helping enterprises use AI more efficiently, route workloads more intelligently, or reduce unnecessary compute is building into a cost problem that only gets larger as adoption grows. This is not a niche infrastructure play; it’s plumbing for an increasingly expensive building.

The same principle extends further along the technology horizon. When a genuinely new computing paradigm approaches commercial viability, the software that makes it reliable and usable tends to be built before the hardware is ready, by a small number of teams with deep technical leads. Riverlane, which is building the error correction stack that will be required for quantum computers to operate reliably at scale, is an illustration of this dynamic: the software problem needs to be solved before the hardware is commercially ready, and the team that has the deepest technical lead when readiness arrives is the one that gets designed in. The gap between technical readiness and commercial deployment is exactly where durable infrastructure businesses get founded, not just in quantum.

Evaluating businesses in this environment

Traditional metrics, particularly early-stage revenue, remain important, but for infrastructure businesses, position in the stack often matters more than revenue at Series A. We saw this clearly in the last major platform cycle: many of the companies building the data, API, and identity layers of modern software looked revenue-light early on. What they had was something more valuable: a position the rest of the stack depended on. Thought Machine's approach to core banking, rebuilding the infrastructure that financial institutions run on rather than layering on top of it, reflects the same logic. The depth of integration is the moat.

For AI infrastructure businesses, we now weigh a few signals more heavily. Strategic partnerships tell us more than headline numbers at early stages. If a company at the layer above or below has made a genuine commitment to this company's success, that reveals something about where value is accumulating. Community and practitioner adoption matters because technical users tend to adopt before enterprise procurement, and that curve is a leading indicator of commercial traction. And pricing architecture is a direct test of the thesis. A company charging per outcome or per agent task is aligned with how AI actually delivers value. A company charging per seat is a traditional SaaS business with AI features added, and deserves to be evaluated as one.

None of this replaces financial discipline: entry price, gross margin, and the path to commercial scale are still central. What we are doing is calibrating those against what the evidence actually shows for this category, rather than applying a template from a different era.

The gap between adoption and deployment is where we invest

Most enterprise organisations are stuck right now: executive confidence in AI is at a record high, yet actual AI in production, at scale, in core workflows, is far more limited. The gap between intent and deployment is not a sentiment problem. It is a technical and organisational one: security, governance, reliability, and integration into existing systems.

Naval Ravikant, co‑founder and chairman of AngelList, once captured the founder’s mindset perfectly: “No entrepreneur is worried about AI taking their job. They have a product to build, a market to serve, a creativity to realise.”

The founders we want to back share that same conviction. They see AI not as a threat but as a powerful ally. Their focus is forward‑looking, building what the AI stack will need three to five years from now and laying the foundations others will depend on. Most of these companies are emerging quietly, far from the noise of the hype cycle, but they’re building with deep intent.

Europe is particularly well placed for this moment. Its engineering culture runs deep, its regulatory frameworks create fertile ground for security and compliance innovation, and its builders have long preferred substance over spectacle. That combination makes this an inflection point, not a headwind. For the right founders, what lies ahead isn’t resistance, it’s momentum.

Molten’s approach

At Molten, we take a diversified and disciplined approach to investing. We recognise AI’s transformative potential, but approach it with rigour, backing durable innovation and scalable business models rather than hype. Our 80+ direct investments span four core sectors: Cloud, Enterprise & SaaS; Deeptech & Hardware; Consumer Technology; and Healthtech, with exposure across fintech, cybersecurity, quantum, energy and climate, spacetech, and cloud and software. With £1.4bn in gross portfolio value and £800m across key subsectors, no single theme dominates before it proves out. We see AI as an accelerant, not a threat, and our diversified strategy ensures resilience through market cycles.