The AI Market VCs and Policymakers Are Missing

For AI to fulfill its valuation expectations, investors should treat systemic risks not only as a governance imperative, but also as an investable opportunity. A new market is emerging around the scaffolding that enables AI to be used responsibly at scale, and it remains underestimated by both investors and policymakers, writes Paul Fehlinger in new pieces for New Private Markets and ImpactAlpha.


A striking disconnect is emerging in the AI landscape today. The public narrative and much of the capital flow remain centered on the race to build ever-larger models, with ever more compute and data. It is an impressive contest with generational financial upside (and, as a growing number of investors argue, a non-negligible risk of the kind of over-valuation characteristic of a bubble set to burst), and it captures headlines. But speak with the people responsible for adopting AI at scale, such as the CIO of a major hospital system, the chief risk officer of a multinational bank, or people on the street, and the questions sound very different. They are not only asking how powerful AI can become. They are asking under what conditions it can be used safely, reliably, and in ways that preserve accountability, choice, and human agency.

This tension is the starting point for a piece I just published in New Private Markets together with Johannes Lenhard, the CEO and co-founder of ReframeVenture, the largest VC/LP community on responsible investment, ahead of our joint AI plenary session at the Private Equity International Asia Summit in Singapore. Investors are beginning to register the risks built into today's AI models: the concentration of power in a handful of actors, opaque system behavior, and a growing dependence on stacks that people cannot control. And yet, even as these concerns surface, the incentive structures that shape private markets continue to reward speed, scale, and market capture. We have seen versions of this logic before in Web 2.0, but with AI, the consequences run deeper, because this isn't just about platforms or advertising. This is about the systems through which knowledge, healthcare, finance, and even public decisions will increasingly flow.

The companion piece I wrote in ImpactAlpha looks at the other side of that story: the emerging market opportunity. If the first wave of AI investment, the one we are currently in, is about performance at the model layer, the next wave will be about adoption. The real frontier is not simply what AI can do, but under what conditions it can be deployed at scale in real institutions, under regulatory scrutiny, with verifiable trust, and without forfeiting control over data and its uses. That is where a new investable market is forming: identity and data agency layers; provenance and attribution systems; portable and interoperable data architectures; governance mechanisms embedded into the technology itself rather than bolted on afterwards.

Where the New Investable Growth Layer Is Emerging

We’ve seen this pattern before. After the financial crisis, new regulatory requirements initially appeared as a compliance burden. Yet they ultimately catalyzed the rise of regtech: a resilient, investable layer of financial infrastructure that absorbed complexity and enabled innovation. A similar trajectory played out in cybersecurity, as companies learned to handle data responsibly at scale. In each case, what began as a constraint became the foundation for the next wave of market growth. In my report for the Finnish Innovation Fund Sitra, launched both at Slush and in Brussels alongside EU Commission Executive Vice President Henna Virkkunen, I argued that Europe should similarly lean into this dynamic by helping to catalyze the enabling tech infrastructure in which trust and innovation reinforce one another rather than compete.

With AI now poised to underpin productivity and decision-making across every sector, the scale is categorically different. This enabling layer will not sit at the periphery. It will constitute an entirely new addressable market.

This is not only a strategic consideration for asset owners. There is also a clear entry point for founders and for regulators. For entrepreneurs building in the AI space, the most durable opportunities will emerge not only at the frontier of model performance, but in the scaffolding that allows enterprises and individuals to use AI on their own terms, without losing control of data, agency, or accountability. And regulators, often cast as the actors who slow innovation, have the opportunity to do something very different. By defining clear guardrails for how AI should be adopted, focusing on transparency, interoperability, and user agency, they can help create the incentives for this new enabling layer to develop. Rather than relying solely on prescriptive rules, regulators can treat responsible-use infrastructure as a market to be grown. In doing so, they can encourage competition, widen entry points for new firms, and make responsible AI less about compliance overhead and more about the natural economics of adoption.

Resilient Enabling Infrastructure and Hyped AI Valuations

This shift matters for investors because it determines where durable value will accumulate. If adoption stalls for lack of trust, today's AI valuations become fragile. This is where LPs in particular start to worry: it is, after all, their capital at work. But if enabling infrastructures mature, we see the formation of a long-duration growth wave, one capable of supporting not only the tech sector but healthcare systems, industrial production, financial intermediation, public services, and democratic stability. It is a structural economic transition, not a sector trend. And the investors who update their underwriting now, treating agency, interoperability, and governance not as externalities but as competitive variables, will be positioned where the market is moving, not where the hype currently sits.

These two articles, one tracing the misalignment of incentives, the other outlining the emergent investable layer, are meant to be read together. They describe the same inflection point from two vantage points. The future of AI, and the valuation of AI companies, will not be determined solely by how intelligent models become, but by whether the humans and institutions using them retain the ability to act with agency and trust.
