At this year’s CISO Village Summit, I had the chance to sit down with Dylan Patel, founder of SemiAnalysis and one of the most plugged-in voices in AI infrastructure. It wasn’t a formal interview so much as a conversation, and one I’d been looking forward to. Dylan’s work tracks everything from GPU shipments to global energy consumption and chip geopolitics.
I went in curious about the future of large models and what he’s seeing from his vantage point. I left thinking a lot more about the systems underneath them.
One data point Dylan shared seriously caught my attention. Last month, Anthropic projected that by year-end, 90–95% of code in certain environments could be AI-generated. That shift has huge implications, not just for productivity, but for how we manage and secure IT operations.
It hints at a future where ticket management and even parts of security response at the SOC become fully automated. The rise of AI-generated code underscores just how reliant organizations are becoming on these systems, turning AI infrastructure from a technical conversation into a strategic one.
That level of reliance on AI systems raises new questions for CISOs around cost, control, and continuity.
Here are three takeaways that stuck with me from our conversation:
The Power Problem Is Bigger Than We Think
The numbers are eye-opening. By some estimates, AI data centers could consume 10% of the U.S. power grid by 2030, up from about 2% today. And this demand is arriving on a grid whose capacity has grown only slowly for decades. Some major deployments are resorting to diesel generators just to meet short-term capacity needs.
We talk a lot about scaling AI, but maybe not enough about whether the real-world infrastructure can keep up, or what it will cost when it does. For CISOs, that means not just tracking performance, but budgeting for the true operating expense of AI: power, redundancy, and the overhead of keeping critical systems safe under growing demand.
If you’re a CISO, this isn’t just a facilities issue. Downstream, power constraints could impact inference uptime, vendor SLAs, and recovery times in ways we haven’t fully modeled, creating unexpected budget or continuity risks.
Taiwan’s Role in AI Is a Strategic and Uncomfortable Dependency
This one isn’t new, but Dylan put some hard numbers around it, sharing that more than 90% of advanced AI chips are manufactured in Taiwan. If something disrupts that pipeline, the timeline to rebuild capacity elsewhere could stretch to multiple years.
While few may be raising alarms, it’s difficult to overlook the concentration risk. Any disruption could have ripple effects across global supply chains and platform availability.
Security leaders might want to start asking more questions about where their AI capabilities are physically rooted. Not every model or platform is transparent about the stack underneath, or about the supply-chain vulnerabilities it may introduce in an emergency.
AI Is Becoming a Global Infrastructure Play
In recent months, we’ve seen the Middle East emerge as a key player in large-scale AI infrastructure deals, introducing new geopolitical and financial dynamics into where and how AI is built. At the same time, U.S. export controls on advanced chips are beginning to impact companies like Nvidia, signaling a more assertive stance on the global flow of strategic technology.
The scale of investment from hyperscalers, sovereign funds, and private equity is staggering: tens to hundreds of billions in planned capex, often linked to long-term data center buildouts and GPU commitments.
One thing Dylan said stuck with me: “The most profitable companies in history are borrowing money to fund this.” That underscores both the urgency and the stakes.
For CISOs, this raises a strategic question: How does this capital intensity reshape the economics of AI-driven platforms? If cost overruns, supply chain constraints, or GPU pricing volatility trickle down into your vendor relationships, it may alter how services are priced, delivered, or even sustained.
This infrastructure rush may also reshape who controls the stack—and who secures it. Some of these emerging AI ecosystems may need to be treated more like cloud providers: deeply integrated, often opaque, and mission-critical.
An Optimistic Look Ahead
As AI systems scale, the elements that enable them (power, silicon, physical distribution, and geopolitical policy) are converging into critical inputs that influence enterprise security posture.
For CISOs who understand this early, this means expanding the aperture of risk. The reliability and resilience of AI-driven platforms will be shaped by factors often outside the direct control of IT or security teams. Understanding where dependencies exist, whether in power grids, chip supply chains, or regional partnerships, will be essential to future-proofing both operations and strategy.
In this context, AI security must evolve beyond protecting data and models. It must encompass the upstream infrastructure decisions that increasingly define performance, trust, and risk.
Partner & CRO