AI’s rapid growth risks being constrained by its massive energy consumption. Training a large model like GPT-3 can use over 1,000 MWh – enough to power roughly 130 homes for a year – while a single AI query can consume around ten times more energy than a traditional web search. Data centres, which already account for 1-2% of global electricity use, face rising demand as AI adoption expands. With AI and the wider digital economy poised to drive a 160% increase in data centre demand by 2030, and with AI applications increasingly embedded in daily life, solving AI’s energy consumption problem will be crucial. Without breakthroughs in energy efficiency or clean energy scaling, power grid limitations, costs, and environmental concerns may curb AI’s growth in the short to medium term.
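To make these headline figures concrete, here is a minimal back-of-the-envelope sketch, assuming the widely cited ~1,287 MWh estimate for GPT-3’s training run, ~10 MWh of annual electricity use for a typical home, and ~0.3 Wh per traditional web search; all three constants are rough published estimates, not measurements from this article.

```python
# Back-of-the-envelope check of the headline figures.
# All constants are rough, widely cited estimates (assumptions).
GPT3_TRAINING_MWH = 1_287   # commonly cited estimate for GPT-3 training
HOME_ANNUAL_MWH = 10        # approximate annual use of one household

homes_for_a_year = GPT3_TRAINING_MWH / HOME_ANNUAL_MWH
print(f"Training energy ~ {homes_for_a_year:.0f} homes powered for a year")

WEB_SEARCH_WH = 0.3         # commonly cited estimate per web search
AI_QUERY_WH = 3.0           # ~10x a search, matching the claim above
print(f"One AI query ~ {AI_QUERY_WH / WEB_SEARCH_WH:.0f}x a web search")
```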
Driving efficiency through innovation
Such breakthroughs are already occurring. On the software front, DeepSeek’s models, such as DeepSeek-V2, achieve competitive performance while optimising efficiency through techniques like mixture-of-experts (MoE) architectures, which activate only a fraction of a model’s parameters per token and so reduce energy use compared to dense models of similar scale. On the hardware front, companies like Groq have developed LPUs (Language Processing Units) that deliver up to 10x more energy-efficient inference than GPUs for certain AI workloads. Meanwhile, NVIDIA’s H100 Tensor Core GPU multiplies computational power while sharply reducing energy per operation compared to previous generations, showing how specialised chips can curb AI’s energy demands.
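To see where MoE’s efficiency comes from, the sketch below routes each token through only its top-k experts, so compute – and therefore energy – scales with roughly k/N of the dense-equivalent cost. This is a minimal illustration with made-up dimensions, not DeepSeek’s actual implementation.

```python
import numpy as np

# Minimal mixture-of-experts routing sketch: a router picks the top-k
# experts per token, so only a fraction of the parameters do any work.
rng = np.random.default_rng(0)

N_EXPERTS, TOP_K, D = 8, 2, 16           # 8 experts, activate 2 per token
experts = [rng.standard_normal((D, D)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D, N_EXPERTS))

def moe_forward(x):
    """Route a token vector x through its top-k experts only."""
    scores = x @ router                              # router logits
    top = np.argsort(scores)[-TOP_K:]                # chosen experts
    w = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax weights
    # Only TOP_K of N_EXPERTS weight matrices are multiplied, so compute
    # (and energy) is ~TOP_K/N_EXPERTS of the dense-equivalent cost.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

token = rng.standard_normal(D)
out = moe_forward(token)
print(f"Activated {TOP_K}/{N_EXPERTS} experts -> ~{TOP_K/N_EXPERTS:.0%} of expert FLOPs")
```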
Other paradigm shifts in AI infrastructure are on the horizon. To battle the enormous cooling needs of data centres, Microsoft’s Project Natick demonstrated the feasibility of underwater data centres: servers submerged in airtight pods had one-eighth the failure rate of their land-based counterparts, thanks to stable, cool conditions, while running entirely on renewable energy. Meanwhile, the European Space Agency (ESA) is exploring space-based data centres, with studies suggesting that orbital computing could draw on near-unlimited solar power and the extreme cold of space for cooling. These radical approaches could redefine AI’s energy footprint by tapping into previously untapped environments.
Unlike previous innovation cycles, which were ultimately constrained by human demand and therefore by the energy that demand could justify, the growth of AI presents a fundamentally different challenge. The latest generation of AI systems can act as ‘virtual humans,’ capable of generating their own demand, output, and interactions. These systems do not just scale linearly with human needs; they scale with the tasks they generate themselves and with their interactions with other AI agents, making it far harder to forecast the limits of AI scaling and usage. Is this an infinite proliferation scenario, or are there limits beyond energy constraints that can help us locate the saturation point for AI use?
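A toy model makes the forecasting difficulty concrete: human-initiated demand grows roughly linearly, while agent-generated demand can compound if each completed task spawns follow-on tasks. Both the growth figure and SPAWN_RATE below are purely hypothetical parameters chosen for illustration.

```python
# Toy contrast between linear, human-driven demand and compounding,
# agent-generated demand. Parameters are purely hypothetical.
HUMAN_GROWTH = 1_000   # new human-initiated tasks per period (assumed)
SPAWN_RATE = 1.5       # follow-on tasks spawned per agent task (assumed)

human_tasks, agent_tasks = 1_000.0, 1_000.0
for period in range(1, 11):
    human_tasks += HUMAN_GROWTH      # grows linearly with human demand
    agent_tasks *= SPAWN_RATE        # compounds with agent activity
    print(f"period {period:2d}: human={human_tasks:>9,.0f}  agent={agent_tasks:>9,.0f}")

# With any SPAWN_RATE > 1, the exponential term eventually dominates
# the linear one -- here agent demand overtakes within a few periods.
```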
Conceptualising limitations on AI
Beyond boundaries
The future of AI will not be shaped solely by what is technically possible, but by how we understand and respond to its limits. This analysis has outlined a critical distinction between practical constraints, such as energy consumption, data availability, and computational efficiency, and theoretical constraints rooted in the physics of information, computational theory, and economic exhaustion. While practical constraints may appear urgent, many are already being actively addressed, which suggests that AI’s physical and economic bottlenecks are solvable.
By contrast, theoretical limits represent enduring boundaries that cannot be engineered away. Economic saturation, likewise, may impose a plateau unless future systems can expand into new problem domains or exhibit forms of autonomous goal-setting beyond current applications.
For investors, researchers, and policymakers, this dual-lens framework is not merely diagnostic; it is strategic. Understanding which constraints are mutable and which are immutable can guide smarter investment in AI infrastructure, help governments assess AI’s alignment with their policy priorities, and steer researchers towards long-term sustainability.
From limits to limitless
Ultimately, this framework forces a more fundamental question: what is the purpose of continued AI advancement? The shape of that future depends not just on technical progress, but on whether our ambitions remain within the bounds of theoretical possibility, and whether we are willing to navigate the practical challenges to get there. The limits of AI are not ceilings to fear, but tools to define a more intentional and accountable trajectory. Clarifying them brings us one step closer to designing an AI future that is not only powerful, but purposeful.
At Access Partnership, we help governments and businesses navigate the complex intersection of emerging technologies, infrastructure demands, and regulatory frameworks. Whether you’re shaping national AI strategy, investing in sustainable innovation, or preparing for the next wave of AI-driven transformation, our global experts can help you anticipate constraints, unlock opportunities, and move forward with confidence.
To find out how we can support your AI ambitions, please contact [email protected].