13 October 2025

Building the hybrid future of AI

AI has emerged as a pivotal general-purpose technology. Generative AI alone is expected to lift global GDP by 7% – around USD 7 trillion – while raising productivity growth by 1.5 percentage points over the next decade.

Consumers and businesses are well aware of AI's benefits. At least three-quarters of consumers worldwide already use generative AI tools, primarily for translation or text and image generation, while 88% of C-suite executives surveyed globally say that accelerating AI adoption across their businesses is a top priority in 2025.

The data center-based AI ecosystem can’t meet this demand alone

This surge in adoption is driving exponential growth in total compute requirements. The compute needed to run generative AI applications alone could increase 125-fold between 2024 and 2030, mainly due to the rising number of use cases and users.

Access Partnership estimates that global data center capacity would need to grow by a further 130% to meet overall demand for AI inference by 2030 – a potential shortfall of approximately 18.7 QFLOPs of AI inference compute capacity.

Given current gaps in investment, power, and resources, closing that shortfall will be difficult:

  • Investment gap: Meeting demand purely through data centers would require at least USD 2.8 trillion of infrastructure investment on top of what is already committed.
  • Power bottlenecks: In some cities, data centers already draw up to 10% of total electricity. Across the U.S., data centers' share of total electricity consumption could triple from 4% in 2023 to 12% by 2028, putting additional stress on existing power networks.
  • Water stress: Cooling hyperscale data centers consumes vast quantities of water, raising sustainability concerns especially in water-scarce regions.
  • Regional inequities: Compute deserts could persist. By 2030, Sub-Saharan Africa and Latin America could face compute gaps of 98% relative to demand – meaning regional supply of AI compute would cover only a small fraction of what is needed.

A complementary path to distribute AI workloads is needed

This is where on-device AI comes in: running (typically lightweight) AI models on the device itself rather than on centralized servers. On-device AI is expected to be more power- and water-efficient, while also offering benefits such as customization, lower latency, and security – all of which can complement existing data center infrastructure. A brief sketch of what local inference looks like follows the list below.

  • Running AI on devices instead of centralizing compute workloads in data centers is estimated to reduce total power consumption by around 90% and improve water efficiency by 96%.
  • On-device AI enables devices to process user data in real time without relying on internet connectivity. Consumers could use AI for real-time translation, photo editing, text generation, or solving specific problems instantly, without worrying about data-transmission vulnerabilities or network downtime.
  • Manufacturing plants that integrate robotics and sensor-based hardware would become more efficient as on-device AI processors circumvent network outages and reduce the latency of large data transmissions.
  • Regulated sectors like financial services can use on-device AI to process sensitive data and tasks locally, ensuring close alignment with data privacy laws. This allows them to verify customer identities for Know Your Customer (KYC) requirements, or to detect fraudulent transactions and unauthorized log-in attempts with greater speed and security.
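
To make this concrete, here is a minimal sketch of what on-device inference can look like, using the open-source ONNX Runtime as one example of a local inference engine. The model file and token IDs below are hypothetical stand-ins for any lightweight model an app might ship; the point is that the entire call runs locally, so it works offline and no user data leaves the device.

```python
import numpy as np
import onnxruntime as ort

# Load a (hypothetical) quantized model bundled with the app or device.
session = ort.InferenceSession("models/translator-int8.onnx")

# Inspect the input the exported graph expects.
input_meta = session.get_inputs()[0]
print(f"model expects: {input_meta.name} {input_meta.shape}")

# Token IDs would normally come from a local tokenizer; dummy values here.
token_ids = np.array([[101, 2023, 2003, 1037, 3231, 102]], dtype=np.int64)

# Run inference entirely on-device: no request leaves the machine, so the
# user's text is never transmitted and the call succeeds with no network.
outputs = session.run(None, {input_meta.name: token_ids})
print(outputs[0].shape)
```

The same pattern applies whether the local runtime is ONNX Runtime, Core ML, or a vendor NPU SDK: the model weights live on the device, and inference becomes a local function call rather than a network round trip.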

The path towards diversifying the AI ecosystem

The future of AI will be hybrid. Access Partnership expects data centers will continue to play a central role in training and hosting the largest and most complex models. However, inference – the day-to-day workload of billions of users and enterprises – will increasingly shift to devices and edge infrastructure.
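
As an illustration of how such a hybrid split might work in practice, the sketch below routes privacy-sensitive and small inference requests to an on-device model and sends only large, complex jobs to a hosted one. The threshold, request fields, and helper functions are hypothetical, illustrative choices rather than a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    contains_pii: bool = False   # e.g. KYC data that must stay local

LOCAL_CONTEXT_LIMIT = 2048       # assumed capacity of the on-device model

def run_local(req: Request) -> str:
    # Placeholder for an on-device runtime call (e.g. an ONNX session).
    return f"[local] {req.prompt[:30]}..."

def run_cloud(req: Request) -> str:
    # Placeholder for a data center API call over the network.
    return f"[cloud] {req.prompt[:30]}..."

def route(req: Request) -> str:
    # Privacy-sensitive work never leaves the device.
    if req.contains_pii:
        return run_local(req)
    # Small prompts are faster and cheaper locally; only oversized jobs
    # fall back to the larger hosted model in the data center.
    if len(req.prompt.split()) <= LOCAL_CONTEXT_LIMIT:
        return run_local(req)
    return run_cloud(req)

print(route(Request("Translate this sentence into French.")))
```

Under this kind of policy, data centers handle training and the heaviest inference, while the everyday, high-volume requests that drive most of the projected compute demand stay on the device.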

Policymakers can promote a hybrid AI architecture by:

  1. Supporting research and innovation in hybrid workload optimization;
  2. Building a robust on-device AI ecosystem; and
  3. Developing standards, safety, and trust frameworks for on-device AI.

Fully leveraging AI will hinge not on building more data centers alone, but on distributing AI across billions of connected devices. This shift will allow AI to scale sustainably, equitably, and securely – positioning on-device AI as the cornerstone of an everywhere, always-on AI future.

