Revolutionizing GPU Power Delivery: The Role of Supercapacitors
- Charlie Karu

- Nov 17, 2025
- 4 min read
Updated: Jan 1
Understanding the Challenge: Fast Transients at the Die
A GPU doesn’t draw power smoothly. Instead, it behaves like a machine repeatedly stamping on the accelerator pedal. In AI workloads, thousands of cores wake up simultaneously, triggering steep current ramps.
When this happens, three things occur:
The Current Demand Increases in Microseconds
AI accelerators regularly see current steps of 100–200 A or more, arriving within microseconds of a workload launch.
Inductance Slows the Delivery
Even a few nanohenries in the PDN prevent current from rising quickly enough.
Voltage Droop Appears at the GPU Die
At core voltages around 1.0–1.2 V, a droop of only 30–50 mV can cause frequency clipping or instability. These transient spikes—lasting from microseconds to milliseconds—are now one of the primary constraints on GPU design and data-center reliability.
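To see why even a nanohenry or two matters, consider the droop across a parasitic inductance during a fast ramp. A minimal sketch, using V = L · dI/dt with illustrative numbers (2 nH, a 200 A step in 1 µs — assumptions, not measurements from a specific board):

```python
# Sketch: inductive voltage droop during a fast current ramp.
# All numbers are illustrative assumptions, not measured values.

L_pdn = 2e-9   # 2 nH of parasitic inductance between VRM and die
di = 200.0     # 200 A current step (idle -> peak)
dt = 1e-6      # ramp time of 1 microsecond

# V = L * dI/dt : voltage lost across the PDN inductance during the ramp
v_droop = L_pdn * di / dt
print(f"Inductive droop: {v_droop * 1000:.0f} mV")  # 400 mV
```

A 400 mV transient droop dwarfs the 30–50 mV budget quoted above, which is why inductance, not resistance, dominates the transient problem.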
Why the PDN Is Failing to Scale
A complete GPU PDN includes:
The voltage regulator module (VRM)
Power stages and inductors
PCB planes and vias
Banks of MLCCs
Package connections
On-die capacitance
Every section adds resistance and, more critically, inductance. Even 1–2 nH between the VRM and the die can destroy transient performance. MLCC banks—often in the thousands on high-end boards—are now constrained by:
Limited PCB real estate
Mechanical cracking
High ripple stress
Steep derating under DC bias (up to 80% loss)
Rising cost and supply volatility
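The derating point is worth quantifying. A minimal sketch of how DC-bias derating shrinks a bank's usable capacitance, with an assumed bank of 1,000 parts at 22 µF nominal (hypothetical values for illustration):

```python
# Sketch: usable capacitance of an MLCC bank under DC-bias derating.
# Assumed values: 1000 parts, 22 uF nominal each, 80% loss at rail voltage.

n_caps = 1000
c_nominal = 22e-6   # 22 uF per capacitor, nominal (zero-bias) rating
derating = 0.80     # up to 80% capacitance loss under DC bias

c_bank_nominal = n_caps * c_nominal
c_bank_effective = c_bank_nominal * (1 - derating)
print(f"Nominal: {c_bank_nominal * 1000:.0f} mF, "
      f"effective under bias: {c_bank_effective * 1000:.1f} mF")
```

Under these assumptions, a 22 mF bank delivers only about 4.4 mF where it counts: at full rail voltage.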
This is why GPU designers have started to adopt supplementary energy reservoirs closer to the die, including supercapacitors.

The Appeal of Supercapacitors at the Chip Level
Supercapacitors offer several advantages over MLCCs for handling transient loads:
High Energy per Volume
This is useful for microsecond-to-millisecond events.
Very Low ESR
Tab-and-foil construction enables milliohm-class impedance.
Thin, Flexible Pouch Formats
These can sit inside PCB cavities, at the package edge, or under the socket.
No DC-Bias Derating
Capacitance remains stable even under full rail voltage.
Making Supercapacitors Work at GPU Rail Voltages
GPU core rails typically operate at around 1.0–1.2 V, which is too low for traditional two- to three-volt supercapacitors. Two design paths exist:
Option A: Low-Voltage Pouch Supercapacitors
These are specifically engineered for chip-level PDNs (nominal around 1 V). This is the direction many emerging suppliers, including Volfpack, are taking.
Option B: Intermediate DC-DC Stages
Charge pumps or inverting regulators translate between the supercapacitor's native voltage and the core rail. AI servers such as NVIDIA’s GB300 NVL72 already use hybrid supercapacitors for rack-level transient shaping, signaling that component-level adoption is not far behind.
Worked Example: Calculating Capacitance Needs
How much capacitance is needed to handle a GPU power spike?
Scenario:
Rail voltage (V): 1.0 V
Idle current (I_idle): 80 A
Peak current (I_peak): 280 A
Spike duration (t): 0.001 s (1 ms)
Allowed voltage droop (dV): 0.05 V (50 mV)
Step 1: Calculate the Current Step
ΔI = I_peak − I_idle = 280 A − 80 A = 200 A
Step 2: Capacitance Needed
Use:
C = (ΔI × t) / dV
Substitute:
C = (200 A × 0.001 s) / 0.05 V = 4 F
Step 3: ESR Requirement for Ripple Control
Voltage ripple due to ESR:
V_ripple = ΔI × ESR
Rearrange for ESR:
ESR = V_ripple / ΔI
For a 25 mV ESR ripple target:
ESR = 0.025 V / 200 A = 125 µΩ
This is far beyond the capability of MLCC banks but achievable with advanced pouch supercapacitors.
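The three steps above can be collected into a short script using the scenario values, so the arithmetic is easy to re-run with different spike parameters:

```python
# Worked example from the text: capacitance and max ESR to ride out a GPU spike.

i_idle = 80.0       # idle current (A)
i_peak = 280.0      # peak current (A)
t_spike = 1e-3      # spike duration (s)
dv_allowed = 0.05   # allowed voltage droop (V)
v_ripple = 0.025    # ESR ripple target (V)

di = i_peak - i_idle                   # Step 1: current step (A)
c_needed = di * t_spike / dv_allowed   # Step 2: C = dI * t / dV (F)
esr_max = v_ripple / di                # Step 3: ESR = V_ripple / dI (ohms)

print(f"Current step: {di:.0f} A")                 # 200 A
print(f"Capacitance needed: {c_needed:.1f} F")     # 4.0 F
print(f"Max ESR: {esr_max * 1e6:.0f} micro-ohms")  # 125 micro-ohms
```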
Placement and Packaging: The Inductance Battle
Supercapacitors only work if they are placed with minimal inductance. Key practices include:
Mounting close to GPU power pins
Using wide copper planes
Minimizing via length and via count
Embedding devices in PCB cavities
Integrating pouches at the package perimeter
Compared with MLCC banks, thin supercapacitors allow placement strategies that maintain extremely low inductive paths.
Thermal Performance: Keeping It Cool
Because supercapacitors have low ESR, resistive heating is modest. Their large surface area helps conduct heat into:
The PCB
The cold plate
Adjacent thermal paths
This makes thermal control simpler than with traditional electrolytic or polymer capacitors.
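A rough estimate of that resistive heating follows from P = I² × ESR. Using the worked example's 125 µΩ ESR and assuming 200 A of RMS transient current (an assumption for illustration):

```python
# Sketch: resistive self-heating in a low-ESR supercapacitor.
# Assumed values: 200 A RMS transient current, 125 micro-ohm ESR.

i_rms = 200.0   # assumed RMS current through the capacitor (A)
esr = 125e-6    # equivalent series resistance (ohms)

p_heat = i_rms ** 2 * esr   # P = I^2 * R
print(f"Dissipation: {p_heat:.1f} W")  # 5.0 W
```

A few watts spread over a large pouch area is a minor thermal load next to the hundreds of watts the GPU itself dissipates.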
Implications for the Future of GPU Power Delivery
As AI racks scale towards 70–100 GPUs per cluster, the transient load on each die becomes one of the primary system-level design challenges. Supercapacitors provide a new local energy reservoir that complements traditional capacitors.
Benefits include:
Reduced voltage droop
Improved clock stability
Better utilization of transient power budgets
Smoother rack-level energy demand
Lower stress on VRMs and upstream PSUs
In the long term, supercapacitors will be a key enabler of multi-storey PDNs, chiplet-based GPU architectures, and sustainable AI power designs.
Conclusion: The Future is Bright with Supercapacitors
The integration of supercapacitors into GPU power delivery systems is not just a trend; it's a necessity. As we push the boundaries of AI and computing power, these innovative solutions will pave the way for more efficient, reliable, and sustainable energy use in technology.
So, are you ready to embrace the future of energy storage? By stabilizing power delivery from the die to the rack, supercapacitors let AI systems grow denser and more efficient without a matching growth in wasted energy headroom.
References
NVIDIA Developer Blog – How GB300 NVL72 Provides Steady Power for AI
TrendForce – GB300 Ignites Supercapacitor Adoption
Skeleton Technologies – Why AI Data Centres Need Supercapacitors
IEEE – Multi-Storey Power Distribution Networks for GPUs
Passive Components Europe – Supercapacitors for AI Transient Spikes
IEA – Energy Demand from AI
R&D World – The Energy Paradox of AI
Flex CESS – Capacitive Energy Storage Systems
BisResearch – Supercapacitor Market Forecast 2025–2035
NVIDIA – Building the 800 VDC Ecosystem for Scalable AI Factories