
Academic researchers introduced GPUHammer, a new attack method that brings Rowhammer-style memory exploits to graphics processing units (GPUs), successfully demonstrating it on NVIDIA’s RTX A6000 graphics card. The exploit targets physical vulnerabilities in DRAM and is the first known instance of a Rowhammer attack affecting GPU memory.
Last week, a research team from the University of Toronto publicly demonstrated the attack, confirming that “GPUHammer is the first Rowhammer attack on GPUs,” according to their peer-reviewed study. These attacks exploit physical weaknesses in the DRAM modules where data is stored, enabling hackers to alter or corrupt data in memory.
In response to the demonstration, NVIDIA recommended mitigation strategies for customers of its RTX A6000 line. The card used in the attack demo, the RTX A6000 with GDDR6 memory, is a workstation GPU known for its high-performance computing capabilities.
A new era in Rowhammer-style attacks
Historically, Rowhammer-style attacks have focused on CPU memory, but the implications of GPU-based memory corruption are severe, particularly in AI and shared cloud environments.
Rowhammer exploits the physical structure of DRAM by repeatedly activating a single row to cause charge leakage, leading to bit flips in adjacent memory rows, the researchers explained. GPUHammer applies this principle to graphics memory, compromising the integrity of critical workloads processed on the GPU.
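To make that mechanism concrete, the heart of any Rowhammer attack is a tight loop that alternates reads between two “aggressor” addresses so the DRAM rows they occupy are activated as rapidly as possible. The CUDA kernel below is a minimal sketch of that access pattern, not the researchers’ implementation; it assumes aggr1 and aggr2 already map to distinct rows in the same DRAM bank, and it glosses over the cache-eviction tricks a real GPU attack needs to reach DRAM on every read.

```cuda
// Illustrative sketch of the hammering pattern only, NOT the GPUHammer
// implementation. aggr1/aggr2 are assumed to map to two DRAM rows in the
// same bank; finding such addresses and evicting the GPU caches between
// reads are the hard parts a real attack must solve.
__global__ void hammer(volatile unsigned int *aggr1,
                       volatile unsigned int *aggr2,
                       unsigned long long iters,
                       unsigned int *sink_out)
{
    unsigned int sink = 0;
    for (unsigned long long i = 0; i < iters; ++i) {
        sink += *aggr1;  // read row A: forces a DRAM row activation
        sink += *aggr2;  // read row B: closes row A, activates row B
    }
    *sink_out = sink;    // store the sum so the loads are not optimized away
}
```

Repeating this activation pattern hundreds of thousands of times within a DRAM refresh window is what causes charge to leak from the victim rows in between, flipping their bits.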
The attack’s success suggests that DRAM vulnerabilities once limited to CPUs may now pose risks to GPU-based systems, which are often used in multi-tenant environments and cloud-based AI training pipelines.
While years of defense research have focused on CPU-side attacks, GPU vulnerabilities have been viewed as a lesser threat; GPUHammer shows that attacks on GPU memory integrity could now enable malicious actors to interfere with other users’ GPU data.
NVIDIA responds to GPUHammer demo
In response to the findings, NVIDIA issued a customer advisory recommending that customers enable system-level error-correcting code (ECC) as a precaution. NVIDIA stated: “Risk of successful exploitation from RowHammer attacks varies based on DRAM device, platform, design specification, and system settings.”
To enable ECC, NVIDIA advises users to run the command “nvidia-smi -e 1”; ECC status can then be verified with “nvidia-smi -q | grep ECC.” These steps are intended to help prevent bit-flip attacks, although the researchers found that GPUHammer was able to bypass some existing mitigations, such as Target Row Refresh (TRR).
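In practice, the two commands from the advisory look like this (run with administrator privileges; the ECC mode change typically takes effect only after the next GPU reset or reboot):

```
# Enable ECC (typically requires a GPU reset or reboot to take effect)
nvidia-smi -e 1

# Verify that ECC is now reported as enabled
nvidia-smi -q | grep ECC
```

One trade-off worth noting: on GDDR6 cards like the A6000, ECC is implemented in-band, so enabling it sacrifices a small amount of memory capacity and bandwidth in exchange for integrity protection.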
A threat to AI integrity
While GPUHammer operates at the hardware level, its effects on AI systems could be far-reaching. The researchers showed that a single-bit flip was enough to reduce the accuracy of a deep neural network model from 80% to 0.1%.
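To appreciate the scale of that damage, it helps to look at the arithmetic. The snippet below is an illustration, not the researchers’ experiment: it assumes a hypothetical float32 model weight and shows how flipping a single exponent bit replaces an ordinary value with an astronomically large one.

```c
/* Illustration only, not the GPUHammer experiment: shows how one
 * flipped bit in a hypothetical float32 weight changes its value
 * by roughly 38 orders of magnitude. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float weight = 0.5f;                  /* a typical, well-behaved weight */
    uint32_t bits;
    memcpy(&bits, &weight, sizeof bits);  /* reinterpret the float's raw bits */
    bits ^= 1u << 30;                     /* flip the top exponent bit */
    memcpy(&weight, &bits, sizeof weight);
    printf("corrupted weight: %g\n", weight);  /* prints ~1.70141e+38 */
    return 0;
}
```

Once a weight that large enters a matrix multiplication, activations overflow and the model’s outputs become meaningless, which is consistent with the accuracy collapse the researchers reported.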
GPU-level faults can seriously undermine the integrity of AI models, which have become highly reliant on GPUs for parallel processing and other computationally demanding tasks.
Malicious users in shared GPU environments, such as cloud ML platforms, could use GPUHammer to silently interfere with neighboring workloads. These types of attacks can corrupt AI model parameters or degrade inference accuracy without requiring direct access to the victim’s code or data.
Furthermore, the threat posed by low-level memory corruption has serious implications for autonomous systems, edge AI, and other areas where silent failures may go undetected.
How to mitigate risks like GPUHammer
The researchers urge organizations to reassess their hardware security postures and incorporate GPU memory integrity checks into their existing audit frameworks. Given the growing reliance on GPUs for AI workloads, ensuring memory isolation and hardening physical memory protections will be key to preventing future Rowhammer-style exploits.