Deloitte Innovation Day

As part of Deloitte Innovation Day, IITB Trust Lab hosted an intensive technical workshop for Deloitte employees, focused on one of the most intriguing and persistent hardware exploits in modern computing: the Rowhammer attack. This was followed by a panel discussion on ‘Trust and Security in AI’.

Unmasking the Rowhammer Vulnerability

While we often think of cybersecurity as a software-level battle, the Rowhammer Attack shows us that the physical hardware—specifically the DRAM (Dynamic Random-Access Memory)—can be manipulated to bypass even the most robust software security layers.

To understand Rowhammer, we must first look at the relationship between the CPU and DRAM. In a typical architecture, the CPU acts as the high-speed processor while the DRAM serves as the massive, temporary workspace where data is stored for immediate access.

This relationship relies on a fundamental assumption of trust: that the CPU can read and write to specific memory addresses without affecting any of the data stored in adjacent locations. The system is designed such that electrical signals are neatly contained within their intended “cells,” ensuring that the billions of bits stored on a memory chip remain stable and isolated.

In today’s market, the primary metric for DRAM is density: manufacturers are under constant pressure to cram more data into smaller spaces. This comes at a cost. DRAM stores each bit as an electrical charge in a tiny capacitor, and as these capacitors are packed closer together, their electrical fields begin to interact. This physical proximity creates a vulnerability: the electrical activity involved in repeatedly accessing one row can disturb and “leak away” the charge stored in cells of the neighboring rows.

The Technical Workflow

Rowhammer is unique because it allows an attacker to change data in an address space without ever directly accessing it.

To trigger a bit flip, an attacker must ensure the CPU doesn’t just read from its fast internal cache, but actually “hits” the physical RAM on every access. In normal operation, your computer is designed to be efficient: when you access data at memory address X, the CPU stores a copy in its internal caches, and any later access to X is served from the cache. The CPU never actually talks to the DRAM chip again, because the cache is orders of magnitude faster. For an attacker, the CPU cache is therefore a problem: it acts as a “shield,” so it becomes necessary to flush the cache every time.

This is achieved using a loop like the following code snippet:

hammer:
    mov (X), %eax    ; read the data at address X into a register
    mov (Y), %ebx    ; read the data at address Y into a register
    clflush (X)      ; evict address X from all levels of the CPU cache
    clflush (Y)      ; evict address Y from all levels of the CPU cache
    mfence           ; ensure the memory operations have completed
    jmp hammer       ; repeat rapidly
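The same hammering loop can be sketched in C using compiler intrinsics instead of raw assembly. This is a minimal illustration, not a working exploit: the pointers x and y are placeholders for addresses that a real attacker must first arrange to map to DRAM rows adjacent to the victim.

```c
/* Minimal C sketch of the hammering loop. The addresses x and y are
 * placeholders; a real attack must first find two addresses mapping
 * to DRAM rows that flank the victim row. */
#include <stdint.h>
#if defined(__x86_64__) || defined(__i386__)
#include <emmintrin.h>             /* _mm_clflush, _mm_mfence (SSE2) */
#define flush(p)  _mm_clflush(p)
#define fence()   _mm_mfence()
#else
#define flush(p)  ((void)(p))      /* no-op stand-in on non-x86 hardware */
#define fence()   ((void)0)
#endif

/* Repeatedly read x and y, flushing them from the cache each time so
 * that every iteration reaches the physical DRAM chip. */
void hammer(volatile uint8_t *x, volatile uint8_t *y, long iterations)
{
    for (long i = 0; i < iterations; i++) {
        (void)*x;                   /* mov (X), %eax */
        (void)*y;                   /* mov (Y), %ebx */
        flush((const void *)x);     /* clflush (X) */
        flush((const void *)y);     /* clflush (Y) */
        fence();                    /* mfence */
    }
}
```

The `volatile` qualifier keeps the compiler from optimizing away the reads, playing the same role the explicit `mov` instructions do in the assembly version.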


To understand why the attack needs two addresses, we first need to look at the internal architecture of DRAM, specifically its row decoders and row buffers.

The Row Decoder (The “Address Finder”)

DRAM is organized like a giant spreadsheet with rows and columns. When the CPU wants to read data, it sends a specific memory address. The row decoder acts as a translator. It takes the binary address from the CPU and “electrifies” the specific horizontal line (called the wordline) that corresponds to that address. Only one row can be active at a time. The decoder ensures that out of millions of rows, exactly one is selected to release its data.
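Conceptually, the decoder's job is just a bit-field split of the incoming address. The sketch below uses an assumed layout (10 column bits, 15 row bits) purely for illustration; real address-to-row mappings are controller- and vendor-specific and often undocumented.

```c
/* Illustrative sketch of what a row decoder does: carve a memory
 * address into a column index and a row index. The bit widths below
 * are assumptions for illustration only; real DRAM mappings are
 * vendor-specific. */
#include <stdint.h>

#define COL_BITS  10   /* assumed: low 10 bits select the column */
#define ROW_BITS  15   /* assumed: next 15 bits select the row    */

static inline uint32_t dram_column(uint64_t addr)
{
    return (uint32_t)(addr & ((1u << COL_BITS) - 1));
}

static inline uint32_t dram_row(uint64_t addr)
{
    return (uint32_t)((addr >> COL_BITS) & ((1u << ROW_BITS) - 1));
}
```

Under such a mapping, the rows physically adjacent to row r are simply r - 1 and r + 1, which is exactly the adjacency Rowhammer exploits.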

The Row Buffer (The “Temporary Workspace”)

Data cannot be read directly from a memory cell because the electrical charge in a single DRAM capacitor is too weak to be sent across a long wire to the CPU. When a row is activated by the decoder, the entire contents of that row (often 4KB or 8KB of data) are moved into the row buffer. The row buffer contains “sense amplifiers” that detect the tiny charges in the capacitors and amplify them to a readable level. If the CPU needs more data from that same row immediately after, it can be read directly from the row buffer (a “row hit”) much faster than activating a new row (a “row miss”).
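The hit/miss behavior described above can be captured in a toy "open-row" model. This is a sketch of the policy only, not a timing-accurate simulator; row numbers are abstract.

```c
/* Tiny model of the row buffer's open-row policy: the buffer holds
 * the last-activated row; an access to the same row is a "row hit",
 * any other row forces a fresh activation (a "row miss"). */
#include <stdint.h>

typedef struct {
    int64_t open_row;   /* row currently held in the buffer, -1 = none */
    long    hits;       /* accesses served straight from the buffer    */
    long    misses;     /* accesses that required a new row activation */
} row_buffer_t;

void rb_access(row_buffer_t *rb, int64_t row)
{
    if (rb->open_row == row) {
        rb->hits++;             /* row hit: data is already amplified */
    } else {
        rb->open_row = row;     /* activate: copy the row into the buffer */
        rb->misses++;
    }
}
```

Each "miss" in this model corresponds to one wordline activation in the physical chip, and it is these activations that disturb neighboring rows.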


Therefore, to increase the chance of a bit flip, attackers don’t just hammer one address. They read from two different addresses (X and Y) that map to rows in the same bank, ideally the rows on either side of a “victim” row. By alternating reads of X and Y, they force the DRAM to constantly switch which row is held in the row buffer, so every single access triggers a fresh row activation. This rapid switching creates the maximum amount of electrical disturbance for the victim row.
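A small counting sketch makes the point concrete: with an open-row buffer, hammering one address activates its row only once, while alternating two rows forces an activation on every access. Row numbers here are abstract placeholders.

```c
/* Count how many row activations a given access pattern causes under
 * an open-row policy: an activation happens whenever the requested
 * row differs from the one currently open. Illustrative sketch only. */
long count_activations(const long *rows, int n)
{
    long open_row = -1;         /* no row open initially */
    long activations = 0;
    for (int i = 0; i < n; i++) {
        if (rows[i] != open_row) {
            open_row = rows[i]; /* row miss: buffer switches rows */
            activations++;
        }
    }
    return activations;
}
```

Six repeated reads of one row cost a single activation, while six reads alternating between two rows cost six, which is why the attack hammers a pair of addresses rather than one.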

Countermeasures

As this vulnerability exists at the hardware level, fixing it requires a multi-layered defense strategy. Here are some of the countermeasures:

  1. Target Row Refresh (TRR): A hardware-level fix where the memory controller identifies “frequently accessed” rows and automatically refreshes the adjacent victim rows before a bit flip can occur.
  2. OS-Based Isolation: Software-level “sandboxing” that prevents untrusted code from sitting physically adjacent to sensitive kernel data in the RAM.
  3. Per-Row Activation Counters: A more granular approach where the hardware tracks how many times each specific row is activated, triggering a defensive refresh once a safety threshold is reached.
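Countermeasure 3 can be sketched as a simple counter table consulted on every activation. The row count and threshold below are illustrative assumptions; real thresholds are tuned per DRAM part by the vendor.

```c
/* Sketch of per-row activation counters: track activations per row
 * and signal a defensive refresh of the neighboring (victim) rows
 * once a safety threshold is crossed. NUM_ROWS and THRESHOLD are
 * illustrative values, not real hardware parameters. */
#define NUM_ROWS  1024
#define THRESHOLD 50000

static long act_count[NUM_ROWS];

/* Called on each activation of `row`. Returns 1 when the rows
 * adjacent to it (row - 1 and row + 1) should be refreshed now. */
int on_row_activate(int row)
{
    if (++act_count[row] >= THRESHOLD) {
        act_count[row] = 0;     /* counter resets after the refresh */
        return 1;
    }
    return 0;
}
```

Target Row Refresh behaves similarly in spirit, but typically tracks only a small set of "aggressor" candidates rather than one counter per row, trading accuracy for storage.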


Panel Discussion: Trust and Security in AI

Data privacy is never “solved.” In 2006, Netflix released an anonymized dataset of movie ratings, offering a $1 million prize for a better recommendation algorithm. However, the “anonymity” was short-lived.

Researchers demonstrated that by cross-referencing this “anonymous” data with public information on IMDb, they could de-anonymize users—uncovering names, political leanings, and private preferences.

So, if simple recommendation data could be reverse-engineered nearly twenty years ago, the high-dimensional data used by today’s AI models presents an even greater risk.

This risk is exacerbated by the fundamental fact that in cyber warfare, the attacker only needs to find one vulnerability. The defender, however, must protect the entire surface area, 24/7. And AI only scales this problem.

Security Risks with AI

Security in AI goes beyond preventing hacks; it involves securing the logic of the system.

  • The Black Box & Explainability: Deep learning models often function as “Black Boxes.” We see the input and the output, but the internal “reasoning” is inscrutable. Without Explainability, we cannot verify if a model is making decisions based on merit or hidden flaws.
  • Hallucinations & Biases: AI is only as good as its training data. If that data contains historical biases, the AI will amplify them. Furthermore, “hallucinations”—where AI confidently generates false information—pose a direct threat to data integrity.
  • Differential Privacy: As a countermeasure to the Netflix-style breaches, the panel discussed Differential Privacy. This involves injecting “mathematical noise” into datasets so that an AI can learn general patterns without being able to identify any specific individual within the set.
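One concrete instance of the "mathematical noise" idea is randomized response, a classic differential-privacy mechanism. The sketch below is an illustration chosen for this write-up, not the panel's specific example: each respondent flips a coin, answers truthfully on heads, and answers randomly on tails, so any individual answer is deniable while the aggregate remains recoverable.

```c
/* Sketch of randomized response for a yes/no question. Any single
 * answer is plausibly deniable, yet the true "yes" fraction can be
 * recovered from the aggregate. Illustrative example only. */
#include <stdlib.h>

/* One respondent's privatized answer (1 = yes, 0 = no). */
int randomized_response(int truthful_answer)
{
    if (rand() % 2 == 0)
        return truthful_answer;  /* heads: tell the truth */
    return rand() % 2;           /* tails: answer at random */
}

/* With a fair coin, P(yes) = 0.5 * true_fraction + 0.25; invert that
 * relation to estimate the true fraction from the observed one. */
double debias(double observed_yes_fraction)
{
    return 2.0 * observed_yes_fraction - 0.5;
}
```

The defining property is exactly the one raised in the discussion: the analyst learns the population-level pattern without being able to pin any answer on a specific individual.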


AI for Good

For a modern enterprise, trust must be defined through two non-negotiable pillars: Transparency, because stakeholders must know when AI is being used and what data is feeding it; and Accountability, because there must be a clear human-in-the-loop responsible for the AI’s decisions, especially when things go wrong.

As organizations transition from viewing AI as a luxury to an operational necessity, the dialogue shifted from “what AI can do” to “how AI should be governed.” Despite the risk, the panel ended on a note of optimism. AI is not something to be shunned because it can be a tool for social good. From predicting climate patterns to detecting early-stage diseases, the potential is vast and must be embraced.