Deloitte Innovation Day

As part of Deloitte Innovation Day, IITB Trust Lab hosted an intensive technical workshop for Deloitte employees, focused on one of the most intriguing and persistent hardware exploits in modern computing: the Rowhammer attack. This was followed by a panel discussion on ‘Trust and Security in AI’.

Unmasking the Rowhammer Vulnerability

While we often think of cybersecurity as a software-level battle, the Rowhammer Attack shows us that the physical hardware—specifically the DRAM (Dynamic Random-Access Memory)—can be manipulated to bypass even the most robust software security layers.

To understand Rowhammer, we must first look at the “bedrock” of modern server infrastructure: the relationship between CPUs and DRAM.

In today’s market, the primary metric for DRAM is density. Manufacturers are under constant pressure to cram more data into smaller spaces. However, this comes at a cost. DRAM uses tiny capacitors to store bits of data as electrical charges. As these capacitors are packed closer together, their electrical fields begin to interact.

  • The Mechanism: Each bank of memory has a row decoder and a row buffer (which stores the most recently accessed data).
  • The Leak: When a specific row (the Aggressor Row) is activated repeatedly and rapidly, electrical interference causes the capacitors in neighboring rows (the Victim Rows) to leak their charge.
  • The Result: A “bit flip.” A value that was once a 1 becomes a 0. In a binary system, this can be catastrophic.
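The interaction above can be sketched as a toy simulation. All constants here (the sense threshold, the per-activation leakage) are invented for illustration; real DRAM cell physics is analog and far more complex.

```python
# Toy model of DRAM disturbance. Each cell stores charge in [0.0, 1.0];
# a stored bit reads as 1 while charge stays above a sense threshold.
SENSE_THRESHOLD = 0.5   # assumed: charge below this reads as a flipped bit
LEAK_PER_HAMMER = 1e-5  # assumed: charge a victim cell loses per activation

def hammer(victim_charge: float, activations: int) -> float:
    """Drain charge from a victim cell as its aggressor row is activated."""
    return max(0.0, victim_charge - LEAK_PER_HAMMER * activations)

charge = 1.0                      # victim cell starts fully charged (bit = 1)
charge = hammer(charge, 60_000)   # aggressor row activated 60,000 times
bit = 1 if charge > SENSE_THRESHOLD else 0
print(bit)  # the victim's stored 1 now reads as 0
```

The point of the sketch is the threshold behavior: nothing visible happens for thousands of activations, then the stored value silently changes.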

The Technical Workflow

Rowhammer is unique because it allows an attacker to change data in an address space without ever directly accessing it. During the workshop, we explored the specific logic used to bypass CPU caching and strike the DRAM directly.

To trigger a bit flip, an attacker must ensure the CPU doesn’t just read from its fast internal cache, but actually “hits” the physical RAM every time. This is achieved using the following loop:

loop:

    mov (X), %eax    ; Read from address X (forces a DRAM row activation)
    mov (Y), %ebx    ; Read from address Y (a second row in the same bank)
    clflush (X)      ; Flush X from the cache so the next read hits DRAM
    clflush (Y)      ; Flush Y from the cache
    mfence           ; Ensure the memory operations are completed
    jmp loop         ; Repeat rapidly

The attack is becoming more dangerous: as manufacturers cram more transistors into the same space, the threshold for a successful attack keeps dropping. Modern chips require significantly fewer "hits" (repeated accesses) to trigger a flip than they did just a few years ago; published studies report thresholds falling from roughly 139,000 activations on 2014-era DDR3 to under 10,000 on some recent LPDDR4 chips.

Countermeasures

As this vulnerability exists at the hardware level, fixing it requires a multi-layered defense strategy. Here are some of the countermeasures:

  1. Target Row Refresh (TRR): A hardware-level fix where the memory controller identifies “frequently accessed” rows and automatically refreshes the adjacent victim rows before a bit flip can occur.
  2. OS-Based Isolation: Software-level “sandboxing” that prevents untrusted code from sitting physically adjacent to sensitive kernel data in the RAM.
  3. Per-Row Activation Counters: A more granular approach where the hardware tracks how many times each specific row is activated, triggering a defensive refresh once a safety threshold is reached.
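The third countermeasure can be sketched in a few lines. The class name, threshold, and refresh log below are illustrative, not any specific vendor's implementation:

```python
from collections import defaultdict

REFRESH_THRESHOLD = 4  # assumed safety threshold, kept tiny for the demo

class RowCounter:
    """Toy per-row activation counter that proactively refreshes victims."""
    def __init__(self):
        self.counts = defaultdict(int)
        self.refreshed = []  # log of victim rows refreshed defensively

    def activate(self, row: int):
        self.counts[row] += 1
        if self.counts[row] >= REFRESH_THRESHOLD:
            # Refresh the physically adjacent victim rows, then reset.
            self.refreshed.extend([row - 1, row + 1])
            self.counts[row] = 0

rc = RowCounter()
for _ in range(4):
    rc.activate(7)     # hammer row 7 four times
print(rc.refreshed)    # → [6, 8]
```

The design choice worth noting is granularity: TRR heuristically samples "hot" rows and can be fooled by many-sided access patterns, whereas a true per-row counter cannot miss an aggressor, at the cost of extra on-chip state.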

Panel Discussion: Trust and Security in AI

Data privacy is never “solved.” In 2006, Netflix released an anonymized dataset of movie ratings, offering a $1 million prize for a better recommendation algorithm. However, the “anonymity” was short-lived.

Researchers demonstrated that by cross-referencing this “anonymous” data with public information on IMDb, they could de-anonymize users—uncovering names, political leanings, and private preferences.
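The core of such a linkage attack is simple set overlap. Here is a minimal sketch in that spirit; all names and ratings below are invented for illustration, and the real study used far subtler matching (approximate dates, rare movies):

```python
# "Anonymous" rating profile vs. public profiles: match by overlap.
anonymous = {"user_123": {("MovieA", 5), ("MovieB", 1), ("MovieC", 4)}}
public_imdb = {
    "alice": {("MovieA", 5), ("MovieB", 1), ("MovieC", 4), ("MovieD", 2)},
    "bob":   {("MovieA", 2), ("MovieE", 5)},
}

def best_match(profile, public):
    """Return the public identity whose ratings overlap the profile most."""
    return max(public, key=lambda name: len(profile & public[name]))

for anon_id, ratings in anonymous.items():
    print(anon_id, "->", best_match(ratings, public_imdb))  # user_123 -> alice
```

Removing names from a dataset does not help when the data itself, a distinctive pattern of ratings, acts as a fingerprint.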

So, if simple recommendation data could be reverse-engineered nearly twenty years ago, the high-dimensional data used by today’s AI models presents an even greater risk.

This risk is exacerbated by the fundamental fact that in cyber warfare, the attacker only needs to find one vulnerability. The defender, however, must protect the entire surface area, 24/7. And AI only scales this problem.

Security Risks with AI

Security in AI goes beyond preventing hacks; it involves securing the logic of the system.

  • The Black Box & Explainability: Deep learning models often function as “Black Boxes.” We see the input and the output, but the internal “reasoning” is inscrutable. Without Explainability, we cannot verify if a model is making decisions based on merit or hidden flaws.
  • Hallucinations & Biases: AI is only as good as its training data. If that data contains historical biases, the AI will amplify them. Furthermore, “hallucinations”—where AI confidently generates false information—pose a direct threat to data integrity.
  • Differential Privacy: As a countermeasure to the Netflix-style breaches, the panel discussed Differential Privacy. This involves injecting “mathematical noise” into datasets so that an AI can learn general patterns without being able to identify any specific individual within the set.
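The differential privacy idea can be sketched with the Laplace mechanism, the standard way to add calibrated “mathematical noise” to a count query. The epsilon value and data below are illustrative:

```python
import random

def dp_count(values, predicate, epsilon=1.0, sensitivity=1.0):
    """Return a noisy count: true count + Laplace(sensitivity / epsilon) noise."""
    true_count = sum(1 for v in values if predicate(v))
    scale = sensitivity / epsilon
    # A Laplace sample is the difference of two i.i.d. exponential samples;
    # random.expovariate takes the rate parameter (1 / scale).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 38]
print(dp_count(ages, lambda a: a > 30))  # close to the true count of 4
```

Each individual answer is blurred, so no single record can be confidently inferred, yet averaged over many queries the general pattern (about four people over 30) still emerges.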

AI for Good

For a modern enterprise, trust must be defined through two non-negotiable pillars: Transparency, because stakeholders must know when AI is being used and what data is feeding it; and Accountability, because there must be a clear human-in-the-loop responsible for the AI’s decisions, especially when things go wrong.

As organizations transition from viewing AI as a luxury to an operational necessity, the dialogue shifted from “what AI can do” to “how AI should be governed.” Despite the risks, the panel ended on a note of optimism. AI is not something to be shunned; it can be a powerful tool for social good. From predicting climate patterns to detecting early-stage diseases, the potential is vast and must be embraced.