As part of Deloitte Innovation Day, IITB Trust Lab hosted an intensive technical workshop for Deloitte employees, focused on one of the most intriguing and persistent hardware exploits in modern computing: the Rowhammer attack. This was followed by a panel discussion on ‘Trust and Security in AI’.
While we often think of cybersecurity as a software-level battle, the Rowhammer Attack shows us that the physical hardware—specifically the DRAM (Dynamic Random-Access Memory)—can be manipulated to bypass even the most robust software security layers.
To understand Rowhammer, we must first look at the “bedrock” of modern server infrastructure: the relationship between CPUs and DRAM.
In today’s market, the primary metric for DRAM is density. Manufacturers are under constant pressure to cram more data into smaller spaces. However, this comes at a cost. DRAM uses tiny capacitors to store bits of data as electrical charges. As these capacitors are packed closer together, their electrical fields begin to interact: rapidly and repeatedly activating one row can disturb the charge in physically adjacent rows, eventually flipping bits the attacker never directly touched.
Rowhammer is unique because it allows an attacker to change data in an address space without ever directly accessing it. During the workshop, we explored the specific logic used to bypass CPU caching and strike the DRAM directly.
To trigger a bit flip, an attacker must ensure the CPU doesn’t just read from its fast internal cache, but actually “hits” the physical RAM every time. This is achieved using the following loop:
loop:
mov (X), %eax    # read data from address X into a register
mov (Y), %ebx    # read data from address Y into a register
clflush (X)      # flush the cache line holding address X
clflush (Y)      # flush the cache line holding address Y
mfence           # ensure the memory operations are completed
jmp loop         # repeat rapidly
The attack is becoming more dangerous over time: as more transistors are crammed into the same space, the threshold for a successful attack drops. Modern chips require significantly fewer “hits”, or repeated accesses, to trigger a flip than they did just a few years ago.
As this vulnerability exists at the hardware level, fixing it requires a multi-layered defense strategy. Countermeasures include increasing DRAM refresh rates, deploying error-correcting (ECC) memory, and in-DRAM mitigations such as Target Row Refresh (TRR), which refreshes rows adjacent to those being accessed unusually often.
The panel discussion on ‘Trust and Security in AI’ opened with a reminder that data privacy is never “solved.” In 2006, Netflix released an anonymized dataset of movie ratings, offering a $1 million prize for a better recommendation algorithm. However, the “anonymity” was short-lived.
Researchers demonstrated that by cross-referencing this “anonymous” data with public information on IMDb, they could de-anonymize users—uncovering names, political leanings, and private preferences.
If simple recommendation data could be reverse-engineered nearly two decades ago, the high-dimensional data ingested by today’s AI models presents an even greater risk.
This risk is exacerbated by a fundamental asymmetry in cyber warfare: the attacker only needs to find one vulnerability, while the defender must protect the entire attack surface, 24/7. AI only amplifies this problem.
Security in AI goes beyond preventing hacks; it involves securing the logic of the system.
For a modern enterprise, trust must be defined through two non-negotiable pillars: Transparency, because stakeholders must know when AI is being used and what data is feeding it; and Accountability, because there must be a clear human-in-the-loop responsible for the AI’s decisions, especially when things go wrong.
As organizations transition from viewing AI as a luxury to an operational necessity, the dialogue has shifted from “what AI can do” to “how AI should be governed.” Despite the risks, the panel ended on a note of optimism: AI is not something to be shunned, because it can also be a tool for social good. From predicting climate patterns to detecting early-stage diseases, the potential is vast and must be embraced.