Artificial Intelligence Security Threats Spread Across Sessions and Users
AI Memory Attacks Spread Across Sessions and Users, and Most Organizations Aren’t Prepared
The increasing use of artificial intelligence (AI) in various industries has created a new attack surface that many organizations are not equipped to handle.
Agentic Memory Risks
Agentic memory, a persistent retrieval and instruction layer in AI systems, is becoming a critical component of decision-making processes. However, this also introduces risks related to memory reuse between tasks, sessions, or users, which can lead to a loss of trust in the system.
“Memory has taken on a new meaning in agentic systems, where it is not just a temporary storage of data, but a persistent retrieval and instruction layer that stores preferences, earlier context, summaries, workflow patterns, and learned behavior.”
Attackers can exploit this by altering what the model recognizes as legitimate context, creating a persistent control surface rather than a momentary state. This can lead to a range of consequences, including changes in behavior, decisions, and even system performance.
Mitigation Strategies
- Monitor the origins of AI memory
- Set expiration dates for AI memory
- Demand explicit authorization for AI memory access
- Treat long-term retrieved inputs as important operational data and implement real-time scanning during data transfers
- Maintain rigorous provenance tracking for all memory sources
- Establish protocols for the rapid quarantine of corrupted data
Prioritizing a separation of system instructions from user inputs can help safeguard against system hijacking via memory corruption. By adopting these measures, organizations can ensure the integrity of their AI systems and prevent potential attacks.
