Breaking In, Breaking Out: The Bug That Wasn’t a Bug
Marco Figueroa, a bug bounty researcher, found himself on an adrenaline-fueled hunt after noticing an odd error message from ChatGPT.
Was OpenAI’s sandboxed environment secretly vulnerable?
Excited to explore further, Marco prompted, prodded, and poked his way into what felt like a Linux terminal—listing directories, executing scripts, and uncovering curious files.
To Marco, this sandbox playground felt like a breach of boundaries.
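For a sense of what that poking around can look like, here is a minimal sketch of the kind of Python one might ask ChatGPT's interpreter to run; the commands and paths are illustrative assumptions, not reproduced from Marco's findings.

```python
# Hypothetical reconnaissance one might ask ChatGPT's Python interpreter to run.
# Paths and commands are illustrative assumptions, not from the actual report.
import os
import platform
import subprocess

# Fingerprint the environment: kernel, architecture, and the current user.
print(platform.uname())
print(subprocess.run(["id"], capture_output=True, text=True).stdout)

# Peek at the working directory and a few common locations for curious files.
for path in (os.getcwd(), "/home", "/tmp"):
    try:
        print(path, os.listdir(path))
    except OSError as exc:
        print(path, "unreadable:", exc)
```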
Armed with his findings, Marco submitted his report, imagining the sweet taste of a $20,000 payout—only to hit a wall.
The bounty program’s fine print was clear: actions inside the sandbox, no matter how advanced, were “out of scope.”
What seemed like a bug was, in fact, a feature.
OpenAI deliberately designed the sandbox as a safe container for user interactivity, complete with tight guardrails to ensure that code execution remains harmless.
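A quick probe illustrates the point. Assuming the interpreter is cut off from the outside network, even a simple connection attempt goes nowhere; the sandbox lets you run code freely precisely because the blast radius is designed to be minimal.

```python
# A minimal probe of the sandbox's guardrails. Assumption: the interpreter has
# no outbound network access, so a connection attempt fails by design.
import socket

try:
    socket.create_connection(("example.com", 443), timeout=5)
    print("network reachable")
except OSError as exc:
    print(f"network blocked, as designed: {exc}")
```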
The takeaway?
OpenAI’s bug bounty program isn’t about what you can do inside the sandbox; it’s about escaping it. Want to earn that hefty reward? Find a way to break out. Anything less is just playing by the rules.