From vibes to vulnerabilities
2026-02-17, Auditorium

I am not a vuln researcher, and that's kind of the point: LLMs have come a long way in the cyberz. I tried to find a real RCE with Codex. I failed so badly that I accidentally learned how to find bugs in common projects with LLMs. This talk is about using AI to turn bad vibes into real bugs. Drawing on multiple CVEs across React, Node, Ollama, WordPress, and other projects, I'll show how anyone with a little debugging and security knowledge can go from vibes to vulnerabilities.


This talk starts from a failure. As a long-time blue-team practitioner with no vulnerability research background, I tried to use OpenAI Codex to find a real-world RCE that had just dropped—and got absolutely nowhere. What followed was confusion, false positives, and confidently wrong model output. But once I stopped treating Codex like a one-shot bug oracle and started using it as a deeply opinionated debugging assistant, things began to click.

The session walks through how I used "vibes" to find vulnerabilities: from being the annoying kid in the back seat asking "why?" a hundred times, to forcing the model to reason more deeply about code paths, assumptions, and edge cases until something real fell out. I'll walk through the pain and the pleasure of using LLMs for vulnerability discovery, including how this approach led to real findings across projects like React, Node, Ollama, Tether's password manager, WordPress, and Supabase.

We'll talk candidly about where models get stuck, how to work around refusals, why the dumbest ideas sometimes work best, and just how creative—and unhinged—you can get when you stop trusting the model and start interrogating it. I'll also show how this fundamentally changes offensive capability and why it feels like red teams are about to get a serious advantage.

The talk closes with a sober look at the current limitations, the risks, and the broader impact on the security community. The goal isn't to teach exploit development, but to show that with basic debugging skills and the right guardrails, AI can meaningfully assist in finding real vulnerabilities—and dramatically lower the barrier to entry for vulnerability research.

Andrew has been breaking, building, and defending things in infosec for over two decades (wow, old). Starting at Paterva, he spent 10+ years creating Maltego before moving to the US for security roles at BitMEX (IR), Robinhood (IR/D&R), Uniswap (Head of Security), and now Privy (Principal Security Engineer). He's spoken at Black Hat, DEF CON, DSS, EthCC, and countless others, teaching courses and drinking Malibu along the way. When not thinking about security, he's into cat memes, punk rock, and getting involved in just the right amount of unhinged shit to keep security interesting.
