Co-Auditing With AI: Practical Techniques
2026-02-17, Auditorium

Smart-contract security has reached a scale where purely manual review no longer keeps pace. However, fully automated AI auditors may miss context, intent, and threat models that experienced reviewers take for granted.
This talk shows how auditors can integrate agent-style AI tools directly into their audit workflows to accelerate reasoning and amplify each reviewer's individual expertise.


This talk shares how we use AI agents as co-auditors to amplify individual reviewers, not replace them. We focus on a practical framework built inside our audit workflow:

  • Tool-aware agents that can invoke hard technical tools - static analyzers, code mappers, and protocol helpers - rather than relying on raw text prompting.

  • Primer-driven behavior, where auditors define how the agent should think, interpret code, and execute repeatable methodology across domains like lending, staking, and accounting.

  • Artifact generation that persists throughout the engagement - from initial scoping and code mapping to end-stage issue drafting - allowing humans and the agent to build on shared context as the audit progresses.
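To make the three pieces above concrete, here is a minimal, hypothetical sketch of how they could fit together in code. All names (`Primer`, `run_tool`, `co_audit`, the `lending` heuristics, `Vault.sol`) are illustrative assumptions, not the actual Diligence tooling:

```python
# Hypothetical sketch: a primer-driven co-auditor loop that invokes
# hard tools and persists artifacts across the engagement.
from dataclasses import dataclass


@dataclass
class Primer:
    """Auditor-authored instructions: how the agent should read code in a domain."""
    domain: str
    heuristics: list


@dataclass
class Artifact:
    """Persistent output (scoping note, code map, issue draft) shared by humans and agent."""
    stage: str
    content: str


def run_tool(name: str, target: str) -> str:
    # Stand-in for invoking a real tool (static analyzer, code mapper, ...).
    return f"{name} report for {target}"


def co_audit(primer: Primer, target: str) -> list:
    # Stage 1: scoping/code-mapping artifact from a hard tool.
    artifacts = [Artifact("scoping", run_tool("code-mapper", target))]
    # Later stages: the primer's heuristics drive repeatable issue drafting.
    for heuristic in primer.heuristics:
        finding = f"[{primer.domain}] check '{heuristic}' against {target}"
        artifacts.append(Artifact("issue-draft", finding))
    return artifacts


lending = Primer("lending", ["rounding direction on shares", "oracle staleness"])
results = co_audit(lending, "Vault.sol")
print([a.stage for a in results])  # one scoping artifact, then issue drafts
```

The point of the sketch is the division of labor: the primer encodes the auditor's judgment, the tools supply hard facts, and the artifact list is the shared context both human and agent keep building on.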

The core idea is that these systems let auditors bring their own expertise into the workflow. Teams can create private primers, encode their specialties, and optionally share them with the broader community to lift everyone’s capability. The goal is a future where every auditor can use AI to multiply the value of their own judgment - without ceding control to automation.

A security researcher at Consensys Diligence, George is fascinated by math, technology, and their human aspects - privacy, game theory, digital identity, and so on. He eventually found Ethereum, a promising world that fit these interests, and now focuses on audits to safeguard the future of finance.