Drew Stelly, PhD

Security researcher. Company builder. Industrial AI safety.
cd@stelly.org · GitHub · gulfcoastcyber.com
Research leadership · Advisory boards · Keynotes · Industrial AI safety consulting

Current Work

Founder of Gulf Coast Cyber, helping industrial operators adopt AI safely — security assessments, governance, training, and custom implementation. Founder of the Council for Industrial AI Safety (CIAS), an industry body setting governance standards for AI in critical infrastructure. Previously spent a decade at Lockheed Martin red teaming classified weapons platforms, then built the AI Red Team function at Thomson Reuters. NOVA Award recipient. Author of peer-reviewed research and an active vulnerability researcher with confirmed findings in libarchive, FreeType, and DjVuLibre.

Before the Incident: Building an Industrial AI Safety Framework

Foundational white paper from CIAS on AI safety governance for industrial operators — because after is too late. Covers risk frameworks, operational boundaries, and incident response for ML systems in environments where failure means vapor clouds, not error messages.

Beyond security.txt: When the Reporter and the Triager Are Both AI

Vulnerability disclosure infrastructure was built for humans on both ends. That’s ending. This paper proposes vuln-intake.json — a machine-readable intake protocol for when AI agents are doing the finding and the triaging. Includes a working protocol spec, reference architecture, and analysis of the volume problem heading toward bug bounty platforms.

The End of the Human-in-the-Loop: Agentic AI and the Future of Vulnerability Research

35.7 billion test executions, three confirmed findings including an RCE, 15 domain-specific harnesses built in under 30 minutes — all on a single desktop workstation. How coordinated AI agents are changing the economics of vulnerability research, and what it means for industrial AI safety.

Projects

CIAS — Council for Industrial AI Safety
Industry council bringing operators, engineers, and regulators together to set AI safety standards for industrial environments. Eight founding operator seats; discussions held under the Chatham House Rule.
Gulf Coast Cyber
AI safety and cybersecurity services for the industrial sector. AI vendor due diligence, security risk assessments, governance reviews, and custom AI implementation support — helping plants adopt AI without blowing things up (literally).
IntrudeIQ
AI-powered red teaming platform. Autonomous offensive security operations using coordinated AI agents — reconnaissance, exploitation, lateral movement, and reporting. Bringing agentic AI to the attacker's side of the house so defenders know what's coming.
Agentic Vulnerability Research
Research into using coordinated AI agent teams to automate the full vulnerability research pipeline — from target selection and harness generation through crash triage and exploit development. Less about the bugs found (though we found an RCE), more about proving the model. Targets include libarchive, FreeType, DjVuLibre, libtiff, libexif, LibRaw, and Poppler.
ShiftSage
AI-powered shift handover analysis for industrial operations. Ingests operator logs and flags safety risks that get buried in the noise of a 12-hour shift change.
ControlWitness
Decision audit trail and compliance engine for industrial control systems. Tracks who changed what, when, and why — aligned to ISA/IEC 62443.
AlarmIQ
Alarm management analytics for industrial facilities, built on ISA-18.2. AI-driven recommendations to cut through alarm floods and surface the ones that actually matter.
ProcedureWatch
SOP deviation detection for industrial operations. Monitors operator actions against standard procedures, distinguishes dangerous shortcuts from genuine innovations, and closes the feedback loop between the field and the procedure manual.
cyberaiguy.com
Adversarial AI/ML explained for security practitioners — how attacks work and what to do about them.

Professional Engagements

Available for keynotes, workshops, training delivery, and advisory engagements.
Book an Engagement →

AI Safety & Security Training

AI Safety for Industrial Operations — 5-day on-site intensive for engineers, operators, and HSE professionals. Covers AI fundamentals, the AI attack surface, data security, safe deployment in safety-critical environments, and organizational governance. Built from offensive security experience — we teach what actually goes wrong, not theory. Details →


Speaking Topics

The AI Attack Surface: A Red Team Perspective
How AI systems actually get broken. Prompt injection, data poisoning, adversarial ML, and AI-powered social engineering.


Before the Incident
AI safety governance for critical infrastructure. Frameworks, operational boundaries, and incident response — before regulators force your hand.


Agentic Vulnerability Research
35.7 billion test executions, three confirmed findings, one RCE. How coordinated AI agents are changing the economics of security research.


Shadow AI in Industrial Operations
Your engineers are already using AI without policies. What’s at risk and how to get ahead of it.


Securing Industrial AI Deployments
Practical guidance for bringing AI into safety-critical environments without introducing new risk.

Advisory

Available for advisory board roles and consulting engagements in industrial AI safety, AI security, offensive security automation, and vulnerability research programs.

Education

PhD, Engineering and Applied Sciences (Computer Science) — University of New Orleans, 2019.
MS, Computer Science (Mobile Device Security) — University of New Orleans, 2015.
BS, Computer Science (Artificial Intelligence and Cognitive Science) — University of Louisiana, 2012.