The McKinsey AI hack isn’t an AI security story. It’s a regular security story that happened to hit an AI system. The fact that it’s positioned as an AI security story is the actual problem.
Last week, security startup CodeWall disclosed that its autonomous agent found exposed API docs for McKinsey’s AI platform Lilli, identified 22 unauthenticated endpoints, and achieved full database access in under two hours. The exploit was SQL injection. A bug class from 1998.
The headlines say “AI agent hacks AI platform.” That framing lets the real failure off the hook. Unauthenticated endpoints in production, field names concatenated directly into SQL, and system prompts stored alongside user data with write access. None of these are AI shortcomings. They’re the same problems we’ve been solving for two decades.
The data exposure is bad, but those system prompts were also writable. One UPDATE statement could silently change the advice 43,000 consultants receive. No code deploy, no security alert, and no forensic trace.
CodeWall sells offensive AI security tools, so this doubles as a product demo, so keep a skeptical eye on the numbers presented, but that doesn’t make the findings wrong. The real lesson is simpler than the coverage suggests. Every company racing to ship AI is building on the same web infrastructure that it never fully secured. The AI part is new, the vulnerabilities aren’t.


