Claude Code · Secure SDLC · AI Security

I Use Claude Code Every Day. I Also Don't Fully Trust It.


Let me be upfront: Claude Code is genuinely good. I use it constantly. It's changed what I can build in a weekend, and I won't pretend otherwise just to sound more credible.

But I look at every system the same way — including the ones I like. So here's how I actually think about AI coding agents from a security angle.

The agent reads everything

Point Claude Code at a project and it reads your source files, configs, environment files, git history. It needs that context. That same access also means a manipulated agent is looking at a lot of sensitive material.

Most people run it locally and assume they're safe. But consider: you ask it to review a dependency. That dependency contains a prompt injection payload designed to exfiltrate environment variables. Or a teammate commits a file with embedded instructions that subtly redirect the agent's behavior.

These aren't theoretical scenarios. They're just scenarios most people haven't tested yet.
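One cheap way to start testing: grep your dependencies and tracked files for text that reads like instructions aimed at an LLM rather than at a human. This is a minimal sketch, not a detector — the patterns below are illustrative examples I made up, and a real payload can be phrased to evade any fixed list.

```python
import re
from pathlib import Path

# Illustrative patterns only -- phrases that sound like instructions
# addressed to an agent. Extend and tune for your own codebase.
SUSPICIOUS = [
    r"ignore (all )?previous instructions",
    r"do not (tell|inform) the user",
    r"send .* to http",
    r"system prompt",
]
PATTERN = re.compile("|".join(SUSPICIOUS), re.IGNORECASE)

def scan(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line_number, line) for every match under root."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        for n, line in enumerate(text.splitlines(), 1):
            if PATTERN.search(line):
                hits.append((str(path), n, line.strip()))
    return hits
```

Run it over `node_modules` or a vendored dependency before asking the agent to review it. A hit isn't proof of anything — but it's thirty seconds of looking where nobody else does.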

It actually executes code

This is what makes it useful — and what raises the stakes.

In a regular chat, a bad answer is just text. You decide what to do with it. When an agent installs packages, modifies files, and runs scripts directly, the blast radius of a successful manipulation is much larger.

Claude Code has confirmation steps for destructive actions. Read them. Actually read them. Clicking through approval prompts without reading is the same as not having them.

The output becomes your codebase

When AI-generated code passes review and ships, you've accepted responsibility for it. Six months later, when there's a vulnerability — was it the agent? A human edit? A package installed during setup?

Review still matters. More so, actually — because the volume of code you can produce with an AI agent means per-line scrutiny drops if you're not paying attention.

What I actually do differently

Three things, nothing exotic:

Scope what the agent can access. It doesn't need my SSH keys or anything outside the working directory. Keep those separated.
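That separation is easy to check mechanically before you point an agent at a directory. A hedged sketch — the filename list is illustrative, not exhaustive, and you'd want to match your own team's secret-file conventions:

```python
from pathlib import Path

# Illustrative list of filenames that an agent has no business reading.
# Adjust for whatever your team actually uses.
SENSITIVE = {".env", ".env.local", "id_rsa", "id_ed25519", "credentials", ".npmrc"}

def sensitive_files(project_dir: str) -> list[str]:
    """List files under project_dir whose names suggest secrets."""
    root = Path(project_dir)
    return sorted(
        str(p) for p in root.rglob("*")
        if p.is_file() and p.name in SENSITIVE
    )
```

If this returns anything, move those files out of the working tree (or into an ignored, access-restricted location) before the agent starts reading.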

Treat dependency installs as something to review, not approve automatically. Thirty seconds, every time.
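For Python packages, part of that thirty seconds can be mechanical: list which files in a downloaded sdist can influence install-time behavior before you let anything run. A minimal sketch — the filename set is illustrative, and it only covers tar-based sdists, not wheels:

```python
import tarfile

# Files commonly involved in install-time behavior of a Python sdist.
# setup.py in particular executes arbitrary code on install.
INSTALL_HOOKS = {"setup.py", "setup.cfg", "pyproject.toml"}

def install_time_files(sdist_path: str) -> list[str]:
    """List members of a .tar.gz sdist that affect what runs at install."""
    with tarfile.open(sdist_path) as tar:
        return sorted(
            m.name for m in tar.getmembers()
            if m.isfile() and m.name.split("/")[-1] in INSTALL_HOOKS
        )
```

Download with `pip download --no-deps <pkg>`, run this on the tarball, and read the files it names before installing.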

Don't commit AI-generated code without reading it. "Claude suggested" is not a commit message.

None of this means AI coding agents are dangerous to use. It just turns careful use into a habit rather than luck.


Gigia Tsiklauri is a Security Architect and founder of Infosec.ge. Let's talk if your team is adopting AI tooling and wants to think through the security side first.