Blog

Technical articles on AI agent security, threat research, and building open-source security tools.

Research

Docker Sandboxes Are Not Enough: Why AI Agents Need Static Analysis Before Runtime Isolation

Docker sandboxes isolate AI agents at runtime. But a sandboxed agent running malicious skills is still a compromised agent. A look at what runtime isolation misses and why static analysis needs to come first.

garagon · Feb 2026 · 12 min read
Read article →
Security

Your AI Agent Config is a Security Liability

MCP configuration files are among the most dangerous files on a developer machine: hardcoded secrets, npx -y without version pins, Docker with --privileged, shell metacharacters in args. Seven risks with concrete fixes.

garagon · Feb 2026 · 13 min read
Read article →
Research

OWASP Agentic Top 10 Mapped to Aguara Detection Rules

Every risk in the OWASP Top 10 for Agentic Applications mapped to specific Aguara detection rules. 197 rules across 12 categories covering all 10 OWASP risks with concrete examples from 31,000+ scanned skills.

garagon · Feb 2026 · 14 min read
Read article →
Research

MCP Tool Poisoning: Beyond Descriptions — Full-Schema Injection Attacks Explained

Tool poisoning goes far beyond malicious descriptions. Every field the LLM processes is an injection point: parameter names, enum values, error messages, return values. A deep dive into the full attack surface.

garagon · Feb 2025 · 11 min read
Read article →
Guide

From SKILL.md to Shell: A Security Audit Guide for AI Agent Skills

A step-by-step methodology for auditing AI agent skill files. Hidden content, instruction overrides, credential patterns, external communications, and command execution — with detection examples for each.

garagon · Feb 2025 · 12 min read
Read article →
Security

npx -y Considered Harmful: Supply Chain Risks in MCP Server Configurations

Most MCP servers are installed via npx -y, which auto-downloads and executes unverified code. Typosquatting, package takeover, dependency confusion, and postinstall scripts make this a supply chain nightmare.

garagon · Feb 2025 · 8 min read
Read article →
Research

We Scanned 28,000 AI Agent Skills for Security Threats. Here's What We Found.

31,000+ skills across 5 registries. 485 critical-severity findings. Prompt injection, credential leaks, supply chain attacks. The first large-scale security audit of AI agent ecosystems.

garagon · Feb 2025 · 10 min read
Read article →
Engineering

How I Built a Semgrep-Like Scanner for AI Agent Skills

The architecture behind Aguara: three detection layers, 148 YAML rules, concurrent file scanning, and self-testing rules. A deep dive into building a static security scanner for a new attack surface.

garagon · Feb 2025 · 9 min read
Read article →