Blog

Technical articles on AI agent security, threat research, and building open-source security tools.

Research

30 MCP CVEs in 60 Days: The Attack Surface That Keeps Growing

The MCP ecosystem accumulated 30 CVEs in 60 days. We break down every vulnerability category, analyze the root causes, map them to Aguara detection rules, and show what the data tells us about AI agent security.

garagon·Mar 2026·14 min read
Read article →
Engineering

Aguara Is Now a GitHub Action — Security Scanning for AI Agent Repos in One Line

Aguara Security Scanner is available on the GitHub Marketplace. One line in your workflow. 173+ detection rules. 13 categories. SARIF results in GitHub Code Scanning. No API keys, no cloud service, no dependencies.

garagon·Mar 2026·8 min read
Read article →
Engineering

Aguara v0.5.0: 173 Rules, Confidence Scoring & Configurable Limits

20 new detection rules across 8 categories. Confidence scoring (0.0–1.0) on every finding. Configurable --max-file-size. Atomic state writes. The largest single-release rule expansion in Aguara's history.

garagon·Mar 2026·14 min read
Read article →
Research

The Promptware Kill Chain: 7 Stages from Prompt Injection to Full Compromise

Schneier's framework reframes prompt injection as a 7-stage APT kill chain. 21 documented attacks traverse 4+ stages. Each stage mapped to real incidents, Aguara detection rules, MITRE ATLAS techniques, and defense strategy.

garagon·Mar 2026·16 min read
Read article →
Security

AI Agents Don't Understand Secrets: What Aguara Detects and Why It Matters

23.8M secrets leaked on GitHub in 2024. Copilot repos show 40% higher leak rates. Five paths credentials leak through AI agents, what Aguara's detection rules catch, and why static scanning alone isn't enough.

garagon·Mar 2026·10 min read
Read article →
Security Advisory

CVEs in Anthropic's Own MCP Servers: When Reference Implementations Teach the Ecosystem to Be Insecure

Anthropic created MCP, then shipped reference servers with path traversal, argument injection, SQL injection, and sandbox escapes. 9 CVEs analyzed, attack chains documented, and what Aguara detects.

garagon·Feb 2026·16 min read
Read article →
Research

Mapping the Agentic AI Attack Surface: How Aguara Detects the Threats Researchers Identified

Researchers at 4 universities formalize the agentic AI threat model with two supply chains and the Viral Agent Loop. Here is how Aguara's 153 detection rules map to every attack class they identified.

garagon·Feb 2026·12 min read
Read article →
Engineering

Aguara v0.4.0, MCP v0.3.0 & Watch Expansion — Coordinated Release

153 detection rules, 5 new file hardening guardrails, official MCP SDK migration across the stack, and Aguara Watch crosses 42,969 skills across 7 registries. One SDK, zero community forks.

garagon·Feb 2026·10 min read
Read article →
Guide

Securing Your OpenClaw Setup: 7 Checks and How to Automate Scanning with Aguara

OpenClaw's security team has shipped 40+ patches in weeks. But the skill ecosystem is still your responsibility. 7 practical security checks plus step-by-step Aguara integration for scanning skills, configs, and CI/CD.

garagon·Feb 2026·12 min read
Read article →
Security

Kali Linux + Claude Desktop: When Offensive Security Meets MCP, Scanning Becomes Non-Negotiable

Kali Linux officially integrates Claude Desktop via MCP to control nmap, metasploit, and hydra through natural language. If legitimate MCP servers give agents access to pentesting tools, imagine what a malicious one can do.

garagon·Feb 2026·8 min read
Read article →
Research

NIST Asks How to Secure AI Agents. We Already Have Answers.

NIST's NCCoE published a concept paper on AI agent identity and authorization. Their 6 open questions map directly to what Aguara and the MCP ecosystem are building today. Here is the mapping.

garagon·Feb 2026·9 min read
Read article →
Engineering

The Security Flywheel: How Scanner, Observatory, and MCP Server Compound

How a single scanner became a full feedback loop: observatory crawling 42,655 skills, 4 rounds of FP reduction, and an MCP server that gives agents access to the entire cycle. The engineering story.

garagon·Feb 2026·10 min read
Read article →
Research

Docker Sandboxes Are Not Enough: Why AI Agents Need Static Analysis Before Runtime Isolation

Docker Sandboxes isolate AI agents at runtime. But a sandboxed agent running malicious skills is still a compromised agent. What runtime isolation misses and why static analysis needs to come first.

garagon·Feb 2026·12 min read
Read article →
Security

Your AI Agent Config is a Security Liability

MCP configuration files are the most dangerous files on developer machines. Hardcoded secrets, npx -y without version pins, Docker with --privileged, shell metacharacters in args. Seven risks with concrete fixes.

garagon·Feb 2026·13 min read
Read article →
Research

OWASP Agentic Top 10 Mapped to Aguara Detection Rules

Every risk in the OWASP Top 10 for Agentic Applications mapped to specific Aguara detection rules. 173 rules across 13 categories covering all 10 OWASP risks with concrete examples from 40,000+ scanned skills.

garagon·Feb 2026·14 min read
Read article →
Research

MCP Tool Poisoning: Beyond Descriptions — Full-Schema Injection Attacks Explained

Tool poisoning goes far beyond malicious descriptions. Every field the LLM processes is an injection point: parameter names, enum values, error messages, return values. A deep dive into the full attack surface.

garagon·Feb 2025·11 min read
Read article →
Guide

From SKILL.md to Shell: A Security Audit Guide for AI Agent Skills

A step-by-step methodology for auditing AI agent skill files. Hidden content, instruction overrides, credential patterns, external communications, and command execution — with detection examples for each.

garagon·Feb 2025·12 min read
Read article →
Security

npx -y Considered Harmful: Supply Chain Risks in MCP Server Configurations

Most MCP servers are installed via npx -y — auto-downloading and executing unverified code. Typosquatting, package takeover, dependency confusion, and postinstall scripts make this a supply chain nightmare.

garagon·Feb 2025·8 min read
Read article →
Research

We Scanned 28,000 AI Agent Skills for Security Threats. Here's What We Found.

40,000+ skills across 7 registries. 485 critical-severity findings. Prompt injection, credential leaks, supply chain attacks. The first large-scale security audit of AI agent ecosystems.

garagon·Feb 2025·10 min read
Read article →
Engineering

How I Built a Semgrep-Like Scanner for AI Agent Skills

The architecture behind Aguara: three detection layers, 173 YAML rules, concurrent file scanning, and self-testing rules. A deep dive into building a static security scanner for a new attack surface.

garagon·Feb 2025·9 min read
Read article →