In February 2026, NIST’s National Cybersecurity Center of Excellence (NCCoE) published a concept paper titled “Accelerating the Adoption of Software and AI Agent Identity and Authorization.” The paper asks for public comment on how to apply identity, authentication, and authorization principles to AI agents.

The paper poses six categories of questions. Every single one maps to a problem the MCP security ecosystem is already solving.

Here is how each NIST question connects to real-world tooling that exists today.

What NIST is asking

The concept paper frames the challenge clearly: AI agents are systems capable of autonomous decision-making with limited human supervision. They access data, invoke tools, and take actions. The question is how to apply foundational identity principles — identification, authentication, authorization — to ensure agents are known, trusted, and properly governed.

NIST explicitly references the Model Context Protocol (MCP) as a relevant standard, alongside OAuth 2.0/2.1, OpenID Connect, SPIFFE/SPIRE, SCIM, and NGAC. The paper scopes to agentic architectures (agents that acquire context, process results, and take actions), explicitly excluding pure RAG and standalone LLM deployments.

The public comment period is open until April 2, 2026. Feedback goes to AI-Identity@nist.gov.

NIST’s 6 questions, mapped to existing solutions

1. Identification: “How might agents be identified?”

NIST asks: What metadata is essential for an agent’s identity? Should agent identity be ephemeral or fixed? Should identities be tied to hardware, software, or organizational boundaries?

What exists today: Oktsec assigns Ed25519 cryptographic key pairs to each agent at initialization. The identity is tied to the agent instance (not the hardware or the organization). Each agent has a fixed key pair for its lifetime, but key rotation is supported via the oktsec keygen command. The identity is software-bound and verifiable.

In the MCP ecosystem, agents are identified by their MCP client configuration. Aguara’s --auto discovery mode identifies agents across 17 MCP clients (Claude Desktop, Cursor, VS Code, Windsurf, Cline, Zed, and more) by locating their configuration files. Each client configuration defines which MCP servers the agent can access.

2. Authentication: “What constitutes strong authentication?”

NIST asks: What constitutes strong authentication for an AI agent? How do we handle key management — issuance, update, and revocation?

What exists today: Oktsec uses Ed25519 digital signatures on every message. The agent signs its messages with its private key. The proxy verifies the signature before processing. Key issuance is via oktsec keygen. Revocation is via removing the public key from the proxy’s keystore.

The MCP specification itself is integrating OAuth 2.1 for server authentication. Anthropic’s MCP auth specification defines how MCP servers authenticate clients using standard OAuth flows, including token issuance, refresh, and revocation.

3. Authorization: “How can zero-trust principles apply?”

NIST asks: Can authorization policies be dynamically updated? How do we establish least privilege when an agent’s required actions are not fully predictable? How do we handle delegation of authority?

What exists today: Oktsec implements YAML-based policy enforcement. Policies define which agents can communicate with which services and which tools they can invoke. Policies can be updated at runtime without restarting the proxy. Per-rule action overrides (block, quarantine, alert, allow) provide graduated enforcement.

# Example: least-privilege policy for a code review agent
policies:
  - name: code-reviewer
    from: agent/code-reviewer
    allow:
      - to: mcp/github-server
        tools: [read_file, list_files, search_code]
    deny:
      - to: mcp/github-server
        tools: [create_commit, delete_branch, merge_pr]

This directly addresses NIST’s question about least privilege. The agent can read code but not modify it. The policy is declarative and auditable.

4. Auditing: “How to ensure tamper-proof logging?”

NIST asks: How can we ensure agents log their actions in a tamper-proof manner? How do we ensure non-repudiation?

What exists today: Oktsec writes every agent action to a SQLite audit trail with WAL mode for concurrent read/write access. Events include the agent identity, the action taken, the tool invoked, the parameters passed, the verdict (allow/block/quarantine), and the timestamp. The dashboard surfaces per-agent risk scores and event timelines.

For non-repudiation: Ed25519 signatures bind agent identity to actions. If an agent signed a message that invoked a tool, the signature proves the agent authorized that action. The proxy stores the signed message alongside the audit record.

5. Prompt injection prevention: “What controls help?”

NIST asks: What controls prevent direct and indirect prompt injection? After injection occurs, what minimizes impact?

What exists today: Aguara’s 148 detection rules include 15 rules specifically for prompt injection attacks. These cover direct injection (instructions to override system prompts), indirect injection (instructions hidden in data the agent processes), and evasion techniques (base64-encoded injections, unicode obfuscation, HTML comment injection).

The scanner operates at three layers:

  • Pattern matching — regex-based detection of injection patterns, credential formats, command execution syntax
  • NLP analysis — AST-based classification of instruction overrides, authority claims, urgency markers
  • Taint tracking — source-to-sink flow analysis for data exfiltration paths

Aguara Watch scans 42,655 skills across 7 registries with all 148 rules, 4 times daily. This continuous monitoring provides early warning when new injection techniques appear in public registries.

6. Tracking data flows: “How to maintain provenance?”

NIST asks: How do we track and maintain provenance of user prompts and data input sources?

What exists today: In the Oktsec pipeline, every message includes the agent identity, the origin of the request, and the chain of tools invoked. The audit trail maintains the full provenance chain. When an agent invokes an MCP tool, the tool parameters, the response, and the subsequent actions are all recorded.

Aguara’s SARIF output format provides machine-readable finding reports that include the source file, the matched content, the rule that triggered, and the severity. This integrates with CI/CD pipelines for automated provenance tracking in the software supply chain.

What the NIST paper does not cover

The paper scopes to enterprise use cases with controlled environments. It explicitly excludes “identifying and managing access for external agents from untrusted sources.” But in the MCP ecosystem, this is the primary threat surface.

Public MCP registries host tens of thousands of servers from unknown authors. When an agent installs a new MCP server from a registry, it is accepting code from an untrusted source and granting it tool-level access to the user’s system. This is the supply chain problem that Aguara Watch monitors continuously.

The gap between NIST’s scope (enterprise agents in controlled environments) and the real-world deployment pattern (agents installing tools from public registries) is exactly where static analysis provides the most value.

Where to go from here

NIST is soliciting feedback until April 2, 2026. If you work on AI agent security, this is a direct opportunity to shape federal guidance. The questions they ask are the right questions. The answers should reference the open-source tooling that already exists.

The standards NIST cites — MCP, OAuth 2.0/2.1, OIDC, SPIFFE/SPIRE — are the right building blocks. The demonstration project they propose could validate how these standards work together in practice. The open-source ecosystem provides concrete reference implementations.

Public comment: AI-Identity@nist.gov — open until April 2, 2026.

Scan your agent infrastructure today

148 rules. 42,655 skills monitored. Static analysis before your agents install anything.