April 10, 2025 · 8 min read

Building Kernex: Zero-Trust Execution for AI Agents

Rust · Security · AI Agents

Why Zero-Trust for AI Agents?

Autonomous agents are no longer a curiosity — they're running shell commands, reading files, and making network requests on your behalf. The security model most people apply to them is implicit trust: if the agent was invoked by your system, it's assumed to be safe. That assumption is wrong.

Kernex is built on a different premise: every action must be authorized at the kernel level, every invocation is auditable, and no agent gets more capability than its current task requires.

The Architecture

Kernex is a Rust sandboxing runtime with a deliberately minimal surface:

// 5 commands. That's the entire CLI.
kernex init
kernex run <agent> <task>
kernex audit
kernex revoke <session-id>
kernex status

This isn't minimalism for aesthetics — it's a security property. A small API surface means fewer attack vectors, easier auditing, and clearer reasoning about what the system can and cannot do.
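A surface this small can be dispatched with a plain match and a deny-by-default fallback. The sketch below is illustrative, not Kernex's actual internals — the `Command` enum and `parse` function are hypothetical:

```rust
// Hypothetical five-command dispatcher; Kernex's real CLI
// internals may differ.
#[derive(Debug, PartialEq)]
enum Command {
    Init,
    Run { agent: String, task: String },
    Audit,
    Revoke { session_id: String },
    Status,
}

fn parse(args: &[&str]) -> Option<Command> {
    match args {
        ["init"] => Some(Command::Init),
        ["run", agent, task] => Some(Command::Run {
            agent: agent.to_string(),
            task: task.to_string(),
        }),
        ["audit"] => Some(Command::Audit),
        ["revoke", id] => Some(Command::Revoke {
            session_id: id.to_string(),
        }),
        ["status"] => Some(Command::Status),
        // Anything else is rejected, not guessed at.
        _ => None,
    }
}

fn main() {
    assert_eq!(parse(&["status"]), Some(Command::Status));
    assert_eq!(parse(&["rm", "-rf"]), None);
    println!("ok");
}
```

The exhaustive match is the point: there is no plugin hook, no hidden subcommand, and an unrecognized invocation fails closed.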

Isolation at the Kernel Level

Each agent invocation gets a fresh, isolated execution environment. We use Linux namespaces and seccomp filters to constrain what syscalls are available:

use nix::sched::{unshare, CloneFlags};
use nix::Result;

/// Move the current process into fresh user, mount, network,
/// and PID namespaces before any agent code runs.
fn isolate_agent() -> Result<()> {
    // CLONE_NEWUSER comes first: once the process owns a new user
    // namespace, it can create the remaining namespaces without
    // elevated privileges.
    unshare(
        CloneFlags::CLONE_NEWUSER
            | CloneFlags::CLONE_NEWNS
            | CloneFlags::CLONE_NEWNET
            | CloneFlags::CLONE_NEWPID,
    )?;
    Ok(())
}

Network access, filesystem writes, and process spawning are all opt-in, declared at task definition time. If the agent tries to do something it didn't declare, the kernel rejects it.
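Deny-by-default enforcement reduces to a set-membership check against the declared capabilities. The capability names and types below are a hypothetical sketch, not Kernex's actual task-definition schema:

```rust
use std::collections::HashSet;

// Hypothetical capability names; Kernex's real schema may differ.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Capability {
    NetworkEgress,
    FsWrite,
    Spawn,
}

struct Task {
    // Capabilities declared at task definition time.
    declared: HashSet<Capability>,
}

impl Task {
    // Deny-by-default: anything not declared up front is rejected.
    fn authorize(&self, cap: Capability) -> Result<(), String> {
        if self.declared.contains(&cap) {
            Ok(())
        } else {
            Err(format!("undeclared capability: {:?}", cap))
        }
    }
}

fn main() {
    let task = Task {
        declared: HashSet::from([Capability::FsWrite]),
    };
    assert!(task.authorize(Capability::FsWrite).is_ok());
    assert!(task.authorize(Capability::NetworkEgress).is_err());
    println!("ok");
}
```

The inversion matters: instead of enumerating what an agent may not do, the task enumerates the little it may do, and everything else fails.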

Audit Trails

Every action — every syscall, every file access, every network connection — is logged to an append-only audit log. The format is structured so you can query it:

kernex audit --session abc123 --filter network
# 2025-04-09T14:23:01Z  BLOCKED  connect(8, 10.0.0.1:443)  reason=undeclared_network
# 2025-04-09T14:23:02Z  ALLOWED  read(/tmp/task-context.json)

The BLOCKED line tells you something important: the agent tried to reach out to an IP it wasn't supposed to. With implicit trust, you'd never know.
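Because each audit line is a fixed sequence of fields, querying reduces to straightforward string parsing. The field layout below mirrors the example output above, but the exact on-disk format is an assumption:

```rust
// Parse one audit line of the (assumed) form:
//   <timestamp>  <VERDICT>  <action>  [reason=...]
#[derive(Debug, PartialEq)]
struct AuditEntry {
    timestamp: String,
    verdict: String, // "ALLOWED" or "BLOCKED"
    action: String,
    reason: Option<String>,
}

fn parse_line(line: &str) -> Option<AuditEntry> {
    // Fields are separated by runs of two or more spaces, so the
    // single space inside an action like "connect(8, 10.0.0.1:443)"
    // survives intact.
    let mut fields = line
        .split("  ")
        .map(str::trim)
        .filter(|f| !f.is_empty());
    let timestamp = fields.next()?.to_string();
    let verdict = fields.next()?.to_string();
    let action = fields.next()?.to_string();
    let reason = fields
        .next()
        .and_then(|f| f.strip_prefix("reason="))
        .map(str::to_string);
    Some(AuditEntry { timestamp, verdict, action, reason })
}

fn main() {
    let entry = parse_line(
        "2025-04-09T14:23:01Z  BLOCKED  connect(8, 10.0.0.1:443)  reason=undeclared_network",
    )
    .unwrap();
    assert_eq!(entry.verdict, "BLOCKED");
    assert_eq!(entry.reason.as_deref(), Some("undeclared_network"));
    println!("ok");
}
```

An append-only log in a shape like this is trivially greppable, but parsing into a struct is what makes `--filter network`-style queries composable.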

What's Next

Kernex is in active development under Maximlabs. The roadmap includes hardware-backed attestation via TPM, a policy DSL for defining agent capability profiles, and integration with popular agent frameworks like LangChain and AutoGen.

If you're running agents in production and you haven't thought about this, you should.