DojOps security — 8 layers of defense for AI-generated infrastructure

8 layers of defense

Enough security layers that your compliance team won't flinch at AI-generated configs

01

Structured output enforcement

Provider-native JSON modes so LLM output is always valid and parseable. No guessing, no fixing.

02

Schema validation

Every response goes through Zod safeParse(). Markdown stripping, JSON parsing, type checks. Nothing gets used without passing.

03

Deep verification

External tool validation: terraform validate, hadolint, kubectl --dry-run, plus built-in structure lints for GitHub Actions and GitLab CI.
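Shelling out to an external validator reduces to one pattern: run the tool, treat a non-zero exit as failure. The `runValidator` helper below is a sketch, not DojOps' actual API; the CLI invocations in the comments are the standard ones for each tool:

```typescript
import { execFileSync } from "node:child_process";

// Run an external validator; a non-zero exit code means the config failed validation.
function runValidator(tool: string, args: string[], cwd?: string): boolean {
  try {
    execFileSync(tool, args, { cwd, stdio: "pipe" });
    return true;
  } catch {
    return false;
  }
}

// e.g. runValidator("terraform", ["validate", "-no-color"], "infra/")
//      runValidator("hadolint", ["Dockerfile"])
//      runValidator("kubectl", ["apply", "--dry-run=client", "-f", "deploy.yaml"])
```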

04

Policy engine

ExecutionPolicy controls allowed and denied paths, environment variables, timeouts, and file size limits. Writes are restricted to infrastructure paths only.
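A deny-by-default check over that policy might look like this sketch — the field names and prefix matching are illustrative, not DojOps' actual ExecutionPolicy shape:

```typescript
// Hypothetical policy shape; field names are illustrative.
interface ExecutionPolicy {
  allowedPaths: string[];   // path prefixes writes may touch
  deniedPaths: string[];    // always rejected, even under an allowed prefix
  maxFileSizeBytes: number;
}

const policy: ExecutionPolicy = {
  allowedPaths: ["terraform/", "k8s/", ".github/workflows/"],
  deniedPaths: [".env", "secrets/"],
  maxFileSizeBytes: 256 * 1024,
};

// Deny wins over allow; anything outside the allow list is rejected by default.
function isWriteAllowed(p: ExecutionPolicy, path: string, size: number): boolean {
  if (size > p.maxFileSizeBytes) return false;
  if (p.deniedPaths.some((d) => path.startsWith(d))) return false;
  return p.allowedPaths.some((a) => path.startsWith(a));
}
```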

05

Approval workflows

You see a diff preview before every write. Auto-approve, auto-deny, or wire up custom callbacks for CI/CD. High-risk plans need explicit confirmation.
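A custom CI/CD callback might look like this sketch; the `WritePlan` shape and callback signature are assumptions for illustration:

```typescript
type Decision = "approve" | "deny";

// Hypothetical plan shape passed to an approval callback.
interface WritePlan {
  path: string;
  diff: string;
  risk: "low" | "medium" | "high";
}

// In CI: auto-approve low/medium-risk infra changes, deny high-risk plans so
// they fall back to explicit human confirmation.
function ciApprovalCallback(plan: WritePlan): Decision {
  if (plan.risk === "high") return "deny";
  return plan.path.startsWith("terraform/") ? "approve" : "deny";
}
```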

06

Sandboxed execution

Path restrictions, size limits, atomic writes via temp + rename, .bak backups, per-file audit logging. PID-based locking prevents concurrent mutations.
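The temp + rename + `.bak` pattern can be sketched as follows, assuming same-filesystem rename semantics; the helper name is illustrative:

```typescript
import { promises as fs } from "node:fs";

// Write to a temp file, then rename over the target: readers never see a partial file.
async function atomicWrite(path: string, content: string): Promise<void> {
  const tmp = `${path}.tmp-${process.pid}`;
  try {
    await fs.copyFile(path, `${path}.bak`); // keep a .bak of the previous version
  } catch {
    // nothing to back up on first write
  }
  await fs.writeFile(tmp, content, "utf8"); // stage the new content...
  await fs.rename(tmp, path);               // ...then swap it in atomically
}
```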

07

Immutable audit trail

Hash-chained JSONL with SHA-256 integrity verification. SIEM-compatible format. Detect tampering with a single command.
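The hash-chain idea in miniature — each entry records the previous entry's hash, so editing or deleting any line breaks every hash after it. Field names here are illustrative, not the exact DojOps log schema:

```typescript
import { createHash } from "node:crypto";

interface AuditEntry {
  event: string;
  prevHash: string; // SHA-256 of the previous entry (genesis: all zeros)
  hash: string;     // SHA-256 over prevHash + event
}

function appendEntry(chain: AuditEntry[], event: string): AuditEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "0".repeat(64);
  const hash = createHash("sha256").update(prevHash + event).digest("hex");
  return [...chain, { event, prevHash, hash }];
}

// Recompute every hash from genesis; any mutation breaks the chain.
function verifyChain(chain: AuditEntry[]): boolean {
  let prev = "0".repeat(64);
  for (const entry of chain) {
    const expected = createHash("sha256").update(prev + entry.event).digest("hex");
    if (entry.prevHash !== prev || entry.hash !== expected) return false;
    prev = entry.hash;
  }
  return true;
}
```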

08

Zero telemetry

Nothing leaves your machine except requests to your chosen LLM provider. No analytics, no tracking. Run fully local with Ollama.

Stay in the loop

Get updates on DojOps

New skills, provider integrations, and releases. Straight to your inbox. No spam, unsubscribe anytime.

Or reach us at contact@dojops.ai
