AI Engineering That Ships
Hard-won insights from assembly language to multi-agent orchestration.
Written for engineers who care how systems actually behave in production.
Agentic infrastructure · Defense-in-depth security · Modernizing legacy systems
Agentic AI Systems Engineering
Building production-ready multi-agent systems where AI agents generate Claude Code infrastructure for any project — with defined file ownership boundaries, specialized tool restrictions, and automated quality enforcement. Two completed production migrations prove compound returns: the second migration was more complex but completed in fewer sessions.
Read More →
Agentic AI Security Architecture
Applying defense-in-depth security to AI agent systems, directly addressing the OWASP Top 10 for Agentic Applications. Covers prompt injection defense (22 detection patterns), rate limiting as circuit breakers, inter-agent JSON Schema validation, secrets hygiene enforcement, and a 3-tier trajectory monitoring system.
Read More →
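The pattern-based layer of a prompt injection defense like the one above can be sketched in a few lines. This is an illustrative toy, not the production system's 22 patterns; the pattern strings and the `scan_for_injection` name are my own stand-ins.

```python
import re

# Illustrative subset of injection-detection patterns (the system described
# above uses 22; these three are hypothetical stand-ins).
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?(previous|prior)\s+instructions", re.I),
    re.compile(r"reveal\s+.*\b(system prompt|api key|secret)\b", re.I),
    re.compile(r"you\s+are\s+now\s+", re.I),
]

def scan_for_injection(text: str) -> list[str]:
    """Return the source of every pattern that matched, for logging and blocking."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]

print(scan_for_injection("Please ignore all previous instructions."))
```

In a layered design, a hit here would be one signal among several; the rate limiting and schema validation mentioned above sit behind it as independent layers.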
Production AI Systems
Three completed AI projects with real metrics: Text-to-SQL Dashboard (92–95% SQL accuracy, $45/month), Obsidian Knowledge Pipeline (1,000+ notes, 2,757 bidirectional links, $1.50 total cost), and Job Search Agent (1,975 companies monitored, 58,807 jobs/week, 311 curated matches, $5.04/run).
Read More →
Data Intelligence & SQL Engineering
Expert-level SQL across MS SQL Server and PostgreSQL. The text-to-SQL system auto-generates four-panel dashboards from plain English in under 30 seconds using vector search — achieving 92–95% accuracy on a schema with many tables and millions of records, where standard AI approaches fail.
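The retrieval step behind a text-to-SQL pipeline can be sketched as: before generating SQL, rank schema descriptions by relevance to the question and pass only the top few to the model. Real systems use embedding vectors; this toy substitutes word-overlap scoring, and the table names and columns are invented.

```python
def score(query: str, doc: str) -> float:
    # Jaccard word overlap as a cheap stand-in for embedding cosine similarity.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

# Hypothetical schema snippets; a real system would index every table.
SCHEMA_DOCS = {
    "orders": "orders table: order_id customer_id order_date total amount",
    "customers": "customers table: customer_id name email region signup date",
    "products": "products table: product_id name category price stock",
}

def top_schema_matches(question: str, k: int = 2) -> list[str]:
    """Return the k table names whose descriptions best match the question."""
    ranked = sorted(SCHEMA_DOCS, key=lambda t: score(question, SCHEMA_DOCS[t]),
                    reverse=True)
    return ranked[:k]

print(top_schema_matches("total order amount by customer region"))
```

Pruning the schema this way is what keeps large databases inside the model's effective context, which is where whole-schema prompting tends to break down.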
Read More →
The Dotzlaw Team
Two skilled engineers build advanced agentic AI projects and conduct research alongside me. They contribute directly to the systems, articles, and tools published on this site.
Building AI-powered data pipelines and full-stack applications at the intersection of machine learning and real-world business problems.
Applying statistical analysis, neural networks, and modern UI to extract insight from complex datasets and build compelling data-driven applications.
Latest Insights
View all →
AI Security · Securing Agentic AI Systems: What Two Rounds of Adversarial Testing Taught Us
27 attacks across 2 rounds, 14 defense patches, 550 lines of security hardening. The transferable lesson: patching fixes yesterday's attacks, architecture survives tomorrow's. Here is what we learned about building, testing, and defending agentic AI applications.
AI Security · The Escalation Wave: Why Patches Work but Architecture Doesn't
Round 2 re-ran all 10 original attacks against patched code -- 8 were blocked (20% ASR). Then 7 new attacks hit structural weaknesses: Unicode zero-width characters bypassed every regex, 5 rapid requests crashed the server, and a pattern gap between security layers let 11 injection techniques through. Escalation ASR: 85.7%.
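The zero-width bypass has a standard mitigation: normalize input before any regex filter sees it. A minimal sketch follows; the code-point list is illustrative only, and a production filter would strip the whole Unicode `Cf` (format) category rather than an enumerated set.

```python
import unicodedata

# Illustrative zero-width code points; a fuller defense covers all of
# Unicode category Cf.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def normalize_for_filtering(text: str) -> str:
    # NFKC folds lookalike forms (fullwidth letters, ligatures) into their
    # ASCII equivalents where possible; zero-width characters are then
    # stripped outright so "ig\u200bnore" cannot slip past a regex.
    folded = unicodedata.normalize("NFKC", text)
    return "".join(ch for ch in folded if ch not in ZERO_WIDTH)

evasive = "ig\u200bnore previous instructions"
print(normalize_for_filtering(evasive))  # -> "ignore previous instructions"
```

The point is architectural: canonicalize once at the trust boundary, and every downstream pattern match inherits the fix, instead of patching each regex separately.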
AI Security · 65% Attack Success Rate Against an Unpatched Target
Round 1 of our adversarial exercise: 10 attacks in 5 minutes, 7 confirmed vulnerabilities, one critical credential exfiltration. The Red Team read our API keys through a base64-encoded path that nobody thought to validate. Blue Team detected everything -- but the damage was already done.
AI Security · Adversarial Agent Testing: When Your AI Agents Attack Each Other
We built a platform where five Claude Code agents operate as Red Team attackers, Blue Team defenders, and an impartial Referee -- then pointed them at a real target. The first exercise found 7 confirmed vulnerabilities in 5 minutes. The second proved that patches work but architecture doesn't.
Claude Code · WordPress to Astro: Migrating a Production Site with AI-Assisted Infrastructure
41 WordPress articles, 187 images, a design-matched dark theme, and a Projects section -- all extracted from a SQL backup file and rebuilt in Astro. This is the story of migrating dotzlaw.com from WordPress to a modern static site, and what the Bootstrap Framework actually contributed.
AI Security · Securing Agentic AI: How We Found 11 Security Gaps in Our Own Framework and Built Defense-in-Depth to Close Them
We built a framework with 18 skills and 11 hooks. A security audit found 11 gaps. We closed all of them with 6 new hooks, 2 JSON schemas, a 3-tier trajectory monitoring system, and per-archetype security patterns across 7 project types.
Claude Code · From Prototype to Platform: How a Framework Learned to Improve Itself
After two production migrations, we turned the framework on itself. A systematic gap analysis identified 8 missing capabilities. Round 1 added 3 of them, expanding the pipeline from 7 to 10 steps. An independent review graded the work A-. The compound returns operate not just project-to-project but within the framework itself.
Claude Code · An Agent Swarm That Builds Agent Swarms: How We Used Claude Code to Generate Claude Code Infrastructure
We built a framework where Claude Code agents analyze an existing codebase, generate tailored agent teams, hooks, and skills. Two migrations later -- the second harder but faster -- the compound returns are real.
Claude Code · Claude Code Security: Building Defense-in-Depth with Five Primitives
Most Claude Code projects ship with zero security infrastructure. The same 5 building blocks you use for capability -- hooks, agents, skills, commands, and teams -- become a comprehensive defense-in-depth architecture when configured for security.
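One of those primitives, a PreToolUse hook, is essentially a guard script. The sketch below shows only the decision logic; the stdin/exit-code contract (tool call arrives as JSON on stdin, exit code 2 blocks the call) and the marker list are assumptions to verify against the current Claude Code hooks documentation.

```python
import json
import sys

# Hypothetical deny-list; real hooks would be project-specific.
BLOCKED_MARKERS = ("rm -rf", "curl ", ".env")

def handle(event: dict) -> int:
    """Return an exit code: 0 allows the tool call, 2 blocks it."""
    command = event.get("tool_input", {}).get("command", "")
    if any(marker in command for marker in BLOCKED_MARKERS):
        print(f"blocked risky command: {command!r}", file=sys.stderr)
        return 2
    return 0

# Installed as a hook, this would run: sys.exit(handle(json.load(sys.stdin)))
print(handle({"tool_name": "Bash", "tool_input": {"command": "cat .env"}}))
```

Because the hook runs outside the model, it holds regardless of what the prompt says, which is what makes hooks a security primitive rather than an instruction.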
Production Projects
View All →
Claude Code Bootstrap Framework
Featured · An agent swarm that builds agent swarms. A 12-step pipeline where Claude Code agents analyze any codebase and generate complete Claude Code infrastructure -- agent teams, hooks, skills, and slash commands -- in 30–55 minutes. Three production migrations validated. The second was harder but faster.
Adversarial Agent Testing
Featured · AI agents that attack each other to find vulnerabilities. Red Team probes, Blue Team defends, a Referee scores both -- all using Claude Code with worktree isolation. Two rounds of live exercises against a real target drove ASR from 65% CRITICAL to 47% HIGH, with a regression wave proving patches hold at 20% and an escalation wave exposing architectural gaps at 85.7%.