The Verluna Method
Six phases to AI-native operations.
This methodology emerged from production experience, not theory. 300+ AI sessions, enterprise deployments, and the work of redesigning real operations as agent-powered systems. The principle is simple: take any human-operated process, decompose it into domains, design an AI-native operating layer, and build it so it runs autonomously with humans only at the judgment points. The execution is where the work lives.
Observe the Operation
Never start by building. Start by watching.
Watch how things actually work today. Not how people say they work. Not how the process document describes them. How people actually spend their time, where information breaks, and where humans do work that machines should handle.
What Happens
- Who does what? Where does information flow? Where does it break?
- Where are humans doing work that is repetitive, error-prone, or invisible?
- What decisions get made? Which require judgment? Which are mechanical?
- Where is institutional knowledge trapped in one person's head?
Deliverables
- Current-state process map
- Information flow diagram
- Pain point severity ranking
- Judgment vs. mechanical decision classification
Before designing anything for a marketing team, we mapped their entire measurement landscape. We discovered that marketing's revenue contribution was undervalued because of last-touch attribution. The observation revealed that the real problem was not a missing dashboard. It was a missing measurement architecture.
Decompose into Domains
Break the messy reality into bounded domains.
Each domain has its own logic, its own data, its own decision patterns. Never build one system that does everything. Build a system of systems where each part has clear boundaries and its own rules.
What Happens
- Group related activities by decision type, not by org chart
- Each domain needs a single owner (human or agent) and clear inputs/outputs
- Boundaries between domains become routing rules
- Data flows across boundaries define the integration requirements
Deliverables
- Domain boundary map
- Routing rules between domains
- Owner assignment (human or agent per domain)
- Input/output specification per domain
Marketing measurement decomposed into three independent scoring domains: Fit (firmographic match), Engagement (behavioral signals), Product (usage signals). Each has different data sources, different models, different owners. A three-score architecture, not a single monolithic score.
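The three-score decomposition can be sketched in a few lines. This is a minimal illustration, not the production models: every field name, weight, and threshold here is hypothetical. The point is structural: three independent functions, three independent data sources, no monolithic score.

```python
from dataclasses import dataclass

# All fields, weights, and thresholds below are illustrative assumptions.
@dataclass
class Account:
    industry: str
    employees: int
    page_views: int
    demo_requests: int
    weekly_active_users: int
    seats: int

def fit_score(a: Account) -> float:
    """Fit domain: firmographic match against an ideal customer profile."""
    score = 0.0
    if a.industry in {"saas", "fintech"}:
        score += 0.5
    if a.employees >= 200:
        score += 0.5
    return score

def engagement_score(a: Account) -> float:
    """Engagement domain: behavioral signals, capped at 1.0."""
    return min(1.0, a.page_views / 100 + a.demo_requests * 0.3)

def product_score(a: Account) -> float:
    """Product domain: usage signals relative to purchased seats."""
    if a.seats == 0:
        return 0.0
    return min(1.0, a.weekly_active_users / a.seats)

acct = Account("saas", 350, 80, 1, 40, 50)
scores = {
    "fit": fit_score(acct),
    "engagement": engagement_score(acct),
    "product": product_score(acct),
}
```

Because each score has its own function and inputs, each domain can change owner, model, or data source without touching the other two.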
Design the Architecture
The phase that separates building a tool from building an operating system.
Do not jump to features. Design the infrastructure that makes features possible. Six architectural components form the operating layer: routing, specialization, governance, memory, cadences, and observability.
What Happens
- Routing: how does incoming work reach the right agent or process?
- Specialization: dedicated agents per domain, not one general-purpose system
- Governance: what runs autonomously, what requires human approval
- Memory, cadences, and observability complete the layer
Deliverables
- Architecture Decision Record (ADR)
- Agent topology diagram
- Governance framework
- Operating layer blueprint (6 components)
For a personal knowledge management system: User input flows to routing rules, which classify by domain and dispatch to specialized agents. Each agent executes skills, triggers hooks for validation and sync, updates persistent memory, and schedules cadence operations. Not an app. An operating system.
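The routing layer described above can be sketched as a small dispatch table. This is a deliberately minimal illustration under assumed conventions: keyword-based classification stands in for whatever classifier the real system uses, the handler names are invented, and a list stands in for persistent memory. Unroutable input falls through to escalation, which is a human judgment point.

```python
# Minimal routing sketch: classify incoming work by domain, dispatch to a
# specialized handler, and record the result in memory. All rule patterns
# and handler names are hypothetical.

memory: list[tuple[str, str]] = []  # stand-in for persistent memory

def handle_task(text: str) -> str:
    """Specialized agent for the task domain."""
    memory.append(("task", text))
    return f"task agent handled: {text}"

def handle_note(text: str) -> str:
    """Specialized agent for the note-capture domain."""
    memory.append(("note", text))
    return f"note agent handled: {text}"

# Routing rules: first matching keyword wins.
ROUTES = [
    ("todo", handle_task),
    ("remember", handle_note),
]

def route(text: str) -> str:
    for keyword, handler in ROUTES:
        if keyword in text.lower():
            return handler(text)
    return "escalate: no routing rule matched"  # human judgment point

result = route("TODO: renew the SSL certificate")
```

Adding a new domain means adding one handler and one routing rule; the dispatcher itself never changes.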
Build Fast, Through AI
AI as both the development medium and the runtime.
Use AI to build AI-powered systems. This creates a compounding advantage: every system you build makes you better at building the next one. Production systems in weeks, not quarters. Iteration based on real usage, not specifications.
What Happens
- Describe architecture at the system level, use AI to implement
- Ship production software in hours, not weeks
- Iterate based on real usage, not specifications
- Stay at the design level where human judgment matters most
Deliverables
- Production-deployed agent system
- Integration layer with existing tools
- Automated test suite
- Deployment runbook
From observation (watching a field marketer do XLOOKUP) to production Kubernetes deployment in a single session. Next.js, React, Claude AI semantic matching, Helm chart, GitLab CI. Used by real employees, processing real data, deployed on enterprise infrastructure.
Autonomize
Push every process as far toward autonomy as possible.
Most people build tools that help humans work faster. This methodology builds systems that run without humans and only involve them for judgment calls. The autonomy gradient: Manual, Assisted, Supervised, Autonomous, Invisible. Target Autonomous and Invisible for operations.
What Happens
- Manual: human does the work (starting state for most processes)
- Assisted: AI helps the human (where most tools stop)
- Supervised: AI does the work, human approves
- Autonomous: AI does the work, human is notified
- Invisible: AI does the work, human does not think about it (target state)
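The gradient above is ordinal, which makes the governance rule mechanical: anything at or below Supervised requires a human before the agent acts. A minimal sketch, with hypothetical process names and classifications:

```python
from enum import IntEnum

# The five levels of the autonomy gradient, ordered.
class Autonomy(IntEnum):
    MANUAL = 0       # human does the work
    ASSISTED = 1     # AI helps the human
    SUPERVISED = 2   # AI does the work, human approves
    AUTONOMOUS = 3   # AI does the work, human is notified
    INVISIBLE = 4    # AI does the work, human does not think about it

# Illustrative classification; process names are invented for this sketch.
PROCESSES = {
    "lead_scoring": Autonomy.INVISIBLE,
    "weekly_report": Autonomy.AUTONOMOUS,
    "contract_approval": Autonomy.SUPERVISED,  # judgment point stays human
}

def needs_human_approval(process: str) -> bool:
    """Supervised and below block on a human checkpoint."""
    return PROCESSES[process] <= Autonomy.SUPERVISED
```

Encoding the level per process makes the human checkpoints auditable: the governance framework is a lookup, not a convention.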
Deliverables
- Autonomy classification per process
- Human checkpoint definition (only where judgment is required)
- Escalation paths for edge cases
- Monitoring for autonomous operations
Multi-agent research swarms (5-8 agents in parallel) ran autonomously across 7 sessions. Each agent had a specific mandate. They produced synthesized strategy documents. The human made strategic decisions based on the output. The research was autonomous. The judgment was human.
Codify and Teach
Turn implicit knowledge into explicit, repeatable frameworks.
This is what transforms a one-time solution into a methodology and a practitioner into a thought leader. Codify the architectural patterns, the decision criteria, the failure modes, and the methodology itself so the approach scales beyond one person.
What Happens
- Architectural patterns that can be replicated
- Decision criteria that others can apply
- Failure modes that others can avoid
- The methodology itself as a teachable framework
Deliverables
- Architecture documentation package
- Team training materials
- Pattern library additions
- Client independence validation
7,179 lines of enterprise code synthesized into 10 structured reference files with department playbooks, error handling guides, and a decision tree. Implicit knowledge (how the system works, when to use which component) became explicit operational infrastructure that anyone on the team can use.
Six Principles Behind the Method
| Principle | What It Means | What Most People Do Instead |
|---|---|---|
| Systems over tasks | Design the system that handles a category of work | Automate one task at a time |
| Autonomy over assistance | Build systems that run themselves | Build tools that help humans run things |
| Architecture over features | Design the infrastructure layer first | Jump to building features |
| Domains over monoliths | Decompose into bounded, specialized areas | One AI tool that does everything |
| Judgment at the edges | Humans make decisions, AI handles operations | Human in the loop for every step |
| Codify over improvise | Turn knowledge into repeatable frameworks | Figure it out each time |
The Five Questions
Every architecture Verluna designs must pass these five questions. All five must have written answers before implementation begins. No exceptions.
1. What happens when the primary data source is unavailable?
2. What does the agent do when it receives unexpected input?
3. Which human approves before the agent takes irreversible action?
4. How does the client know the system is working without asking us?
5. What does rollback look like in the first 30 days?
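The "written answers before implementation" rule can itself be enforced mechanically. A minimal sketch of such a gate, assuming answers are kept as plain text keyed by question (the question labels here are shorthand, not a real schema):

```python
# Illustrative pre-implementation gate: building is blocked until every
# one of the five questions has a non-empty written answer.
FIVE_QUESTIONS = [
    "primary data source unavailable",
    "unexpected input handling",
    "human approval for irreversible action",
    "client visibility without asking",
    "rollback in first 30 days",
]

def ready_to_build(answers: dict[str, str]) -> bool:
    """True only when all five questions have written answers."""
    return all(answers.get(q, "").strip() for q in FIVE_QUESTIONS)

blocked = ready_to_build({})  # no answers written yet
```

A gate like this turns "no exceptions" from a policy statement into a check that a CI pipeline or project template can run.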
What This Methodology Is Not
Not prompt engineering
Prompt engineering is about getting better output from a single interaction. This is about designing systems of interactions.
Not workflow automation
Workflow automation connects existing tools in sequences. This redesigns the operation itself as AI-native.
Not AI strategy consulting
Strategy consulting produces PowerPoint decks. This produces working systems.
Not software engineering
Software engineering builds applications. This builds operating layers -- the infrastructure between AI capabilities and human organizations.
See How It Applies to You
The methodology is the constant. Your operations are the variable. Take the Agent Readiness Assessment to see which phases matter most for your organization.