Methodology

The Verluna Method

Six phases to AI-native operations.

This methodology emerged from production experience, not theory. 300+ AI sessions, enterprise deployments, and the work of redesigning real operations as agent-powered systems. The principle is simple: take any human-operated process, decompose it into domains, design an AI-native operating layer, and build it so it runs autonomously with humans only at the judgment points. The execution is where the work lives.

01

Observe the Operation

Never start by building. Start by watching.

Watch how things actually work today. Not how people say they work. Not how the process document describes them. How people actually spend their time, where information breaks, and where humans do work that machines should handle.

What Happens

  • Who does what? Where does information flow? Where does it break?
  • Where are humans doing work that is repetitive, error-prone, or invisible?
  • What decisions get made? Which require judgment? Which are mechanical?
  • Where is institutional knowledge trapped in one person's head?

Deliverables

  • Current-state process map
  • Information flow diagram
  • Pain point severity ranking
  • Judgment vs. mechanical decision classification
Tools: Stakeholder interviews, process observation, system access review, Granola transcription
Duration: 2-5 days
Real Example (Anonymized)

Before designing anything for a marketing team, we mapped their entire measurement landscape. We discovered that marketing's revenue contribution was undervalued because of last-touch attribution. The observation revealed that the real problem was not a missing dashboard but a missing measurement architecture.

02

Decompose into Domains

Break the messy reality into bounded domains.

Each domain has its own logic, its own data, its own decision patterns. Never build one system that does everything. Build a system of systems where each part has clear boundaries and its own rules.

What Happens

  • Group related activities by decision type, not by org chart
  • Each domain needs a single owner (human or agent) and clear inputs/outputs
  • Boundaries between domains become routing rules
  • Data flows across boundaries define the integration requirements
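The idea that boundaries become routing rules can be sketched in a few lines. This is an illustrative outline, not Verluna's implementation; the domain names and matching predicates are invented:

```python
# Minimal sketch: domain boundaries expressed as routing rules.
# Domain names and predicates are hypothetical examples.

def route(work_item: dict, rules: list) -> str:
    """Return the owning domain for an incoming work item."""
    for domain, predicate in rules:
        if predicate(work_item):
            return domain
    return "triage"  # unmatched work escalates to a human

rules = [
    ("billing", lambda item: item.get("type") == "invoice"),
    ("support", lambda item: item.get("type") == "ticket"),
]

print(route({"type": "invoice"}, rules))  # billing
print(route({"type": "unknown"}, rules))  # triage
```

The ordered rule list is the boundary map made executable: anything that matches no rule crosses no boundary and lands with a human owner.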

Deliverables

  • Domain boundary map
  • Routing rules between domains
  • Owner assignment (human or agent per domain)
  • Input/output specification per domain
Tools: Domain-driven design principles, bounded context mapping, data flow analysis
Duration: 2-3 days
Real Example (Anonymized)

Marketing measurement decomposed into three independent scoring domains: Fit (firmographic match), Engagement (behavioral signals), Product (usage signals). Each has different data sources, different models, different owners. A three-score architecture, not a single monolithic score.
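A three-score architecture like the one described might look like this in outline. The fields, thresholds, and scoring functions below are invented for illustration; the point is that each domain computes independently and the scores meet only at the boundary:

```python
from dataclasses import dataclass

# Each scoring domain has its own inputs and its own logic.
# All fields and thresholds here are illustrative assumptions.

@dataclass
class Account:
    employee_count: int       # firmographic input (Fit domain)
    page_views_30d: int       # behavioral input (Engagement domain)
    weekly_active_users: int  # usage input (Product domain)

def fit_score(a: Account) -> float:
    return 1.0 if a.employee_count >= 200 else 0.4

def engagement_score(a: Account) -> float:
    return min(a.page_views_30d / 100, 1.0)

def product_score(a: Account) -> float:
    return min(a.weekly_active_users / 50, 1.0)

acct = Account(employee_count=500, page_views_30d=80, weekly_active_users=25)
scores = {
    "fit": fit_score(acct),
    "engagement": engagement_score(acct),
    "product": product_score(acct),
}
print(scores)  # {'fit': 1.0, 'engagement': 0.8, 'product': 0.5}
```

Because no function reads another domain's inputs, each score can change owners, models, or data sources without touching the other two.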

03

Design the Architecture

The phase that separates building a tool from building an operating system.

Do not jump to features. Design the infrastructure that makes features possible. Six architectural components form the operating layer: routing, specialization, governance, memory, cadences, and observability.

What Happens

  • Routing: how does incoming work reach the right agent or process?
  • Specialization: dedicated agents per domain, not one general-purpose system
  • Governance: what runs autonomously, what requires human approval
  • Memory, cadences, and observability complete the layer

Deliverables

  • Architecture Decision Record (ADR)
  • Agent topology diagram
  • Governance framework
  • Operating layer blueprint (6 components)
Tools: Mermaid diagrams, Architecture Decision Records, Five Questions quality gate
Duration: 3-5 days
Real Example (Anonymized)

For a personal knowledge management system: User input flows to routing rules, which classify by domain and dispatch to specialized agents. Each agent executes skills, triggers hooks for validation and sync, updates persistent memory, and schedules cadence operations. Not an app. An operating system.
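The flow described above (input, routing, specialized agent, hooks, memory) can be sketched as a pipeline. Everything here is a hypothetical stand-in for the real components:

```python
# Hypothetical sketch of the operating-layer flow:
# input -> routing rule -> specialized agent -> hook -> memory update.

memory: dict = {}  # persistent memory (here, just an in-process dict)

def validate_hook(text: str) -> str:
    """A hook that runs after an agent executes (illustrative)."""
    return text.strip()

AGENTS = {
    "tasks": lambda text: {"kind": "task", "body": validate_hook(text)},
    "notes": lambda text: {"kind": "note", "body": validate_hook(text)},
}

def classify(text: str) -> str:
    """Routing rule: pick a domain for the incoming input."""
    return "tasks" if text.lower().startswith("todo") else "notes"

def handle(text: str) -> dict:
    domain = classify(text)                        # routing
    record = AGENTS[domain](text)                  # specialized agent
    memory.setdefault(domain, []).append(record)   # memory update
    return record

handle("todo: renew certificates")
handle("idea for next sprint")
print({k: len(v) for k, v in memory.items()})  # {'tasks': 1, 'notes': 1}
```

Cadence operations would sit outside this loop as scheduled jobs reading the same memory; observability is whatever watches the `memory` and `handle` calls.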

04

Build Fast, Through AI

AI as both the development medium and the runtime.

Use AI to build AI-powered systems. This creates a compounding advantage: every system you build makes you better at building the next one. Production systems in weeks, not quarters. Iteration based on real usage, not specifications.

What Happens

  • Describe architecture at the system level, use AI to implement
  • Ship production software in hours, not weeks
  • Iterate based on real usage, not specifications
  • Stay at the design level where human judgment matters most

Deliverables

  • Production-deployed agent system
  • Integration layer with existing tools
  • Automated test suite
  • Deployment runbook
Tools: Claude Code, custom orchestration, CI/CD pipelines, Kubernetes, Helm
Duration: 1-4 weeks
Real Example (Anonymized)

From observation (watching a field marketer do XLOOKUP) to production Kubernetes deployment in a single session. Next.js, React, Claude AI semantic matching, Helm chart, GitLab CI. Used by real employees, processing real data, deployed on enterprise infrastructure.

05

Autonomize

Push every process as far toward autonomous as possible.

Most people build tools that help humans work faster. This methodology builds systems that run without humans and only involve them for judgment calls. The autonomy gradient: Manual, Assisted, Supervised, Autonomous, Invisible. Target Autonomous and Invisible for operations.

What Happens

  • Manual: human does the work (starting state for most processes)
  • Assisted: AI helps the human (where most tools stop)
  • Supervised: AI does the work, human approves
  • Autonomous: AI does the work, human is notified
  • Invisible: AI does the work, human does not think about it (target state)
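The gradient above is an ordered scale, which makes governance checks mechanical. A minimal sketch, assuming the rule that humans approve at Supervised and below and are merely notified (or uninvolved) above it:

```python
from enum import IntEnum

# The autonomy gradient as an ordered scale (illustrative encoding).
class Autonomy(IntEnum):
    MANUAL = 0      # human does the work
    ASSISTED = 1    # AI helps the human
    SUPERVISED = 2  # AI does the work, human approves
    AUTONOMOUS = 3  # AI does the work, human is notified
    INVISIBLE = 4   # AI does the work, human does not think about it

def needs_human_approval(level: Autonomy) -> bool:
    # At SUPERVISED and below a human is in the approval path;
    # above that the human is only notified, or not involved at all.
    return level <= Autonomy.SUPERVISED

print(needs_human_approval(Autonomy.SUPERVISED))  # True
print(needs_human_approval(Autonomy.AUTONOMOUS))  # False
```

Classifying each process to one of these levels yields the autonomy classification deliverable; the approval predicate is where the human checkpoints get enforced.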

Deliverables

  • Autonomy classification per process
  • Human checkpoint definition (only where judgment is required)
  • Escalation paths for edge cases
  • Monitoring for autonomous operations
Tools: Event-driven hooks, scheduled cadences, background agents, health scoring
Duration: 1-2 weeks
Real Example (Anonymized)

Multi-agent research swarms (5-8 agents in parallel) ran autonomously across 7 sessions. Each agent had a specific mandate. They produced synthesized strategy documents. The human made strategic decisions based on the output. The research was autonomous. The judgment was human.

06

Codify and Teach

Turn implicit knowledge into explicit, repeatable frameworks.

This is what transforms a one-time solution into a methodology and a practitioner into a thought leader. Codify the architectural patterns, the decision criteria, the failure modes, and the methodology itself so the approach scales beyond one person.

What Happens

  • Architectural patterns that can be replicated
  • Decision criteria that others can apply
  • Failure modes that others can avoid
  • The methodology itself as a teachable framework

Deliverables

  • Architecture documentation package
  • Team training materials
  • Pattern library additions
  • Client independence validation
Tools: Documentation generation, knowledge synthesis, structured training sessions
Duration: 3-5 days
Real Example (Anonymized)

7,179 lines of enterprise code synthesized into 10 structured reference files with department playbooks, error handling guides, and a decision tree. Implicit knowledge (how the system works, when to use which component) became explicit operational infrastructure that anyone on the team can use.

Principles

Six Principles Behind the Method

Principle | What It Means | What Most People Do Instead
Systems over tasks | Design the system that handles a category of work | Automate one task at a time
Autonomy over assistance | Build systems that run themselves | Build tools that help humans run things
Architecture over features | Design the infrastructure layer first | Jump to building features
Domains over monoliths | Decompose into bounded, specialized areas | One AI tool that does everything
Judgment at the edges | Humans make decisions, AI handles operations | Human in the loop for every step
Codify over improvise | Turn knowledge into repeatable frameworks | Figure it out each time
Quality Gate

The Five Questions

Every architecture Verluna designs must pass these five questions. All five must have written answers before implementation begins. No exceptions.

1

What happens when the primary data source is unavailable?

2

What does the agent do when it receives unexpected input?

3

Which human approves before the agent takes irreversible action?

4

How does the client know the system is working without asking us?

5

What does rollback look like in the first 30 days?
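The gate can be enforced mechanically: implementation does not start until all five questions have a non-empty written answer. A sketch, where the question keys are invented shorthand:

```python
# The Five Questions as a checklist; keys are hypothetical shorthand.
FIVE_QUESTIONS = [
    "primary_data_source_unavailable",
    "unexpected_input",
    "irreversible_action_approver",
    "client_visibility",
    "rollback_first_30_days",
]

def gate_passes(answers: dict) -> bool:
    """All five questions must have a non-empty written answer."""
    return all(answers.get(q, "").strip() for q in FIVE_QUESTIONS)

answers = {q: "documented" for q in FIVE_QUESTIONS[:-1]}
print(gate_passes(answers))  # False: rollback plan still missing
answers["rollback_first_30_days"] = "feature flag plus data export"
print(gate_passes(answers))  # True
```

"No exceptions" then becomes a one-line assertion in the build pipeline rather than a norm someone has to remember.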

Boundaries

What This Methodology Is Not

Not prompt engineering

Prompt engineering is about getting better output from a single interaction. This is about designing systems of interactions.

Not workflow automation

Workflow automation connects existing tools in sequences. This redesigns the operation itself as AI-native.

Not AI strategy consulting

Strategy consulting produces PowerPoint decks. This produces working systems.

Not software engineering

Software engineering builds applications. This builds operating layers -- the infrastructure between AI capabilities and human organizations.

See How It Applies to You

The methodology is the constant. Your operations are the variable. Take the Agent Readiness Assessment to see which phases matter most for your organization.