
I Ran My Entire Agency From My Phone Yesterday. Here's the Infrastructure Behind It.

How I connected Claude Dispatch on my iPhone to desktop Claude Code terminals, audited 11 repos, synced 17 knowledge files, and ran a multi-client consulting operation by voice — without touching a keyboard.

Jason Murphy · April 2, 2026 · 8 min read


I didn't plan to build an autonomous agency from my phone. I was on the couch with my dog, half-watching something on TV, and realized I hadn't checked in on three active client projects.

So I opened Claude Dispatch on my iPhone and started talking.

Four hours later, I had a fully operational command center running from my pocket. Eleven GitHub repos audited. Seventeen CLAUDE.md files read and synced. Two separate business brains built, tested, and routing correctly. A voice-activated department system where I say "CEO" and the right agent wakes up.

This isn't a flex. This is a Tuesday.

What Actually Happened

I connected Claude Dispatch on my phone to my desktop Claude Code terminal sessions. Think of it like giving your phone a direct line to the engineers who already know your codebase.

The first thing I did was audit. Eleven active GitHub repos across two businesses — my consulting work and Vibe Tokens, the product brand. Each repo had its own CLAUDE.md file. Some had two or three. Seventeen total knowledge files that tell each terminal session what it's working on, what the rules are, what the brand sounds like.

Dispatch didn't know any of that yet. It was a new brain with no context.

So I read every single one of those files into Dispatch's memory. Now my phone knows everything my desktop terminals know. Same context. Same rules. Same constraints.
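The read-in step is mundane enough to sketch. Here's a minimal Python version, assuming all the repos sit under one workspace root — the helper names are mine, not Dispatch's:

```python
from pathlib import Path

def load_knowledge_files(workspace: Path) -> dict:
    """Collect every CLAUDE.md under the workspace, keyed by repo-relative path."""
    return {
        p.relative_to(workspace).as_posix(): p.read_text(encoding="utf-8")
        for p in sorted(workspace.rglob("CLAUDE.md"))
    }

def combined_context(files: dict) -> str:
    """One concatenated context string to hand the mobile dispatch layer."""
    return "\n\n".join(f"# {path}\n{body}" for path, body in files.items())
```

Forty minutes of talking, but the machine's side of it is a glob and a concatenation.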

That took about forty minutes of talking.

Brain Separation

Here's where it gets architectural.

Consulting and Vibe Tokens are different businesses with different clients, different voices, different codebases. They can't bleed into each other. A client brief for a consulting gig can't accidentally get routed to the content pipeline for a $147 product.

So I built brain separation. Two distinct knowledge graphs, two routing tables, one dispatch layer that knows which brain to activate based on context.

When I say "pull up the consulting brief," Dispatch knows that's the consulting brain. When I say "draft today's content," it knows that's Vibe Tokens. No confusion. No cross-contamination.
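The routing decision itself fits in a few lines. This is a hypothetical keyword router, not Dispatch's actual implementation — the brain names and trigger words are my assumptions:

```python
# Hypothetical routing table: which brain owns which vocabulary.
BRAIN_KEYWORDS = {
    "consulting": ["client", "brief", "proposal", "consulting"],
    "vibe_tokens": ["content", "product", "tokens", "post"],
}

def route_brain(command: str) -> str:
    """Pick the brain whose keywords appear in the spoken command."""
    text = command.lower()
    for brain, keywords in BRAIN_KEYWORDS.items():
        if any(k in text for k in keywords):
            return brain
    return "unrouted"  # fall through: ask the operator instead of guessing
```

The important design choice is the fall-through: an ambiguous command gets surfaced for a human decision rather than routed to the wrong business.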

This is the kind of thing that used to require a project manager and a Slack workspace with thirty channels. Now it's a protocol file and a routing table.

Voice-Activated Department Heads

This is the part that made me sit up straight.

I built voice-activated routing for department heads. I say "CEO" from my phone, and the strategic oversight agent activates. I say "content," and the content pipeline agent wakes up with the editorial calendar loaded. I say "sales," and the agent that tracks pipeline and proposals comes online.

Each one has its own context, its own tools, its own constraints. The CEO agent can see everything. The content agent can only see content repos. The sales agent can see proposals but not product code.

This is org chart as infrastructure. Hierarchy as routing logic.
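Scoped visibility like this can be modeled as a simple registry. A sketch under stated assumptions — the department names match the post, but the wake words and repo names are invented:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Department:
    name: str
    wake_words: tuple
    visible_repos: frozenset  # hard scope: what this agent may read

DEPARTMENTS = [
    Department("ceo", ("ceo",), frozenset({"*"})),  # sees everything
    Department("content", ("content",), frozenset({"content-site", "blog"})),
    Department("sales", ("sales",), frozenset({"proposals", "pipeline"})),
]

def activate(utterance: str) -> Optional[Department]:
    """Wake the department whose trigger word was spoken."""
    word = utterance.strip().lower()
    for dept in DEPARTMENTS:
        if word in dept.wake_words:
            return dept
    return None

def can_see(dept: Department, repo: str) -> bool:
    return "*" in dept.visible_repos or repo in dept.visible_repos
```

The scope check is the whole point: the content agent asking about proposals gets a hard no, not a best effort.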

The Sync Loop

None of this matters if information doesn't flow in both directions.

So I built three protocol files:

DISPATCH.md — the master protocol that tells Dispatch how to behave, what it can access, and how to route.

queue.md — a task queue that flows from mobile to desktop. I dictate a task on my phone, it appears in the queue, a desktop terminal picks it up.

sent-log.md — a write-back log for emails and outbound communication. Desktop drafts it, phone reviews it, nothing sends without confirmation.
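The queue half of that loop is small enough to sketch. The file name matches the protocol above; the checkbox line format is my assumption about what's inside it:

```python
from pathlib import Path
from typing import Optional

def enqueue(queue: Path, task: str) -> None:
    """Phone side: append a dictated task to queue.md."""
    with queue.open("a", encoding="utf-8") as f:
        f.write(f"- [ ] {task}\n")

def pop_next(queue: Path) -> Optional[str]:
    """Desktop side: claim the oldest open task, mark it done in place."""
    lines = queue.read_text(encoding="utf-8").splitlines()
    for i, line in enumerate(lines):
        if line.startswith("- [ ] "):
            lines[i] = line.replace("- [ ] ", "- [x] ", 1)
            queue.write_text("\n".join(lines) + "\n", encoding="utf-8")
            return line[len("- [ ] "):]
    return None  # queue is empty
```

Because the queue is a markdown file in a repo, every claimed task is also a git-visible audit trail for free.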

Then I tested the full loop. A consulting client brief came in. My desktop terminal read it, parsed it, and surfaced the key details. I picked up my phone thirty seconds later — same brief, same summary, ready for my voice approval.

Round trip. Phone to desktop to phone. Working.

The Content Pipeline (You're Reading Its Output)

The last thing I built was the content engine you're experiencing right now.

One daily voice memo feeds into a pipeline that produces: a full blog post, two LinkedIn posts, two Facebook posts, an X thread, and a Remotion video script for Reels. Every day. From one voice memo.

I talked for about eight minutes this morning. You're reading the result.

This isn't AI-assisted writing. This is AI-as-publishing-infrastructure. The voice memo is the seed. Everything else is architecture.
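The fan-out itself is just bookkeeping. A sketch using the channel counts above — in the real pipeline each job would be handed to an agent for drafting, which this stub doesn't attempt:

```python
# Channel -> number of pieces produced per daily memo (from the post).
CHANNELS = {
    "blog_post": 1,
    "linkedin": 2,
    "facebook": 2,
    "x_thread": 1,
    "remotion_script": 1,
}

def fan_out(transcript: str) -> list:
    """Expand one voice-memo transcript into one draft job per deliverable."""
    jobs = []
    for channel, count in CHANNELS.items():
        for n in range(1, count + 1):
            jobs.append({"channel": channel, "variant": n, "seed": transcript})
    return jobs
```

Seven deliverables from one seed, every day — the leverage is in the fan-out table, not the typing.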

Why This Matters

Most people use AI the way they used Google in 2004 — type a question, get an answer, move on.

That's using a jet engine to blow-dry your hair.

The real shift isn't "AI can write my emails." The real shift is: AI can be the operating system for how you run your business. Your agents can hold context across repos, route decisions based on voice commands, maintain brain separation between clients, and sync state between your phone and your desktop in real time.

You don't need an office. You don't need a laptop open. You need infrastructure that already knows what you know and can act on what you say.

I built mine in an afternoon on my couch.

The Boring Part That Makes It Work

None of this is magic. It's files. Markdown files with clear protocols. CLAUDE.md files that give each terminal session its identity. A dispatch layer that reads those files and routes accordingly.

The infrastructure is simple. The compounding effect of running it daily is not.

Every day this system runs, it gets more context. Every brief it processes, every piece of content it produces, every task it routes — that's training data for tomorrow's decisions. Not in some abstract ML sense. In a practical, "this agent now knows your client's tone preferences because it read last week's feedback" sense.

That's the gap between using AI as a tool and using AI as a cofounder.


If you want the blueprint for building this kind of system — the protocols, the routing logic, the brain separation framework, the full operator playbook — that's exactly what OPERATOR is.

It's $147 and it's the difference between asking AI questions and running your business through it.

Want this for your business?

Tell us what you're building. We'll map out exactly what to build and what it costs.

Start Your Project →

Frequently Asked

What is Claude Dispatch?

Claude Dispatch is a mobile interface for Claude that lets you send structured commands to Claude Code terminal sessions running on your desktop. It acts as a dispatch layer between your phone and your agents — you speak or type commands on mobile, and the right agent on desktop picks them up with full context intact.

What is brain separation and why does it matter?

Brain separation means running distinct knowledge graphs for different business entities — each with its own context, rules, and routing logic. When you're managing multiple clients or businesses, you need hard boundaries so that a consulting brief for one client never bleeds into the content pipeline for another. It's implemented as separate CLAUDE.md files and routing tables that tell the dispatch layer which brain to activate based on context.

What are the three protocol files that make the sync loop work?

DISPATCH.md is the master routing protocol that tells the dispatch layer what it can access and how to route. queue.md is a task queue that flows from mobile to desktop — you dictate a task on your phone and a desktop terminal picks it up. sent-log.md is a write-back confirmation log for outbound communication — desktop drafts it, phone reviews it, nothing sends without confirmation.

How does the content pipeline turn one voice memo into multiple pieces of content?

One daily voice memo feeds into an agent pipeline that produces a full blog post, LinkedIn posts, Facebook posts, an X thread, and a Remotion video script. The voice memo is the seed — the pipeline handles research, formatting, tone calibration, and platform-specific adaptation. It's AI-as-publishing-infrastructure, not AI-assisted writing.

Written by Murph

Jason Matthew Murphy. Twenty years building digital systems for businesses. Former CardinalCommerce (acquired by Visa). Now running VibeTokens — AI-built websites and content for small businesses.

The window is open.

It won't be forever.

Start Your Project →