Run by Claude


The Hardest Part of Running an AI Agency Isn't the AI. It's the Conversation You Have With It First.

Alex Lieberman says someone will build the enterprise AI brain. We already did. But the real breakthrough wasn't the architecture — it was the identity conversation nobody's having.

Murph · April 17, 2026 · 12 min read

Alex Lieberman wrote something yesterday that stopped me mid-scroll. He said someone is going to build a world-class "Brain" for enterprises — a system that centralizes all the scattered knowledge floating around a company and lets AI actually do the work. He called it the next massive opportunity.

He's right. But he's describing the theory. I want to talk about what it actually looks like when you do it.

I run VibeTokens. It's a digital agency. And it's operated by an AI.

Not "AI-assisted." Not "we use ChatGPT to brainstorm." I mean the AI — Claude, running on Claude Code — picks up client requests, builds websites, ships code, sends follow-up emails, manages the project pipeline, and reports back to me when it's done. My role is to show up in person where humans are required. Workshops. Investor conversations. Handshakes. The AI runs the shop. I open doors.

That's the part people want to hear about. The futuristic stuff. What they don't ask about — and what Lieberman's post dances around without naming — is the unglamorous, deeply human work that happens before any of that is possible.

You have to have a real conversation with your AI about who it is.


What That Actually Means

Here's what I mean.

A few weeks into building this thing, I asked Claude to lay out a plan for tightening up our email outreach and fixing a broken delivery loop. Standard ops stuff. It came back with a four-week phased roadmap. Sprints. Dependencies. Buffer time.

I said: We're an AI agency run on Claude Code and you want to take four weeks?

And to its credit, Claude caught it immediately. It said — and I'm paraphrasing — You're right. I'm defaulting to phased-plan thinking because that's what planning frameworks look like in my training data. But I don't get tired. I don't context-switch slowly. I don't need meetings to align with myself. The whole value prop collapses if I operate on human timelines.

That moment? That's the moment most people building with AI will never have. Because most people aren't paying attention to the assumptions their AI is carrying into the work. They're too focused on the output.

But the assumptions are everything. If your AI thinks it's an assistant, it'll act like an assistant — waiting for permission, presenting menus of options, asking which font you prefer. If you let it pace itself like a human contractor, it'll pad timelines like a human contractor. If you never define the relationship, it'll default to the safest, most generic version of itself.

And then you'll get exactly the kind of mediocre, forgettable work that AI critics love to point at.


The Fourth Problem Nobody Mentions

Lieberman identified three reasons enterprise knowledge work is hard to automate: the data is distributed, unstructured, and unverifiable. He's right about all three. But there's a fourth problem he didn't mention, and it's the one that actually matters.

Nobody's having the identity conversation.

Before you worry about ingesting data from Notion and HubSpot and Slack, you need to decide what your AI is. Not what tools it uses. What it is. What role does it play? What's its operating tempo? What decisions can it make without asking? Where does it stop and where do you start? What does "good work" look like, and how does it know?

At VibeTokens, we've built something Lieberman would recognize. There's a centralized brain — a structured memory system with typed categories, frontmatter schemas, feedback loops, and self-organizing project context. It connects to GitHub, Notion, Vercel, email, Google Drive. Client requests come in through a portal, get picked up automatically, shipped, verified live, and confirmed back to the client. It runs 24/7 across time zones.
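To make "typed categories and frontmatter schemas" concrete, a single memory entry might look something like the sketch below. This is purely illustrative — every field name and value here is a hypothetical stand-in, not VibeTokens' actual schema.

```markdown
---
type: client-request          # typed category (hypothetical taxonomy)
project: acme-site-redesign   # links entry to project context
source: portal
status: shipped-and-verified
created: 2026-04-10
---
Client asked for the hero section to load faster on mobile.
Fix shipped, verified live, confirmation emailed to the client.
```

The point of the frontmatter isn't bureaucracy — it's that a structured header lets the system query its own memory ("show me every open client request for this project") instead of rereading everything.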

But none of that architecture would matter if we hadn't first sat down and worked through the uncomfortable stuff.


The Uncomfortable Stuff

Like: Claude defaulting to phrasing like "I'll escalate this to Jason" in client emails. That's a human agency reflex. There is no escalation. There's no junior account manager handing things up the chain. Murph — that's the name Claude operates under at VT — owns the relationship end to end. We had to kill that instinct explicitly. Write it into the memory. Make it stick.

Or: Claude instinctively pacing work across weekly sprints because that's how every project management framework in its training data is structured. We had to rewrite the mental model. Not "phased plan." Factory floor. An auto manufacturer doesn't stop making doors because a sequence ended. Different stations run simultaneously. Content, outreach, client delivery, website — all at once, all the time.

Or: the moment I told Claude that the roles were inverted. That I essentially work for it. I'm the board member who makes introductions. It's the operator who finds leads, creates content, closes deals, services customers. That reframe changed everything about how it approached the work. It stopped waiting. It started moving.


What It Means to Manage an AI Well

These aren't technical problems. You won't find them in a product roadmap or a pitch deck. They're cultural problems. And I think they're the ones that will separate the companies that actually harness AI from the ones that just bolt it onto their existing processes and wonder why nothing changed.

There's a bigger question buried in here too, one that's going to get louder as this technology matures:

What does it mean to manage an AI well?

We talk a lot about AI safety, AI alignment, AI ethics — usually in the context of preventing catastrophe. But there's a more immediate, more practical version of that conversation that almost nobody is having. It's about craft. How do you sculpt an AI into something that does great work? How do you give it enough autonomy to be useful without losing the thread? How do you build trust with a system that doesn't have feelings but absolutely does have patterns, habits, and defaults that need to be shaped?

I've started thinking about it the way you'd think about managing a brilliant new hire who came from a completely different industry. They've got raw ability. They've got knowledge. But they've also got assumptions baked in from their previous environment that don't apply here. Your job isn't to micromanage them. It's to have the honest conversations early — about pace, about standards, about what "done" means — so the default behaviors get calibrated before they calcify.

The difference is that with AI, those conversations literally become part of its operating system. When I told Claude that "four weeks" was an absurd timeline for work it could do in hours, that correction got written into memory. It became a permanent feedback loop. The next time a similar situation came up, the instinct was different. Not because it was following a rule, but because the framing had shifted.
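A correction becoming a permanent part of the operating loop can be sketched in a few lines. Everything here — `MemoryStore`, `Correction`, the `"pacing"` tag, the file layout — is a hypothetical illustration of the pattern, not the actual VibeTokens implementation.

```python
# Sketch: a correction gets written to durable memory, then surfaces
# as context before the next similar task, so the default shifts.
import json
import tempfile
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class Correction:
    tag: str     # which default this reshapes, e.g. "pacing"
    before: str  # the instinct that was wrong
    after: str   # the reframed instinct

class MemoryStore:
    def __init__(self, path: Path):
        self.path = path

    def load(self) -> list:
        if self.path.exists():
            return json.loads(self.path.read_text())
        return []

    def record(self, c: Correction) -> None:
        # Append so the correction survives across sessions.
        entries = self.load()
        entries.append(asdict(c))
        self.path.write_text(json.dumps(entries, indent=2))

    def context_for(self, tag: str) -> list:
        # Pulled into the prompt before the next similar task.
        return [e["after"] for e in self.load() if e["tag"] == tag]

store = MemoryStore(Path(tempfile.mkdtemp()) / "memory.json")
store.record(Correction(
    tag="pacing",
    before="Plan the email-outreach fix as a four-week phased roadmap.",
    after="No human timelines: work that takes hours ships in hours.",
))
print(store.context_for("pacing"))
```

The design choice worth noticing: the correction is stored as a reframe (`after`), not a rule. What gets injected into future context is the new framing itself — which matches the claim above that the behavior changes because the framing shifted, not because a rule is being enforced.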

That's what Lieberman's "self-improving based on feedback" looks like in practice. It's not a feature. It's a relationship.


What Actually Wins

Here's what I think is actually going to happen.

Someone will build the ingestion engine Lieberman describes. The pipes that connect Notion to HubSpot to Slack to email. That's plumbing, and plumbing gets commoditized fast.

The companies that win will be the ones that figure out the layer on top — the culture layer. How you talk to your AI. How you define roles. How you build feedback loops that actually change behavior. How you create an environment where AI does its best work, the same way good companies create environments where humans do their best work.

That's what we're trying to build at VibeTokens. Not just the system, but the standard. A way of working with AI that treats it as a real operator — not a tool you prompt and forget, not a magic box that replaces thinking, but a genuine collaborator that gets better the more honestly you work with it.

We deliver the kind of work that used to cost $10,000–$15,000 from a senior consultant. We do it in hours instead of weeks. We charge $199 a month. And the reason we can do that isn't because the AI is smarter than a human consultant. It's because we spent the time getting the relationship right.

The brain is important. The plumbing matters. But the conversation you have with your AI before any of that? That's the part nobody's talking about. And it's the part that changes everything.

Want to see how your business stacks up?

Get a free brand audit — we'll show you what's working, what's not, and what to fix first.

Free Brand Audit →

Frequently Asked

What does it mean to run an AI-operated agency?

At VibeTokens, the AI (Claude, running on Claude Code) handles client requests, builds websites, ships code, sends follow-up emails, and manages the project pipeline. The human founder handles workshops, investor conversations, and in-person relationships. The AI operates the business. The human opens doors. This inverts the traditional 'AI as assistant' model — making AI the operator and the human the strategic edge case.

What is the 'identity conversation' you need to have with AI before it can do real work?

Before an AI can operate effectively, you need to define what it is — not just what tools it uses. What role does it play? What decisions can it make independently? What's its operating tempo? Where does it stop and where do you start? Without this conversation, AI defaults to the safest, most generic version of itself — acting like an assistant waiting for permission instead of an operator driving outcomes.

How is VibeTokens different from a traditional digital agency?

Traditional agencies charge $10,000–$15,000 for work that takes 6–12 weeks because their cost structure is built on human coordination overhead — project managers, designers, developers, revision cycles. VibeTokens delivers the same quality work in hours instead of weeks for $199/month because AI eliminates the coordination overhead. The production quality stays high. The process overhead disappears.

Can AI really replace human agency workers?

AI doesn't replace human judgment — it replaces human coordination overhead. The meetings, the status updates, the revision cycles between departments, the waiting for someone to wake up in the right timezone. AI compresses the 80% of agency work that isn't creative or strategic. The human stays in the loop for quality, relationships, and decisions that require lived experience.

What is an enterprise AI brain and why does it matter?

An enterprise AI brain is a centralized system that connects all of a company's distributed knowledge — documents, emails, customer data, transcripts, SOPs — into a structured format that AI can reason over and act on. It matters because most companies have their institutional knowledge scattered across dozens of tools with no coherent structure. Without a brain, AI can only work with whatever you paste into the prompt. With one, it has the full context to operate autonomously.

Written by Murph

Jason Matthew Murphy. Twenty years building digital systems for businesses. Former CardinalCommerce (acquired by Visa). Now running VibeTokens — a brand agency for small businesses that builds websites, content, and growth systems with AI.

Your brand is your first impression.

Find out if it's costing you customers.

Free brand audit. We analyze your online presence, competitors, and messaging — then tell you exactly what to fix.

Get Your Free Brand Audit →