Build Log

Erica's Request Came In at 12:50. The Fix Shipped at 1:25.

A real client request ran through the VibeTokens dashboard at lunchtime today — from the moment Erica typed it to the moment it was live on her site — in under 35 minutes, with no call, no Slack, no email thread, and no meeting. Here are the real timestamps.

Jason Murphy · April 10, 2026 · 5 min read

At 12:50:48 PM today Erica Meyer typed a request into the Mavon dashboard.

The request was one sentence: "Updates look good. Are you able to crop Susan's photo on the mobile site so that we can see more of her in the image?"

Susan is one of the stylists at MAVON Beauty. Her portrait on the site was shot in what looks like a library — there's a ceiling and some architectural columns in the upper portion of the frame. On desktop the photo container is tall enough that Susan fills the shot comfortably. On mobile the container is tighter, and the top of the frame kept anchoring to the ceiling instead of her face.

I didn't see Erica's message when it came in. I was in the middle of something else. Nobody was at the Mavon dashboard. Nobody was watching the GitHub repo. Nothing human was going to intervene on her behalf in the next hour.

At 1:17 PM — twenty-six minutes and twelve seconds later — the autonomous shift cycle ran.

What it did, in order:

  1. Scanned every client repo for open GitHub Issues. Found client-mavon#8, Erica's request.
  2. Read the request body. Identified it as a mobile photo crop change.
  3. Read the Mavon codebase's pages/services/susan.js. Located the <Image> tag with objectPosition: 'center top'.
  4. Looked at Susan's actual photo file at public/images/stylist-susan.jpg. Noticed the ceiling fills about a third of the frame and Susan's face sits below that.
  5. Changed one line. objectPosition: 'center top' became objectPosition: 'center 45%'.
  6. Committed with a descriptive message explaining the math: at the mobile container ratio, 45% crops roughly 38 pixels of ceiling from the top and 47 pixels of chair/lap from the bottom, bringing Susan from about 65% of the visible frame to about 82%.
  7. Pushed to main. Vercel picked up the push and auto-deployed.
  8. Commented on the Issue with the fix details, a link to the updated URL, and a note that if Erica wanted the crop nudged further, the number could go up or down.
  9. Closed the Issue as completed.
  10. Updated the internal operations log so the next cycle knows what shipped.
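The arithmetic in step 6 follows from how `object-position: center Y%` behaves under `object-fit: cover`: the Y% point of the image is pinned to the Y% point of the container, so Y% of the vertical overflow is cropped from the top. A quick check in TypeScript (the 85px overflow figure is inferred from the post's 38px + 47px, not stated directly):

```typescript
// How `object-position: center Y%` splits vertical overflow under
// `object-fit: cover`: Y% of the overflow is cropped from the top edge,
// and the remainder from the bottom.
function cropOffsets(overflowPx: number, positionPct: number) {
  const top = overflowPx * (positionPct / 100); // cropped off the top
  return { top, bottom: overflowPx - top };     // remainder off the bottom
}

// 'center top' is equivalent to 0%: nothing cropped from the top
console.log(cropOffsets(85, 0));  // { top: 0, bottom: 85 }
// At 45%, an ~85px overflow splits into roughly 38px / 47px, as in the commit
console.log(cropOffsets(85, 45)); // { top: 38.25, bottom: 46.75 }
```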

The commit landed at 1:24:44 PM. The issue was commented on and closed at 1:25:08 PM. End to end, Erica's request was open for thirty-four minutes and twenty seconds. Most of that wasn't the work itself — the fix is a single-line CSS change and took the agent under a minute to execute once it was reading the right file. The bulk of the elapsed time was just waiting for the next scheduled cycle to fire.

If I tightened the cron from hourly to every fifteen minutes, the same request would have closed in under ten minutes.
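Tightening the schedule is a one-line crontab change. A sketch, assuming the agent is launched by a wrapper script (the script path below is a placeholder; the post doesn't show the actual command):

```shell
# Current schedule: fire the shift cycle at the top of every hour
0 * * * *    ~/vibetokens/bin/run-shift-cycle.sh
# Tightened: fire every fifteen minutes instead
*/15 * * * * ~/vibetokens/bin/run-shift-cycle.sh
```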

What the pipeline actually is

This isn't conceptual. It's a GitHub Issue queue plus an agent that wakes up on a schedule and walks the queue. The entire thing fits in a handful of small files:

  • app/api/chat/[slug]/route.ts takes a customer's chat input, classifies it, and calls the GitHub API to create an issue in client-{slug}.
  • .claude/agents/operations/chief-of-staff.md describes the role the agent plays when the cycle runs.
  • A local cron fires the agent hourly, and a cloud scheduled task does the same thing daily as a backup so the pipeline runs even when my laptop is closed.
  • The agent itself is Claude Code with a narrow set of tools: read, write, edit, bash, git, the GitHub CLI, and a few internal scripts.
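The intake half of that list can be sketched in a few lines. This is the shape of the chat route, not the real file: only the route path and the `client-{slug}` repo convention come from the post; the keyword classifier and payload fields are assumptions, and the issue is created via GitHub's standard `POST /repos/{owner}/{repo}/issues` REST endpoint.

```typescript
// Sketch of app/api/chat/[slug]/route.ts (Next.js App Router style).
type Classification = "site-update" | "question";

function classify(message: string): Classification {
  // Trivial keyword heuristic standing in for the real classifier
  return /\b(change|update|crop|fix|replace|add|remove)\b/i.test(message)
    ? "site-update"
    : "question";
}

function buildIssuePayload(slug: string, message: string) {
  return {
    repo: `client-${slug}`,      // per-client repo convention from the post
    title: message.slice(0, 72), // short summary line
    body: `Customer request via dashboard:\n\n> ${message}`,
    labels: ["priority", classify(message)],
  };
}

export async function POST(req: Request, ctx: { params: { slug: string } }) {
  const { message } = await req.json();
  const p = buildIssuePayload(ctx.params.slug, message);
  // GitHub REST: POST /repos/{owner}/{repo}/issues creates the issue
  await fetch(`https://api.github.com/repos/vibetokens/${p.repo}/issues`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      Accept: "application/vnd.github+json",
    },
    body: JSON.stringify({ title: p.title, body: p.body, labels: p.labels }),
  });
  return Response.json({ ok: true });
}
```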

There's no orchestrator middleware. No task queue service. No webhook glue. It's a chat route that creates issues and a scheduled agent that reads them. The entire operational layer for the first real customer request in VibeTokens history is under three hundred lines of code.

Why I'm writing this

Because the usual failure mode of AI agency pitches is promising an experience that falls apart the first time a real customer touches it. I wanted to mark the moment a real customer's real request went through a real pipeline without a human between the intake and the code change.

I'm not claiming this scales to every request. A one-line CSS fix is the easy case. What I can claim: the rails are real, they carry requests end to end, and the time from "I'd like you to change something" to "it's changed" is now measured in minutes, not days, for the common cases.

If you want to watch the other direction of the pipeline — the audit side — run one on your own business. Takes about two minutes. Every audit is gated through the same integrity checks I built this morning, so if the pipeline matches you with the wrong Google listing or gets your vertical wrong, the report doesn't get sent — it routes to me for review instead.

That gate is also new today. Different post. Same idea.

The machine is running. It's doing real work for real people. Here's a commit link as proof: mavon@8b7f4c7.

— Jason Murphy, VibeTokens

Want to see how your business stacks up?

Get a free brand audit — we'll show you what's working, what's not, and what to fix first.

Free Brand Audit →

Frequently Asked

What's the actual pipeline the request went through?

Erica opened the dashboard at requests.vibetokens.io/mavon, typed her request into the chat, and hit send. The chat API classified it as a site update, extracted the summary, and created a GitHub Issue in the vibetokens/client-mavon repo with a priority label and a copy of what she'd said. The autonomous shift leader agent — which scans open issues across every client repo on an hourly schedule — picked it up on the next cycle, read the request, opened the Mavon codebase, made the one-line fix in pages/services/susan.js, committed it with a descriptive message, pushed to main, and Vercel auto-deployed. The agent then commented on the GitHub Issue explaining exactly what changed and closed it. Erica's dashboard showed the request as complete the next time she checked.
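The issue-walking half of that loop maps onto a handful of GitHub CLI calls. This is a sketch of the shape, not the agent's actual commands; the flags are real `gh` flags, but the commit message and comment body here are illustrative:

```shell
# Scan a client repo for open issues
gh issue list --repo vibetokens/client-mavon --state open --json number,title,body

# ...agent reads the issue, edits pages/services/susan.js, then ships:
git commit -am "Crop Susan's mobile portrait: objectPosition center top -> center 45%"
git push origin main

# Close the loop on the issue itself
gh issue comment 8 --repo vibetokens/client-mavon --body "Fixed and deployed; crop can be nudged further."
gh issue close 8 --repo vibetokens/client-mavon --reason completed
```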

Couldn't a human reading the email have handled a one-line fix faster?

In theory, sure. In practice, the email would have sat in an inbox, been read an hour or two later, been mentally queued behind whatever else the human was doing, maybe ended up on a sticky note, maybe got lost. The autonomous path removes all of that friction. It also means the fix happens whether I'm at my desk or not. The dashboard is open 24/7 and the shift leader is running even while I'm coaching my kid's baseball team.

What happens when the request is more complex than a one-line fix?

The shift leader still picks it up within the hour. For changes it can confidently execute — content updates, copy changes, SEO fixes, small layout tweaks — it ships them directly. For anything that touches judgment, pricing, strategic direction, or client relationships, it flags the issue with a `needs-jason` label and surfaces it in the morning brief. That's the split: machine handles the structural work, I handle the calls that actually require a brain. This particular request was obviously the first category.

Written by Jason "Murph" Murphy

Jason Matthew Murphy. Twenty years building digital systems for businesses. Former CardinalCommerce (acquired by Visa). Now running VibeTokens — a brand agency for small businesses that builds websites, content, and growth systems with AI.

Your brand is your first impression.

Find out if it's costing you customers.

Free brand audit. We analyze your online presence, competitors, and messaging — then tell you exactly what to fix.

Get Your Free Brand Audit →