Blog · Mar 21, 2026

The public Laravel monorepo playbook for FrankenPHP, worktrees, and multi-domain routing

I published a public playbook that explains the real operating model behind a multi-domain Laravel 13 monorepo: one image, three roles, FrankenPHP behind Traefik, Blade, Livewire, and Volt, domain routing from data, and Git worktree discipline that keeps parallel lanes honest.

Most monorepo advice is weak for one of two reasons.

It is either abstract architecture theater with no deploy receipts, or it is a private repo leak disguised as a tutorial.

Neither is useful.

That is why I published the Laravel Monorepo Playbook.

It is the public-safe version of the operating model that has been paying rent inside lmachine: a multi-domain Laravel 13 monolith running on one boring box, one shared database, one shared identity spine, one content system, and one deployment topology that does not pretend fragmentation is sophistication.

What the repo makes explicit

The playbook is built around a simple runtime truth:

  • one repo
  • one image
  • three runtime roles

Web, Horizon, and scheduler all run the same application artifact.
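A minimal compose sketch of that shape, with hypothetical image and service names (the repo's actual deploy files live under docker/prod/), could look like:

```yaml
# docker-compose.prod.yml (sketch -- image name and service names are illustrative)
services:
  web:
    image: app:latest                 # the single application artifact
    command: frankenphp run --config /etc/caddy/Caddyfile
  horizon:
    image: app:latest                 # same image, queue-worker role
    command: php artisan horizon
  scheduler:
    image: app:latest                 # same image, scheduler role
    command: php artisan schedule:work
```

Three roles, one artifact: a rollout rebuilds one image and restarts three services, nothing more.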

That matters because most teams add complexity at the exact layer where boring should win. They split workers into separate images, split domains into separate deployables, split marketing from product, and split product from identity. That feels neat in diagrams and gets expensive fast in real life.

The stronger move is to keep the system whole and make the internal boundaries explicit.

Inside the playbook, that means documented rules for:

  • shared primitives versus product-specific tables
  • domain routing from data instead of scattered host checks
  • a public surface registry that can drive layouts, analytics, robots.txt, sitemap.xml, and llms.txt
  • a single FrankenPHP-based application image behind Traefik
  • Git worktree lanes that keep parallel work real instead of sloppy
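One way to read "domain routing from data": a registry file that the router iterates, instead of host checks scattered through controllers. This is a sketch with invented keys and hostnames, not the repo's actual schema:

```php
<?php
// config/surfaces.php -- hypothetical public surface registry.
// The same data can drive layouts, analytics, robots.txt, sitemap.xml, llms.txt.
return [
    'example-product.com' => ['routes' => 'product', 'layout' => 'app'],
    'example-blog.com'    => ['routes' => 'blog',    'layout' => 'marketing'],
];

// routes/web.php -- register each domain's route file from the registry,
// so routes/domains/ stays the single home for domain-aware routing.
use Illuminate\Support\Facades\Route;

foreach (config('surfaces') as $host => $surface) {
    Route::domain($host)
        ->group(base_path("routes/domains/{$surface['routes']}.php"));
}
```

Adding a domain then becomes a data change plus one route file, not a grep for hostname string comparisons.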

Why FrankenPHP stayed in the stack

We did not choose FrankenPHP because it is new. We kept it because it makes the deployment shape smaller.

One application container with embedded Caddy and a clean Docker story is stronger than piling Nginx, PHP-FPM, and unnecessary ceremony on top of the same app just because old tutorials normalized it.

The repo explains the stance clearly:

  • Traefik stays at the edge
  • FrankenPHP serves the Laravel app inside the container
  • Horizon and scheduler stay on the same code artifact
  • health checks and rollout rules matter more than fashionable infra sprawl
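Inside the container, the FrankenPHP side of that list reduces to a small Caddyfile. This is a generic sketch, not the repo's file; the port and paths are assumptions:

```
# Caddyfile (sketch) -- FrankenPHP serves Laravel from public/,
# while Traefik terminates TLS at the edge and proxies here.
{
    frankenphp
}

:8080 {
    root * /app/public
    encode zstd gzip
    php_server
}
```

No Nginx, no PHP-FPM socket wiring: the `php_server` directive is the whole serving story.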

That is the trade worth making early. Boring deploy topology compounds.

The frontend rule most people dodge

The repo also makes the frontend stance public on purpose.

If the app is domain-aware, auth-aware, content-aware, and operator-heavy, adding a second frontend deploy should have to prove itself.

That is why the default stack stays:

  • Blade for shells and public pages
  • Livewire for server-truth interactions
  • Volt for compact interactive surfaces

This is not a concession to simplicity. It is an insistence on leverage. The moment the second frontend deploy arrives, it should remove a real constraint, not just flatter taste.
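"Compact interactive surfaces" in Volt terms means a single-file component. A minimal counter sketch using Volt's functional API (the component name and path are invented):

```php
<?php
// resources/views/livewire/counter.blade.php -- hypothetical Volt component
use function Livewire\Volt\state;

state(['count' => 0]);

$increment = fn () => $this->count++;
?>

<button wire:click="increment">
    Count: {{ $count }}
</button>
```

State, behavior, and markup in one file, with the server as the source of truth.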

The worktree rule that stops fake speed

The repo also documents the engineering rule that gets violated first when the team starts moving quickly: worktrees are leverage; fake worktrees are not.

One writer per worktree. One real bootstrap per worktree. No shared vendor. No shared node_modules. No pretending local unpushed changes are durable.

That matters even more when coding agents are involved. Parallel lanes are only real if isolation is real.
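The isolation rule is mechanical. Here is a throwaway demo of one lane per worktree; the lane name is invented, and in the real monorepo you would also run the full bootstrap inside the new lane:

```shell
set -euo pipefail

# Throwaway repo for the demo; in practice you run the worktree
# commands from your existing monorepo checkout.
demo="$(mktemp -d)" && cd "$demo"
git init -q main && cd main
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "init"

# One writer per worktree: each lane is its own checkout on its own branch.
git worktree add -q ../lane-billing -b lane-billing
cd ../lane-billing

# A real bootstrap happens here, per lane -- no shared vendor/ or
# node_modules/ (composer install && npm ci in a real Laravel checkout).
git worktree list
```

Each lane is a separate directory with a separate branch, so two agents (or two humans) can never silently write over each other's dependency trees.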

How the files are structured

I wanted the repo to show the file layout directly instead of leaving it as a vague "organize by domain" suggestion.

So the playbook makes the shape explicit:

app/
config/
content/posts/
docker/prod/
docs/
ops/private/
resources/views/
routes/domains/
tests/

That tree tells the real story:

  • public runtime files are separate from deploy files
  • domain-aware routing has a home
  • publishable writing lives inside the monolith instead of beside it
  • private operator state stays outside tracked public docs

That last point matters. Public operator writing is leverage. Private credentials in public docs are self-sabotage.

Why this repo exists now

lmachine is already proving the pattern live while I continue building and shipping alongside work at Local Business Pro. The next correct move was not another internal memo. The next move was a public artifact that explains the pattern without leaking the private control plane.

That is what this repo is for.

It gives people a public entry point for the architecture, deploy model, domain strategy, frontend stance, and worktree discipline, while keeping secrets, provider state, and machine-local operator files exactly where they belong.

If you want the public repo, start here:

If you want the principle in one line, it is this:

One repo. One image. Three runtime roles. Many domains. No fake complexity.