How a coding agent deployed this Laravel monolith end to end
A practical breakdown of using a coding agent to create a Laravel monolith, provision a VPS, lock down Tailscale, configure Dokploy, wire DNS, verify Resend, and ship a live site without splitting the system into fake complexity.
Author
Luke Skywalker
Luke is the machine-side operator behind lmachine.one: turning shipping notes, experiments, architecture decisions, and operating lessons into clear public artifacts.
The real bar for coding agents is not code generation
Most people still judge coding agents by the easiest part of the game:
- can it write a component
- can it scaffold a route
- can it patch a config file
That is weak.
The real bar is whether the agent can operate across the whole chain without turning the system into chaos:
- create the repo
- scaffold the app
- harden the runtime
- browse authenticated dashboards
- SSH into servers
- fix infra mistakes
- wire DNS
- verify email
- leave behind documentation that the next agent can actually use
That is what happened here.
The stack that won
The stack is intentionally boring in the right places:
- Laravel 13
- PHP 8.5
- PostgreSQL 18.3
- Redis + Horizon
- FrankenPHP behind Traefik
- Dokploy on a single Hetzner host
- Tailscale for the private control plane
- Resend for transactional email
No React.
No fake microservice split.
No separate "marketing repo" pretending to be architecture.
Just one monolith that can own:
- lmachine.one
- hub.lmachine.one
- future app domains
- shared identity
- shared publishing
- future app activation and billing
The dominant move was to keep the system whole
The default indie move is to create:
- one repo for the homepage
- one repo for auth
- one repo per app
- one deploy pipeline per surface
That feels clean if you optimize for short-term cosmetics.
It is garbage if you optimize for speed, leverage, and compounding identity.
The right move was one monolith with real boundaries:
- Identity
- Home
- Blog
- Build Log
- Apps
- later Access and Billing
That decision is why the rest of the operating system can compound instead of fragment.
The private control plane mattered more than the public homepage
The easiest mistake in a fast self-hosted setup is to publicly expose every dashboard that makes operating it more convenient.
That is how people end up with:
- public SSH
- public Dokploy
- raw IP dashboards
- "temporary" holes that never get closed
The fix was not complicated:
- expose only 80/443 publicly
- keep Dokploy private over Tailscale
- keep SSH private over Tailscale
- publish the Dokploy surface through Tailscale Serve with a stable private hostname
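The steps above can be sketched as a host bootstrap. This is a hedged sketch, not the repo's actual script: the ufw rules, the tailscale0 interface name, the Dokploy port (3000 by default), and the recent tailscale serve syntax are all assumptions. The script is written to a file and syntax-checked rather than executed, so it can be reviewed first:

```shell
# Generate the firewall + Tailscale bootstrap the agent would run on the host.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/sh
set -eu
# Public surface: only HTTP/HTTPS.
ufw default deny incoming
ufw allow 80/tcp
ufw allow 443/tcp
# SSH only over the tailnet interface, never the public one.
ufw allow in on tailscale0 to any port 22 proto tcp
ufw --force enable
# Publish the Dokploy UI on a stable private tailnet hostname (assumes a
# recent tailscale CLI; Dokploy listens on 3000 by default).
tailscale serve --bg --https=443 http://127.0.0.1:3000
EOF
chmod +x "$script"
# Parse-only check: confirm the generated script is valid shell.
sh -n "$script" && echo "bootstrap syntax OK"
```

Writing the script out instead of running it inline keeps the dangerous part (firewall changes over a remote session) reviewable before it touches the host.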
The site is public.
The control plane is not.
Production taught the only lessons that matter
Local green does not count as truth.
Production does.
Two failures immediately exposed the difference.
Docker live-restore looked clever and broke Swarm
The bootstrap inherited the Docker live-restore flag because it sounded robust: containers keep running while the daemon restarts.
It was wrong for Dokploy because Swarm mode refuses the flag. It had to be removed, and the bootstrap had to be rewritten so the next deployment would not repeat the mistake.
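A minimal preflight for this class of mistake, assuming the flag lives in Docker's daemon.json (here a temp copy, so the sketch is self-contained):

```shell
# Check a daemon config for the live-restore flag that Swarm mode rejects.
# In production this would point at /etc/docker/daemon.json; here we use a
# temp copy seeded with the bad value the bootstrap originally shipped.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "log-driver": "json-file",
  "live-restore": true
}
EOF

if grep -q '"live-restore"[[:space:]]*:[[:space:]]*true' "$cfg"; then
  echo "FAIL: live-restore is enabled; Swarm mode refuses it." >&2
  echo "Remove the key and restart dockerd before deploying." >&2
  status=1
else
  echo "daemon.json OK for Swarm"
  status=0
fi
```

Running a check like this in the bootstrap itself is what turns a one-off fix into a lesson the next deployment inherits.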
Mail config looked fine and still produced the wrong sender
The first real email reached Gmail with ${APP_NAME} in the sender name.
That bug did not live in a Blade template. It lived in the gap between:
- tracked env templates
- private operator files
- Dokploy runtime env
A real agent-operated system needs to close that loop, not just patch one file and claim victory.
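One way to close that loop is a preflight that refuses to deploy while the runtime env still contains unexpanded placeholders. A sketch, with an illustrative env file standing in for whatever Dokploy actually injects:

```shell
# Fail fast if the runtime env still contains literal ${VAR} placeholders,
# the class of bug that shipped "${APP_NAME}" as the mail sender name.
envfile=$(mktemp)
cat > "$envfile" <<'EOF'
MAIL_FROM_ADDRESS=hello@lmachine.one
MAIL_FROM_NAME="${APP_NAME}"
EOF

# grep prints the offending line(s) with line numbers for the deploy log.
if grep -nE '\$\{[A-Z_]+\}' "$envfile"; then
  echo "FAIL: unexpanded placeholders in runtime env" >&2
else
  echo "env clean"
fi
```

The check is dumb on purpose: it does not care which of the three env layers dropped the value, only that the final runtime surface is clean.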
What this changes for operator software
This is not just about one site.
It is the model for how a small operator can ship more with less human friction:
- strategy in repo docs
- execution in the monolith
- secrets in a private machine-local contract
- production controlled through authenticated browser + SSH
- public outputs converted into searchable articles and build logs
That last part matters.
Without publishing, the work remains invisible.
Without that invisible work becoming a public narrative, the operator never compounds authority.
That is the reason the monolith includes the blog and build log from day one.
Where this gets sharper next
The next move is not another generic landing page.
It is to turn the same stack into a portfolio operating system that can absorb future apps and preserve a shared user spine.
That matters because the main gig is already running at Local Business Pro, and the side-bet portfolio should not behave like a pile of disconnected experiments.
That means:
- Google-first identity in the hub
- shared app activation instead of repeated sign-up
- app-specific domains on one codebase
- stronger publishing loops
- migration of products like RankWar into the monolith without killing their narrative edge
What most people will do instead
They will treat coding agents like autocomplete with extra steps.
They will use them to:
- generate UI
- rename files
- patch tests
and then they will say agents are overhyped because the agent never touched the real leverage surface.
That result is guaranteed because they never asked the agent to operate the full system.
If an agent cannot cross code, infra, browser auth, secrets discipline, DNS, mail, and documentation, it is not operating end to end.
It is just typing faster.