How coding agents should run a public DNS cutover without public debugging
A real cutover is not done when the new app renders. It is done when public DNS, public TLS, and public smoke checks converge on the new runtime. This is how RankWar moved from a Next.js and Supabase stack into the lmachine Laravel monolith without turning the internet into the staging environment.
Most “AI deployment” content stops before reality
That is why so much of it is worthless.
It celebrates the part that is easiest to fake:
- code generation
- local demos
- internal previews
- host-header probes on the new box
None of that means the public cutover is done.
The public cutover is done when the internet tells the same story the destination system already tells.
That means:
- public DNS
- public TLS
- public smoke checks
If those are wrong, the migration is still a private rehearsal.
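Those three public-truth checks can be sketched with Python's standard library alone. Everything here is illustrative, not taken from the RankWar cutover: the expected IP and page marker are placeholders you would supply, the network helpers are thin wrappers, and the final gate is a pure function you can reason about offline.

```python
import socket
import ssl
import urllib.request

def dns_answers(host: str) -> set[str]:
    """Ask the local resolver for every A record the public currently sees."""
    infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

def tls_issuer(host: str) -> str:
    """Complete a real TLS handshake and report who issued the served cert."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the issuer as nested (name, value) pair tuples.
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("organizationName", "unknown")

def page_body(host: str) -> bytes:
    """Fetch the public page the way an outside visitor would."""
    return urllib.request.urlopen(f"https://{host}/", timeout=10).read()

def public_truth(answers: set[str], expected_ip: str,
                 issuer: str, body: bytes, marker: str) -> bool:
    """The cutover is public truth only when all three stories agree:
    DNS points at the new box, a real CA signed the served certificate,
    and the page carries the destination runtime's marker."""
    return (answers == {expected_ip}
            and issuer != "unknown"
            and marker.encode() in body)
```

The point of splitting the wrappers from `public_truth` is that the verdict stays a pure function: you can feed it answers captured from any vantage point, not just the box you happen to be standing on.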
The real power problem
RankWar started on:
- Next.js
- Supabase
- Resend
- a Vercel-shaped deployment model
The destination was not a cosmetic rebuild.
It was a different operating system:
- one Laravel 13 monolith
- one shared Postgres spine
- one shared identity model
- one Dokploy and Traefik control plane
- many domains with one application runtime
The problem was not “rewrite the code.”
The problem was “move public truth without debugging in public.”
The loser sequence
Most teams run this sequence:
- finish the new app
- get excited
- point DNS
- discover certificate issues, stale records, or edge drift in public
- debug under live traffic
That sequence is amateur.
It makes the public internet your staging environment.
The winning sequence
The stronger move is:
- freeze the legacy source into a replayable artifact
- import production data into the new runtime first
- attach ingress before traffic moves
- verify host behavior privately
- change authoritative DNS
- force certificate issuance against the corrected public answers
- run real smoke checks from outside the box
- only then talk like the cutover is done
That is the sequence that keeps narrative aligned with truth.
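As a sketch, that sequence behaves like a chain of gates: no step fires until every earlier step has reported success. The step names and check callables below are hypothetical labels for the list above, not a real tool's API.

```python
from typing import Callable

# Illustrative gate names mirroring the winning sequence above.
STEPS = [
    "freeze_legacy_artifact",
    "import_production_data",
    "attach_ingress",
    "verify_host_privately",
    "change_authoritative_dns",
    "force_cert_issuance",
    "run_public_smoke_checks",
]

def run_cutover(checks: dict[str, Callable[[], bool]]) -> str:
    """Walk the gates in order and halt at the first failed check,
    so the public internet never becomes the staging environment."""
    for step in STEPS:
        if not checks[step]():
            return f"halted at {step}"
    return "cutover done"
```

A dry run with stubbed checks makes the halting behavior visible before any real surface is touched: fail one gate and every later gate, including the DNS change, stays untouched.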
Why DNS is a distribution move, not a config chore
Public DNS decides which runtime owns attention.
That makes it part of distribution, not just infrastructure.
In RankWar's cutover, the application and database were already ready.
The remaining risk was not framework logic.
It was that public resolvers and certificate issuance still pointed at the old edge.
The breakthrough came from treating DNS like a timing weapon:
- change the apex record only when the monolith already tells the truth
- collapse campaign-host sprawl into wildcard coverage
- verify authoritative answers before trusting local cache
That turns DNS from a source of chaos into a clean handoff.
The certificate trap most people miss
If certificate issuance runs before DNS is ready, you get fake confidence and noisy failure.
That happened here.
Traefik had already tried to mint RankWar certificates against the old public answers.
That failure did not mean Traefik was bad.
It meant the sequence was incomplete.
Once the public Namecheap records converged on the Hetzner box, a clean ACME retry produced live Let's Encrypt certificates for the real hosts.
That is the correct mental model:
- certificate failure is often a timing signal
- not a reason to thrash the proxy config blindly
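That mental model can be written down as a tiny decision rule, assuming you can read the current public answers and the proxy's last issuance result (both inputs here are hypothetical, not a real Traefik API):

```python
def acme_next_step(public_answers: set[str], new_ip: str,
                   last_issuance_failed: bool) -> str:
    """Treat a failed issuance as a timing signal: while public DNS
    has not converged, no proxy-config change will help; once it has,
    a clean retry is the whole fix."""
    if public_answers != {new_ip}:
        return "fix public DNS first"
    if last_issuance_failed:
        return "retry ACME issuance cleanly"
    return "certificates should already be live"
```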
What the agent actually needed to control
The cutover touched more than code:
- Supabase data freeze and import
- Dokploy ingress
- Traefik logs and ACME state
- Namecheap authoritative records
- Tailscale + SSH validation
- live smoke checks on the public hosts
This is why benchmarks for coding agents miss the point when they focus on code scaffolding.
The interesting benchmark is whether the agent can operate across all of those surfaces without losing the plot.
The real operator rule
Do not publish the victory lap from half-true docs.
Do not announce the migration when only the private host-header path works.
Do not let local DNS cache, stale provider UI, or certificate retries distort your story.
Wait until:
- authoritative DNS is right
- public resolvers agree
- live certificates are served
- public hosts return the destination system
Then narrate the move hard.
That is how you turn a migration into authority instead of content theater.