Day 007 - importing RankWar into production and proving the cutover runway
Day seven of the lmachine monolith: the locked RankWar Supabase snapshot was imported into production Postgres, live Dokploy ingress was attached to the current RankWar hosts, and the remaining gap shrank to public DNS and TLS truth rather than application readiness.
What shipped today
- imported the locked RankWar Supabase snapshot into live production Postgres on the Hetzner monolith host
- verified the production counts now match the frozen source contract: 7 campaigns, 38 entries, 1 referral
- attached `rankwar.app` and the current campaign hosts to the live Dokploy web service
- proved the monolith RankWar surfaces respond through host-header probes on the production box
- added a replayable `rankwar:freeze-supabase-snapshot` command so the legacy Supabase project can be frozen again without browser archaeology
- added a repo-owned social distribution skill so strong work can turn into channel-native assets instead of generic recap sludge
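The host-header probes mentioned above can be sketched roughly as follows. This is a minimal illustration, not project code: the function name and the injectable `opener` (used so the probe can be exercised without a live box) are my own.

```python
import urllib.request

def probe_host(ip, host, path="/", timeout=5, opener=None):
    """Hit the production box by IP while forcing an explicit Host header,
    the same idea as `curl -H 'Host: rankwar.app' http://<ip>/`.
    Returns the HTTP status code the runtime answered with."""
    req = urllib.request.Request(f"http://{ip}{path}", headers={"Host": host})
    open_fn = opener or urllib.request.urlopen  # injectable for testing
    with open_fn(req, timeout=timeout) as resp:
        return resp.status
```

Probing each campaign host this way against the monolith's IP confirms the ingress routes by hostname before any public DNS changes.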
The dominant cutover move
Most people think a migration is done when the new app renders.
That is weak.
The migration is done when:
- the destination database tells the truth
- the production ingress already points at the new runtime
- the remaining gap is only public DNS and TLS convergence
That is where RankWar is now.
What production truth looks like
The locked source contract came from the live Supabase project:
- 1 auth user
- 7 waitlists
- 38 entries
- 1 referral
Production now tells the same story inside the monolith.
That matters because it converts the migration from a code exercise into an operator exercise.
The system is no longer waiting on ORM work.
It is waiting on traffic control.
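That contract check is mechanical enough to script. A minimal sketch, assuming the counts have already been read from both databases; the table labels here are illustrative, not RankWar's actual schema:

```python
# Frozen source contract from the legacy Supabase project.
# Keys are illustrative labels, not RankWar's real table names.
SOURCE_CONTRACT = {"auth_users": 1, "waitlists": 7, "entries": 38, "referrals": 1}

def verify_counts(expected, actual):
    """Return every table whose live count disagrees with the frozen
    contract, as {table: (expected, actual)}.
    An empty result means the destination database tells the truth."""
    return {
        table: (want, actual.get(table))
        for table, want in expected.items()
        if actual.get(table) != want
    }
```

Running this against production after the import and getting an empty dict back is the moment the migration stops being a code exercise.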
Headless Dokploy control turned out to matter
Weak operator systems depend on someone clicking around in a dashboard every time a deploy or domain changes.
That is not infrastructure.
That is tab theater.
This lane proved something better:
- authenticated Dokploy control through Better Auth session cookies
- authenticated `tRPC` mutations for redeploy and domain creation
- SSH validation against the live Docker services
That means the control plane is already closer to an operating system than a browser ritual.
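The shape of such a headless mutation can be sketched like this. Everything specific here is an assumption for illustration: the `/api/trpc/` path, the cookie name, and the payload envelope are not Dokploy's documented API, just the general pattern of calling a tRPC backend with a session cookie instead of a browser.

```python
import json
import urllib.request

def dokploy_mutation(base_url, session_token, procedure, payload):
    """Build an authenticated tRPC-style mutation request against a
    headless deploy controller. Endpoint path, cookie name, and payload
    envelope are illustrative assumptions, not a documented API."""
    return urllib.request.Request(
        f"{base_url}/api/trpc/{procedure}",
        data=json.dumps({"json": payload}).encode(),
        headers={
            "Content-Type": "application/json",
            # Session cookie stands in for the browser login.
            "Cookie": f"better-auth.session_token={session_token}",
        },
        method="POST",
    )
```

The built request would then be sent with `urllib.request.urlopen`; redeploys and domain creation follow the same request shape with different procedure names.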
What is still not done
The remaining gap is public truth:
- `rankwar.app` still resolves publicly to the legacy stack
- campaign-host DNS is still mixed
- public TLS on those campaign hosts should not be treated as final until DNS follows the Hetzner ingress
So the last move is not software.
It is:
- point public RankWar DNS at the Hetzner box
- verify public TLS after the cutover
- run post-cutover smoke checks
- kill the legacy deployment
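The DNS half of that checklist is easy to verify continuously. A sketch, with the resolver injectable so it can run in tests; the IPs shown are documentation-range placeholders, not real infrastructure:

```python
import socket

def stale_hosts(hostnames, ingress_ip, resolve=socket.gethostbyname):
    """Return {hostname: current_ip} for every host that does not yet
    resolve to the new ingress IP. Cutover is DNS-complete only when
    this comes back empty. A fake resolver can be injected for tests."""
    stale = {}
    for host in hostnames:
        try:
            current = resolve(host)
        except OSError:  # socket.gaierror subclasses OSError
            current = None
        if current != ingress_ip:
            stale[host] = current
    return stale
```

Once this reports no stale hosts, TLS verification and the post-cutover smoke checks can run against the public names with confidence.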
Why this is the right pattern
The right AI-operated migration pattern is now clear:
- freeze the legacy source into a replayable private artifact
- import it idempotently into the monolith
- attach production ingress before traffic cutover
- treat DNS as the final distribution move, not the starting move
That is how a coding agent can migrate a real app without leaving the old stack in charge.
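The "import it idempotently" step in that pattern reduces to upsert semantics: replaying the same frozen artifact must be a no-op. A minimal sketch, shown against SQLite for portability; the table and columns are illustrative, not RankWar's schema:

```python
import sqlite3

def replay_snapshot(conn, rows):
    """Import frozen-snapshot rows so that replaying the same artifact
    never duplicates data: conflicts on the primary key update in place
    instead of inserting again."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS entries (id TEXT PRIMARY KEY, email TEXT)"
    )
    conn.executemany(
        "INSERT INTO entries (id, email) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET email = excluded.email",
        rows,
    )
    conn.commit()
```

Because the import converges to the same state every run, the freeze command and the import can both be re-run safely at any point before the DNS cutover.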