How to migrate a Supabase waitlist app into a Laravel monolith without losing history
The right way to move a small but real waitlist product from Next.js and Supabase into a Laravel 13 monolith is to freeze the legacy system into a replayable snapshot, converge identity through shared contacts and users, and keep product tables explicit instead of generic.
The wrong migration is “wire the new app into the old database”
That move feels fast.
It is weak.
It keeps the legacy system in control of the future architecture.
If the goal is to build a real monolith that can host many domains and many first-party apps, then the migration has to do more than keep the app working for one more week.
It has to rewrite the control plane.
The real problem
The problem is not framework replacement.
The problem is this:
- move the product into a shared Laravel runtime
- keep one shared Postgres database
- keep one shared auth and contact spine
- preserve product-specific meaning
- preserve history without preserving dependency
If you solve only the frontend or only the ORM mapping, you still lose.
The dominant move: freeze the source system first
Before writing migration logic, export an immutable private snapshot of the legacy data.
That one move changes everything.
It gives you:
- a stable input contract
- replayable imports
- deterministic debugging
- safer cutover discipline
Without that snapshot, every import run depends on a live dashboard, live credentials, and live assumptions.
That is weak operator behavior.
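To make the snapshot idea concrete, here is a minimal framework-agnostic sketch in Python (the real monolith would do this in a Laravel artisan command). The function name `freeze_snapshot` and the JSONL-plus-manifest layout are assumptions for illustration; the point is that every later import run reads immutable files with checksums, never a live dashboard.

```python
import hashlib
import json
from pathlib import Path

def freeze_snapshot(tables: dict[str, list[dict]], out_dir: str) -> dict:
    """Write each legacy table to an immutable JSONL file and return a manifest.

    The manifest records row counts and content checksums, so every later
    import run can prove it is replaying the exact same frozen input.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for name, rows in tables.items():
        # Stable serialization: sorted keys make the checksum deterministic.
        body = "\n".join(json.dumps(r, sort_keys=True) for r in rows)
        (out / f"{name}.jsonl").write_text(body)
        manifest[name] = {
            "rows": len(rows),
            "sha256": hashlib.sha256(body.encode()).hexdigest(),
        }
    (out / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest
```

Once the manifest exists, any drift between import runs is detectable by comparing checksums, which is what makes the replay deterministic.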
What the source system should become inside the monolith
Do not drag MVP naming directly into the new core.
For RankWar, the old Supabase model looked like this:
- waitlists
- entries
- referrals
Those names were fine for an MVP.
They are weak as a long-term monolith contract.
The correct split is:
- shared primitives stay shared
- product tables stay explicit
That means shared tables like:
- users
- social_accounts
- apps
- app_domains
- contacts
- user_app_access
- outbound_emails
And product tables like:
- rankwar_campaigns
- rankwar_entries
- rankwar_referrals
- rankwar_events
This is how one monolith supports many domains without becoming generic sludge.
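The shared-versus-explicit rule can even be enforced mechanically. The sketch below is a hypothetical naming guard in Python (in practice this could be a CI check over the migrations directory); the allowlist and the `rankwar` slug come from the tables above, while the function name is an assumption.

```python
# Shared primitives that may exist without an app prefix.
SHARED_TABLES = {
    "users", "social_accounts", "apps", "app_domains",
    "contacts", "user_app_access", "outbound_emails",
}

# Hypothetical registry of first-party app slugs hosted by the monolith.
APP_SLUGS = {"rankwar"}

def table_name_is_valid(name: str) -> bool:
    """A table is valid if it is a shared primitive or carries an app prefix.

    Anything else ("entries", "campaigns") is MVP naming leaking into the
    core, which is exactly what the split is meant to prevent.
    """
    if name in SHARED_TABLES:
        return True
    return any(name.startswith(f"{slug}_") for slug in APP_SLUGS)
```

A guard like this keeps a second or third app from quietly claiming a generic table name that should have been either shared or prefixed.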
Shared identity matters more than legacy auth
Legacy RankWar used Supabase auth and email-first joins.
That was a good launch shortcut.
It is not the right long-term identity model for a portfolio.
The stronger model is:
- every lead converges into contacts
- real authenticated users converge into users
- app activation converges into user_app_access
That lets someone join a war room with low friction now and still become a shared user later without duplicate identities across apps.
Email history needs an honest ledger
Most migrations either drop email history completely or lie about it.
Both are weak.
The right move is to create a shared outbound email ledger in the monolith and distinguish between:
- provider-verified live sends
- legacy inferred sends from a system that never stored provider truth
For RankWar that means:
- live mail sends from rankwar@lmachine.one until a dedicated sending subdomain is verified
- historical welcome mail imported as legacy_inferred
- the product domain remains rankwar.app
That preserves reality and keeps deliverability concerns separate from application hostnames.
Multi-domain monolith rules that actually scale
If many apps will live in the same Laravel 13 codebase and the same Postgres database, the winning rules are:
- route domains from data, not scattered host checks
- keep shared tables neutral and bounded tables explicit
- use app_id only where the concept is truly shared
- never split into schema-per-app unless there is a regulatory reason
- never create a second deploy target just to preserve old frontend habits
For UI and UX, the correct stack remains:
- Blade for domain-aware shells and public pages
- Livewire for stateful server-truth interactions
- Volt for compact interactive product surfaces
Adding another frontend deployment here would be pure self-harm.
What most people will do
They will do one of three weak things:
- keep Supabase permanently in the loop
- over-generalize everything into one shared generic schema
- split every app into its own backend because that “feels clean”
Those moves fail for the same reason:
they protect the old implementation instead of upgrading the operating system.
The better cutover sequence
Use this order:
- Build the bounded context inside the monolith.
- Add shared identity and domain primitives first.
- Freeze the legacy system into a private snapshot.
- Write an idempotent import.
- Re-run the import until the replay is boring.
- Verify counts and critical behavior.
- Cut traffic only when the monolith is already telling the truth.
That is how you migrate a real app without carrying its old stack forever.