Architecture
Tech Stack (Recommended)
The following stack is recommended based on experience with the Puddles project. It is not mandatory — alternatives are listed for each layer.
| Layer | Recommended | Alternatives | Notes |
|---|---|---|---|
| Framework | Next.js 14+ (App Router) | Nuxt (Vue), Remix, Astro + API | Next.js gives SSR, API routes, and static export in one framework. Free tier on Vercel. Nuxt is equivalent for Vue teams |
| Language | TypeScript | JavaScript, Go (backend) | Type safety across frontend + backend. Non-negotiable for a team > 1 |
| Database | PostgreSQL | MySQL, PlanetScale (MySQL), CockroachDB | PostgreSQL is the best general-purpose DB. Free tiers on Supabase, Neon, Railway. PlanetScale has a generous free tier but is MySQL |
| ORM | Prisma | Drizzle, Knex, raw SQL | Prisma has the best DX (schema-first, migrations, type generation). Drizzle is lighter and faster at runtime |
| Cache | Redis (Upstash) | Dragonfly, Memcached, in-process LRU | Upstash has serverless Redis with a free tier. Dragonfly is a drop-in replacement with better memory efficiency |
| Styling | Tailwind CSS | CSS Modules, Styled Components, vanilla CSS | Tailwind is fast to iterate with and keeps bundle size small. No runtime cost |
| UI Components | shadcn/ui | Radix UI (raw), Headless UI, custom | shadcn gives accessible, unstyled components you own (copy-paste, not a dependency) |
| Search | PostgreSQL full-text (MVP) → Meilisearch | Typesense, Algolia ($$$), Elasticsearch (complex) | Start with Postgres ILIKE + tsvector. Meilisearch is free, fast, and has typo tolerance. Algolia is best-in-class but expensive |
| CDN | Cloudflare | Vercel Edge, Fastly, AWS CloudFront | Cloudflare free tier includes CDN, DDoS protection, and DNS. Vercel handles this automatically if hosting there |
| Hosting | Vercel (frontend) + Railway/Fly.io (backend) | Render, Hetzner VPS, AWS ECS | Vercel is free for frontend. Railway/Fly.io are simple for containers at ~$5-20/mo. Hetzner is cheapest for raw VPS |
| Monitoring | Sentry (errors) + Betterstack (uptime) | LogRocket, Datadog ($$$), self-hosted Grafana | Sentry free tier covers error tracking. Betterstack free tier covers uptime. Datadog is powerful but expensive |
| Analytics | PostHog | Plausible, Umami, Mixpanel | PostHog is free self-hosted, has event analytics + session replay. Plausible/Umami are privacy-focused and simpler |
| Event Store | PostgreSQL (MVP) → ClickHouse | TimescaleDB, QuestDB | PostgreSQL with partitioning works to ~50M events. ClickHouse is the industry standard for analytical workloads. See Scaling |
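As one illustration of the Postgres-first search recommendation above, the MVP query can combine `websearch_to_tsquery` ranking with an `ILIKE` prefix fallback. The sketch below only builds the parameterized query; the table name (`games`) and tsvector column (`search_vector`) are assumptions, not part of this document's schema.

```typescript
// Sketch: build a parameterized Postgres full-text search query for the MVP.
// "games" and "search_vector" are illustrative names, not from this doc.

interface SqlQuery {
  text: string;
  values: string[];
}

function buildGameSearchQuery(term: string, limit = 20): SqlQuery {
  // Strip characters that have meaning in tsquery/ILIKE patterns.
  const cleaned = term.replace(/[%_'\\&|!():*]/g, " ").trim();
  return {
    // websearch_to_tsquery handles multi-word input safely; the ILIKE
    // clause catches title prefixes that tsvector stemming can miss.
    text: `
      SELECT id, title
      FROM games
      WHERE search_vector @@ websearch_to_tsquery('english', $1)
         OR title ILIKE $2
      ORDER BY ts_rank(search_vector, websearch_to_tsquery('english', $1)) DESC
      LIMIT $3`,
    values: [cleaned, `${cleaned}%`, String(limit)],
  };
}
```

Because the query is returned as `{ text, values }`, it plugs into any Postgres client that accepts positional parameters; swapping to Meilisearch later only replaces this one function.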
Services
The platform is split into independently deployable services. Each service runs in its own container and can be scaled separately.
MVP Services
| Service | Role | Tech |
|---|---|---|
| Backend User | Public API serving the player-facing site | Next.js API routes or standalone Express/Fastify |
| Frontend User | Player-facing website | Next.js (SSR + static) |
| Backend Admin | Internal API for the admin dashboard | Next.js API routes or standalone |
| Frontend Admin | Admin dashboard UI | Next.js or Vite + React |
Future Services
| Service | Role |
|---|---|
| Backend Dev Portal | API for developer partners |
| Frontend Dev Portal | Dashboard for developer partners to publish and manage games |
The dev portal will also use iframes for game embedding, keeping the architecture consistent across all surfaces: because every surface embeds games the same way, broker-sourced games can be removed later without changes to the embedding layer.
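Since every surface embeds games the same way, the iframe construction can live in one shared helper. A minimal sketch follows; the specific `sandbox` and `allow` values are reasonable defaults for third-party game embeds, assumed here rather than mandated by this document.

```typescript
// Sketch: a single iframe builder shared by the player site, admin preview,
// and future dev portal. Attribute values are assumptions (sensible defaults
// for cross-origin game content), not a spec from this document.

interface EmbedOptions {
  src: string;   // game URL (broker-hosted or first-party)
  title: string; // accessible name for the iframe
}

function buildGameIframe({ src, title }: EmbedOptions): string {
  const attrs = [
    `src="${src}"`,
    `title="${title}"`,
    // Games need scripts and pointer lock; they remain cross-origin,
    // so allow-same-origin does not let them escape the sandbox.
    `sandbox="allow-scripts allow-same-origin allow-pointer-lock"`,
    `allow="fullscreen; autoplay; gamepad"`,
    `loading="lazy"`,
    `referrerpolicy="no-referrer"`,
  ];
  return `<iframe ${attrs.join(" ")}></iframe>`;
}
```

Centralizing this means a later policy change (for example tightening `sandbox`) is a one-line edit applied to all surfaces at once.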
Repository Strategy
Phase 1: Monorepo
All services live in a single GitHub repository, organized by directory:
```
playupi/
├── apps/
│   ├── web/                    # Player-facing frontend (Next.js)
│   │   ├── app/                # Next.js App Router pages
│   │   │   ├── (home)/         # Home page
│   │   │   ├── games/[slug]/   # Game detail page
│   │   │   ├── category/[slug]/ # Category page
│   │   │   ├── author/[id]/    # Author page
│   │   │   └── api/            # API routes (if colocated)
│   │   ├── components/
│   │   │   ├── game/           # GameCard, GameGrid, GamePlayer
│   │   │   ├── layout/         # Header, Footer, Sidebar
│   │   │   ├── search/         # SearchBar, SearchResults
│   │   │   └── ui/             # Base components (shadcn)
│   │   └── lib/
│   │       ├── tracker.ts      # Event tracking SDK
│   │       ├── api-client.ts   # API client
│   │       └── utils.ts
│   │
│   ├── admin/                  # Admin dashboard (Next.js or Vite)
│   │   ├── app/
│   │   │   ├── catalog/        # Main catalog view
│   │   │   ├── games/[id]/     # Game admin page
│   │   │   └── import/         # Bulk import
│   │   ├── components/
│   │   │   ├── filters/        # Filter panel components
│   │   │   ├── metrics/        # Charts, metric cards
│   │   │   └── ui/
│   │   └── lib/
│   │
│   └── api/                    # Backend API (if separate from Next.js)
│       ├── routes/
│       │   ├── public/         # /api/v1/* routes
│       │   └── admin/          # /api/admin/* routes
│       ├── services/
│       │   ├── games.ts        # Game CRUD, search, ranking
│       │   ├── events.ts       # Event ingestion, validation
│       │   ├── metrics.ts      # Metric computation, aggregation
│       │   ├── import.ts       # Broker import, mapping
│       │   └── exploration.ts  # Exploration queue, rotation
│       └── jobs/
│           ├── rank.ts         # Ranking recalculation
│           ├── aggregate.ts    # Hourly metric aggregation
│           └── tags.ts         # Tag recomputation (New, Trendy)
│
├── packages/
│   └── shared/                 # Shared across apps
│       ├── types/              # TypeScript types & interfaces
│       ├── constants/          # Event types, tag definitions
│       └── utils/              # Shared utilities
│
├── prisma/
│   └── schema.prisma           # Database schema
│
├── scripts/
│   ├── import-broker.ts        # Broker bulk import script
│   └── seed.ts                 # Development seed data
│
└── docs/                       # This documentation
```
Benefits:
- Single CI/CD pipeline
- Shared libraries without publishing overhead
- Easier local development (all services available)
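As an illustration of the shared-library benefit, `packages/shared` could own the event vocabulary used by both the tracker SDK and the ingestion API. The event names below come from the data flows in this document; the envelope shape itself is an assumed sketch.

```typescript
// Sketch of packages/shared/constants + types: one event vocabulary shared
// by client (tracker SDK) and server (ingestion API). Event names are taken
// from this document's data flows; the TrackedEvent shape is an assumption.

const EVENT_TYPES = [
  "game_impression",
  "game_click",
  "game_page_view",
  "game_loading_start",
  "game_loading_end",
  "game_focused_start",
  "game_focused_stop",
  "gameplay_start",
  "gameplay_stop",
  "show_ad",
] as const;

type EventType = (typeof EVENT_TYPES)[number];

interface TrackedEvent {
  type: EventType;
  gameId: string;
  sessionId: string;
  timestamp: number; // epoch ms, set client-side at capture time
}

// Runtime guard used on both sides: before buffering on the client,
// and during validation on ingest.
function isEventType(value: string): value is EventType {
  return (EVENT_TYPES as readonly string[]).includes(value);
}
```

Because the `as const` array drives both the union type and the runtime guard, adding an event type in one place updates client and server validation together, with no package-publishing step inside the monorepo.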
Phase 2: Multi-repo (if needed)
Split into separate repos only when required by:
- Multiple independent teams
- Security isolation (e.g., admin vs public)
- Significantly different release cycles
The monorepo structure should be designed so this split is straightforward.
Backend Organization
The backend is organized by domain with clear logical boundaries, even when deployed together.
| Domain | Scope |
|---|---|
| Public API | Public routes: game lists, game pages, search, events ingestion |
| Admin API | Protected routes: game CRUD, bulk import, metrics, visibility management |
Both domains share the same database but have separate access controls.
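The access-control boundary can be enforced at the route level even in a single deployment. The sketch below shows the idea with a path check plus a minimal HS256 token verification; in practice a maintained JWT library such as jose would do the verification, and the helper names here are illustrative.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch: only /api/admin/* requires a valid JWT; /api/v1/* stays public.
// The hand-rolled HS256 check is for illustration; use a JWT library
// (e.g. jose) in a real deployment. Helper names are assumptions.

function requiresAuth(pathname: string): boolean {
  return pathname.startsWith("/api/admin/");
}

function b64url(buf: Buffer): string {
  return buf.toString("base64url");
}

// Verify an HS256 JWT signature (header.payload.signature).
function verifyAdminToken(token: string, secret: string): boolean {
  const parts = token.split(".");
  if (parts.length !== 3) return false;
  const [header, payload, signature] = parts;
  const expected = b64url(
    createHmac("sha256", secret).update(`${header}.${payload}`).digest(),
  );
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  // Constant-time comparison; lengths must match before timingSafeEqual.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Keeping the check keyed off the route prefix is what makes the later split into separate services mechanical: the same prefix becomes a load-balancer rule.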
MVP: Single Deploy
For the MVP, both APIs run as Next.js API route handlers in a single Vercel deployment (see Scaling → Phase 1). The route structure provides logical separation:
```
app/api/v1/*     → Public API routes
app/api/admin/*  → Admin API routes (JWT-protected)
```
Growth+: Separate Services
When traffic or team size warrants it, the APIs split into independently deployable services behind a load balancer. The route-based separation makes this split straightforward — see Scaling → Phase 2.
Data Flow: Game List Request
```
Player visits Home page
  │
  ▼
GET /api/v1/games?platform=desktop&page=1
  │
  ▼
Check Redis cache ──hit──> Return cached response
  │ miss
  ▼
Query: ranked games (90%) ordered by rank
  │
  ▼
Query: exploration games (10%) from rotation queue
  │
  ▼
Merge lists, interleave exploration at fixed intervals
  │
  ▼
Generate game_impression events (server-side, batched)
  │
  ▼
Cache response in Redis (TTL: 5 min)
  │
  ▼
Return JSON response
  │
  ▼
Frontend renders GameGrid with GameCard components
```
Data Flow: Game Page Load
```
Player clicks a game card
  │
  ▼
Frontend sends game_click event (batched)
  │
  ▼
Navigate to /games/:slug
  │
  ▼
GET /api/v1/games/:slug
  │
  ▼
Server: fetch game data + recommendations + author games
Server: generate game_page_view event
  │
  ▼
Frontend renders:
├── GamePlayer (iframe with game)
├── Player footer (title, author, like/dislike, fullscreen)
├── Recommendations (side + bottom)
└── Game info section (description, instructions, author games)
  │
  ▼
Tracking SDK starts monitoring:
├── game_loading_start (iframe begins loading)
├── game_loading_end (iframe loaded)
├── game_focused_start / stop (visibility API)
├── gameplay_start / stop (from broker SDK)
└── show_ad (from broker SDK)
```
Data Flow: Event Ingestion
```
User interacts with game
  │
  ▼
Tracker SDK captures event with timestamp
  │
  ▼
Event added to in-memory buffer
  │
  ▼
Buffer flush (every 5s, on unload, or when full)
  │
  ▼
POST /api/v1/events { events: [...], sessionId: "..." }
  │
  ▼
API validates each event (type, fields, timestamp, dedup)
  │
  ▼
Valid events inserted into events table (partitioned by month)
  │
  ▼
Hourly aggregation job reads new events
  │
  ▼
Updates daily_metrics table (plays, DAU, impressions, playtime, etc.)
  │
  ▼
Ranking job reads daily_metrics
  │
  ▼
Recomputes ranks per surface × platform
  │
  ▼
Invalidates Redis cache for affected game lists
```
See Events Pipeline for full details.
Deployment
- Each service is containerized (Docker)
- Services can be deployed and scaled independently
- Environment separation: development, staging, production
- See Scaling for infrastructure evolution per growth phase