Architecture

The following stack is recommended based on experience with the Puddles project. It is not mandatory — alternatives are listed for each layer.

| Layer | Recommended | Alternatives | Notes |
|---|---|---|---|
| Framework | Next.js 14+ (App Router) | Nuxt (Vue), Remix, Astro + API | Next.js gives SSR, API routes, and static export in one framework. Free tier on Vercel. Nuxt is equivalent for Vue teams |
| Language | TypeScript | JavaScript, Go (backend) | Type safety across frontend + backend. Non-negotiable for a team > 1 |
| Database | PostgreSQL | MySQL, PlanetScale (MySQL), CockroachDB | PostgreSQL is the best general-purpose DB. Free tiers on Supabase, Neon, Railway. PlanetScale has a generous free tier but is MySQL |
| ORM | Prisma | Drizzle, Knex, raw SQL | Prisma has the best DX (schema-first, migrations, type generation). Drizzle is lighter and faster at runtime |
| Cache | Redis (Upstash) | Dragonfly, Memcached, in-process LRU | Upstash has serverless Redis with a free tier. Dragonfly is a drop-in replacement with better memory efficiency |
| Styling | Tailwind CSS | CSS Modules, Styled Components, vanilla CSS | Tailwind is fast to iterate with and keeps bundle size small. No runtime cost |
| UI Components | shadcn/ui | Radix UI (raw), Headless UI, custom | shadcn gives accessible, unstyled components you own (copy-paste, not a dependency) |
| Search | PostgreSQL full-text (MVP) → Meilisearch | Typesense, Algolia ($$$), Elasticsearch (complex) | Start with Postgres ILIKE + tsvector. Meilisearch is free, fast, and has typo tolerance. Algolia is best-in-class but expensive |
| CDN | Cloudflare | Vercel Edge, Fastly, AWS CloudFront | Cloudflare free tier includes CDN, DDoS protection, and DNS. Vercel handles this automatically if hosting there |
| Hosting | Vercel (frontend) + Railway/Fly.io (backend) | Render, Hetzner VPS, AWS ECS | Vercel is free for the frontend. Railway/Fly.io are simple for containers at ~$5-20/mo. Hetzner is cheapest for a raw VPS |
| Monitoring | Sentry (errors) + Betterstack (uptime) | LogRocket, Datadog ($$$), self-hosted Grafana | Sentry free tier covers error tracking. Betterstack free tier covers uptime. Datadog is powerful but expensive |
| Analytics | PostHog | Plausible, Umami, Mixpanel | PostHog is free self-hosted, has event analytics + session replay. Plausible/Umami are privacy-focused and simpler |
| Event Store | PostgreSQL (MVP) → ClickHouse | TimescaleDB, QuestDB | PostgreSQL with partitioning works to ~50M events. ClickHouse is the industry standard for analytical workloads. See Scaling |

Services

The platform is split into independently deployable services. Each service runs in its own container and can be scaled separately.

MVP Services

| Service | Role | Tech |
|---|---|---|
| Backend User | Public API serving the player-facing site | Next.js API routes or standalone Express/Fastify |
| Frontend User | Player-facing website | Next.js (SSR + static) |
| Backend Admin | Internal API for the admin dashboard | Next.js API routes or standalone |
| Frontend Admin | Admin dashboard UI | Next.js or Vite + React |

Future Services

| Service | Role |
|---|---|
| Backend Dev Portal | API for developer partners |
| Frontend Dev Portal | Dashboard for developer partners to publish and manage games |

The dev portal will also use iframes for game embedding, keeping the architecture consistent across all surfaces. This ensures that if broker-sourced games are ever removed, the transition is seamless.


Repository Strategy

Phase 1: Monorepo

All services live in a single GitHub repository, organized by directory:

playupi/
├── apps/
│ ├── web/ # Player-facing frontend (Next.js)
│ │ ├── app/ # Next.js App Router pages
│ │ │ ├── (home)/ # Home page
│ │ │ ├── games/[slug]/ # Game detail page
│ │ │ ├── category/[slug]/ # Category page
│ │ │ ├── author/[id]/ # Author page
│ │ │ └── api/ # API routes (if colocated)
│ │ ├── components/
│ │ │ ├── game/ # GameCard, GameGrid, GamePlayer
│ │ │ ├── layout/ # Header, Footer, Sidebar
│ │ │ ├── search/ # SearchBar, SearchResults
│ │ │ └── ui/ # Base components (shadcn)
│ │ └── lib/
│ │ ├── tracker.ts # Event tracking SDK
│ │ ├── api-client.ts # API client
│ │ └── utils.ts
│ │
│ ├── admin/ # Admin dashboard (Next.js or Vite)
│ │ ├── app/
│ │ │ ├── catalog/ # Main catalog view
│ │ │ ├── games/[id]/ # Game admin page
│ │ │ └── import/ # Bulk import
│ │ ├── components/
│ │ │ ├── filters/ # Filter panel components
│ │ │ ├── metrics/ # Charts, metric cards
│ │ │ └── ui/
│ │ └── lib/
│ │
│ └── api/ # Backend API (if separate from Next.js)
│ ├── routes/
│ │ ├── public/ # /api/v1/* routes
│ │ └── admin/ # /api/admin/* routes
│ ├── services/
│ │ ├── games.ts # Game CRUD, search, ranking
│ │ ├── events.ts # Event ingestion, validation
│ │ ├── metrics.ts # Metric computation, aggregation
│ │ ├── import.ts # Broker import, mapping
│ │ └── exploration.ts # Exploration queue, rotation
│ └── jobs/
│ ├── rank.ts # Ranking recalculation
│ ├── aggregate.ts # Hourly metric aggregation
│ └── tags.ts # Tag recomputation (New, Trendy)
├── packages/
│ └── shared/ # Shared across apps
│ ├── types/ # TypeScript types & interfaces
│ ├── constants/ # Event types, tag definitions
│ └── utils/ # Shared utilities
├── prisma/
│ └── schema.prisma # Database schema
├── scripts/
│ ├── import-broker.ts # Broker bulk import script
│ └── seed.ts # Development seed data
└── docs/ # This documentation
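A monorepo like this is typically wired together with a workspace tool so `apps/*` can depend on `packages/shared` without publishing it. A minimal sketch, assuming pnpm workspaces (npm or yarn workspaces are equivalent):

```yaml
# pnpm-workspace.yaml — declares which directories are workspace packages
packages:
  - "apps/*"      # web, admin, api
  - "packages/*"  # shared
```

With this in place, an app can reference the shared package by name in its `package.json` (e.g. `"shared": "workspace:*"` — the package name is illustrative), and changes to shared types are picked up locally with no publish step.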

Benefits:

  • Single CI/CD pipeline
  • Shared libraries without publishing overhead
  • Easier local development (all services available)

Phase 2: Multi-repo (if needed)

Split into separate repos only when required by:

  • Multiple independent teams
  • Security isolation (e.g., admin vs public)
  • Significantly different release cycles

The monorepo structure should be designed so this split is straightforward.


Backend Organization

The backend is organized by domain with clear logical boundaries, even when deployed together.

| Domain | Scope |
|---|---|
| Public API | Public routes: game lists, game pages, search, events ingestion |
| Admin API | Protected routes: game CRUD, bulk import, metrics, visibility management |

Both domains share the same database but have separate access controls.

MVP: Single Deploy

For the MVP, both APIs run as Next.js API route handlers in a single Vercel deployment (see Scaling → Phase 1). The route structure provides logical separation:

app/api/v1/* → Public API routes
app/api/admin/* → Admin API routes (JWT-protected)
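The route-prefix convention above means access control can hang off a single guard. A minimal sketch of the idea — the HMAC check stands in for real JWT verification, which in practice would use a library such as `jose`; all names here are illustrative:

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Classify a request path by the route-prefix convention above.
function routeScope(path: string): "public" | "admin" | "unknown" {
  if (path.startsWith("/api/admin/")) return "admin";
  if (path.startsWith("/api/v1/")) return "public";
  return "unknown";
}

// Stand-in for JWT verification: checks an HMAC-SHA256 signature
// over the payload, using a constant-time comparison.
function verifyToken(token: string, secret: string): boolean {
  const [payload, sig] = token.split(".");
  if (!payload || !sig) return false;
  const expected = createHmac("sha256", secret).update(payload).digest("hex");
  return (
    sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected))
  );
}

// Admin routes require a valid token; public routes never do.
function isAuthorized(path: string, token: string | null, secret: string): boolean {
  if (routeScope(path) !== "admin") return true;
  return token !== null && verifyToken(token, secret);
}
```

Because the split is purely prefix-based, the same guard works whether the routes live in one Next.js deployment or in separate services later.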

Growth+: Separate Services

When traffic or team size warrants it, the APIs split into independently deployable services behind a load balancer. The route-based separation makes this split straightforward — see Scaling → Phase 2.


Data Flow: Game List Request

1. Player visits the Home page.
2. Frontend requests GET /api/v1/games?platform=desktop&page=1.
3. Check the Redis cache; on a hit, return the cached response.
4. On a miss, query ranked games (90%), ordered by rank.
5. Query exploration games (10%) from the rotation queue.
6. Merge the lists, interleaving exploration games at fixed intervals.
7. Generate game_impression events (server-side, batched).
8. Cache the response in Redis (TTL: 5 min).
9. Return the JSON response.
10. Frontend renders GameGrid with GameCard components.
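The 90/10 merge above can be sketched as a pure function. Mapping the 10% ratio to "every 10th slot is an exploration game" is an assumption — the source only specifies the ratio, not the interval:

```typescript
// Merge ranked and exploration games so that every `interval`-th
// slot holds an exploration game (interval = 10 → 10% exploration).
// Leftovers from either list are appended once the other runs out.
function interleave<T>(ranked: T[], exploration: T[], interval = 10): T[] {
  const out: T[] = [];
  let r = 0;
  let e = 0;
  while (r < ranked.length || e < exploration.length) {
    const isExplorationSlot = (out.length + 1) % interval === 0;
    if (isExplorationSlot && e < exploration.length) {
      out.push(exploration[e++]);
    } else if (r < ranked.length) {
      out.push(ranked[r++]);
    } else {
      out.push(exploration[e++]);
    }
  }
  return out;
}
```

Keeping the merge deterministic (rather than randomizing slot positions per request) makes the Redis-cached response stable for the full TTL.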

Data Flow: Game Page Load

1. Player clicks a game card.
2. Frontend sends a game_click event (batched).
3. Navigate to /games/:slug.
4. GET /api/v1/games/:slug.
5. Server fetches game data + recommendations + author games.
6. Server generates a game_page_view event.
7. Frontend renders:
   ├── GamePlayer (iframe with the game)
   ├── Player footer (title, author, like/dislike, fullscreen)
   ├── Recommendations (side + bottom)
   └── Game info section (description, instructions, author games)
8. Tracking SDK starts monitoring:
   ├── game_loading_start (iframe begins loading)
   ├── game_loading_end (iframe loaded)
   ├── game_focused_start / stop (Visibility API)
   ├── gameplay_start / stop (from the broker SDK)
   └── show_ad (from the broker SDK)
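The focus events above feed the playtime metrics, and reducing them to a duration is a small fold over the start/stop stream. A sketch, with illustrative field names:

```typescript
type FocusEvt = { type: "game_focused_start" | "game_focused_stop"; ts: number };

// Sum focused playtime in ms from an ordered start/stop stream.
// An unmatched trailing start is closed at `now` (e.g. on page unload).
function focusedMs(events: FocusEvt[], now: number): number {
  let total = 0;
  let openedAt: number | null = null;
  for (const ev of events) {
    if (ev.type === "game_focused_start" && openedAt === null) {
      openedAt = ev.ts;
    } else if (ev.type === "game_focused_stop" && openedAt !== null) {
      total += ev.ts - openedAt;
      openedAt = null; // ignore duplicate stops
    }
  }
  if (openedAt !== null) total += now - openedAt;
  return total;
}
```

Ignoring duplicate starts/stops makes the fold robust to the double-firing that the browser Visibility API can produce around tab switches.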

Data Flow: Event Ingestion

1. User interacts with the game.
2. Tracker SDK captures the event with a timestamp.
3. Event is added to an in-memory buffer.
4. Buffer flushes (every 5 s, on unload, or when full).
5. POST /api/v1/events { events: [...], sessionId: "..." }.
6. API validates each event (type, fields, timestamp, dedup).
7. Valid events are inserted into the events table (partitioned by month).
8. The hourly aggregation job reads new events.
9. It updates the daily_metrics table (plays, DAU, impressions, playtime, etc.).
10. The ranking job reads daily_metrics.
11. It recomputes ranks per surface × platform.
12. It invalidates the Redis cache for affected game lists.

See Events Pipeline for full details.


Deployment

  • Each service is containerized (Docker)
  • Services can be deployed and scaled independently
  • Environment separation: development, staging, production
  • See Scaling for infrastructure evolution per growth phase
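A minimal container sketch for one of the services — the Node version, package names, and paths are assumptions, and Next.js standalone output plus a multi-stage build would slim the image considerably:

```dockerfile
# Sketch of a service image; adjust the filter for apps/admin or apps/api.
FROM node:20-alpine
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile
COPY . .
RUN pnpm --filter web build
EXPOSE 3000
CMD ["pnpm", "--filter", "web", "start"]
```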