
Brainstorm Session 2: Platform Architecture

Date: 2026-04-15
Objective: Design the architectural structure of Finnest on Elixir/Phoenix
Depends on: Session 1 (Tech Stack → Elixir/Phoenix)

Techniques Used

  1. Mind Mapping — Hierarchical exploration of architectural layers
  2. Starbursting — Who/what/where/when/why/how for every decision
  3. SWOT Analysis — Assessment of the recommended architecture

Architecture Pattern: "Supervised Modular Monolith"

A single Elixir release containing multiple OTP applications, each with independent failure domains, supervised lifecycles, and MCP extraction points. Not a traditional monolith, not microservices: something the BEAM uniquely enables.


OTP Application Structure

finnest/
├── apps/
│   │
│   │   # TIER 1: CORE (Day-1)
│   ├── finnest_core        # Foundation: org, user, auth, feature flags, audit, events
│   ├── finnest_people      # HR Core: employee lifecycle, leave, contracts, performance
│   ├── finnest_recruit     # Recruitment: job orders, scoring, pipeline, outreach (Scout)
│   ├── finnest_onboard     # Onboarding: screening, document verification (Verify)
│   ├── finnest_roster      # Rostering: shifts, scheduling, availability, demand forecasting
│   ├── finnest_timekeep    # Time & Attendance: timecards, clock-in, approvals, offline
│   ├── finnest_reach       # Communication: SMS, voice, chat, email AI agents
│   ├── finnest_pulse       # Operations Automation: AI roster/compliance/timesheet/onboarding
│   │
│   │   # TIER 2: ESSENTIAL (High value, build after core)
│   ├── finnest_payroll     # Payroll & Invoicing: pay calc, awards, STP, invoices
│   ├── finnest_clients     # Client Management / CRM: contacts, job costing, rates, sales
│   ├── finnest_safety      # Safety Management: WHS, incidents, SWMS, inspections
│   ├── finnest_assets      # Asset Management: equipment, fleet, maintenance, warranties
│   │
│   │   # TIER 3: GROWTH (Build as platform matures)
│   ├── finnest_quotes      # Quote & Project Management: leads, quotes, jobs, tasks
│   ├── finnest_learn       # Online Learning Management: courses, tutors, certifications
│   ├── finnest_benefits    # eBenefits: recognition, programs, perks
│   │
│   │   # TIER 4: INDUSTRY-SPECIFIC (Multi-industry expansion)
│   ├── finnest_compliance  # Compliance Engine: credentials, awards, industry profiles
│   ├── finnest_fatigue     # Fatigue Management: NHVR, FIFO/DIDO, EWD, CoR
│   ├── finnest_clearance   # Security Clearance: DISP, AGSVA, NV1/NV2/PV
│   │
│   │   # TIER 5: PEOPLE EXCELLENCE (NEW from competitor audit)
│   ├── finnest_performance # Performance: goals, OKRs, reviews, 360, pulse surveys, eNPS
│   │
│   │   # INFRASTRUCTURE
│   ├── finnest_agents      # Agent Infrastructure: orchestrator, tools, MCP, Claude
│   └── finnest_web         # Phoenix: LiveView, REST API, WebSocket, auth

19 domain modules + 2 infrastructure modules = 21 OTP applications

See 07-complete-module-inventory.md for full feature breakdown of every module (~365 features).

Dependency Rules (Enforced)

  • Any domain app → finnest_core (allowed)
  • finnest_web → any domain app (allowed, for routing/rendering)
  • finnest_agents → any domain app via MCP (allowed)
  • Domain app → domain app (FORBIDDEN — communicate via events only)
  • finnest_core → nothing (pure foundation)
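
In an umbrella project these rules can be made largely mechanical: a domain app's mix.exs simply never lists another domain app as a dependency. A hedged sketch (module and option names assumed; a library such as `boundary` could add compile-time enforcement on top):

```elixir
# apps/finnest_people/mix.exs -- hypothetical sketch of the enforced rule.
# A domain app may depend on finnest_core only; it must never list another
# domain app (e.g. finnest_payroll) in :deps.
defmodule Finnest.People.MixProject do
  use Mix.Project

  def project do
    [
      app: :finnest_people,
      version: "0.1.0",
      build_path: "../../_build",
      deps_path: "../../deps",
      lockfile: "../../mix.lock",
      deps: deps()
    ]
  end

  defp deps do
    [
      {:finnest_core, in_umbrella: true}
      # No other finnest_* domain apps allowed here. Cross-domain data
      # flows through events published via finnest_core instead.
    ]
  end
end
```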

Data Layer

PostgreSQL Schema Isolation

-- Each domain owns its schema (18 domain schemas + events + agents + public)

-- Tier 1: Core
CREATE SCHEMA people;      -- HR core, employee lifecycle, leave, contracts
CREATE SCHEMA recruit;     -- Recruitment, scoring, pipeline, outreach
CREATE SCHEMA onboard;     -- Onboarding, document verification, compliance
CREATE SCHEMA roster;      -- Rostering, scheduling, availability
CREATE SCHEMA timekeep;    -- Time & attendance, timecards, clock
CREATE SCHEMA reach;       -- Communication agents (SMS, voice, chat, email)
CREATE SCHEMA pulse;       -- Operations automation

-- Tier 2: Essential
CREATE SCHEMA payroll;     -- Payroll processing, invoicing, STP
CREATE SCHEMA clients;     -- Client management, CRM, rates, job costing
CREATE SCHEMA safety;      -- WHS, incidents, SWMS, inspections
CREATE SCHEMA assets;      -- Asset management, fleet, maintenance

-- Tier 3: Growth
CREATE SCHEMA quotes;      -- Quote & project management, leads, tasks
CREATE SCHEMA learn;       -- LMS, courses, tutors, certifications
CREATE SCHEMA benefits;    -- eBenefits, recognition, perks

-- Tier 4: Industry-specific
CREATE SCHEMA compliance;  -- Credential registry, award rules, industry profiles
CREATE SCHEMA fatigue;     -- NHVR fatigue management, FIFO/DIDO, EWD
CREATE SCHEMA clearance;   -- Security clearance, DISP, AGSVA

-- Tier 5: People Excellence
CREATE SCHEMA performance; -- Goals, reviews, 360 feedback, pulse surveys, eNPS

-- Infrastructure
CREATE SCHEMA events;      -- Event store (append-only, immutable)
CREATE SCHEMA agents;      -- Agent sessions, messages, memories, AI budget limits
-- public schema: organisations, users, feature_flags, audit_log

State Tiers

| Tier | Storage          | Use Case                                        | Durability       |
|------|------------------|-------------------------------------------------|------------------|
| Hot  | GenServer memory | Agent conversations, active sessions, real-time | Process lifetime |
| Warm | ETS tables       | Feature flags, tenant config, cached queries    | Node lifetime    |
| Cold | PostgreSQL       | Everything persistent                           | Permanent        |
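
The Warm tier can be sketched as an ETS table owned by a GenServer: reads hit ETS directly (no process call on the hot path), writes are serialized through the owner. Table and module names are assumptions, not the actual implementation:

```elixir
defmodule Finnest.Core.FeatureFlagCache do
  use GenServer

  @table :feature_flags

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Fast path: concurrent ETS lookup, defaulting to false when cold.
  def enabled?(org_id, flag) do
    case :ets.lookup(@table, {org_id, flag}) do
      [{_key, value}] -> value
      [] -> false
    end
  end

  # Writes go through the owning process.
  def put(org_id, flag, value), do: GenServer.call(__MODULE__, {:put, org_id, flag, value})

  @impl true
  def init(_opts) do
    # read_concurrency: optimized for many readers, few writers
    :ets.new(@table, [:named_table, :set, :public, read_concurrency: true])
    {:ok, %{}}
  end

  @impl true
  def handle_call({:put, org_id, flag, value}, _from, state) do
    :ets.insert(@table, {{org_id, flag}, value})
    {:reply, :ok, state}
  end
end
```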

Event Store

PostgreSQL-backed append-only table:

CREATE TABLE events.domain_events (
  id UUID PRIMARY KEY,
  domain VARCHAR NOT NULL,        -- 'scout', 'verify', etc.
  event_type VARCHAR NOT NULL,    -- 'job_order_created', etc.
  aggregate_id UUID,              -- entity the event belongs to
  org_id UUID NOT NULL,           -- tenant isolation
  payload JSONB NOT NULL,         -- event data
  metadata JSONB,                 -- correlation_id, causation_id, actor
  inserted_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- No UPDATE or DELETE allowed (DB-level trigger)
-- Partitioned by month for query performance
-- Index on (org_id, domain, event_type, inserted_at)
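
On the Elixir side, appending an event might look like the following sketch. `Finnest.Repo`, the argument shapes, and the assumption that IDs arrive as 16-byte binaries (e.g. via `Ecto.UUID.dump/1`) are all illustrative; only inserts are ever issued, consistent with the append-only trigger:

```elixir
defmodule Finnest.Core.EventStore do
  @moduledoc "Append-only writer for events.domain_events (sketch)."

  alias Finnest.Repo  # assumed Ecto repo

  def append(domain, event_type, aggregate_id, org_id, payload, metadata \\ %{}) do
    Repo.insert_all(
      "domain_events",
      [
        %{
          id: Ecto.UUID.bingenerate(),
          domain: to_string(domain),
          event_type: to_string(event_type),
          aggregate_id: aggregate_id,  # assumed 16-byte binary UUID
          org_id: org_id,              # assumed 16-byte binary UUID
          payload: payload,
          metadata: metadata,
          inserted_at: DateTime.utc_now()
        }
      ],
      prefix: "events"  # the Postgres schema, not the public default
    )
  end
end
```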

No External Dependencies

| Need      | Traditional          | Finnest                      |
|-----------|----------------------|------------------------------|
| Job queue | Redis + Sidekiq/Bull | Oban (PostgreSQL)            |
| Event bus | Kafka/RabbitMQ       | Phoenix PubSub + PostgreSQL  |
| Cache     | Redis/Memcached      | ETS (in-memory, per-node)    |
| Search    | Elasticsearch        | PostgreSQL full-text search  |
| Pub/Sub   | Redis Pub/Sub        | Phoenix PubSub (distributed) |

One database. One runtime. One deployment.
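
The Oban row can be illustrated with a minimal worker. The queue name mirrors the supervision tree later in this document; the module and job logic are assumptions:

```elixir
defmodule Finnest.Verify.Workers.ClassifyDocument do
  # Jobs are persisted in a PostgreSQL table; no Redis involved.
  use Oban.Worker, queue: :verify_queue, max_attempts: 5

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"document_id" => document_id}}) do
    # Classification logic would live here. Returning :ok completes the job;
    # a raised error or {:error, reason} triggers a retry with backoff.
    IO.puts("classifying document #{document_id}")
    :ok
  end
end

# Enqueue from anywhere in the same release:
#   %{document_id: "doc-123"}
#   |> Finnest.Verify.Workers.ClassifyDocument.new()
#   |> Oban.insert()
```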


Interface Layer

Web UI — Phoenix LiveView

  • Server-rendered, reactive (no React/Vue/Svelte)
  • DaisyUI 5 + Tailwind v4 (shared design system with marketing site)
  • Per-domain LiveView modules (FinnestWeb.ScoutLive.*, FinnestWeb.VerifyLive.*)
  • Agent chat interface via LiveView + PubSub streaming
  • Real-time dashboards (PubSub broadcasts on domain events)

Mobile API — Phoenix JSON REST

  • RESTful endpoints (/api/v1/scout/*, /api/v1/pulse/*)
  • Token-based auth (Phoenix Token or JWT)
  • WebSocket channel for real-time (notifications, location streaming)
  • Consumed by KMP or React Native mobile app (separate repo)
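
If Phoenix.Token is the choice, issuing and verifying mobile tokens is a few lines. The salt, max age, and module name below are assumptions:

```elixir
defmodule FinnestWeb.Auth.MobileToken do
  @salt "mobile api v1"
  @max_age 60 * 60 * 24 * 14  # 14 days, illustrative

  # Issued at login and returned to the mobile client.
  def sign(user_id) do
    Phoenix.Token.sign(FinnestWeb.Endpoint, @salt, user_id)
  end

  # Verified in a plug on every /api/v1/* request.
  # Returns {:ok, user_id} or {:error, :expired | :invalid}.
  def verify(token) do
    Phoenix.Token.verify(FinnestWeb.Endpoint, @salt, token, max_age: @max_age)
  end
end
```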

MCP Servers — Per Domain

Each domain exposes an MCP server with typed tools:

  • finnest_scout_mcp → list_job_orders, get_candidate, score_candidates, etc.
  • finnest_verify_mcp → verify_document, get_checklist, check_status, etc.
  • finnest_reach_mcp → send_message, get_conversation, check_delivery, etc.
  • finnest_pulse_mcp → get_roster, create_shift, process_timesheet, etc.
  • finnest_people_mcp → get_employee, submit_leave, get_balance, etc.

Internal AI agents and external systems consume the same MCP interfaces.
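
The document does not fix an Elixir shape for these servers. Purely as an illustration, a typed tool could be a behaviour callback; `Finnest.MCP.Tool` and its callbacks are entirely hypothetical, not an existing library API:

```elixir
defmodule Finnest.People.MCP.GetEmployee do
  @behaviour Finnest.MCP.Tool  # hypothetical behaviour

  @impl true
  def name, do: "get_employee"

  # JSON Schema describing the tool's input, per the MCP tool contract.
  @impl true
  def input_schema do
    %{
      "type" => "object",
      "properties" => %{"employee_id" => %{"type" => "string"}},
      "required" => ["employee_id"]
    }
  end

  @impl true
  def call(%{"employee_id" => id}, %{org_id: org_id}) do
    # Same code path the LiveView UI uses, so internal agents and
    # external systems get an identical contract.
    Finnest.People.get_employee(org_id, id)
  end
end
```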

IRAP Boundary (IRAP deployment only)

Go proxy between load balancer and Phoenix app:

  • TLS termination with FIPS-compliant cipher suites
  • Request/response audit logging (tamper-evident)
  • Access control enforcement (role + classification)
  • Classification header injection
  • Rate limiting and threat detection


Cross-Domain Communication

Pattern: Events Only

Scout creates JobOrder
  → publishes %Event{type: :job_order_created}
  → Pulse subscribes → creates roster template
  → Verify subscribes → checks candidate credentials
  → Reach subscribes → sends confirmation messages
  → Billing subscribes → creates billing record
  → Each domain acts independently

Rules

  1. No direct function calls between domain apps
  2. No cross-schema database queries
  3. All cross-domain data flows through typed events
  4. Events are persisted (audit trail + replay)
  5. Event handlers are idempotent (safe to replay)
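
The subscriber side of the flow above can be sketched with Phoenix.PubSub. Topic names and the event shape are assumptions; the handler stays idempotent per rule 5:

```elixir
defmodule Finnest.Pulse.EventHandler do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    # Subscribe to Scout's events; never a direct call into finnest_recruit.
    Phoenix.PubSub.subscribe(Finnest.PubSub, "events:job_order")
    {:ok, %{}}
  end

  @impl true
  def handle_info(%{type: :job_order_created} = event, state) do
    # Idempotent: keyed on aggregate_id, so a replayed event is a no-op
    # (e.g. an upsert with ON CONFLICT DO NOTHING underneath).
    Finnest.Pulse.create_roster_template(event.org_id, event.aggregate_id)
    {:noreply, state}
  end

  def handle_info(_other, state), do: {:noreply, state}
end

# Publisher side, inside finnest_recruit:
#   Phoenix.PubSub.broadcast(Finnest.PubSub, "events:job_order",
#     %{type: :job_order_created, aggregate_id: job_order.id, org_id: org_id})
```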

Supervision Strategy

Application Supervisor (one_for_one)
├── Finnest.Core.Supervisor
│   ├── EventStore (GenServer — event persistence)
│   ├── FeatureFlagCache (ETS-backed)
│   └── AuditLogger (GenServer — async audit writes)
├── Finnest.Scout.Supervisor
│   ├── ScoringEngine (GenServer pool)
│   ├── PoolRefreshScheduler (periodic)
│   └── OutreachService (GenServer)
├── Finnest.Verify.Supervisor
│   ├── PipelineOrchestrator (GenServer)
│   ├── BudgetGuard (GenServer — circuit breaker)
│   └── ProviderHealthCheck (periodic)
├── Finnest.Agents.Supervisor
│   ├── Orchestrator (GenServer — intent routing)
│   ├── AgentSupervisor (DynamicSupervisor — agent pool)
│   ├── ClaudeClient (GenServer — API connection pool)
│   └── ToolRegistry (GenServer — MCP server catalog)
├── [Other domain supervisors...]
├── Oban (job processing)
│   ├── scout_queue (pool refresh, scoring, outreach)
│   ├── verify_queue (classification, extraction, validation)
│   ├── reach_queue (SMS, voice, email delivery)
│   ├── pulse_queue (roster generation, timesheet processing)
│   └── default_queue (general tasks)
└── FinnestWeb.Endpoint (Phoenix HTTP/WebSocket)

Restart Strategies

  • one_for_one (default): Restart only the failed child. Used for independent services.
  • rest_for_one: Restart failed child + all children started after it. Used for pipelines.
  • one_for_all: Restart everything. Used only for tightly coupled subsystems (rare).
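
One branch of the tree above, sketched with the default one_for_one strategy (child modules assumed):

```elixir
defmodule Finnest.Scout.Supervisor do
  use Supervisor

  def start_link(opts), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    children = [
      Finnest.Scout.ScoringEngine,
      Finnest.Scout.PoolRefreshScheduler,
      Finnest.Scout.OutreachService
    ]

    # one_for_one: a crashed OutreachService restarts alone; scoring and
    # scheduling keep running through the failure.
    Supervisor.init(children, strategy: :one_for_one)
  end
end
```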

Scaling Path

Phase 1: Single Node (0-10K employees)

  • One BEAM node, one PostgreSQL instance
  • Adequate for initial launch and early growth

Phase 2: Clustered (10K-100K employees)

  • 2-3 BEAM nodes behind load balancer
  • PostgreSQL with read replicas
  • Distributed PubSub across nodes
  • Oban distributes jobs across nodes

Phase 3: Extracted Domains (100K+ employees)

  • Extract hot domains to separate deployments
  • MCP contracts maintain compatibility
  • Dedicated PostgreSQL per extracted domain
  • Event bus migrates to NATS or Kafka

Phase 4: Geographic Distribution

  • BEAM nodes in multiple regions
  • PostgreSQL with regional replicas
  • Tenant routing to nearest region
  • IRAP deployment in dedicated Australian region

IRAP Deployment Variant

Same codebase, different configuration:

# config/runtime.exs
if config_env() == :irap do
  config :finnest, :ai_provider, :bedrock_sydney    # Not direct Anthropic API
  config :finnest, :audit_level, :enhanced           # Log everything
  config :finnest, :session_timeout, 900             # 15 min (stricter)
  config :finnest, :mfa_required, true               # Always
  config :finnest, :data_region, "ap-southeast-2"    # Sydney only
  config :finnest, :disabled_integrations, [          # No offshore data flow
    :seek_api, :indeed_api, :calendly                 
  ]
end

Deployed to: Separate AWS VPC in Sydney, Go proxy in front, separate RDS instance, separate S3 bucket with Object Lock.


Key Insights

Insight 1: OTP Applications ARE Microservices Without the Tax

Independent failure domains, independent state, supervised lifecycles, independent scaling via process allocation. Roughly 90% of the microservice benefits at 10% of the operational cost; the remaining 10% is available via MCP extraction when needed.

Impact: High | Effort: Low

Insight 2: PostgreSQL Does Everything at This Scale

At 5K-50K employees: relational data, JSONB, event store, pub/sub, full-text search, job queue. One operational dependency instead of 5.
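
As one illustration of the search claim, PostgreSQL full-text search is reachable from Ecto via a fragment. The recruit.candidates table and its columns are invented for this example:

```elixir
defmodule Finnest.Recruit.Search do
  import Ecto.Query

  # Assumed table/columns: recruit.candidates(id, name, resume_text, org_id).
  def search_candidates(org_id, term) do
    from(c in "candidates",
      prefix: "recruit",
      where: c.org_id == type(^org_id, :binary_id),
      where:
        fragment(
          "to_tsvector('english', ?) @@ plainto_tsquery('english', ?)",
          c.resume_text,
          ^term
        ),
      select: %{id: c.id, name: c.name}
    )
    |> Finnest.Repo.all()
  end
end
```

An expression index on `to_tsvector('english', resume_text)` would keep this fast without any external search engine.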

Impact: High | Effort: Low

Insight 3: Events + Audit = One Pattern

IRAP requires immutable audit logs. Event-driven architecture produces these naturally. Every cross-domain action is an event. The event store IS the audit log. One implementation, two requirements satisfied.

Impact: High | Effort: Medium

Insight 4: MCP Boundaries Are the Escape Hatch

Language-agnostic contracts at domain boundaries. If a domain needs extraction, MCP contract stays identical. Monolith becomes "default until reason to extract," not "permanent."

Impact: High | Effort: Medium

Insight 5: "Supervised Modular Monolith" is a New Pattern

Not traditional monolith, not microservices. BEAM uniquely enables: single deployment + independent failure domains + supervised lifecycle + hot code reload + standardized extraction. Deserves documentation because it's unfamiliar.

Impact: Medium | Effort: Low


Statistics

  • Total ideas: 30+
  • Categories: 4 (Runtime, Data, Interface, Operations)
  • Key insights: 5
  • Techniques applied: 3

Next Steps

  → Session 3: AI Agent Design (how agents work within this architecture)
  → Define MCP server contracts for 3 core domains as proof of concept
  → Create supervision tree diagram for the PoC codebase


Generated by BMAD Method v6 - Creative Intelligence