How this dashboard works
A personal fitness analytics platform built as both a functional training tool and a portfolio project — demonstrating real-world API integration, automated data pipelines, and cloud architecture across its build phases.
The current production system uses a static JSON pipeline. A Python script runs daily via GitHub Actions, fetches data from the Intervals.icu API, and commits processed JSON files directly to the repository. GitHub Pages serves the static site, which reads those JSON files client-side. This approach avoids CORS issues, requires zero backend infrastructure, and keeps running costs at $0.
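The daily sync described above can be sketched roughly as follows. This is an illustrative outline, not the project's actual script: the file paths, environment-variable name, and function names are assumptions, and the Basic-auth convention (literal username `API_KEY`) should be verified against the current Intervals.icu API docs.

```python
"""Sketch of the V1 daily sync: fetch Intervals.icu endpoints, write JSON.

The GitHub Actions workflow (not shown) runs this on a schedule and
commits the output files, which the static site then reads client-side.
"""
import base64
import json
import os
import urllib.request

API_BASE = "https://intervals.icu/api/v1"
ATHLETE_ID = "5718022"  # athlete ID used by the dashboard

# Output file -> API path, mirroring the endpoint table in this document.
ENDPOINTS = {
    "activities.json": f"/athlete/{ATHLETE_ID}/activities",
    "wellness.json": f"/athlete/{ATHLETE_ID}/wellness",
    "athlete.json": f"/athlete/{ATHLETE_ID}",
    "power_curves.json": f"/athlete/{ATHLETE_ID}/power-curves",
    "pace_curves.json": f"/athlete/{ATHLETE_ID}/pace-curves",
}

def auth_header(api_key: str) -> str:
    """Build an HTTP Basic auth header (assumed username: 'API_KEY')."""
    token = base64.b64encode(f"API_KEY:{api_key}".encode()).decode()
    return f"Basic {token}"

def sync(out_dir: str = "data") -> None:
    """Fetch every endpoint and write its JSON file to the repo tree."""
    api_key = os.environ["INTERVALS_API_KEY"]  # stored as a repo secret
    for filename, path in ENDPOINTS.items():
        req = urllib.request.Request(API_BASE + path)
        req.add_header("Authorization", auth_header(api_key))
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        with open(os.path.join(out_dir, filename), "w") as f:
            json.dump(data, f, indent=2)
```

Because the script commits plain JSON to the repository, the frontend needs nothing more than a `fetch()` of a same-origin static file — which is exactly why no CORS configuration or backend is required.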
V1 Data Flow — GitHub Pages Architecture
All fitness metrics are sourced from the Intervals.icu API (athlete ID: 5718022). Intervals.icu serves as the primary analytics engine — it ingests activities from Garmin and Zwift, computes training load metrics, and exposes them via a well-structured REST API. Data arriving via the Strava integration is treated as secondary; Intervals.icu provides the authoritative, fully populated activity fields.
Data Source Hierarchy
| Method | Endpoint | Data Retrieved | JSON File |
|---|---|---|---|
| GET | /api/v1/athlete/{id}/activities | All activities, sport type, TSS, NP, IF, HR, power | activities.json |
| GET | /api/v1/athlete/{id}/wellness | CTL, ATL, TSB, HRV, sleep, weight — pre-computed | wellness.json |
| GET | /api/v1/athlete/{id} | Current FTP, W'bal, weight, critical power | athlete.json |
| GET | /api/v1/athlete/{id}/power-curves | Best power for each duration (1s → 3600s) | power_curves.json |
| GET | /api/v1/athlete/{id}/pace-curves | Best pace for each distance (running) | pace_curves.json |
The project is structured in phases, moving from a simple static pipeline toward a full serverless AWS architecture with an AI coaching layer. Each phase is independently valuable and the existing production system remains live throughout.
Static Dashboard — Complete
GitHub Actions → Python → JSON → GitHub Pages. Daily automated sync, Chart.js visualisations, full Intervals.icu integration.

AWS Foundation — In Progress
CDK infrastructure, IAM setup, CLI configuration. Migrate data pipeline from GitHub Actions to Lambda + EventBridge + DynamoDB.

AI Coaching Layer — In Progress
Claude-powered training coach. Post-race debriefs, PR detection, adaptive tone, context-aware nutrition guidance driven by live Intervals.icu data.

API Layer — Planned
REST endpoints via API Gateway. Lambda query functions serving DynamoDB. Replace static JSON reads with live API calls.

S3 + CloudFront — Upcoming
Migrate frontend from GitHub Pages to S3 static hosting behind CloudFront CDN. Custom domain, SSL, global edge delivery.

Multi-user Platform — Vision
Athlete auth, per-user data isolation, shareable profiles. Demonstrates production-grade SaaS architecture at portfolio scale.

The V1 static pipeline proves the concept — V2 is where the cloud engineering skills are demonstrated. The entire data pipeline is migrated from GitHub Actions into a fully serverless AWS stack, designed from the ground up as Infrastructure as Code using AWS CDK (Python). This phase exists primarily as a portfolio exercise in production cloud architecture: applying IAM least-privilege security, secrets management, serverless compute, NoSQL data modelling, REST API design, CDN delivery, and operational observability — all within a real system that genuinely runs and collects data daily. Deployed to eu-west-2 (London), the architecture separates concerns into two distinct pipelines — a scheduled collection pipeline triggered by EventBridge, and a synchronous query pipeline fronted by API Gateway — targeting sub-500ms API responses and a post-free-tier running cost of under £3/month.
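The scheduled collection pipeline (EventBridge → Lambda → DynamoDB) might look roughly like this. The table name, single-table key schema, and DynamoDB field names are illustrative assumptions, not the project's actual data model; `fetch_recent_activities` is a hypothetical helper standing in for the Intervals.icu fetch logic.

```python
# Sketch of the scheduled collection Lambda in the V2 architecture.
import os

def to_item(activity: dict) -> dict:
    """Map an Intervals.icu activity to a single-table DynamoDB item.

    The partition key groups all activities per athlete; the sort key
    orders them chronologically by start date.
    """
    return {
        "pk": f"ATHLETE#{activity['athlete_id']}",
        "sk": f"ACTIVITY#{activity['start_date']}",
        "sport": activity.get("type"),
        "tss": activity.get("icu_training_load"),
        "np": activity.get("icu_weighted_avg_watts"),
    }

def handler(event, context):
    """EventBridge-scheduled entry point (AWS calls shown in outline only)."""
    import boto3  # available in the Lambda runtime
    table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])
    activities = fetch_recent_activities()  # hypothetical helper, not shown
    with table.batch_writer() as batch:    # batches writes, retries unprocessed items
        for activity in activities:
            batch.put_item(Item=to_item(activity))
```

Keeping `to_item` as a pure function separates the data-modelling decision from the AWS plumbing, which also makes it trivially unit-testable without any mocked AWS services.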
V2 Target Architecture — AWS Serverless
The AI coach is where applied machine learning, prompt engineering, and software architecture converge. Built on the Anthropic Claude API, it demonstrates a set of skills that are hard to fake: designing agentic systems that are grounded in real data, engineering prompts that produce reliably structured outputs rather than freeform prose, preventing hallucination at the system level rather than by post-processing, and implementing adaptive behaviour (tone, focus, urgency) programmatically rather than hoping the model infers context. Real Intervals.icu metrics — CTL, ATL, TSB, HRV, FTP, W', power curves, and calendar events — are assembled into a structured JSON payload before any prompt is issued. The model never invents numbers; it reasons over data the system has fetched and validated.
AI Coach — Data Flow & Context Assembly
The quality of AI coaching output is entirely a function of prompt design. Rather than asking Claude a vague question, the system assembles a richly structured context payload from Intervals.icu before any prompt is sent. The prompt engineering follows several key principles:
Prompt Architecture — Layers
Context fields: ctl, atl, tsb, hrv, ftp, w_prime, cp_5min, cp_20min, a recent activity list (sport, TSS, NP, IF, duration), and upcoming events from the Intervals.icu calendar. No value is computed client-side; all are pulled directly from the API.

Structured output schema: summary, workout_primary, workout_zwift_equivalent, nutrition, flags[]. This makes the output reliably parseable and renderable in the dashboard — turning a language model into a structured data source.

Contextual nutrition: recommendations vary with training load, upcoming race proximity, and current TSB. The coach doesn't issue generic meal plans; it reasons about the athlete's energy state and what the next 48 hours of training demand.

Grounded field names: icu_weight, icu_training_load, icu_atl, icu_tsb. The AI is never given permission to invent numbers; this is enforced at the prompt level, not by post-processing.
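The assembly-then-validate pattern described above can be sketched as follows. The Anthropic API call itself is elided; the payload fields mirror those listed in the layers above, while the function names and exact JSON shapes are illustrative assumptions.

```python
# Sketch: context assembly before the prompt, schema validation after the reply.
import json

REQUIRED_OUTPUT_KEYS = {
    "summary", "workout_primary", "workout_zwift_equivalent", "nutrition", "flags",
}

def build_context(wellness: dict, athlete: dict, activities: list, events: list) -> str:
    """Assemble the structured JSON payload embedded in the coach prompt."""
    payload = {
        "ctl": wellness["ctl"], "atl": wellness["atl"], "tsb": wellness["tsb"],
        "hrv": wellness.get("hrv"),
        "ftp": athlete["ftp"], "w_prime": athlete.get("w_prime"),
        "recent_activities": [
            {"sport": a["type"], "tss": a["icu_training_load"]} for a in activities
        ],
        "upcoming_events": events,
    }
    return json.dumps(payload)

def validate_coach_reply(raw: str) -> dict:
    """Reject any reply that does not match the JSON schema the prompt demands."""
    reply = json.loads(raw)
    missing = REQUIRED_OUTPUT_KEYS - reply.keys()
    if missing or not isinstance(reply["flags"], list):
        raise ValueError(f"coach reply failed schema check (missing: {missing})")
    return reply
```

Validation on the way out complements grounding on the way in: the model only ever sees numbers the system fetched, and the dashboard only ever renders replies that parse against the expected schema.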
The stack spans three distinct layers — each chosen deliberately rather than by default. The frontend is intentionally vanilla: no framework overhead, no build step, just well-structured HTML, CSS custom properties, and Chart.js doing exactly what it needs to do. The V1 backend replaces a traditional server entirely with a scheduled Python script and static files — a pragmatic architecture that runs at zero cost while demonstrating solid API integration and automation skills. The V2 AWS stack introduces every major serverless primitive: Lambda for compute, DynamoDB for storage, API Gateway for exposure, CloudFront for delivery, Secrets Manager for credentials, EventBridge for scheduling, and CloudWatch for observability — all provisioned through CDK as repeatable, version-controlled infrastructure code.
Frontend
Backend (V1)
Infrastructure (V2 Target)
Data Sources