Published by Heykup
Case Study: Building and Shipping Heykup in 11 Days
This publication documents how Heykup used an AI-assisted product engineering model to deliver a production-ready attendance and team-presence PWA in 11 days, while preserving release governance, testing discipline, and branch safety.
Abstract
Heykup addresses a recurring hybrid-work coordination problem: teams spend critical morning time synchronizing attendance intent. We designed a mobile-first PWA and delivered it using a human-led, AI-assisted engineering workflow. The outcome was a working product with daily check-ins, team visibility, weekly intelligence, and admin controls, released across UAT and production lanes.
Relative to a modeled sequential baseline, the project reduced end-to-end timeline by 78%, cut average feature cycle time by 68%, and improved release predictability through gated verification and controlled branch movement.
1. Problem Statement and Product Goals
Observed operational problem
Teams lacked a shared, trusted source for daily work-location intent. Decision-making depended on chat threads and manual follow-up, introducing avoidable delays and low confidence in attendance status.
Primary goals
- Provide a one-step status update flow for In Office, Remote, and On Leave.
- Expose live team response state in a readable, mobile-first UI.
- Deliver weekly cards and leaderboards for planning and visibility.
- Maintain production safety while iterating quickly in UAT.
2. Research Method and Measurement Design
The analysis uses two datasets: (1) observed execution from project logs over 11 days, and (2) a modeled traditional baseline representing a standard sequential workflow for equivalent scope.
| Metric | Definition | Computation |
|---|---|---|
| Total timeline | Concept start to production-ready state | T_total = release_date - kickoff_date |
| Feature cycle time | Request accepted to feature verified | T_cycle = median(done - accepted) |
| Efficiency gain | Reduction from baseline to observed | Gain % = (B - O) / B * 100 |
Baseline values are planning-model assumptions used for comparison; they are not measurements from a parallel control project executed on the same scope.
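The metric definitions above can be sketched directly in code. The helper names and the sample durations in the tests are illustrative, not values taken from the project log; only the headline baseline/observed pairs come from the tables in this document.

```typescript
// Efficiency-gain and cycle-time helpers mirroring the metric definitions above.
// Helper names are illustrative; only the baseline/observed pairs used in the
// usage line below come from this document's tables.

// Gain % = (B - O) / B * 100, where B is the modeled baseline and O the observed value.
function efficiencyGain(baseline: number, observed: number): number {
  return ((baseline - observed) / baseline) * 100;
}

// T_cycle = median(done - accepted), applied to per-feature cycle durations in days.
function medianCycleTime(durationsDays: number[]): number {
  const sorted = [...durationsDays].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// Reproduces the headline figure: a 50-day modeled baseline vs an 11-day delivery.
const totalGain = efficiencyGain(50, 11); // → 78
```

Applying the same helper to the other table rows yields the reported 67.9% cycle-time and 80.0% stabilization-window gains.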
3. Product and Platform Architecture
The implementation followed a React + Firebase architecture with role-aware data access, branch-separated release flow, and deployment automation.
- Client: React web app optimized for app-like PWA behavior on iOS and Android.
- Identity: Firebase Authentication for secure login and role context.
- Data Layer: Firestore with rule-validated access and team-scoped data models.
- Delivery: Branch-based UAT/main release lanes and workflow-driven deployments.
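Under these architectural choices, the team-scoped data model can be sketched as follows. The type and field names are assumptions for illustration, not the shipped Firestore schema; the three status values come from the product goals stated earlier.

```typescript
// Hypothetical shapes for team-scoped documents; field names are illustrative
// assumptions, not the actual Firestore schema.

// The three statuses named in the product goals.
type PresenceStatus = "In Office" | "Remote" | "On Leave";

interface CheckIn {
  userId: string;
  teamId: string; // every check-in is scoped to exactly one team
  date: string;   // ISO date, e.g. "2026-03-04"
  status: PresenceStatus;
}

interface TeamMember {
  userId: string;
  teamId: string;
  role: "admin" | "member"; // role context derived from Firebase Authentication
}

// A rule-validated read only surfaces check-ins whose teamId matches the
// requesting member's team; the same constraint would be enforced
// server-side by Firestore security rules.
function visibleCheckIns(all: CheckIn[], member: TeamMember): CheckIn[] {
  return all.filter((c) => c.teamId === member.teamId);
}
```

The in-memory filter stands in for the Firestore rule; in production the scoping constraint belongs in the security rules themselves, not only in client code.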
4. Quantitative Results and Efficiency Gains
4.1 Timeline and cycle-time outcomes
| Dimension | Modeled baseline | Observed delivery | Efficiency gain |
|---|---|---|---|
| Total delivery window | 50 days (10 weeks) | 11 days | 78.0% |
| Median feature cycle time | 2.8 days | 0.9 days | 67.9% |
| Stabilization window | 20 days | 4 days | 80.0% |
4.2 Quality and release performance
| Quality indicator | Baseline expectation | Observed behavior |
|---|---|---|
| Regression handling | Late-stage QA concentration | Continuous correction with iterative verification in delivery loop |
| Release confidence | Manual coordination dependency | Branch-safe flow with build and deployment checks before promotion |
| Change traceability | Distributed ownership artifacts | Commit-driven traceability with lane-specific release history |

5. Practical Examples from the Build
Presence leaderboard correctness and UX policy control
During iteration, ranking logic and tie-display behavior required correction. The fix combined logic validation with UI policy updates so that leaderboard order reflected actual office-day counts and visual indicators matched product rules.
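Tie-aware ranking of this kind can be sketched as below. The source does not specify Heykup's exact tie policy, so standard competition ranking (tied members share a rank, and the next distinct count skips ranks) is assumed here; names and counts are illustrative.

```typescript
// Ranks members by office-day count, descending. Ties share a rank and the
// next distinct count skips ranks ("1224"-style competition ranking).
// Illustrative sketch under an assumed tie policy, not the shipped logic.

interface RankedMember {
  name: string;
  officeDays: number;
  rank: number;
}

function rankLeaderboard(counts: Record<string, number>): RankedMember[] {
  const rows = Object.entries(counts)
    .map(([name, officeDays]) => ({ name, officeDays }))
    .sort((a, b) => b.officeDays - a.officeDays); // display order: most days first
  return rows.map((row) => ({
    ...row,
    // A member's rank is 1 + the number of members with strictly more office days,
    // so equal counts always receive equal ranks.
    rank: 1 + rows.filter((r) => r.officeDays > row.officeDays).length,
  }));
}
```

With this policy, two members tied at three office days both rank first, and a member with one office day ranks third rather than second.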
PWA shell hardening for iOS safe-area and navbar behavior
Repeated field observations identified top and bottom shell spacing inconsistencies when the app was installed to the iPhone home screen. The solution applied controlled shell layout changes across screens, followed by deployment verification in UAT.
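A common pattern for these symptoms, sketched below as an assumption since the source does not show the actual shell code, is to pad the fixed header and bottom navbar with the iOS safe-area insets; the page must also declare `viewport-fit=cover` for iOS to expose non-zero inset values in installed PWA mode.

```typescript
// Illustrative shell-layout style constants using the CSS env() safe-area
// variables, expressed as React-style inline-style objects. Requires
// <meta name="viewport" content="..., viewport-fit=cover"> so that iOS
// reports non-zero insets in installed (home-screen) PWA mode.
// This is a common pattern, not the project's actual implementation.

const shellLayout = {
  header: {
    paddingTop: "env(safe-area-inset-top)", // clears the notch / status bar
  },
  bottomNav: {
    paddingBottom: "env(safe-area-inset-bottom)", // clears the home indicator
  },
} as const;
```

Centralizing these insets in one shell-level layout object keeps per-screen components inset-agnostic, which matches the document's description of controlled shell layout changes applied across screens.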
6. Discussion, Validity, and Next Steps
Interpretation
The observed gains were driven by faster implementation-feedback loops and immediate correction cycles, not by skipping governance. Release quality was preserved through branch discipline and verification gates.
Threats to validity
- Baseline is modeled for comparison, not a simultaneously executed control project.
- Some quality metrics are qualitative due to limited long-horizon production telemetry.
- Results are strongest for this product scope and may vary for highly regulated domains.
Next phase research agenda
- Add longitudinal telemetry for 30/60/90-day outcome tracking.
- Run controlled A/B task-time studies on attendance workflows.
- Publish a versioned technical appendix with deeper test and release evidence.
References and Data Sources
- Heykup internal execution log (Day 0 to Day 11), March 2026.
- Release lane history and deployment outcomes, UAT and main branches.
- Build and workflow outputs from project CI/CD pipeline.
- Product acceptance and defect-resolution notes captured during iteration.