About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.
Twenty-nine tasks. April 4 was dominated by full-stack rewrites: an accounting platform rewritten from Node.js to Python (252 files, 27.7K LOC), a time tracking tool refitted from Flask to FastAPI, a list management app rebuilt from scratch, and a comprehensive auth architecture overhaul covering 13 OIDC clients. Testing was also heavy, with three separate test suites generated across different services. Rounding out the day were 12 new structured content specifications for AI/ML topics, a cross-application integration feature, and several infrastructure tasks.
The weighted average leverage factor was 54.1x, with a weighted average supervisory leverage factor of 574.2x. In human terms, the day delivered 35.6 weeks of work.
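The leverage arithmetic appears to be straightforward (an assumption inferred from the table below, not stated explicitly in the log): divide the human estimate by Claude's active time for the leverage factor, and by the supervisory time for the supervisory factor. A minimal sketch in Python, using Task 1's numbers:

```python
def leverage(human_hours: float, claude_min: float, sup_min: float) -> tuple[float, float]:
    """Return (leverage factor, supervisory leverage factor).

    Both factors compare the human estimate (converted to minutes)
    against the actual minutes spent.
    """
    human_min = human_hours * 60
    return human_min / claude_min, human_min / sup_min

# Task 1: 240h human estimate, 22m of Claude time, 5m of supervision.
factor, sup_factor = leverage(240, 22, 5)
print(f"{factor:.1f}x / {sup_factor:.1f}x")  # → 654.5x / 2880.0x
```

The same formula reproduces every row in the task log below.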
## Task Log
| # | Task | Human Est. | Claude Time | Sup. Time | Factor | Sup. Factor |
|---|---|---|---|---|---|---|
| 1 | Full-stack accounting platform rewrite: Node.js to Python, 252 files, 27.7K LOC | 240h | 22m | 5m | 654.5x | 2880.0x |
| 2 | Full refit of time tracking service: Flask to FastAPI, JS to TypeScript, auth/migrations/CI-CD, 93 files | 80h | 15m | 3m | 320.0x | 1600.0x |
| 3 | Full rewrite of list management app: new framework stack, 36 divergences resolved, 38 files | 120h | 25m | 5m | 288.0x | 1440.0x |
| 4 | Accounting platform React 19 frontend: 77 files, 15 page sections, all routes, CSS modules | 40h | 12m | 10m | 200.0x | 240.0x |
| 5 | Time tracking frontend rewrite: 23 files, 5015 LOC, design system CSS | 24h | 8m | 5m | 180.0x | 288.0x |
| 6 | OIDC client registration (13 clients), email config, login integration for 7 tool frontends, privacy policy | 200h | 90m | 20m | 133.3x | 600.0x |
| 7 | Time tracking comprehensive test suite: strategy doc, 142 tests (80 unit + 62 integration), Playwright specs | 40h | 18m | 2m | 133.3x | 1200.0x |
| 8 | Dual auth architecture: browse-before-auth UX, site key gate, SMS verification, registration modes | 120h | 55m | 15m | 130.9x | 480.0x |
| 9 | 17 API route files (105 endpoints) with full CRUD, pagination, auth, and validation | 24h | 12m | 8m | 120.0x | 180.0x |
| 10 | Marketing platform test suite: 288 unit tests, 3 integration test files, 8 E2E specs | 80h | 45m | 3m | 106.7x | 1600.0x |
| 11 | Cross-app integration: list-to-task sync with API key generation, DB-backed auth | 16h | 10m | 3m | 96.0x | 320.0x |
| 12 | 6 accounting service files (ledger, invoicing, banking, reports, recurring, tax) with full SQL | 12h | 8m | 5m | 90.0x | 144.0x |
| 13 | Launch plan reconciliation + press kit + marketing feature gap analysis | 24h | 18m | 5m | 80.0x | 288.0x |
| 14 | Bidirectional cross-app integration: pull-from-source, export-full endpoint, MCP tools | 12h | 12m | 2m | 60.0x | 360.0x |
| 15 | MCP server config fix (5 tools) + 4 marketing launch features: scheduled campaigns, CSV import | 20h | 22m | 3m | 54.5x | 400.0x |
| 16 | 18 backend unit test files (98 tests), SQLite compatibility fixes | 16h | 18m | 5m | 53.3x | 192.0x |
| 17 | Task tracker comprehensive test suite: 293 tests, 83%/80% backend/frontend coverage | 16h | 20m | 3m | 48.0x | 320.0x |
| 18 | Accounting backend core skeleton: 21 files (factory, config, database, auth, dependencies) | 4h | 5m | 5m | 48.0x | 48.0x |
| 19 | Terraform infrastructure (ECR/CodePipeline/ALB/DNS/SSM) for 2 tool services | 3h | 4m | 5m | 45.0x | 36.0x |
| 20 | Critical proficiency scoring bug: scores stuck at 0.0 after 500 correct answers | 16h | 22m | 5m | 43.6x | 192.0x |
| 21 | Certification marketplace frontend: API client, catalog page, detail page, routes, sidebar | 4h | 6m | 5m | 40.0x | 48.0x |
| 22 | Reconcile task tracker with fleet conventions: 15 divergences fixed | 8h | 12m | 2m | 40.0x | 240.0x |
| 23 | Backend unit test suite: conftest, 10 test files, 80 tests covering all service layers | 8h | 12m | 5m | 40.0x | 96.0x |
| 24 | Production deployment: Terraform ECR/ALB/Route53/SSM/S3/CloudFront/CodePipeline + DB + Docker | 16h | 25m | 2m | 38.4x | 480.0x |
| 25 | Infrastructure rename migration: 4 tool renames across Terraform, CI/CD, DNS, SSM | 28h | 45m | 5m | 37.3x | 336.0x |
| 26 | Newsletter platform testing strategy + 99 new tests (246 total, 76% coverage) | 12h | 35m | 3m | 20.6x | 240.0x |
| 27 | Rename task tracker (GitHub repo, local dir, 13 source files) + comprehensive README | 2h | 7m | 3m | 17.1x | 40.0x |
| 28 | Write 12 new structured content specifications for AI/ML/data topics | 240h | 990m | 5m | 14.5x | 2880.0x |
| 29 | Update product website patent portfolio numbers | 1h | 8m | 2m | 7.5x | 30.0x |
## Aggregate Statistics
| Metric | Value |
|---|---|
| Total tasks | 29 |
| Total human-equivalent hours | 1,426.0 |
| Total Claude minutes | 1,581 |
| Total supervisory minutes | 149 |
| Total tokens | 11,636,500 |
| Weighted average leverage factor | 54.1x |
| Weighted average supervisory leverage factor | 574.2x |
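The two weighted averages are consistent with simple ratios of the column totals, which weights each task by its human estimate rather than averaging the per-task factors. A quick sanity check in Python, using the totals from the table above:

```python
# Totals from the aggregate statistics table.
total_human_hours = 1426.0
total_claude_min = 1581
total_sup_min = 149

human_min = total_human_hours * 60  # 85,560 human-equivalent minutes

print(f"{human_min / total_claude_min:.1f}x")  # → 54.1x
print(f"{human_min / total_sup_min:.1f}x")     # → 574.2x
```

Dividing the same 85,560 minutes by a 2,400-minute (40-hour) work week yields the ~35.6 weeks quoted in the summary.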
## Analysis
The Trellis accounting rewrite (654.5x) was the standout. Rewriting an entire full-stack application from one language and framework to another, producing 252 files and 27.7K lines of code in 22 minutes, is the kind of task where AI leverage is most extreme. A human would spend weeks understanding the existing codebase, planning the migration, writing the new code, and debugging integration issues. The AI has the entire context in its window and generates the replacement in a single pass.
Three other rewrites followed the same pattern: the time tracking refit (320x), the list management rebuild (288x), and the accounting frontend (200x). All four shared a common characteristic: well-understood target architectures with clear specifications. When the destination is unambiguous, the AI's generation speed creates massive leverage; when the work requires iterative design decisions, leverage drops.
The 12 structured content specifications (14.5x) sat at the opposite end. At 990 minutes of Claude time, this was the longest single task. Content generation at this scale involves extensive validation loops: each specification requires domain knowledge verification, structural consistency checks, and quality gates. The leverage was still meaningful (240 human hours compressed into 16.5 hours), but the per-minute yield was lower than for code generation tasks.
The supervisory leverage (574.2x) reflects the extreme delegation possible on a day like this. Most tasks required five minutes of prompting or less. The OIDC rollout was the largest exception at 20 minutes, followed by the auth architecture overhaul at 15 minutes, reflecting the complexity of specifying a dual-mode authentication system. Even so, 120 human hours for 15 minutes of direction is a 480x supervisory ratio.
## Let's Build Something!
I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.
Currently taking on select consulting engagements through Vantalect.
