
Leverage Record: March 4, 2026

AI Time Record

About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.

Daily accounting of what Claude Opus 4.6 built today, measured against how long a senior engineer familiar with each codebase would need for the same work. Thirty-eight tasks across a dozen projects. A day that spanned diagram rendering engines, patent figure generation, AR/VR development, education platform overhauls, AWS service emulators, and a full-stack chatbot architecture article with companion demo repo. The breadth here is unusual even by recent standards.

About These Records
These time records capture personal project work done with Claude Code (Anthropic) only. They do not include work done with ChatGPT (OpenAI), Gemini (Google), Grok (xAI), or other models, all of which I use extensively. Client work is also excluded, even though it is done primarily with Claude Code. The actual total of AI-assisted output for any given day is therefore substantially higher than what appears here.

The Numbers

# Task Human Est. Claude Leverage
1 Diagram rendering patches: diamond equalization, dogleg straightening, group alignment, page centering, regression tests (7 tests, 25 assertions) 28 hours 45 min 37.3x
2 Image generation: 5 thematic article images generated and deployed to articles 4 hours 35 min 6.9x
3 CMS automation skill update: added image generation phase and post-deploy staging verification phase 1.5 hours 10 min 9.0x
4 Daily leverage record post: CSV parsing, sanitization, post creation, staging deploy 2 hours 15 min 8.0x
5 Full content audit: em dashes, en dashes, and double dashes across all articles, posts, pages, and templates 5 hours 25 min 12.0x
6 Documentation overhaul and pipeline migration for diagram rendering fork 8 hours 8 min 60.0x
7 Fix 4 layout regressions in diagram renderer (centering, de-overlap, label width, diamond back-edge) 16 hours 25 min 38.4x
8 visionOS immersive environment with HDRI skybox and button glass removal 6 hours 15 min 24.0x
9 Education platform: React frontend page components (Login, Dashboard, Chat, ConversationHistory, Analytics) 4 hours 4 min 60.0x
10 Education platform: React chat components (7 files and CSS) 4 hours 8 min 30.0x
11 Education platform: React frontend core files (11 files) 4 hours 8 min 30.0x
12 Cloud operations dashboard: database layer (13 files: models, data access, migrations) 3 hours 6 min 30.0x
13 Cloud operations dashboard: manager layer and auth helpers (5 files, 908 lines) 3 hours 5 min 36.0x
14 Cloud operations dashboard: routes, MCP server, and requirements 3 hours 4 min 45.0x
15 Cloud operations dashboard: React frontend scaffolding (23 files, 2,715 LOC) 4 hours 8 min 30.0x
16 Cloud operations dashboard: rewrite 5 frontend page components with full implementations 4 hours 8 min 30.0x
17 Cloud operations dashboard: real-time CloudTrail update components (SQS poller, app integration, Terraform) 1.5 hours 3 min 30.0x
18 Unit tests for education and chatbot backends (26 new tests, 3 model bug fixes) 2 hours 6 min 20.0x
19 Portfolio enhancements: 8-phase implementation across education, chatbot, and cloud operations platforms 120 hours 55 min 130.9x
20 Patent diagram fixes: diamond back-edge straightening, font size floor, 83-diagram validation sweep 10 hours 9 min 66.7x
21 AR/VR chalkboard entity with dynamic chalk text rendering, PBR wood textures, photorealistic surface layers 12 hours 15 min 48.0x
22 TTS config debugging, in-memory L1 lesson cache, Redis L2 hookup, DOM nesting fix (5 files across 2 repos) 6 hours 15 min 24.0x
23 Streaming TTS rewrite: section-scoped audio, AudioPlayer streaming, auto-advance (15 files across 5 repos) 20 hours 45 min 26.7x
24 Regenerate 85 patent figure PDFs, update diagram renderer docs for 2 new layout passes 3 hours 14 min 12.9x
25 Design specification for ML evaluation platform (8 pages: dashboard, domain detail, synthesis control, tribunal, spec authoring, analytics, WebSocket architecture, 6 phases) 24 hours 12 min 120.0x
26 Fix model pricing bugs (missing model key, wrong price lookups) and cumulative stage metadata bug in batch mode (2 bugs, 2 files) 3 hours 12 min 15.0x
27 Content moderation app: outcome filter tabs, review session tabs, CSS, navigation, backend sort (4 files) 3 hours 10 min 18.0x
28 ML evaluation pipeline: rerun-escalated feature with CLI flag and main restructure 4 hours 15 min 16.0x
29 ML evaluation pipeline: generalize rerun function, add --rerun-rejected CLI, fix spec parse crash 2 hours 8 min 15.0x
30 ML evaluation platform: Phase 8-10 (changeset system, analytics, settings, command palette, final integration) 120 hours 40 min 180.0x
31 AWS emulator: IAM/STS service (6 files: store, server, tests, Dockerfile, Go modules) 8 hours 4 min 120.0x
32 AWS emulator: Kinesis Data Streams service (6 files) 4 hours 3 min 80.0x
33 AWS emulator: ECR service (6 files) 4 hours 3 min 80.0x
34 AWS emulator: Firehose service (6 files) 4 hours 3 min 80.0x
35 AWS emulator expansion: 8 new services, 9 fixes, integration tests, sample project 120 hours 15 min 480.0x
36 Education platform: auth gate, cache fix, 5 chatbot themes across 3 apps 6 hours 25 min 14.4x
37 Enterprise chatbot architecture article (4,000 words) and demo repo (32 files: React, FastAPI, WebSocket) with image gen, AI detection, staging deploy 40 hours 55 min 43.6x
38 Certification exam research across 14 vendors (~495 exams) and persistent tracking file 16 hours 25 min 38.4x
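The leverage column above is simple arithmetic: human-equivalent time divided by Claude wall-clock time, with both expressed in the same unit. A minimal sketch spot-checking a few rows of the table:

```python
# Leverage factor = human-equivalent time / Claude wall-clock time,
# with both converted to minutes before dividing.
def leverage(human_hours: float, claude_minutes: float) -> float:
    return round(human_hours * 60 / claude_minutes, 1)

# Spot-check a few rows from the table above:
print(leverage(28, 45))    # task 1:  37.3
print(leverage(120, 55))   # task 19: 130.9
print(leverage(120, 15))   # task 35: 480.0
```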

Aggregate Statistics

Metric Value
Total tasks 38
Total human-equivalent hours 632
Total Claude minutes 621
Total tokens ~3.5M
Weighted average leverage factor 61.1x
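The weighted average is not a mean of the per-task leverage factors; it is the aggregate ratio, so long tasks count for more than short ones:

```python
# Weighted average leverage = total human-equivalent time / total Claude time,
# both in minutes. Long tasks therefore dominate short ones, unlike a simple
# mean of the per-task factors.
total_human_hours = 632
total_claude_minutes = 621

weighted_leverage = total_human_hours * 60 / total_claude_minutes
print(round(weighted_leverage, 1))  # 61.1
```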

Analysis

The 480x leverage factor on the AWS emulator expansion (task 35) stands out. That task added 8 complete service emulators with integration tests and a sample project in 15 minutes. Each emulator follows an identical structural pattern (store, server, handler, tests, Dockerfile, Go module), and once the first one existed, the remaining seven were variations on a theme. Pattern replication at that scale is where AI leverage compounds most aggressively.
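The replication dynamic can be illustrated with a toy scaffolding loop. This is a hypothetical sketch, not the actual emulator code (the real services are written in Go): once the structural skeleton exists as a template, each additional service is a parameter substitution rather than fresh design work.

```python
from string import Template

# Hypothetical skeleton standing in for the shared emulator structure
# (store, server, handler, tests, Dockerfile, Go module). Only the
# service-specific names vary between instantiations.
SKELETON = Template(
    "package ${pkg}\n"
    "// ${service} in-memory store and HTTP handler stubs.\n"
    "type Store struct{}\n"
)

def scaffold(service: str, pkg: str) -> str:
    """Stamp out one service emulator skeleton from the shared template."""
    return SKELETON.substitute(service=service, pkg=pkg)

for service, pkg in [("Kinesis", "kinesis"), ("ECR", "ecr"), ("Firehose", "firehose")]:
    print(scaffold(service, pkg))
```

The marginal cost of service N is the diff against service N-1, which is why the eighth emulator arrives almost as fast as the second.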

The two ML evaluation platform tasks (25 and 30) together account for 144 human-equivalent hours in 52 Claude minutes, a combined leverage of roughly 166x. Both involved generating comprehensive design documents and multi-phase implementations where the architecture was well-defined and the AI could execute without frequent clarification.

At the other end, image generation (task 2) scored only 6.9x. Generating images with the Gemini API involves iterating on prompts, evaluating visual output, and regenerating. The process is inherently interactive and harder to accelerate because the bottleneck is aesthetic judgment and API round-trip time, not typing speed.

The breadth of this day is worth noting: diagram rendering engines (Go/TypeScript), AR/VR development (Swift/RealityKit), patent figure generation, education platforms (React/Python), cloud operations dashboards (React/FastAPI), AWS service emulators (Go), content moderation systems (Python), and a published architecture article. Thirty-eight tasks across approximately twelve distinct repositories in seven programming languages.

Let's Build Something!

I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.

Currently taking on select consulting engagements through Vantalect.