About the author: I'm Charles Sieg, a cloud architect and platform engineer who builds apps, services, and infrastructure for Fortune 1000 clients through Vantalect. If your organization is rethinking its software strategy in the age of AI-assisted engineering, let's talk.
Manually shipping containerized Lambda functions works for experiments. Build the image locally, push it to ECR, update the function, verify it works. Fine for one function updated once a week. The moment you have multiple functions, multiple environments, or more than one engineer deploying? It falls apart. Someone forgets to tag the image. Someone pushes to the wrong ECR repository. Someone updates production instead of staging. I have personally done all three of those in a single bad afternoon. The worst one is deploying a broken image with no way to roll back except pushing the previous image and hoping you remember which tag it was. I have watched this exact progression on enough teams to know the pipeline question is never "if" but "when," and the answer is almost always "after something breaks in production at 2 AM."
This article walks through a pipeline built on the standard AWS deployment stack: CodePipeline for orchestration, CodeBuild for container image builds, ECR for image storage, CodeDeploy for safe traffic-shifting deployments. I include complete infrastructure-as-code for both Terraform and Pulumi. Every resource, from IAM roles to CloudWatch alarms. If you have read my deep-dives on the individual services, this is where all of those pieces snap together into something you can actually deploy on Monday morning.
Pipeline Architecture Overview
Three stages. Source, build, deploy. That is the whole thing. A push to GitHub triggers the pipeline through a CodeStar Connection. CodePipeline runs the show. CodeBuild compiles the container image, pushes it to ECR, captures the image digest. CodeDeploy then shifts traffic to the new Lambda version using a canary strategy, rolling back automatically if a CloudWatch error alarm fires. I have deliberately kept this pipeline simple because every moving part you add is another thing that breaks at 3 AM.
```mermaid
flowchart LR
    GH[GitHub<br/>Repository] --> CS[CodeStar<br/>Connection]
    CS --> CP[CodePipeline<br/>V2 / QUEUED]
    CP --> CB[CodeBuild<br/>Docker Build]
    CB --> ECR[Amazon ECR<br/>Immutable Tags]
    ECR --> CD[CodeDeploy<br/>Canary 10/5]
    CD --> LM[Lambda<br/>Alias: live]
    LM --> CW[CloudWatch<br/>Error Alarm]
    CW -.->|Rollback| CD
```

Each component owns one job. When something fails (and it will), knowing these boundaries saves you from chasing the wrong logs.
| Component | Role | Key Configuration |
|---|---|---|
| CodeStar Connection | Authenticates GitHub access via an AWS-managed GitHub App | Requires one-time manual activation in the AWS Console |
| CodePipeline | Orchestrates the source, build, and deploy flow | V2 pipeline with QUEUED execution mode for ordered deployments |
| CodeBuild | Builds the Docker image and pushes to ECR | Privileged mode enabled for Docker-in-Docker. Captures image digest |
| Amazon ECR | Stores container images with immutable tags | Scan-on-push enabled. Lifecycle policy retains last 10 images |
| CodeDeploy | Shifts traffic on the Lambda alias using canary strategy | Canary10Percent5Minutes with CloudWatch alarm rollback trigger |
| Lambda | Runs the containerized function behind a "live" alias | package_type = "Image". CodeDeploy manages the alias version |
| CloudWatch | Monitors Lambda errors and triggers rollback | Error alarm (Errors > 0) associated with CodeDeploy deployment group |
For deeper coverage of individual services, see AWS CodePipeline: An Architecture Deep-Dive, AWS CodeBuild: An Architecture Deep-Dive, AWS CodeDeploy: An Architecture Deep-Dive, and AWS Lambda Container Images: An Architecture Deep-Dive.
The Example Lambda Function
The pipeline deploys a trivial Python Lambda function packaged as a container image. I kept the handler minimal on purpose. None of the architecture patterns here change based on what your function actually does.
```python
import json


def handler(event, context):
    """Simple Lambda handler for demonstration."""
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": "Hello from containerized Lambda!",
            "version": "1.0.0",
        }),
    }
```
In production, this handler holds your actual business logic: an API endpoint, an event processor, a queue consumer, whatever your service does. The pipeline does not care. It builds the image, pushes it to ECR, and shifts traffic safely. That separation is the whole point.
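One nice side effect of that separation: the handler is plain Python, so you can unit test it with no AWS in the loop. A minimal sketch (the handler is inlined here so the example is self-contained; in the actual repository you would import it from app.py):

```python
import json


def handler(event, context):
    # Same handler as above, inlined for a self-contained example.
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": "Hello from containerized Lambda!",
            "version": "1.0.0",
        }),
    }


def test_handler():
    response = handler({}, None)
    assert response["statusCode"] == 200
    assert json.loads(response["body"])["version"] == "1.0.0"


test_handler()  # passes with no Lambda runtime, container, or AWS credentials involved
```

If the logic works here, any pipeline failure downstream is a packaging or deployment problem, not a code problem. That narrows the search space considerably.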
Container Image Design
The Dockerfile uses the AWS Lambda base image for Python. It ships with the Lambda Runtime Interface Client and the correct entry point configuration already wired up. I reach for this image every time because it is the shortest path to something that actually works. You can build custom base images (and I cover that in the container images article), but start here.
```dockerfile
FROM public.ecr.aws/lambda/python:3.12

COPY requirements.txt ${LAMBDA_TASK_ROOT}/
RUN pip install -r ${LAMBDA_TASK_ROOT}/requirements.txt

COPY app.py ${LAMBDA_TASK_ROOT}/

CMD ["app.handler"]
```
Every line in this Dockerfile is there for a reason.
| Decision | Rationale |
|---|---|
| AWS base image (public.ecr.aws/lambda/python:3.12) | Includes the Lambda Runtime Interface Client, correct entry point, and matches the Lambda execution environment exactly |
| COPY requirements.txt before COPY app.py | Docker layer caching: dependency installation only re-runs when requirements.txt changes, not on every code change |
| RUN pip install | Dependencies bake into the image at build time, ensuring reproducibility and eliminating cold start dependency downloads |
| LAMBDA_TASK_ROOT | The environment variable the base image sets (/var/task); using it rather than hardcoding the path ensures compatibility across base image versions |
| CMD ["app.handler"] | Specifies the handler in module.function format; can be overridden at the Lambda function level if needed |
For multi-stage builds, custom runtimes, and image optimization strategies, see AWS Lambda Container Images: An Architecture Deep-Dive.
The Build Specification
The buildspec.yml file is where the actual work happens. For this pipeline, CodeBuild authenticates to ECR, builds the Docker image, pushes it with both a commit-hash tag and a latest tag, then writes out an imageDetail.json file. That JSON file is how CodeDeploy knows which image to deploy. Lose it or format it wrong and the deploy stage fails silently, which is a fun one to debug the first time.
```yaml
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - REPOSITORY_URI=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:$IMAGE_TAG .
      - docker tag $REPOSITORY_URI:$IMAGE_TAG $REPOSITORY_URI:latest
  post_build:
    commands:
      - echo Pushing the Docker image...
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - docker push $REPOSITORY_URI:latest
      - echo Writing image detail file...
      - printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json

artifacts:
  files:
    - imageDetail.json
    - appspec.yml
```
| Phase | Actions | Why |
|---|---|---|
| pre_build | ECR login, compute repository URI, extract commit hash for tagging | Authentication must happen before any Docker push. Commit-hash tags provide traceability from image back to source |
| build | docker build and docker tag | Builds the image with the commit-specific tag and aliases it as latest for convenience |
| post_build | Push both tags to ECR, write imageDetail.json | The imageDetail.json artifact is what CodeDeploy reads to determine which image to deploy |
That imageDetail.json file is the handoff point between CodeBuild and CodeDeploy. One field: ImageURI, containing the fully qualified ECR image URI with the tag. Simple, but if you get it wrong, CodeDeploy has no idea what to deploy. I have seen teams spend hours debugging deploy failures that traced back to a typo in the printf format string.
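The handoff is easy to reproduce and sanity-check locally. A small sketch (the function names are mine, not from the repositories) that mirrors what the printf line produces and what the deploy stage expects to consume:

```python
import json
from pathlib import Path


def write_image_detail(repository_uri: str, image_tag: str,
                       path: str = "imageDetail.json") -> None:
    """Mirror of the buildspec's printf: a single ImageURI field."""
    Path(path).write_text(json.dumps({"ImageURI": f"{repository_uri}:{image_tag}"}))


def read_image_uri(path: str = "imageDetail.json") -> str:
    """What the deploy stage effectively pulls out of the build artifact."""
    detail = json.loads(Path(path).read_text())
    return detail["ImageURI"]  # a KeyError here is the local equivalent of a malformed artifact


write_image_detail("123456789012.dkr.ecr.us-east-1.amazonaws.com/demo", "a1b2c3d")
assert read_image_uri().endswith(":a1b2c3d")
```

Running this against your actual build output is a cheap way to confirm the artifact is well-formed before you burn a deploy cycle finding out the hard way.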
For build phases, caching strategies, and cost optimization, see AWS CodeBuild: An Architecture Deep-Dive.
ECR Repository Configuration
Three ECR settings matter here, and I set all three on every repository I create.
| Setting | Value | Rationale |
|---|---|---|
| Image tag mutability | IMMUTABLE | Prevents overwriting an existing tag with a different image. Once v1.2.3 is pushed, it always refers to the same image digest |
| Scan on push | Enabled | Automatically scans every pushed image for known CVEs (basic scanning; enhanced scanning via Amazon Inspector is also available). Results appear in the ECR console and can trigger EventBridge rules |
| Lifecycle policy | Keep last 10 images | Prevents unbounded storage growth. Older images are automatically deleted while retaining enough history for rollback |
Immutable tags deserve special emphasis. Without them, pushing a new image tagged latest silently orphans the previous image. Need to roll back? Too bad; you no longer know which digest the old latest pointed to. I learned this lesson the hard way on a production service in 2021. With immutable tags, the pipeline uses commit-hash tags that are unique and permanent. One wrinkle: immutability applies to every tag, including latest. ECR accepts a re-push of an existing tag only when the manifest is identical; pushing a different image under an existing tag fails with ImageTagAlreadyExistsException. If you want latest as a moving convenience alias, either drop it from the buildspec or carve it out with ECR's tag mutability exclusion filters.
IAM Roles and Least-Privilege Policies
Four IAM roles. Each scoped to the minimum permissions needed for its job. I will not sugarcoat this: getting IAM right is the most tedious part of building a CI/CD pipeline. It is also the part that matters most. Overly permissive roles turn your build system into an attack surface, and I have seen security auditors flag wildcard policies on CodeBuild roles more times than I can count.
```mermaid
flowchart TD
    CPR[CodePipeline Role] -->|Starts builds| CBR[CodeBuild Role]
    CPR -->|Creates deployments| CDR[CodeDeploy Role]
    CPR -->|Reads source| CSC[CodeStar Connection]
    CBR -->|Pushes images| ECR[Amazon ECR]
    CBR -->|Writes logs| CWL[CloudWatch Logs]
    CDR -->|Updates alias| LR[Lambda Execution Role]
    LR -->|Assumed by| LF[Lambda Function]
```

| Role | Trusted Principal | Key Permissions | Scope |
|---|---|---|---|
| CodeBuild | codebuild.amazonaws.com | ECR push/pull, S3 artifact read/write, CloudWatch Logs | Scoped to specific ECR repo, artifact bucket, and log group |
| CodePipeline | codepipeline.amazonaws.com | S3 artifacts, CodeBuild start, CodeDeploy create deployment, CodeStar connection use, Lambda invoke, IAM PassRole | Scoped to specific resources in the pipeline |
| Lambda Execution | lambda.amazonaws.com | CloudWatch Logs (via AWSLambdaBasicExecutionRole managed policy) | Function-level. Extend with application-specific permissions |
| CodeDeploy | codedeploy.amazonaws.com | Lambda alias management, CloudWatch alarm read (via AWSCodeDeployRoleForLambda managed policy) | Service-managed policy covers Lambda deployment needs |
Two things in this IAM setup trip people up. First, the CodeBuild role needs GetAuthorizationToken at the account level. You cannot scope it to a specific repo; AWS requires it globally. All the other ECR operations (layer/image pulls and pushes) are scoped to the specific repository ARN. Second, the CodePipeline role needs iam:PassRole for the CodeDeploy role. CodePipeline passes the CodeDeploy service role when creating a deployment. Miss this permission and you get a cryptic "Access Denied" error in the deploy stage with no useful context in the logs. I spent two hours on this the first time I built one of these pipelines.
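That PassRole statement is short but easy to get wrong. A hedged sketch of what the relevant statement in the CodePipeline role's policy might look like (the account ID and role name are placeholders, and the PassedToService condition is an optional tightening I recommend, not necessarily what the article's repositories include):

```python
import json

# Placeholder ARN -- substitute your own account ID and role name.
CODEDEPLOY_ROLE_ARN = "arn:aws:iam::123456789012:role/example-codedeploy"

pass_role_statement = {
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": CODEDEPLOY_ROLE_ARN,
    # Restrict the pass to CodeDeploy so the pipeline cannot hand this role to any other service.
    "Condition": {"StringEquals": {"iam:PassedToService": "codedeploy.amazonaws.com"}},
}

print(json.dumps(pass_role_statement, indent=2))
```

Scoping Resource to the one role ARN, rather than a wildcard, is exactly the kind of thing security auditors check first.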
Safe Deployments with CodeDeploy
CodeDeploy is what makes this pipeline production-safe. Without it, you are flipping 100% of traffic to the new version and hoping for the best. With it, traffic shifts gradually through the Lambda alias mechanism, and CodeDeploy rolls back automatically if something goes wrong. The difference between these two approaches is the difference between a deployment and a prayer.
I use Canary10Percent5Minutes here. CodeDeploy routes 10% of traffic to the new version, waits 5 minutes for the CloudWatch error alarm to evaluate, then shifts the remaining 90% if no alarm fires. If the alarm triggers during that canary window, CodeDeploy reverts the alias to the previous version. No pager alert, no manual intervention, no scrambling to find the right image tag to push.
```mermaid
flowchart LR
    ST[Start<br/>Deployment] --> C10[Shift 10%<br/>to New Version]
    C10 --> W5[Wait 5 Minutes<br/>Monitor Alarm]
    W5 -->|Alarm OK| C90[Shift Remaining<br/>90% Traffic]
    C90 --> DN[Deployment<br/>Complete]
    W5 -->|Alarm FIRED| RB[Automatic<br/>Rollback]
    RB --> RV[Revert Alias<br/>to Previous]
```

Lambda supports several deployment strategies, each with a different risk profile.
| Strategy | Behavior | Use Case |
|---|---|---|
| AllAtOnce | 100% traffic shift immediately | Development and testing environments only |
| Canary10Percent5Minutes | 10% for 5 minutes, then 100% | Production: quick validation with fast feedback |
| Canary10Percent10Minutes | 10% for 10 minutes, then 100% | Production: extended validation window |
| Canary10Percent15Minutes | 10% for 15 minutes, then 100% | Production: conservative validation |
| Linear10PercentEvery1Minute | 10% increments every minute | Production: gradual rollout over 10 minutes |
| Linear10PercentEvery3Minutes | 10% increments every 3 minutes | Production: gradual rollout over 30 minutes |
Canary10Percent5Minutes is my default for everything. Five minutes gives you a fast feedback loop. You know quickly whether the new version is healthy, and 90% of traffic stays on the proven version the entire time. One caveat: if your function handles low traffic, 10% might not generate enough data points for the alarm to fire within 5 minutes. In that case, Linear10PercentEvery3Minutes gives you a more gradual rollout with more observation windows.
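The low-traffic caveat is worth quantifying. A quick back-of-the-envelope helper (entirely my own sketch, not part of the pipeline code):

```python
def canary_requests(requests_per_second: float, canary_fraction: float = 0.10,
                    window_seconds: int = 300) -> float:
    """Expected number of requests the new version receives during the canary window."""
    return requests_per_second * canary_fraction * window_seconds


# A busy function at 10 rps gives the canary 300 requests in 5 minutes -- plenty of signal.
# A quiet one at ~0.01 rps gives it a fraction of a request, so the error alarm may never
# see meaningful data before CodeDeploy shifts the remaining 90%.
```

If the number that comes out is in the single digits, stretch the window (Canary10Percent15Minutes or a Linear strategy) rather than trusting a validation that never really happened.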
The CloudWatch error alarm is what actually triggers the rollback. It watches the Lambda function's Errors metric and fires if any errors occur within a 60-second evaluation period. Yes, that means a single error during the canary window triggers a rollback. That sensitivity is intentional. In a canary deployment, even one error during the validation window warrants investigation. You can loosen the threshold later based on your function's baseline error rate, but I recommend starting at zero tolerance. It forces your team to deploy clean code, and you can always relax it once you have confidence in your testing pipeline.
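The alarm's shape is concrete enough to write down. A hedged sketch of the parameters such an alarm might pass to CloudWatch's put_metric_alarm API (the alarm name and function name are illustrative; the Resource dimension is what scopes the Errors metric to the alias rather than the whole function):

```python
def error_alarm_params(function_name: str, alias: str = "live") -> dict:
    """Build put_metric_alarm parameters: any error in a 60-second period trips the alarm."""
    return {
        "AlarmName": f"{function_name}-errors",
        "Namespace": "AWS/Lambda",
        "MetricName": "Errors",
        "Dimensions": [
            {"Name": "FunctionName", "Value": function_name},
            # Scope the metric to the alias so only canary-routed traffic counts.
            {"Name": "Resource", "Value": f"{function_name}:{alias}"},
        ],
        "Statistic": "Sum",
        "Period": 60,
        "EvaluationPeriods": 1,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
        # No invocations during a period should not look like a failure.
        "TreatMissingData": "notBreaching",
    }
```

The TreatMissingData setting matters for low-traffic functions: without it, a quiet minute during the canary window could register as missing data and behave unexpectedly.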
For lifecycle hooks, rollback mechanics, and more deployment strategy details, see AWS CodeDeploy: An Architecture Deep-Dive.
CodePipeline Orchestration
CodePipeline V2 ties it all together. I use QUEUED execution mode, which ensures concurrent commits deploy in order rather than superseding each other. This matters more than most people realize on a team with multiple engineers pushing throughout the day.
| Stage | Action | Provider | Artifacts | Key Configuration |
|---|---|---|---|---|
| Source | Fetch code from GitHub | CodeStarSourceConnection | Output: source_output | Repository, branch, CodeStar connection ARN |
| Build | Build and push container image | CodeBuild | Input: source_output. Output: build_output | CodeBuild project name |
| Deploy | Canary deployment to Lambda | CodeDeploy | Input: build_output | CodeDeploy app name, deployment group, imageDetail.json |
Do not use V1 pipelines for this. V1 defaults to SUPERSEDED execution mode, which cancels in-progress deployments when a new commit arrives. Picture what happens if CodeDeploy is mid-canary, with 10% of traffic on the new version, and the pipeline cancels. The Lambda alias can end up in an inconsistent state with partial traffic routing. I have seen this happen exactly once, and once was enough. QUEUED mode ensures each deployment completes fully before the next one starts.
CodeStar Connections replaced the old OAuth-based GitHub integrations, and good riddance. The old approach required storing OAuth tokens and polling for changes. CodeStar uses an AWS-managed GitHub App installed in your GitHub organization: push-based event detection, fine-grained repository access, no token management. One quirk: the connection requires a one-time manual activation in the AWS Console after you provision the infrastructure. AWS forces this so a human explicitly authorizes the GitHub integration. Annoying the first time, sensible as a security measure.
For cross-account patterns, execution modes, and trigger filtering, see AWS CodePipeline: An Architecture Deep-Dive.
Infrastructure as Code: Terraform
My Terraform implementation follows a one-file-per-service pattern. Every resource uses ${local.name_prefix} for naming. Tags come from local.common_tags via provider default tags. Variables include validation blocks. Nothing clever here; just consistent conventions that make it easy to find things six months later when you have forgotten how the pipeline works.
Here are the key resource definitions.
```hcl
# ECR repository with immutable tags
resource "aws_ecr_repository" "lambda" {
  name                 = "${local.name_prefix}-lambda"
  image_tag_mutability = "IMMUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }
}

# Lambda function with container image
resource "aws_lambda_function" "this" {
  function_name = "${local.name_prefix}-lambda"
  role          = aws_iam_role.lambda_execution.arn
  package_type  = "Image"
  image_uri     = local.ecr_image_uri
  memory_size   = var.lambda_memory_size
  timeout       = var.lambda_timeout
  architectures = [var.lambda_architecture]
  publish       = true
}

# Lambda alias managed by CodeDeploy
resource "aws_lambda_alias" "live" {
  name             = "live"
  function_name    = aws_lambda_function.this.function_name
  function_version = aws_lambda_function.this.version

  lifecycle {
    ignore_changes = [function_version]
  }
}

# CodeDeploy with canary deployment and alarm rollback
resource "aws_codedeploy_deployment_group" "lambda" {
  app_name               = aws_codedeploy_app.lambda.name
  deployment_group_name  = "${local.name_prefix}-lambda"
  deployment_config_name = "CodeDeployDefault.LambdaCanary10Percent5Minutes"
  service_role_arn       = aws_iam_role.codedeploy.arn

  deployment_style {
    deployment_type   = "BLUE_GREEN"
    deployment_option = "WITH_TRAFFIC_CONTROL"
  }

  auto_rollback_configuration {
    enabled = true
    events  = ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"]
  }

  alarm_configuration {
    alarms  = [aws_cloudwatch_metric_alarm.lambda_errors.alarm_name]
    enabled = true
  }
}
```
Pay close attention to lifecycle { ignore_changes = [function_version] } on the Lambda alias. Without that block, terraform apply resets the alias to the initially deployed version every single time. All the deployments CodeDeploy performed since your last Terraform run? Gone. I have seen this catch people on their first infrastructure update after the pipeline has been running for a few weeks. The ignore_changes annotation tells Terraform to back off; CodeDeploy owns the function_version attribute. Terraform creates the alias. CodeDeploy manages which version it points to.
The complete Terraform implementation is available at github.com/CharlesSieg/tf-config-lambda-cicd.
Infrastructure as Code: Pulumi
The Pulumi version follows a similar one-module-per-service pattern, but in Python. Each service gets a create_*() function, and __main__.py orchestrates them. Dependencies pass explicitly as function parameters, which I prefer over Pulumi's automatic dependency tracking for anything beyond simple stacks.
Here are the key resource definitions.
```python
# ECR repository with immutable tags (ecr.py)
def create_ecr_repo(config):
    name_prefix = config["name_prefix"]
    repo = aws.ecr.Repository(
        "lambda-repo",
        name=f"{name_prefix}-lambda",
        image_tag_mutability="IMMUTABLE",
        image_scanning_configuration={
            "scan_on_push": True,
        },
        tags=config["common_tags"],
    )

    # Lifecycle policy: keep last 10 images
    aws.ecr.LifecyclePolicy(
        "lambda-repo-lifecycle",
        repository=repo.name,
        policy=json.dumps({
            "rules": [{
                "rulePriority": 1,
                "description": "Keep last 10 images",
                "selection": {
                    "tagStatus": "any",
                    "countType": "imageCountMoreThan",
                    "countNumber": 10,
                },
                "action": {"type": "expire"},
            }],
        }),
    )
    return repo
```

```python
# Orchestration (__main__.py)
config = get_config()

ecr_repo = create_ecr_repo(config)
artifact_bucket = create_artifact_bucket(config)
lambda_role = create_lambda_role(config)
codedeploy_role = create_codedeploy_role(config)
codestar_conn = create_codestar_connection(config)
codebuild_role = create_codebuild_role(config, ecr_repo, artifact_bucket)
codebuild_project = create_codebuild_project(config, ecr_repo, codebuild_role)
lambda_fn, lambda_alias = create_lambda_function(config, ecr_repo, lambda_role)
log_group, error_alarm = create_cloudwatch_resources(config, lambda_fn)
codedeploy_app, codedeploy_group = create_codedeploy(
    config, lambda_fn, lambda_alias, error_alarm, codedeploy_role
)
codepipeline_role = create_codepipeline_role(
    config, artifact_bucket, codebuild_project,
    codestar_conn, lambda_fn, codedeploy_role
)
pipeline = create_codepipeline(
    config, artifact_bucket, codestar_conn, codebuild_project,
    codedeploy_app, codedeploy_group, codepipeline_role
)
```
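The Terraform section's alias caveat applies here too: Pulumi will also try to reset function_version back to the originally deployed version on every pulumi up unless told otherwise. A sketch of the Pulumi equivalent of Terraform's ignore_changes, inside create_lambda_function() (resource and variable names here are illustrative, not necessarily the repository's exact code):

```python
import pulumi
import pulumi_aws as aws

# Inside create_lambda_function(), after the Function resource is created:
lambda_alias = aws.lambda_.Alias(
    "lambda-alias",
    name="live",
    function_name=lambda_fn.name,
    function_version=lambda_fn.version,
    # CodeDeploy owns which version the alias points at; stop Pulumi from resetting it.
    opts=pulumi.ResourceOptions(ignore_changes=["function_version"]),
)
```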
Both implementations produce identical AWS resources. Same pipeline, same behavior, different language. Pick whichever one matches your team's existing tooling. If your team already writes Python, Pulumi will feel more natural. If your infrastructure team standardized on HCL years ago, use Terraform. The pipeline does not care.
| Dimension | Terraform | Pulumi |
|---|---|---|
| Language | HCL (domain-specific) | Python (general-purpose) |
| State | S3 + DynamoDB or Terraform Cloud | Pulumi Cloud or self-managed S3 backend |
| Resource naming | "${local.name_prefix}-name" in HCL | f"{name_prefix}-name" in Python |
| Dependencies | Implicit via resource references | Explicit via function parameters |
| Dynamic logic | Limited (count, for_each, dynamic) | Full Python (loops, conditionals, classes) |
| Module structure | One .tf file per service | One .py file per service with create_*() functions |
| Testing | terraform validate, tflint, Terratest | pytest, standard Python testing tools |
The complete Pulumi implementation is available at github.com/CharlesSieg/pul-py-config-lambda-cicd.
For a broader comparison including CloudFormation and CDK, see Infrastructure as Code: CloudFormation, CDK, Terraform, and Pulumi Compared.
Operational Considerations
Bootstrap Sequence
Every CI/CD pipeline has a chicken-and-egg problem on first deployment, and this one is no exception. CodeDeploy needs a Lambda function with a published version. Lambda needs a container image in ECR. ECR is empty until CodeBuild runs. CodeBuild does not run until CodePipeline triggers. CodePipeline does not trigger until the infrastructure exists. It is circular, and the only way through is a manual bootstrap.
1. Run `terraform apply` (or `pulumi up`). ECR, IAM roles, and most resources will create successfully. Lambda creation may fail because no image exists yet.
2. Build and push an initial image to the ECR repository manually: `docker build -t ACCOUNT.dkr.ecr.REGION.amazonaws.com/NAME:initial .` and `docker push`.
3. Run `terraform apply` again. Lambda and all remaining resources will create successfully.
4. Activate the CodeStar Connection in the AWS Console under Developer Tools → Connections.
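The ordering matters, and it is easy to lose track of where you are mid-bootstrap. One way to make the sequence explicit is a tiny checklist function (entirely my own illustration, not part of either repository):

```python
def next_bootstrap_step(ecr_has_image: bool, lambda_exists: bool,
                        connection_available: bool) -> str:
    """Return the next manual action in the first-deployment bootstrap sequence."""
    if not ecr_has_image:
        return "docker build and docker push an initial image to the ECR repository"
    if not lambda_exists:
        return "re-run terraform apply (or pulumi up) to create the Lambda function"
    if not connection_available:
        return "activate the CodeStar Connection in the AWS Console"
    return "done: push a commit and watch the pipeline run"
```

After the bootstrap, none of this is needed again; every subsequent deployment flows through the pipeline.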
CodeStar Connection Activation
AWS creates the CodeStar Connection in a PENDING state. You have to activate it manually in the Console by authorizing the AWS-managed GitHub App to access your repository. There is no way around this; AWS requires a human to explicitly grant the access. Once activated, the connection status flips to AVAILABLE and the pipeline can trigger on push events. Forget this step and your pipeline just sits there doing nothing, which is exactly what happened the first time I set one up.
Cost Breakdown
Pipeline costs are genuinely small. The two line items that actually matter are CodeBuild compute time and the Lambda function itself, both of which scale with usage. Everything else is pocket change.
| Resource | Monthly Cost | Notes |
|---|---|---|
| CodePipeline V2 | ~$0.50–2.00 | $0.002 per action execution minute; 3 actions per execution |
| CodeBuild | ~$1.00–10.00 | $0.005/min (small); typical Docker build is 2–5 minutes |
| ECR storage | ~$0.10–1.00 | $0.10/GB/month; 10 images at ~200 MB each ≈ $0.20 |
| S3 artifacts | <$0.10 | Minimal storage for pipeline artifacts |
| CloudWatch | <$0.50 | Log group + metric alarm |
| CodeDeploy | $0.02/deployment | Per-deployment charge for Lambda platform |
| Lambda | Usage-dependent | Application cost, separate from pipeline overhead |
| Total pipeline overhead | ~$2–15/month | Excluding Lambda compute; scales with deployment frequency |
For a team deploying 5 times per day to a single Lambda function, expect roughly $5 to $10 per month in pipeline infrastructure costs. Compare that to the engineering time burned by manual deployments or the cost of a single production incident that a canary deployment would have caught. The math is not close.
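Those numbers are easy to re-derive for your own cadence. A back-of-the-envelope estimator using the per-unit rates from the table above (it assumes roughly ten pipeline action-minutes per run, dominated by the 5-minute canary wait, and a 4-minute Docker build; rates vary by region, so treat this as a sketch):

```python
def monthly_pipeline_cost(deploys_per_day: float,
                          pipeline_minutes: float = 10,  # source + build + canary wait
                          build_minutes: float = 4) -> float:
    """Rough monthly pipeline overhead in USD, excluding Lambda compute and storage pennies."""
    per_deploy = (
        pipeline_minutes * 0.002  # CodePipeline V2: $0.002 per action-execution minute
        + build_minutes * 0.005   # CodeBuild general1.small Linux: $0.005/minute
        + 0.02                    # CodeDeploy: per-deployment charge for the Lambda platform
    )
    return round(deploys_per_day * 30 * per_deploy, 2)


# Five deploys a day lands around $9/month before ECR and S3 storage pennies.
```

Doubling the deployment frequency roughly doubles the overhead, which is still noise next to the cost of one bad deploy reaching 100% of traffic.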
Additional Resources
Infrastructure as Code Repositories
- tf-config-lambda-cicd: Complete Terraform implementation
- pul-py-config-lambda-cicd: Complete Pulumi (Python) implementation
Related Architecture Articles
- AWS CodePipeline: An Architecture Deep-Dive: Pipeline orchestration, V2 features, execution modes, cross-account patterns
- AWS CodeBuild: An Architecture Deep-Dive: Build environments, caching, VPC integration, cost optimization
- AWS CodeDeploy: An Architecture Deep-Dive: Deployment strategies, lifecycle hooks, rollback mechanics
- AWS Lambda Container Images: An Architecture Deep-Dive: Container image architecture, base images, optimization
- Infrastructure as Code: CloudFormation, CDK, Terraform, and Pulumi Compared: IaC tool comparison and selection guidance
AWS Documentation
- Lambda Container Image Support
- CodePipeline V2 Pipeline Structure
- CodeDeploy Lambda Deployments
- CodeBuild Docker Sample
- ECR Lifecycle Policies
- CodeStar Connections
Let's Build Something!
I help teams ship cloud infrastructure that actually works at scale. Whether you're modernizing a legacy platform, designing a multi-region architecture from scratch, or figuring out how AI fits into your engineering workflow, I've seen your problem before. Let me help.
Currently taking on select consulting engagements through Vantalect.

