Disclosure: This article contains affiliate links. If you purchase through these links, TechScriptAid may earn a commission at no extra cost to you. We only recommend services we use and trust in production environments.
I’ve spent 17+ years architecting enterprise .NET systems for Fortune 500 clients. One thing I’ve learned: the gap between a “working” deployment and a production-grade deployment is where most teams lose months of engineering time — and where real money is either saved or burned.
This isn’t another “Hello World in Docker” tutorial. This is the enterprise playbook — covering multi-stage Docker builds with distroless images, .NET Aspire orchestration, OpenTelemetry observability, health check infrastructure, and cloud-agnostic deployment patterns that let you move between providers without rewriting a single line of application code.
If you’ve only ever deployed with dotnet publish and FTP, buckle up. If you’re already containerizing but haven’t adopted Aspire or distroless images yet — you’re leaving performance and security on the table.
The Architecture Stack (What We’re Building)
Before writing a single line of code, let’s define the production stack. Enterprise deployments aren’t about individual tools — they’re about how tools compose into a reliable system.
| Layer | Tool | Why This, Not That |
|---|---|---|
| Runtime | .NET 10 LTS (C# 14) | LTS = 3 years of support. Non-negotiable for enterprise. |
| Orchestration | .NET Aspire 13 | Code-first infrastructure. Replaces Docker Compose YAML hell. |
| Containerization | Docker (multi-stage, distroless) | 70% smaller images. Minimal attack surface. |
| Observability | OpenTelemetry (OTLP) | Vendor-agnostic. Feeds Datadog, Grafana, App Insights — your choice. |
| Health Checks | ASP.NET Core Health Checks + Kubernetes probes | Self-healing infrastructure. Auto-restart on failure. |
| CI/CD | GitHub Actions + Container Registry | Automated build → test → push → deploy pipeline. |
| Cloud Infrastructure | Cloud-agnostic (demo on DigitalOcean) | Same image deploys to DO, Azure, AWS, or bare metal. |
This is the stack I use for production systems. Every layer is chosen for a reason — not because it’s trendy, but because it solves a real operational problem.
Step 1: The Dockerfile That Actually Belongs in Production
Most Docker tutorials teach you the basics and stop there. Here’s the problem: basic Dockerfiles create 500MB+ images loaded with SDK tools, debug symbols, and package caches that have zero business being in production. That’s not just wasteful — it’s a security liability.
Here’s what a production-grade, multi-stage Dockerfile looks like for ASP.NET Core on .NET 10:
```dockerfile
# ============================================
# STAGE 1: Build (SDK image — never ships to prod)
# ============================================
FROM mcr.microsoft.com/dotnet/sdk:10.0-alpine AS build
ARG TARGETARCH
WORKDIR /src

# Restore FIRST — leverages Docker layer caching
# Only re-runs when .csproj files change
COPY ["src/MyApp.Api/*.csproj", "src/MyApp.Api/"]
COPY ["src/MyApp.Domain/*.csproj", "src/MyApp.Domain/"]
COPY ["src/MyApp.Infrastructure/*.csproj", "src/MyApp.Infrastructure/"]
RUN dotnet restore "src/MyApp.Api/MyApp.Api.csproj" \
    -a ${TARGETARCH/amd64/x64}

# Now copy everything and build
COPY . .
RUN dotnet publish "src/MyApp.Api/MyApp.Api.csproj" \
    -c Release \
    -a ${TARGETARCH/amd64/x64} \
    --no-restore \
    -o /app/publish

# ============================================
# STAGE 2: Runtime (Alpine — minimal attack surface)
# ============================================
FROM mcr.microsoft.com/dotnet/aspnet:10.0-alpine AS final

# Security: Run as non-root user
USER $APP_UID
WORKDIR /app
EXPOSE 8080

# Copy ONLY the published output — no SDK, no source
COPY --from=build /app/publish .

# Container-level health check (busybox wget ships with Alpine)
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD wget --no-verbose --tries=1 --spider http://localhost:8080/healthz || exit 1

ENTRYPOINT ["dotnet", "MyApp.Api.dll"]
```
Why This Matters
Layer caching strategy: By copying .csproj files and restoring before copying source code, Docker caches the expensive NuGet restore step. On a typical CI pipeline, this cuts build times from 3–4 minutes to under 60 seconds for code-only changes. I’ve seen teams waste thousands of CI/CD minutes per month because they didn’t structure their Dockerfile layers correctly.
Alpine base images: The Alpine variant of the ASP.NET runtime image is roughly 100MB compared to 210MB+ for the Debian-based default. In Kubernetes environments where you’re pulling images across nodes, that size difference translates directly into faster deployments and lower bandwidth costs. If you want a truly distroless footprint, .NET also publishes Ubuntu Chiseled variants of the aspnet image; those contain no shell or package manager at all, which means the wget-based HEALTHCHECK above would have to move to orchestrator-level probes instead.
Non-root execution: The USER $APP_UID directive ensures your application never runs as root inside the container. This is a CIS benchmark requirement and a hard gate in most enterprise security reviews. Amateurs skip this — professionals don’t.
Container-level health checks: The HEALTHCHECK instruction gives Docker (and any orchestrator) a way to verify your app is actually responding, not just that the process is running. A process can be alive but deadlocked — health checks catch that.
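Layer caching only pays off if the build context itself stays lean: a bloated context slows every COPY step and can silently invalidate cached layers. A minimal .dockerignore (illustrative; adjust the patterns to your repository layout) keeps build output and local clutter out of the image:

```
# .dockerignore — keep the build context lean (illustrative sketch)
**/bin/
**/obj/
**/.git/
**/.vs/
**/node_modules/
Dockerfile
docker-compose*.yml
*.md
```

Without this, the COPY . . step in the build stage drags in bin/obj folders from local builds, which both inflates the context and defeats the restore-layer caching described above.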
Step 2: .NET Aspire — The Tool That Replaced Our Docker Compose Files
If you’re still writing docker-compose.yml files by hand in 2026, you’re working harder than you need to. .NET Aspire 13 (shipped with .NET 10) replaces YAML-based orchestration with C# code — meaning you get IntelliSense, type safety, compile-time validation, and the ability to express complex infrastructure relationships in a language you already know.
Here’s what an Aspire AppHost looks like for a production-grade system with an API, PostgreSQL database, and Redis cache:
```csharp
// AppHost/Program.cs — Your entire infrastructure in C#
var builder = DistributedApplication.CreateBuilder(args);

// Infrastructure resources
var postgres = builder.AddPostgres("db")
    .WithDataVolume()    // Persistent storage
    .WithPgAdmin();      // Admin UI for dev

var appDb = postgres.AddDatabase("appdb");

var redis = builder.AddRedis("cache")
    .WithRedisInsight(); // Redis monitoring UI

// Application services
var api = builder.AddProject<Projects.MyApp_Api>("api")
    .WithReference(appDb)  // Auto-injects connection string
    .WithReference(redis)  // Auto-injects Redis config
    .WithReplicas(2)       // Load-balanced in dev too
    .WithHttpsEndpoint();  // TLS everywhere

builder.Build().Run();
```
Compare that to the equivalent Docker Compose YAML — which would be 80+ lines of untyped, error-prone configuration with no IntelliSense and no compile-time checking. Aspire isn’t just cleaner — it catches misconfiguration at build time instead of 2 AM in production.
What Aspire Gives You for Free
Service discovery: When you call .WithReference(appDb), Aspire injects the correct connection string into your API’s configuration. Same code works locally and in production — Aspire resolves the addresses at runtime using a https+http://servicename URI scheme.
Built-in observability dashboard: Launch your Aspire app and visit the dashboard at http://localhost:18888. You get real-time structured logs, distributed traces (OpenTelemetry), metrics, and health status for every service — without installing Grafana, Prometheus, or Jaeger. It’s the single-pane-of-glass that enterprise ops teams dream about.
One-command deployment artifacts: Run aspire publish and Aspire generates Docker Compose files, Kubernetes manifests, or Azure Bicep templates — your choice. Same AppHost model, multiple deployment targets.
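On the consuming side, the API opts into those references through Aspire’s client integration packages. A minimal sketch of what that looks like (AppDbContext is an illustrative name; the "appdb" and "cache" strings must match the resource names declared in the AppHost above):

```csharp
// MyApp.Api/Program.cs — consuming the Aspire-injected resources
// Assumes the Aspire.Npgsql.EntityFrameworkCore.PostgreSQL and
// Aspire.StackExchange.Redis client integration packages are referenced.
var builder = WebApplication.CreateBuilder(args);

builder.AddServiceDefaults(); // OTel, health checks, service discovery

// Resource names here match the AppHost declarations
builder.AddNpgsqlDbContext<AppDbContext>("appdb");
builder.AddRedisClient("cache");

var app = builder.Build();
app.MapDefaultEndpoints(); // health check endpoints from ServiceDefaults
app.Run();
```

The point is that the service never sees a raw connection string: the AppHost decides where PostgreSQL and Redis live, and the client integrations resolve it at startup.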
Step 3: Health Checks That Actually Save Your Operations Team
In enterprise systems, health checks are not optional. They’re the difference between “the app auto-recovered at 3 AM” and “we got paged at 3 AM and spent two hours debugging.”
```csharp
// Program.cs — Enterprise health check configuration
// Packages: AspNetCore.HealthChecks.NpgSql, AspNetCore.HealthChecks.Redis,
//           AspNetCore.HealthChecks.UI.Client (for UIResponseWriter)
builder.Services.AddHealthChecks()
    .AddNpgSql(
        connectionString: builder.Configuration
            .GetConnectionString("appdb")!,
        name: "postgresql",
        tags: ["db", "ready"])
    .AddRedis(
        redisConnectionString: builder.Configuration
            .GetConnectionString("cache")!,
        name: "redis",
        tags: ["cache", "ready"])
    .AddCheck<ExternalApiHealthCheck>(
        "payment-gateway",
        tags: ["external", "ready"]);

// Map endpoints: liveness vs readiness
app.MapHealthChecks("/healthz", new HealthCheckOptions
{
    Predicate = _ => true,
    ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});

app.MapHealthChecks("/healthz/ready", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("ready")
});

app.MapHealthChecks("/healthz/live", new HealthCheckOptions
{
    // Exclude all registered checks: healthy as long as the process responds
    Predicate = _ => false
});
```
The pattern: Three separate endpoints for three different purposes. /healthz/live tells the orchestrator “the process is running” (liveness probe). /healthz/ready tells the load balancer “I can handle traffic” — meaning my database and cache connections are healthy (readiness probe). /healthz gives the full picture for monitoring dashboards.
When a Kubernetes readiness probe fails, the pod is removed from the service endpoint — no traffic is routed to a broken instance. When a liveness probe fails, the pod is restarted. This is self-healing infrastructure, and it’s non-negotiable for any system that needs to meet an SLA.
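The ExternalApiHealthCheck registered above is a custom check the snippet doesn’t show. A minimal sketch of one (the "/status" path and the injected HttpClient are illustrative; register the client with AddHttpClient so its BaseAddress points at the gateway) could look like this:

```csharp
// Custom IHealthCheck for an external dependency — illustrative sketch
public sealed class ExternalApiHealthCheck(HttpClient httpClient) : IHealthCheck
{
    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        try
        {
            // "/status" is a placeholder for the gateway's own health endpoint
            using var response = await httpClient.GetAsync("/status", cancellationToken);
            return response.IsSuccessStatusCode
                ? HealthCheckResult.Healthy("Payment gateway reachable")
                : HealthCheckResult.Degraded(
                    $"Payment gateway returned {(int)response.StatusCode}");
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy("Payment gateway unreachable", ex);
        }
    }
}
```

Note the Degraded result for non-2xx responses: a flaky downstream dependency shouldn’t necessarily pull your instance out of rotation, and the three-state model lets you make that call per dependency.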
Step 4: OpenTelemetry — Vendor-Agnostic Observability
Here’s an enterprise truth: the monitoring tool you use today won’t be the one you use in three years. I’ve seen teams migrate from New Relic to Datadog, from ELK to Grafana, from custom APM to Application Insights — sometimes twice in five years.
OpenTelemetry (OTel) is the answer. It’s a CNCF-graduated project that provides a single, vendor-neutral standard for traces, metrics, and logs. Instrument once, export to anything.
```csharp
// ServiceDefaults/Extensions.cs — Shared across all services
public static IHostApplicationBuilder AddServiceDefaults(
    this IHostApplicationBuilder builder)
{
    // OpenTelemetry: traces + metrics + logs
    builder.Services.AddOpenTelemetry()
        .WithTracing(tracing =>
        {
            tracing
                .AddAspNetCoreInstrumentation()
                .AddHttpClientInstrumentation()
                .AddEntityFrameworkCoreInstrumentation()
                .AddRedisInstrumentation()
                .AddSource("MyApp.*");  // Custom spans
        })
        .WithMetrics(metrics =>
        {
            metrics
                .AddAspNetCoreInstrumentation()
                .AddHttpClientInstrumentation()
                .AddRuntimeInstrumentation()
                .AddProcessInstrumentation()
                .AddMeter("MyApp.*");   // Custom metrics
        });

    // Export to ANY backend via OTLP
    builder.Services.AddOpenTelemetry()
        .UseOtlpExporter(); // Reads OTEL_EXPORTER_OTLP_ENDPOINT

    return builder;
}
```
With this setup, your telemetry data flows to whatever backend you configure via a single environment variable. During development, Aspire’s dashboard consumes it. In production, point OTEL_EXPORTER_OTLP_ENDPOINT at Datadog, Grafana Cloud, Azure Monitor, or Jaeger — zero code changes.
This is what separates enterprise architecture from tutorial-grade code. Amateurs hardcode their monitoring. Professionals abstract it.
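The AddSource("MyApp.*") and AddMeter("MyApp.*") wildcard registrations above pick up any instrumentation you define under that prefix. Here’s a hedged sketch of what custom spans and metrics might look like (all names here are illustrative, not part of the article’s codebase):

```csharp
// Custom instrumentation matched by AddSource("MyApp.*") / AddMeter("MyApp.*")
using System.Diagnostics;
using System.Diagnostics.Metrics;

public static class OrderTelemetry
{
    public static readonly ActivitySource Source = new("MyApp.Orders");

    private static readonly Meter Meter = new("MyApp.Orders");
    public static readonly Counter<long> OrdersPlaced =
        Meter.CreateCounter<long>("myapp.orders.placed");
}

// In a handler or service method:
// using var activity = OrderTelemetry.Source.StartActivity("PlaceOrder");
// activity?.SetTag("order.id", orderId);
// OrderTelemetry.OrdersPlaced.Add(1);
```

Because these flow through the same OTLP pipeline, your business-level spans and counters land in the same backend as the framework telemetry, with no extra exporter configuration.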
Step 5: Cloud-Agnostic Deployment (Demo on DigitalOcean)
The entire point of containerization is portability. Your Docker image should run identically on DigitalOcean, Azure, AWS, Google Cloud, or a Raspberry Pi in your closet. Here’s how to deploy our production-grade container to a real cloud environment.
I’m demoing on DigitalOcean’s infrastructure because it offers the fastest path from “container image” to “running in production” with predictable pricing — no surprise egress bills or complex IAM configurations. For enterprise teams evaluating infrastructure, DigitalOcean offers $200 in free credits for 60 days to test your deployment workflow.
Option A: DigitalOcean App Platform (Managed PaaS)
Connect your GitHub repo, configure the build, and App Platform handles the rest — scaling, SSL termination, zero-downtime deployments, and container orchestration.
```yaml
# .do/app.yaml — DigitalOcean App Platform spec
name: myapp-production
services:
  - name: api
    dockerfile_path: Dockerfile
    source_dir: /
    http_port: 8080
    instance_count: 2
    instance_size_slug: professional-xs
    health_check:
      http_path: /healthz/ready
      initial_delay_seconds: 15
      period_seconds: 30
    envs:
      - key: ASPNETCORE_ENVIRONMENT
        value: Production
      - key: OTEL_EXPORTER_OTLP_ENDPOINT
        value: ${OTLP_ENDPOINT}
databases:
  - engine: PG
    name: appdb
    production: true
```
Total cost for this production setup: approximately $30–50/month (2 app instances + managed PostgreSQL). Compare that to $200+ on Azure App Service + Azure SQL for equivalent configuration.
Option B: Kubernetes on DigitalOcean (DOKS)
For teams that need full Kubernetes control — custom ingress rules, service mesh, advanced scheduling — DigitalOcean’s managed Kubernetes (DOKS) starts at $12/month per node. You get the Kubernetes API without managing the control plane.
```yaml
# k8s/deployment.yaml — Production Kubernetes manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      containers:
        - name: api
          # Pin an immutable tag (e.g. the git SHA) in real deployments
          image: registry.digitalocean.com/myapp/api:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /healthz/live
              port: 8080
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz/ready
              port: 8080
            initialDelaySeconds: 15
```
Notice how the liveness and readiness probes map directly to the health check endpoints we configured in Step 3. That’s the whole point — infrastructure and application code working as one coherent system.
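A Deployment alone receives no traffic; a Service selects its pods and is what actually honors readiness state. A minimal sketch (assuming the Deployment’s pods carry an app: myapp-api label; port numbers match the manifest above):

```yaml
# k8s/service.yaml — illustrative Service fronting the Deployment
apiVersion: v1
kind: Service
metadata:
  name: myapp-api
spec:
  selector:
    app: myapp-api
  ports:
    - port: 80         # cluster-internal port
      targetPort: 8080 # containerPort from the Deployment
```

When a pod’s readiness probe fails, the endpoints controller drops that pod from this Service’s endpoint list, which is the mechanism behind the "no traffic to broken instances" behavior described in Step 3.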
Option C: Aspire CLI (The 2026 Way)
With Aspire 13, you can generate deployment artifacts directly from your AppHost:
```shell
# Generate Docker Compose for any cloud
aspire publish --publisher docker-compose

# Generate Kubernetes manifests
aspire publish --publisher kubernetes

# Deploy directly to Azure Container Apps
aspire deploy --publisher azure-container-apps

# Full pipeline: build → publish → deploy
aspire do build publish deploy
```
One AppHost model. Multiple deployment targets. The same C# code that orchestrates your dev environment generates your production manifests. This is the level of infrastructure-as-code that enterprise teams have been trying to achieve with Terraform and Pulumi — except it’s native to your .NET solution.
For a detailed comparison of cloud hosting pricing for .NET apps, see our Azure vs AWS vs DigitalOcean guide.
Step 6: The CI/CD Pipeline
A production deployment without automated CI/CD is just a hobby project. Here’s a GitHub Actions workflow that builds, tests, scans, and pushes your container image:
```yaml
# .github/workflows/deploy.yml
name: Build & Deploy

on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup .NET
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '10.0.x'

      - name: Run tests
        run: dotnet test --configuration Release

      - name: Build container image
        run: docker build -t myapp-api:${{ github.sha }} .

      - name: Security scan (Trivy)
        uses: aquasecurity/trivy-action@0.28.0
        with:
          image-ref: myapp-api:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: '1'   # fail the pipeline on findings

      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

      - name: Log in to DO Container Registry
        run: doctl registry login --expiry-seconds 600

      - name: Push to registry
        run: |
          docker tag myapp-api:${{ github.sha }} \
            registry.digitalocean.com/myapp/api:${{ github.sha }}
          docker push registry.digitalocean.com/myapp/api:${{ github.sha }}
```
The Trivy security scan step is what separates enterprise pipelines from amateur ones. It scans your container image for known CVEs before it ever reaches a registry. If a critical vulnerability is found, the pipeline fails and the image never ships. Most teams don’t add this until after their first security audit — add it now.
The Enterprise Deployment Checklist
Before you deploy anything to production, run through this list. I’ve used variations of this checklist for Fortune 500 client deployments:
| Item | Status |
|---|---|
| Multi-stage Dockerfile with Alpine/distroless base | ⬜ |
| Non-root container execution (USER directive) | ⬜ |
| Health check endpoints (liveness + readiness) | ⬜ |
| OpenTelemetry instrumentation (traces, metrics, logs) | ⬜ |
| Container image security scan (Trivy/Snyk) in CI | ⬜ |
| Secrets via environment variables (not appsettings.json) | ⬜ |
| Resource limits (CPU/memory) defined | ⬜ |
| Rolling update strategy (zero-downtime) | ⬜ |
| Automated tests pass before deployment | ⬜ |
| Backup and rollback procedure documented | ⬜ |
Why This Approach Wins
The architecture we’ve built here is cloud-agnostic by design. The same Docker image deploys to:
- DigitalOcean — for predictable pricing and fast time-to-production (where I demo’d above)
- Azure Container Apps — for teams deep in the Microsoft ecosystem
- AWS ECS/Fargate — for maximum service breadth
- Any Kubernetes cluster — on-premises or cloud
- Cloudways managed hosting — for teams that want ops handled for them
The application doesn’t know or care where it’s running. That’s the point. When your infrastructure contract expires, when pricing changes, when compliance requirements shift — you migrate by changing a deployment target, not by rewriting your application.
This is how Fortune 500 companies approach cloud strategy. And now you have the same playbook.
What’s Next
This article covered the deployment pipeline. In upcoming posts, I’ll dive deeper into:
- Structured logging with Serilog + Seq — production-grade log aggregation
- EF Core migration strategies for zero-downtime deployments — the patterns that prevent 3 AM outages
- Rate limiting, circuit breakers, and resilience patterns — using Polly with ASP.NET Core
About the Author: Mickey (Harsimrat Singh Thukral) is the Founder & Software Architect at TechScriptAid, with 17+ years of enterprise .NET experience consulting for Fortune 500 companies including Applied Materials. He writes about production-grade .NET architecture, cloud deployment, and the tools that serious engineering teams actually use.
Last updated: March 2026
