
Deploying a Next.js Application to Azure Container Apps

I had to deploy a full-stack Next.js app that uses API routes, Cosmos DB, Stripe, and Auth0. A static export was never going to carry that workload. I needed a real Node.js server running in a container with repeatable CI/CD.

This page documents the exact pattern I now use for Azure Container Apps with GitHub Actions. It also covers the two mistakes that cost me the most time. One was how environmentVariables gets parsed by the deploy action. The other was treating NEXT_PUBLIC_* values as runtime settings when they are really compile-time constants.

Why I picked Azure Container Apps

I use Azure Container Apps when I want container flexibility without running cluster infrastructure myself. For this Next.js workload, I needed:

  • a long-running Node.js process for server-side routes
  • HTTPS and ingress handled for me
  • scale settings without AKS operational overhead

That balance was the right fit for this app.

What I needed before I started

  • Azure subscription
  • Azure Container Registry
  • Azure Container Apps environment
  • GitHub repository with the Next.js app
  • Azure CLI installed locally

Step 1: Configure Next.js for standalone runtime

Without standalone output, the runtime image has to carry the full node_modules tree and the container is far heavier than it needs to be. With it, next build emits a self-contained server at .next/standalone/server.js plus only the dependencies it actually uses.

In next.config.ts:

import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "standalone",
};

export default nextConfig;

This is non-negotiable for this container pattern.

Step 2: Build a multi-stage Dockerfile

I split this into dependency install, build, and runtime. The runtime image only gets what Next.js standalone output needs.

# Stage 1: Install dependencies
FROM node:20-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci

# Stage 2: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

# NEXT_PUBLIC_* values are compiled into client bundles during build
ARG NEXT_PUBLIC_AUTH0_DOMAIN
ARG NEXT_PUBLIC_AUTH0_CLIENT_ID
ARG NEXT_PUBLIC_BASE_URL

ENV NEXT_TELEMETRY_DISABLED=1
ENV NODE_ENV=production
ENV NEXT_PUBLIC_AUTH0_DOMAIN=$NEXT_PUBLIC_AUTH0_DOMAIN
ENV NEXT_PUBLIC_AUTH0_CLIENT_ID=$NEXT_PUBLIC_AUTH0_CLIENT_ID
ENV NEXT_PUBLIC_BASE_URL=$NEXT_PUBLIC_BASE_URL

RUN npm run build

# Stage 3: Production runner
FROM node:20-alpine AS runner
WORKDIR /app

ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# Copy only what the standalone server needs
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["node", "server.js"]
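
Before wiring this into CI, I like to smoke-test the image locally. This is a sketch with hypothetical tag and build-arg values; the run wrapper with DRY_RUN=1 only prints each command so you can inspect the build args before executing anything.

```shell
# Hypothetical local smoke test for the Dockerfile above.
# DRY_RUN=1 prints each command instead of running it; set DRY_RUN=0 to execute.
DRY_RUN=1
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run docker build \
  --build-arg NEXT_PUBLIC_BASE_URL=http://localhost:3000 \
  -t my-nextjs-app:local .
run docker run --rm -p 3000:3000 my-nextjs-app:local
```

With DRY_RUN=0 the same script performs the real build and serves the app on port 3000, which is the quickest way I know to catch a missing build arg before it reaches the pipeline.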

Step 3: Create Azure resources

Create Azure Container Registry

az acr create \
  --resource-group my-resource-group \
  --name mycontainerregistry \
  --sku Basic

Create Container Apps environment

az containerapp env create \
  --name my-app-env \
  --resource-group my-resource-group \
  --location australiaeast

Create the app once

I create the app one time, then CI updates revisions.

az containerapp create \
  --name my-nextjs-app \
  --resource-group my-resource-group \
  --environment my-app-env \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --target-port 3000 \
  --ingress external \
  --min-replicas 0 \
  --max-replicas 3

Step 4: Pre-create secrets in the Container App

This was my first major gotcha. secretref: only works if the secret already exists in the Container App. Referencing it in workflow config does not create it.

az containerapp secret set \
  --name my-nextjs-app \
  --resource-group my-resource-group \
  --secrets \
    cosmos-db-endpoint=https://my-cosmos.documents.azure.com:443/ \
    cosmos-db-key=<your-cosmos-key> \
    stripe-secret-key=<your-stripe-secret-key> \
    stripe-publishable-key=<your-stripe-publishable-key> \
    openai-api-key=<your-openai-key>

I always verify they exist before trusting the pipeline:

az containerapp secret list \
  --name my-nextjs-app \
  --resource-group my-resource-group \
  --output table
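
One detail worth checking before the pipeline runs: as I understand the constraint, Container App secret names must use only lowercase letters, digits, and hyphens, which is why the secret is cosmos-db-key while the env var it feeds is COSMOS_DB_KEY. A pure-shell pre-flight check:

```shell
# Pre-flight check: Container Apps secret names should contain only
# lowercase letters, digits, and hyphens (env var names stay SCREAMING_SNAKE).
valid_secret_name() {
  case "$1" in
    *[!a-z0-9-]*) return 1 ;;  # anything outside [a-z0-9-] is rejected
    *)            return 0 ;;
  esac
}

valid_secret_name "cosmos-db-key" && echo "ok: cosmos-db-key"
valid_secret_name "COSMOS_DB_KEY" || echo "invalid: COSMOS_DB_KEY"
```

Running this over the names you plan to pass to az containerapp secret set is cheaper than debugging a failed revision later.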

Step 5: Configure GitHub Actions OIDC

I use OIDC so I do not keep long-lived Azure credentials in GitHub.

Create service principal

az ad sp create-for-rbac \
  --name "my-nextjs-app-deploy" \
  --role Contributor \
  --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-resource-group \
  --json-auth

I keep clientId, tenantId, and subscriptionId from this output.

Add federated credential

az ad app federated-credential create \
  --id <CLIENT_ID> \
  --parameters '{
    "name": "github-actions",
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:<your-github-org>/<your-repo>:ref:refs/heads/main",
    "audiences": ["api://AzureADTokenExchange"]
  }'
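
The subject claim has to match what GitHub's OIDC token presents, character for character, or the login step fails during token exchange. A small sketch of how the branch-deploy subject is assembled (ORG, REPO, and BRANCH are placeholders):

```shell
# Build the federated credential subject for a branch deploy.
# ORG/REPO/BRANCH are placeholders; other triggers use different subjects
# (e.g. pull requests use repo:<org>/<repo>:pull_request).
ORG="my-github-org"
REPO="my-repo"
BRANCH="main"
SUBJECT="repo:${ORG}/${REPO}:ref:refs/heads/${BRANCH}"
echo "$SUBJECT"  # repo:my-github-org/my-repo:ref:refs/heads/main
```

If deployments run from a branch other than main, the federated credential needs a subject for that branch too.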

Add repository secrets

  • AZURE_CLIENT_ID: service principal client ID
  • AZURE_TENANT_ID: Azure tenant ID
  • AZURE_SUBSCRIPTION_ID: Azure subscription ID

Step 6: Use a two-phase deployment workflow

The most stable workflow I found is:

  1. Build and push the image with explicit build args for NEXT_PUBLIC_*
  2. Deploy the image to the Container App
  3. Apply runtime environment variables with az containerapp update --set-env-vars

name: Deploy to Azure Container Apps

on:
  push:
    branches:
      - main
  workflow_dispatch:

permissions:
  id-token: write
  contents: read

env:
  ACR_NAME: mycontainerregistry
  RESOURCE_GROUP: my-resource-group
  CONTAINER_APP_NAME: my-nextjs-app
  CONTAINER_APP_ENV: my-app-env
  LOCATION: australiaeast

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Log in to Azure using OIDC
        uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Log in to Azure Container Registry
        run: az acr login --name ${{ env.ACR_NAME }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ env.ACR_NAME }}.azurecr.io/${{ env.CONTAINER_APP_NAME }}:${{ github.sha }}
            ${{ env.ACR_NAME }}.azurecr.io/${{ env.CONTAINER_APP_NAME }}:latest
          build-args: |
            NEXT_PUBLIC_AUTH0_DOMAIN=my-tenant.au.auth0.com
            NEXT_PUBLIC_AUTH0_CLIENT_ID=my-auth0-client-id
            NEXT_PUBLIC_BASE_URL=https://my-custom-domain.com

      - name: Deploy image to Azure Container Apps
        uses: azure/container-apps-deploy-action@v2
        with:
          containerAppName: ${{ env.CONTAINER_APP_NAME }}
          resourceGroup: ${{ env.RESOURCE_GROUP }}
          imageToDeploy: ${{ env.ACR_NAME }}.azurecr.io/${{ env.CONTAINER_APP_NAME }}:${{ github.sha }}

      # I use Azure CLI here because the action can mis-handle multiline env var blocks.
      - name: Configure environment variables
        run: |
          az containerapp update \
            --name ${{ env.CONTAINER_APP_NAME }} \
            --resource-group ${{ env.RESOURCE_GROUP }} \
            --set-env-vars \
              COSMOS_DB_ENDPOINT=secretref:cosmos-db-endpoint \
              COSMOS_DB_KEY=secretref:cosmos-db-key \
              COSMOS_DB_DATABASE_NAME=my-database \
              STRIPE_SECRET_KEY=secretref:stripe-secret-key \
              STRIPE_PUBLISHABLE_KEY=secretref:stripe-publishable-key \
              OPENAI_API_KEY=secretref:openai-api-key \
              NODE_ENV=production

Things I got wrong first

environmentVariables in deploy action

I lost time on this one. The deploy action documentation says environmentVariables takes space-separated key=value entries. In practice, when I passed them as a multiline YAML block, only the first line got applied and the rest were silently ignored.

I stopped using that parameter and switched to an explicit CLI step.

- name: Configure environment variables
  run: |
    az containerapp update \
      --name my-app \
      --resource-group my-rg \
      --set-env-vars \
        COSMOS_DB_ENDPOINT=secretref:cosmos-db-endpoint \
        COSMOS_DB_KEY=secretref:cosmos-db-key \
        STRIPE_SECRET_KEY=secretref:stripe-secret-key
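
The underlying difference is easy to demonstrate in plain shell, and it is my best explanation for why only the first line survived: space-separated key=value tokens arrive as separate arguments, while a multiline block passed as one quoted string arrives as a single argument.

```shell
# Space-separated tokens: each KEY=VALUE is its own argument.
set -- COSMOS_DB_ENDPOINT=a COSMOS_DB_KEY=b NODE_ENV=production
SEPARATE_COUNT=$#

# A multiline block passed as one quoted string: a single argument.
BLOCK="COSMOS_DB_ENDPOINT=a
COSMOS_DB_KEY=b
NODE_ENV=production"
set -- "$BLOCK"
MULTILINE_COUNT=$#

echo "separate: $SEPARATE_COUNT, multiline: $MULTILINE_COUNT"  # separate: 3, multiline: 1
```

The explicit CLI step sidesteps the whole question because the shell splits the arguments before az ever sees them.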

NEXT_PUBLIC_* values are compile-time, not runtime

Next.js inlines NEXT_PUBLIC_* values during next build. Setting them later in Container Apps does nothing for client-side code.

This is the rule I follow now:

  • pass NEXT_PUBLIC_* via Docker build args
  • keep sensitive server-side values as runtime env vars via secretref:
  • never treat client-exposed values like runtime secrets
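
A toy simulation makes the rule concrete. This is not what the Next.js compiler literally does, but the effect on client bundles is the same: the reference is replaced by the literal value at build time, so later runtime changes cannot reach it.

```shell
# Toy model of NEXT_PUBLIC_* inlining: the "bundler" (a sed substitution here)
# bakes the literal value in at build time.
NEXT_PUBLIC_BASE_URL="https://my-custom-domain.com"
SRC='fetch(process.env.NEXT_PUBLIC_BASE_URL + "/api/health")'
BUNDLED=$(printf '%s\n' "$SRC" \
  | sed "s|process.env.NEXT_PUBLIC_BASE_URL|\"$NEXT_PUBLIC_BASE_URL\"|")
echo "$BUNDLED"  # fetch("https://my-custom-domain.com" + "/api/health")

# Changing the variable now does nothing to the "bundle".
NEXT_PUBLIC_BASE_URL="https://other-domain.com"
echo "$BUNDLED"  # still the original inlined value
```

This is exactly why the build args have to be correct at image-build time: setting NEXT_PUBLIC_BASE_URL on the Container App afterwards changes the server process environment, not the already-compiled client code.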

How I verify a deployment

I always inspect effective container env values after each deployment.

az containerapp show \
  --name my-nextjs-app \
  --resource-group my-resource-group \
  --query "properties.template.containers[0].env" \
  --output json

I expect entries shaped like:

{ "name": "COSMOS_DB_KEY", "secretRef": "cosmos-db-key" }
{ "name": "NODE_ENV", "value": "production" }

Then I validate revision health:

az containerapp revision list \
  --name my-nextjs-app \
  --resource-group my-resource-group \
  --query "[].{name:name, active:properties.active, state:properties.runningState, healthy:properties.healthState, traffic:properties.trafficWeight}" \
  --output table

If an env var is missing, I rerun the az containerapp update --set-env-vars command first. In my experience that fixes more failed releases than rebuilding the image.
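
These checks are scriptable. A sketch of the presence check I run, where ENV_JSON stands in for the az containerapp show output above:

```shell
# Fail fast when an expected env var name is missing from the deployed template.
# ENV_JSON stands in for: az containerapp show ... --query "...env" --output json
ENV_JSON='[
  { "name": "COSMOS_DB_KEY", "secretRef": "cosmos-db-key" },
  { "name": "NODE_ENV", "value": "production" }
]'

MISSING=0
for required in COSMOS_DB_KEY NODE_ENV OPENAI_API_KEY; do
  if printf '%s' "$ENV_JSON" | grep -q "\"name\": \"$required\""; then
    echo "present: $required"
  else
    echo "MISSING: $required"
    MISSING=1
  fi
done
# In a pipeline step, exit "$MISSING" here to fail the job on a gap.
```

Wired into the workflow as a final step, this turns a silently broken release into a red build.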

My deployment checklist

  • output: "standalone" in next.config.ts
  • Dockerfile copies only public, .next/standalone, and .next/static
  • NEXT_PUBLIC_* values passed as Docker build args
  • Container App secrets created before first pipeline run
  • OIDC configured for GitHub Actions
  • Image deploy and env var configuration split into separate steps
  • Runtime env vars applied with az containerapp update --set-env-vars
  • Post-deploy env and revision checks run with Azure CLI