Azure Landing Zones
What are Azure Landing Zones?
When I'm starting an enterprise Azure engagement, one of the first conversations is always about landing zones. Before any workload gets deployed, I want the foundation in place — the governance structure, network topology, identity configuration, and logging plumbing that every team will inherit.
A landing zone is that foundation. Think of it as utilities and infrastructure for a building before tenants move in. The tenants (application teams) shouldn't have to wire their own electricity; they should be able to move in and start building.
Core Components:
┌─────────────────────────────────────────────────────────────┐
│ Azure Landing Zone Architecture │
└─────────────────────────────────────────────────────────────┘
Management Group Hierarchy
┌────────────────────────────────────────────────────────┐
│ Tenant Root Group │
│ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Platform Landing Zone │ │
│ │ ├── Management (Logs, Monitoring, Backup) │ │
│ │ ├── Connectivity (Hubs, VPN, ExpressRoute) │ │
│ │ └── Identity (AD DS, Bastion) │ │
│ └──────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Application Landing Zones │ │
│ │ ├── Corp (On-prem connectivity required) │ │
│ │ ├── Online (Internet-facing workloads) │ │
│ │ └── Sandbox (Isolated experimentation) │ │
│ └──────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────┘
Two Types:
| Type | Purpose | Examples |
|---|---|---|
| Platform Landing Zone | Shared services for entire organization | Azure Monitor, VPN Gateway, Azure AD |
| Application Landing Zone | Workload-specific environment | Web app resources, database, storage |
Why Use Azure Landing Zones?
I keep running into the same pattern in enterprise Azure engagements: teams race to get workloads deployed, skip the foundation, and then spend months untangling a mess of inconsistent configurations, missing governance, and sprawling subscriptions. Landing zones are the answer to that — not as overhead on top of actual work, but as the thing that makes actual work sustainable at scale.
The clearest way to see this is the split between what I'm managing with and without them:
Without Landing Zones
❌ Each team builds infrastructure differently
❌ Security configurations inconsistent
❌ No centralized logging or monitoring
❌ Compliance violations discovered late
❌ High operational overhead
❌ Shadow IT and sprawl
With Landing Zones
✅ Consistent Security: Azure Policy enforces standards automatically
✅ Faster Onboarding: Teams get pre-configured subscriptions in minutes
✅ Built-in Governance: Compliance from day one
✅ Centralized Operations: Single pane of glass for monitoring
✅ Cost Controls: Budget alerts and tagging enforced
✅ Hybrid Connectivity: Hub-and-spoke networking ready
When to Use Landing Zones
✅ Use Landing Zones When:
- Multi-team organization: 3+ teams deploying to Azure
- Regulated industry: Need compliance (HIPAA, PCI-DSS, FedRAMP)
- Hybrid cloud: Connecting on-premises to Azure
- Growth trajectory: Expecting rapid Azure adoption
- Governance requirements: Need centralized policy enforcement
- Enterprise scale: 10+ subscriptions planned
⚠️ Consider Alternatives When:
- Single small app: One team, one subscription
- Proof of concept: Short-term experimentation
- Greenfield startup: No existing infrastructure
- Limited Azure use: Not planning to expand
Platform Landing Zone
The platform landing zone provides shared services for the entire organization.
Core Components
1. Management
Purpose: Centralized monitoring, logging, and operational tooling
Key Resources:
# Log Analytics Workspace
resource "azurerm_log_analytics_workspace" "platform" {
  name                = "law-platform-prod"
  location            = "eastus"
  resource_group_name = "rg-platform-management-prod"
  sku                 = "PerGB2018"
  retention_in_days   = 90

  tags = {
    Environment = "Production"
    ManagedBy   = "Platform Team"
  }
}

# Azure Monitor itself needs no explicit resource; alerts and workbooks are configured separately

# Automation Account
resource "azurerm_automation_account" "platform" {
  name                = "aa-platform-prod"
  location            = "eastus"
  resource_group_name = "rg-platform-management-prod"
  sku_name            = "Basic"
}
What it Provides:
- Centralized logging (all subscriptions send logs here)
- Azure Monitor workbooks and dashboards
- Update Management for VMs
- Backup vaults
- Security Center configuration
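To make "all subscriptions send logs here" concrete, here's a minimal sketch of wiring a workload subscription's Activity Log into the central workspace. The subscription ID is a placeholder, and the workspace reference assumes the `azurerm_log_analytics_workspace.platform` resource defined above.

```hcl
# Stream a workload subscription's Activity Log to the platform workspace.
# The target subscription ID below is a placeholder.
resource "azurerm_monitor_diagnostic_setting" "activity_to_platform" {
  name                       = "diag-activity-to-platform"
  target_resource_id         = "/subscriptions/00000000-0000-0000-0000-000000000000"
  log_analytics_workspace_id = azurerm_log_analytics_workspace.platform.id

  enabled_log {
    category = "Administrative"
  }

  enabled_log {
    category = "Security"
  }
}
```

In practice you would apply this per subscription, typically through a policy with a deployIfNotExists effect rather than one resource at a time.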
2. Connectivity
Purpose: Hub-and-spoke network topology for hybrid connectivity
Hub VNet Architecture:
┌─────────────────────────────────────────────────────┐
│ Hub VNet (10.0.0.0/16) │
│ │
│ ┌────────────────┐ ┌────────────────┐ │
│ │ VPN Gateway │ │ ExpressRoute │ │
│ │ Subnet │ │ Subnet │ │
│ │ 10.0.1.0/24 │ │ 10.0.2.0/24 │ │
│ └────────────────┘ └────────────────┘ │
│ │
│ ┌────────────────────────────────────┐ │
│ │ Azure Firewall Subnet │ │
│ │ 10.0.3.0/26 (must be /26) │ │
│ └────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────┐ │
│ │ Azure Bastion Subnet │ │
│ │ 10.0.4.0/26 (must be /26) │ │
│ └────────────────────────────────────┘ │
└─────────────────────────────────────────────────────┘
│ │ │
│ │ │
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Spoke 1 │ │ Spoke 2 │ │ Spoke N │
│ VNet │ │ VNet │ │ VNet │
└─────────┘ └─────────┘ └─────────┘
Terraform Example:
# Hub VNet
resource "azurerm_virtual_network" "hub" {
  name                = "vnet-hub-prod"
  address_space       = ["10.0.0.0/16"]
  location            = "eastus"
  resource_group_name = "rg-platform-connectivity-prod"
}

# Azure Firewall Subnet
resource "azurerm_subnet" "firewall" {
  name                 = "AzureFirewallSubnet" # Must be named exactly AzureFirewallSubnet
  resource_group_name  = azurerm_virtual_network.hub.resource_group_name
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.0.3.0/26"]
}
# Public IP for the firewall (referenced by the ip_configuration below)
resource "azurerm_public_ip" "firewall" {
  name                = "pip-fw-hub-prod"
  location            = "eastus"
  resource_group_name = "rg-platform-connectivity-prod"
  allocation_method   = "Static"
  sku                 = "Standard" # Azure Firewall requires a Standard SKU static IP
}

# Azure Firewall
resource "azurerm_firewall" "hub" {
  name                = "fw-hub-prod"
  location            = "eastus"
  resource_group_name = "rg-platform-connectivity-prod"
  sku_name            = "AZFW_VNet"
  sku_tier            = "Standard"

  ip_configuration {
    name                 = "configuration"
    subnet_id            = azurerm_subnet.firewall.id
    public_ip_address_id = azurerm_public_ip.firewall.id
  }
}
What it Provides:
- Centralized internet egress via Azure Firewall
- VPN or ExpressRoute for on-premises connectivity
- Azure Bastion for secure VM access
- Network security managed centrally
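Centralized egress only works if spokes actually route through the firewall. A sketch of the user-defined route that forces spoke traffic through the hub; the spoke subnet reference (`azurerm_subnet.spoke_workload`) is a placeholder for whatever workload subnet exists in the spoke:

```hcl
# Default route: send all spoke egress to the Azure Firewall's private IP
resource "azurerm_route_table" "spoke_egress" {
  name                = "rt-spoke-egress-prod"
  location            = "eastus"
  resource_group_name = "rg-platform-connectivity-prod"

  route {
    name                   = "default-via-firewall"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = azurerm_firewall.hub.ip_configuration[0].private_ip_address
  }
}

# Attach the route table to a spoke workload subnet (placeholder reference)
resource "azurerm_subnet_route_table_association" "spoke_egress" {
  subnet_id      = azurerm_subnet.spoke_workload.id
  route_table_id = azurerm_route_table.spoke_egress.id
}
```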
3. Identity
Purpose: Centralized identity and access management
Key Resources:
- Azure AD Domain Services: Managed AD for legacy apps
- Jump Box / Bastion: Secure access to Azure resources
- Privileged Identity Management (PIM): Just-in-time admin access
Example:
# Azure AD Domain Services
resource "azurerm_active_directory_domain_service" "platform" {
  name                  = "platform.example.com"
  location              = "eastus"
  resource_group_name   = "rg-platform-identity-prod"
  domain_name           = "platform.example.com"
  sku                   = "Standard"
  filtered_sync_enabled = false

  initial_replica_set {
    subnet_id = azurerm_subnet.adds.id
  }
}
Application Landing Zones
Application landing zones are workload-specific subscriptions for teams to deploy applications.
Types of Application Landing Zones
1. Corp-Connected Landing Zones
For: Applications requiring on-premises connectivity
Characteristics:
- VNet peered to hub VNet
- Access to ExpressRoute/VPN
- Subject to corporate network policies
- Routed through Azure Firewall
Network Topology:
# Spoke VNet (Corp-connected)
resource "azurerm_virtual_network" "spoke" {
  name                = "vnet-app1-prod"
  address_space       = ["10.1.0.0/16"]
  location            = "eastus"
  resource_group_name = "rg-app1-prod"
}

# VNet Peering to Hub
resource "azurerm_virtual_network_peering" "spoke_to_hub" {
  name                      = "peer-app1-to-hub"
  resource_group_name       = azurerm_virtual_network.spoke.resource_group_name
  virtual_network_name      = azurerm_virtual_network.spoke.name
  remote_virtual_network_id = data.azurerm_virtual_network.hub.id
  allow_forwarded_traffic   = true
  allow_gateway_transit     = false
  use_remote_gateways       = true # Use hub's VPN gateway
}
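One detail worth calling out: peering is asymmetric, and `use_remote_gateways` on the spoke only takes effect if the hub side of the peering enables gateway transit. A sketch of the matching hub-side peering, assuming the hub VNet is read via a data source:

```hcl
# Hub side of the peering; gateway transit must be enabled here for the
# spoke's use_remote_gateways = true to work
resource "azurerm_virtual_network_peering" "hub_to_spoke" {
  name                      = "peer-hub-to-app1"
  resource_group_name       = data.azurerm_virtual_network.hub.resource_group_name
  virtual_network_name      = data.azurerm_virtual_network.hub.name
  remote_virtual_network_id = azurerm_virtual_network.spoke.id
  allow_forwarded_traffic   = true
  allow_gateway_transit     = true
  use_remote_gateways       = false
}
```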
Use Cases:
- Enterprise web applications
- Databases needing on-prem data
- Line-of-business applications
- Lift-and-shift migrations
2. Online Landing Zones
For: Internet-facing applications with no on-premises connectivity
Characteristics:
- No VNet peering to hub
- Direct internet access
- More relaxed network policies
- Ideal for SaaS applications
Example:
# Standalone VNet (Online)
resource "azurerm_virtual_network" "online" {
  name                = "vnet-webapp-prod"
  address_space       = ["10.10.0.0/16"]
  location            = "eastus"
  resource_group_name = "rg-webapp-prod"
}
# No peering to hub
Use Cases:
- Public-facing websites
- Mobile app backends
- SaaS products
- APIs for external customers
3. Sandbox Landing Zones
For: Experimentation, training, and development
Characteristics:
- Isolated from production
- Relaxed governance (fewer policies enforced)
- Automated cleanup of stale resources (e.g., older than 30 days)
- Limited Azure quotas
Policy Example (note that Azure Policy has no "delete" effect; a policy can only flag stale resources with audit, and the actual cleanup needs an Automation runbook or similar):
{
  "properties": {
    "displayName": "Audit resources older than 30 days",
    "policyRule": {
      "if": {
        "allOf": [
          {
            "field": "type",
            "equals": "Microsoft.Compute/virtualMachines"
          },
          {
            "field": "tags['CreatedDate']",
            "less": "[addDays(utcNow(), -30)]"
          }
        ]
      },
      "then": {
        "effect": "audit"
      }
    }
  }
}
Use Cases:
- Developer learning
- Proof of concepts
- Security testing
- Training environments
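Alongside quotas, I usually put a hard budget alert on sandbox subscriptions. A hedged sketch; the amount, dates, subscription ID, and contact address are all placeholders:

```hcl
# Monthly budget alert for a sandbox subscription; values are placeholders
resource "azurerm_consumption_budget_subscription" "sandbox" {
  name            = "budget-sandbox"
  subscription_id = "/subscriptions/00000000-0000-0000-0000-000000000000"
  amount          = 500
  time_grain      = "Monthly"

  time_period {
    start_date = "2025-01-01T00:00:00Z"
  }

  notification {
    enabled        = true
    threshold      = 80 # alert at 80% of budget
    operator       = "GreaterThan"
    contact_emails = ["platform-team@example.com"]
  }
}
```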
Application Landing Zone Architecture Patterns
Pattern 1: App Service Environment
Application Landing Zone (Subscription)
│
├── Resource Group: Networking
│ ├── VNet (10.1.0.0/16)
│ ├── App Service Subnet (delegated)
│ └── Private Endpoints Subnet
│
├── Resource Group: Compute
│ ├── App Service Plan (Isolated SKU)
│ ├── App Service Environment v3
│ └── Function Apps
│
├── Resource Group: Data
│ ├── SQL Database (Private Endpoint)
│ └── Storage Account (Private Endpoint)
│
└── Resource Group: Security
├── Key Vault (Private Endpoint)
└── Application Gateway (WAF)
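The "App Service Subnet (delegated)" line in this pattern refers to subnet delegation, which hands the subnet over to App Service for regional VNet integration. A sketch; names and address ranges are illustrative:

```hcl
# Subnet delegated to App Service for regional VNet integration
resource "azurerm_subnet" "app_service" {
  name                 = "snet-appservice"
  resource_group_name  = "rg-app1-prod"
  virtual_network_name = "vnet-app1-prod"
  address_prefixes     = ["10.1.1.0/24"]

  delegation {
    name = "appservice"

    service_delegation {
      name    = "Microsoft.Web/serverFarms"
      actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
    }
  }
}
```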
Pattern 2: Azure Kubernetes Service (AKS)
Application Landing Zone (Subscription)
│
├── Resource Group: Networking
│ ├── VNet (10.2.0.0/16)
│ ├── AKS Subnet (/22)
│ ├── Application Gateway Subnet
│ └── Private Endpoints Subnet
│
├── Resource Group: Compute
│ └── AKS Cluster (Private)
│ ├── System Node Pool
│ └── User Node Pool
│
├── Resource Group: Data
│ ├── Azure Database for PostgreSQL
│ └── Azure Cache for Redis
│
└── Resource Group: Supporting Services
├── Container Registry (Private Endpoint)
├── Key Vault
└── Log Analytics Workspace
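The private AKS cluster at the heart of this pattern can be sketched roughly as follows; the node size, names, and the subnet reference are assumptions:

```hcl
# Private AKS cluster: API server reachable only over a private endpoint
resource "azurerm_kubernetes_cluster" "app" {
  name                    = "aks-app-prod"
  location                = "eastus"
  resource_group_name     = "rg-app-compute-prod"
  dns_prefix              = "aks-app-prod"
  private_cluster_enabled = true

  default_node_pool {
    name           = "system"
    node_count     = 3
    vm_size        = "Standard_D4s_v5"
    vnet_subnet_id = azurerm_subnet.aks.id # placeholder subnet reference
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin = "azure" # Azure CNI, needed for subnet-level IP assignment
  }
}
```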
Management Group Hierarchy
Organize subscriptions with management groups to apply governance at scale:
Tenant Root Group
│
├── Platform
│ ├── Management (1 subscription)
│ ├── Connectivity (1 subscription)
│ └── Identity (1 subscription)
│
├── Landing Zones
│ ├── Corp
│ │ ├── Finance Apps (3 subscriptions)
│ │ └── HR Apps (2 subscriptions)
│ │
│ ├── Online
│ │ ├── Marketing Website (1 subscription)
│ │ └── Customer Portal (2 subscriptions)
│ │
│ └── Sandbox
│ └── Dev/Test (5 subscriptions)
│
└── Decommissioned
└── Legacy Apps (archived)
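The hierarchy above maps directly onto azurerm_management_group resources. A trimmed sketch of the top levels; display names follow the diagram and subscription assignments are omitted:

```hcl
resource "azurerm_management_group" "platform" {
  display_name = "Platform"
}

resource "azurerm_management_group" "landing_zones" {
  display_name = "Landing Zones"
}

# Children of the Landing Zones group
resource "azurerm_management_group" "corp" {
  display_name               = "Corp"
  parent_management_group_id = azurerm_management_group.landing_zones.id
}

resource "azurerm_management_group" "online" {
  display_name               = "Online"
  parent_management_group_id = azurerm_management_group.landing_zones.id
}

resource "azurerm_management_group" "sandbox" {
  display_name               = "Sandbox"
  parent_management_group_id = azurerm_management_group.landing_zones.id
}
```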
Apply Azure Policy at Management Group Level:
# Policy: Require specific tags
resource "azurerm_management_group_policy_assignment" "require_tags" {
  name                 = "require-tags"
  management_group_id  = azurerm_management_group.landing_zones.id
  policy_definition_id = azurerm_policy_definition.require_tags.id

  parameters = jsonencode({
    tagNames = {
      value = ["Environment", "Owner", "CostCenter"]
    }
  })
}
Best Practices
These are the principles I apply on every landing zone engagement. Some are non-negotiable; others are defaults I adjust based on the organization's size and maturity.
1. Subscription Democratization
My rule: treat subscriptions as units of management and scale, not scarcity.
✅ Do:
- Give each application team their own subscription
- Use subscriptions for blast radius isolation
- Automate subscription provisioning (subscription vending)
❌ Don't:
- Share subscriptions across multiple teams
- Manually create subscriptions
- Use subscriptions as cost centers (use tags instead)
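Subscription vending can be automated with the azurerm_subscription resource, given billing account permissions. A hedged sketch; the billing scope is a placeholder you would look up for your own EA or MCA account, and the management group reference is illustrative:

```hcl
# Create a subscription programmatically; the billing scope is a placeholder
resource "azurerm_subscription" "app_team" {
  subscription_name = "sub-app1-prod"
  billing_scope_id  = "/providers/Microsoft.Billing/billingAccounts/0000000/enrollmentAccounts/000000"
  workload          = "Production"
}

# Place the new subscription into the right management group
resource "azurerm_management_group_subscription_association" "app_team" {
  management_group_id = azurerm_management_group.corp.id
  subscription_id     = "/subscriptions/${azurerm_subscription.app_team.subscription_id}"
}
```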
2. Policy-Driven Governance
My rule: enforce compliance automatically, not through manual checklists.
# Prevent public IP creation in production
resource "azurerm_policy_definition" "deny_public_ip" {
  name         = "deny-public-ip-creation"
  policy_type  = "Custom"
  mode         = "All"
  display_name = "Deny Public IP Creation"

  policy_rule = jsonencode({
    if = {
      field  = "type"
      equals = "Microsoft.Network/publicIPAddresses"
    }
    then = {
      effect = "deny"
    }
  })
}
3. Hub-and-Spoke Networking
Do:
- Centralize egress through Azure Firewall
- Use VNet peering for spoke-to-hub connectivity
- Route all traffic through hub for inspection
Don't:
- Create individual VPN gateways per subscription
- Allow direct internet access from spoke VNets
- Peer spoke-to-spoke directly (use hub routing)
4. Separation of Duties
| Role | Responsibilities | Management Group Scope |
|---|---|---|
| Platform Team | Manage hub, connectivity, policies | Platform |
| App Team | Deploy applications, manage resources | App Landing Zone |
| Security Team | Define policies, audit compliance | Tenant Root |
| Finance Team | Cost management, budgets | All subscriptions |
5. Tagging Strategy
Enforce tags via policy:
# Policy: Inherit tags from resource group
resource "azurerm_policy_definition" "inherit_tags" {
  name         = "inherit-tags-from-rg"
  policy_type  = "Custom"
  mode         = "Indexed"
  display_name = "Inherit Tags from Resource Group"

  # Declare the tagName parameter the rule references below
  parameters = jsonencode({
    tagName = {
      type = "String"
      metadata = {
        displayName = "Tag Name"
      }
    }
  })

  policy_rule = jsonencode({
    if = {
      field     = "[concat('tags[', parameters('tagName'), ']')]"
      notEquals = "[resourceGroup().tags[parameters('tagName')]]"
    }
    then = {
      effect = "modify"
      details = {
        # The modify effect requires a role for remediation tasks
        roleDefinitionIds = [
          "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c" # Contributor
        ]
        operations = [{
          operation = "addOrReplace"
          field     = "[concat('tags[', parameters('tagName'), ']')]"
          value     = "[resourceGroup().tags[parameters('tagName')]]"
        }]
      }
    }
  })
}
Required tags:
- Environment: dev, staging, prod
- Owner: Team email
- CostCenter: Finance code
- Application: App name
- DataClassification: Public, Internal, Confidential
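Inheriting tags only helps if the resource group had them in the first place; the usual complement is a deny policy for missing tags. A minimal sketch for a single hardcoded tag (a parameterized version covering the full required list is more realistic):

```hcl
# Deny creation of resources that lack an Environment tag
resource "azurerm_policy_definition" "require_environment_tag" {
  name         = "require-environment-tag"
  policy_type  = "Custom"
  mode         = "Indexed"
  display_name = "Require an Environment tag"

  policy_rule = jsonencode({
    if = {
      field  = "tags['Environment']"
      exists = "false"
    }
    then = {
      effect = "deny"
    }
  })
}
```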
Deployment Approaches
Option 1: Azure Landing Zone Accelerator (Portal)
Best for: Getting started quickly, small organizations
- Go to: https://aka.ms/alz/portal
- Select deployment options:
- Management groups
- Hub networking
- Identity
- Click Deploy
Pros: Fast, no code
Cons: Limited customization
Option 2: Terraform Module
Best for: Infrastructure as Code, customization, automation
# Use CAF Landing Zones Terraform Module
module "enterprise_scale" {
  source  = "Azure/caf-enterprise-scale/azurerm"
  version = "~> 6.0"

  default_location = "eastus"
  root_parent_id   = data.azurerm_client_config.current.tenant_id

  deploy_management_resources   = true
  deploy_connectivity_resources = true
  deploy_identity_resources     = false

  configure_management_resources = {
    location = "eastus"
    settings = {
      log_analytics = {
        retention_in_days = 90
      }
    }
  }

  configure_connectivity_resources = {
    location = "eastus"
    settings = {
      hub_networks = [{
        enabled = true
        config = {
          address_space                = ["10.0.0.0/16"]
          azure_firewall_enabled       = true
          azure_firewall_sku           = "Standard"
          vpn_gateway_enabled          = false
          expressroute_gateway_enabled = true
        }
      }]
    }
  }
}
Option 3: Bicep/ARM Templates
Best for: Azure-native approach, Azure DevOps integration
# Deploy using Azure Verified Modules
az deployment tenant create \
--name "landing-zone-deployment" \
--location "eastus" \
--template-file "main.bicep" \
--parameters "@parameters.json"
Common Pitfalls
These are the mistakes I see most often — and in a few cases, mistakes I've made myself.
1. Skipping landing zones for speed
The reasoning is always "we'll add governance later." It never works out that way. By the time governance becomes a priority, the estate is already inconsistent, teams have built dependencies on non-standard configurations, and any policy enforcement triggers hundreds of violations. Starting with even a lightweight landing zone is always the right call — it's far cheaper to enforce standards before workloads arrive than after.
2. Over-engineering too early
The opposite problem: building a full enterprise-scale landing zone for a two-person team or a pilot. It slows everything down and creates a maintenance burden that makes the platform team the bottleneck. I match the complexity to the actual organization size and plan for scale incrementally.
3. Hard-coding the primary region everywhere
I've inherited environments where location = "eastus" was duplicated across hundreds of Terraform files. Adding a secondary region for disaster recovery became a weeks-long refactoring project. The fix is trivial upfront:
variable "primary_location" {
  default = "eastus"
}

variable "secondary_location" {
  default = "westus"
}
4. Tight coupling between platform and application layers
When application teams build direct dependencies on specific platform resources (particular resource group names, hardcoded VNet IDs), platform changes break applications. I define clear interface contracts with the application teams — things like "the hub VNet CIDR won't change" — and enforce loose coupling on both sides.
5. Ignoring subscription limits
Azure has hard limits (for example, 500 VNets per subscription). I've seen environments hit these limits unexpectedly mid-project when subscription topology wasn't planned early. I design the subscription structure upfront and monitor quotas as the estate grows.