Azure DNS Private Resolver

Before Private Resolver existed, the way I handled hybrid DNS in Azure was a pair of Windows Server VMs acting as DNS forwarders in the hub VNet, managed like any other servers. It worked, but it was IaaS overhead in a place where I didn't want it — update cycles, availability concerns, NSG rules, and a configuration that drifted whenever someone touched it manually.

Azure DNS Private Resolver replaces that pattern. It's a managed service that handles inbound DNS queries from on-premises into Azure, and outbound forwarding from Azure to on-premises or custom DNS namespaces. No VMs, no patching, no managing availability sets.

When I reach for it

I deploy Private Resolver when the hub needs to resolve names that don't naturally resolve in either direction:

  • On-premises clients querying Azure private DNS zones — Private DNS zones (like privatelink.blob.core.windows.net) are only resolvable from within Azure VNets by default. On-premises machines can't reach them without a resolver endpoint sitting in Azure to answer those queries.
  • Azure resources forwarding queries to on-premises DNS — If there's internal DNS hosted on-premises (Active Directory zones, legacy application hostnames), Azure workloads need an outbound forwarding path to get there.
  • Centralising resolution across spokes — Rather than configuring DNS servers in every spoke VNet separately, I link them to the hub's Private Resolver and manage all the forwarding rules in one place.

If the environment is purely Azure-native with no on-premises connectivity and no private endpoints, I don't deploy this. The need becomes obvious quickly once Private Link endpoints or an ExpressRoute/VPN connection enters the picture.

How it works

Private Resolver has two endpoint types, and they're worth keeping separate in your mental model because they do entirely different jobs:

Inbound endpoints receive DNS queries from outside Azure and forward them into Azure DNS. I use these when on-premises clients need to resolve Azure private DNS zones. The inbound endpoint gets a private IP in the hub VNet, and that IP becomes the forwarding target on the on-premises DNS servers.

Outbound endpoints forward DNS queries originating inside Azure to external DNS destinations. I use these when Azure workloads need to resolve on-premises hostnames. Forwarding rules on the outbound endpoint define which domains go to which DNS server.

Both endpoint types need dedicated subnets in the hub VNet — at least a /28 each, delegated to Microsoft.Network/dnsResolvers and shared with nothing else.
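As a sketch, here is what the inbound subnet looks like in Terraform. The name, resource group, and address prefix are placeholders, but the delegation is mandatory, and the outbound subnet needs its own subnet delegated the same way:

# Dedicated subnet for the inbound endpoint (the outbound endpoint
# needs a separate subnet with an identical delegation block)
resource "azurerm_subnet" "dns_inbound" {
  name                 = "DnsResolverInboundSubnet"
  resource_group_name  = "rg-platform-connectivity-prod"
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.0.1.0/28"] # placeholder range

  delegation {
    name = "dnsresolver"
    service_delegation {
      name = "Microsoft.Network/dnsResolvers"
    }
  }
}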

Where I put it

I always place Private Resolver in the connectivity hub. The reasoning is straightforward: the hub already contains ExpressRoute, VPN, the Azure Firewall, and the VNet links to all the private DNS zones. DNS resolution belongs with that infrastructure — not in individual application subscriptions where it becomes a configuration management problem.

The architecture I deploy:

Hub VNet
├── AzureFirewallSubnet (/26)
├── GatewaySubnet (/27)
├── DnsResolverInboundSubnet (/28) ← Inbound endpoint
└── DnsResolverOutboundSubnet (/28) ← Outbound endpoint

On-premises DNS servers
└── Conditional forwarder → Inbound endpoint IP
(for Azure private DNS zones)

Spoke VNets
└── DNS server setting → Azure-provided (168.63.129.16)
→ Resolution flows through hub via VNet link

Terraform setup

# DNS Private Resolver
resource "azurerm_private_dns_resolver" "hub" {
  name                = "dnsresolver-hub-prod"
  resource_group_name = "rg-platform-connectivity-prod"
  location            = "eastus"
  virtual_network_id  = azurerm_virtual_network.hub.id
}

# Inbound endpoint — on-premises DNS forwards to this IP
resource "azurerm_private_dns_resolver_inbound_endpoint" "hub" {
  name                    = "inbound-hub-prod"
  private_dns_resolver_id = azurerm_private_dns_resolver.hub.id
  location                = azurerm_private_dns_resolver.hub.location

  ip_configurations {
    private_ip_allocation_method = "Dynamic"
    subnet_id                    = azurerm_subnet.dns_inbound.id
  }
}

# Outbound endpoint — Azure forwards queries to on-premises DNS via this
resource "azurerm_private_dns_resolver_outbound_endpoint" "hub" {
  name                    = "outbound-hub-prod"
  private_dns_resolver_id = azurerm_private_dns_resolver.hub.id
  location                = azurerm_private_dns_resolver.hub.location
  subnet_id               = azurerm_subnet.dns_outbound.id
}

# DNS forwarding ruleset — defines which domains go where
resource "azurerm_private_dns_resolver_dns_forwarding_ruleset" "hub" {
  name                                       = "ruleset-hub-prod"
  resource_group_name                        = "rg-platform-connectivity-prod"
  location                                   = "eastus"
  private_dns_resolver_outbound_endpoint_ids = [azurerm_private_dns_resolver_outbound_endpoint.hub.id]
}

# Forwarding rule — send corp.example.com queries to on-premises DNS
resource "azurerm_private_dns_resolver_forwarding_rule" "corp" {
  name                      = "forward-corp"
  dns_forwarding_ruleset_id = azurerm_private_dns_resolver_dns_forwarding_ruleset.hub.id
  domain_name               = "corp.example.com."
  enabled                   = true

  target_dns_servers {
    ip_address = "10.0.0.4" # On-premises DNS server
    port       = 53
  }
}

# Link the ruleset to the hub VNet
resource "azurerm_private_dns_resolver_virtual_network_link" "hub" {
  name                      = "link-hub"
  dns_forwarding_ruleset_id = azurerm_private_dns_resolver_dns_forwarding_ruleset.hub.id
  virtual_network_id        = azurerm_virtual_network.hub.id
}

Private endpoint DNS — where the complexity concentrates

The reason I end up deploying Private Resolver on almost every engagement is private endpoints. When I deploy a private endpoint for a storage account, it gets an A record in privatelink.blob.core.windows.net. Azure VNets with a DNS zone link can resolve this automatically. On-premises machines cannot, because that zone doesn't exist in on-premises DNS.

The fix I use is configuring the on-premises DNS server with a conditional forwarder for privatelink.blob.core.windows.net (and every other relevant privatelink zone) pointing to the inbound endpoint IP. Queries for private endpoint names from on-premises then arrive at the resolver, get forwarded into Azure DNS, and return the correct private IP.
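The inbound endpoint can only answer for zones that are linked to the hub VNet, so each privatelink zone needs a VNet link there. A minimal sketch, assuming the zone lives in the connectivity resource group (names are placeholders):

resource "azurerm_private_dns_zone" "blob" {
  name                = "privatelink.blob.core.windows.net"
  resource_group_name = "rg-platform-connectivity-prod"
}

# Link the zone to the hub VNet so the inbound endpoint can resolve its records
resource "azurerm_private_dns_zone_virtual_network_link" "blob_hub" {
  name                  = "link-blob-hub"
  resource_group_name   = "rg-platform-connectivity-prod"
  private_dns_zone_name = azurerm_private_dns_zone.blob.name
  virtual_network_id    = azurerm_virtual_network.hub.id
}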

This makes Private Resolver a critical dependency for private endpoint access from on-premises. I design it for availability from the start — the service itself has zone-redundant inbound endpoints, but the subnet sizing and VNet link configuration still need to be right.

Things I've gotten wrong

Forgetting to link spoke VNets to the forwarding ruleset — The ruleset only applies to VNets explicitly linked to it. I've deployed the resolver, configured all the rules, and then wondered why spoke workloads couldn't resolve on-premises names — because I never linked the spoke VNets to the ruleset. Don't skip this step.
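The fix is one more resource per spoke. A sketch assuming the spoke VNets are held in a for_each map (azurerm_virtual_network.spoke here is a hypothetical map of spoke VNet resources):

# One ruleset link per spoke VNet — without these, spokes ignore the forwarding rules
resource "azurerm_private_dns_resolver_virtual_network_link" "spoke" {
  for_each                  = azurerm_virtual_network.spoke # hypothetical map of spoke VNets
  name                      = "link-${each.key}"
  dns_forwarding_ruleset_id = azurerm_private_dns_resolver_dns_forwarding_ruleset.hub.id
  virtual_network_id        = each.value.id
}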

Getting the subnet sizes wrong — Each endpoint type needs a dedicated subnet of at least /28, delegated to Microsoft.Network/dnsResolvers. That's non-negotiable and the subnet can't be shared with any other resources. I reserve these in the hub VNet address space before standing anything else up.

Configuring Azure and forgetting on-premises — The inbound endpoint only works if the on-premises DNS servers are actually configured to forward to it. I've seen setups where everything in Azure was correct but on-premises resolution still failed because nobody updated the DNS forwarder on the on-premises servers.