
Getting Started with Microsoft Sentinel

I've deployed Sentinel in environments ranging from a single Azure subscription to estates spanning six countries. The setup isn't the hard part — every engagement eventually hits the same questions: what do you connect first, how do you control ingestion costs before they spiral, and how do you get analytics rules producing useful signal without drowning the SOC in noise. This page is my answer to those questions.

How Microsoft Sentinel Works

┌──────────────┐
│ Data Sources │
└──────┬───────┘
       │ Data Connectors
       ▼
┌─────────────────────────┐
│ Log Analytics Workspace │  (Storage & Query Engine)
└───────────┬─────────────┘
            │
     ┌──────┴──────┐
     │             │
     ▼             ▼
┌──────────┐  ┌───────────┐
│Analytics │  │  Hunting  │
│  Rules   │  │  Queries  │
└────┬─────┘  └─────┬─────┘
     │              │
     ▼              ▼
┌────────────────────────┐
│       Incidents        │  (SOC Investigation)
└───────────┬────────────┘
            │
            ▼
     ┌────────────┐
     │ Playbooks  │  (Automated Response)
     └────────────┘

Prerequisites

  • Azure Subscription: Active subscription required
  • Permissions:
    • Microsoft Sentinel Contributor (to enable Sentinel)
    • Log Analytics Contributor (to create workspace)
    • Reader role on subscription (to connect data sources)
  • Log Analytics Workspace: Required foundation for Sentinel

Setting Up Microsoft Sentinel

Step 1: Create Log Analytics Workspace

Using Azure Portal

  1. Navigate to Log Analytics workspaces
  2. Click + Create
  3. Configure:
    • Subscription: Select subscription
    • Resource Group: rg-security-prod
    • Name: law-sentinel-prod
    • Region: East US (choose region close to data sources)
  4. Click Review + Create

Using Azure CLI

# Create resource group
az group create \
--name rg-security-prod \
--location eastus

# Create Log Analytics workspace
az monitor log-analytics workspace create \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod \
--location eastus \
--retention-time 90 # Days to retain data (30-730)

# Get workspace ID (needed later)
WORKSPACE_ID=$(az monitor log-analytics workspace show \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod \
--query id -o tsv)
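
When I script workspace creation, I validate the retention value before calling the CLI — Azure rejects anything outside 30-730 days for the PerGB2018 SKU, and it's cheaper to fail locally. A minimal sketch (the helper is mine, not part of any Azure SDK):

```python
def validate_retention_days(days: int) -> int:
    """Fail fast on retention values Azure will reject.

    Interactive retention for a PerGB2018 workspace must be 30-730 days;
    checking here avoids a round-trip to ARM just to get an error back.
    """
    if not 30 <= days <= 730:
        raise ValueError(f"retention must be 30-730 days, got {days}")
    return days

print(validate_retention_days(90))  # → 90
```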

Step 2: Enable Microsoft Sentinel

Using Azure Portal

  1. Search for Microsoft Sentinel
  2. Click + Create
  3. Select workspace: law-sentinel-prod
  4. Click Add

Using Azure CLI

# Sentinel commands live in a CLI extension
az extension add --name sentinel

# Enable Sentinel on the workspace
az sentinel onboarding-state create \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod \
--name default

Step 3: Connect Data Sources

Common data connectors:

# Enable Azure Activity connector (management plane logs)
az monitor diagnostic-settings create \
--name send-to-sentinel \
--resource /subscriptions/{subscription-id} \
--workspace $WORKSPACE_ID \
--logs '[{"category": "Administrative", "enabled": true}]'

# Enable Microsoft Defender for Cloud
# (Must be enabled in Defender for Cloud first)
# Portal: Sentinel > Data connectors > Microsoft Defender for Cloud > Open connector page > Connect

# Enable Microsoft Entra ID Sign-in Logs
# Portal: Entra ID > Diagnostic settings > Add diagnostic setting
# Send to: law-sentinel-prod
# Logs: SignInLogs, AuditLogs, NonInteractiveUserSignInLogs
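
When enabling several log categories at once, I generate the JSON array the `--logs` flag expects rather than hand-writing it. A small sketch (the helper function is mine):

```python
import json

def diagnostic_logs_payload(categories):
    """Build the JSON array for --logs: one enabled entry per category."""
    return json.dumps([{"category": c, "enabled": True} for c in categories])

# The Entra ID categories listed above
payload = diagnostic_logs_payload(
    ["SignInLogs", "AuditLogs", "NonInteractiveUserSignInLogs"])
print(payload)
```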

Terraform Example

# Resource Group for Security
resource "azurerm_resource_group" "security" {
  name     = "rg-security-prod"
  location = "East US"
}

# Log Analytics Workspace
resource "azurerm_log_analytics_workspace" "sentinel" {
  name                = "law-sentinel-prod"
  location            = azurerm_resource_group.security.location
  resource_group_name = azurerm_resource_group.security.name
  sku                 = "PerGB2018"
  retention_in_days   = 90 # 30-730 days

  tags = {
    Environment = "Production"
    Service     = "Security"
  }
}

# Enable Microsoft Sentinel
resource "azurerm_sentinel_log_analytics_workspace_onboarding" "sentinel" {
  workspace_id = azurerm_log_analytics_workspace.sentinel.id
}

# Data Connector: Microsoft Entra ID (Azure AD)
resource "azurerm_sentinel_data_connector_azure_active_directory" "aad" {
  name                       = "AzureActiveDirectory"
  log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
}

# Data Connector: Microsoft Defender for Cloud
resource "azurerm_sentinel_data_connector_azure_security_center" "defender" {
  name                       = "MicrosoftDefenderForCloud"
  log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
}

# Analytics Rule: Brute Force Attack Detection
resource "azurerm_sentinel_alert_rule_scheduled" "brute_force" {
  name                       = "Multiple-Failed-Logins-Brute-Force"
  log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
  display_name               = "Multiple Failed Login Attempts"
  severity                   = "High"
  enabled                    = true

  query = <<QUERY
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0" // Failed sign-in
| summarize FailedAttempts = count() by UserPrincipalName, IPAddress, bin(TimeGenerated, 5m)
| where FailedAttempts >= 5
QUERY

  query_frequency   = "PT1H" # Run every hour
  query_period      = "PT1H" # Look back 1 hour
  trigger_operator  = "GreaterThan"
  trigger_threshold = 0

  tactics    = ["CredentialAccess"]
  techniques = ["T1110"] # MITRE ATT&CK: Brute Force

  incident_configuration {
    create_incident = true
    grouping {
      enabled                 = true
      reopen_closed_incidents = false
      lookback_duration       = "PT5H"
      entity_matching_method  = "AllEntities"
    }
  }
}

# Watchlist: VIP Users (protect high-value accounts)
resource "azurerm_sentinel_watchlist" "vip_users" {
  name                       = "VIPUsers"
  log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
  display_name               = "VIP Users - Executives"
  item_search_key            = "UserPrincipalName"
}

# Automation Rule: Auto-assign incidents to Tier 1
resource "azurerm_sentinel_automation_rule" "auto_assign" {
  name                       = "d1f0b6c2-0000-4000-8000-000000000001" # placeholder: azurerm expects a GUID here
  log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
  display_name               = "Auto-assign Low/Medium severity incidents"
  order                      = 1
  enabled                    = true

  triggers_on   = "Incidents"
  triggers_when = "Created"

  condition_json = jsonencode([
    {
      conditionType = "Property"
      conditionProperties = {
        propertyName   = "IncidentSeverity"
        operator       = "Equals" # matches any of the listed values
        propertyValues = ["Low", "Medium"]
      }
    }
  ])

  action_incident {
    order    = 1
    status   = "Active"
    owner_id = "user-object-id-here" # Assign to specific user/group
  }
}

Creating Analytics Rules

Built-in Templates

Microsoft Sentinel includes 200+ built-in detection templates:

In Portal:

  1. Sentinel > Analytics > Rule templates
  2. Filter by data sources you have connected
  3. Select a template (e.g., "Brute force attack against Azure Portal")
  4. Click Create rule
  5. Customize query/scheduling if needed
  6. Click Review + create

Custom Analytics Rule (KQL Example)

// Detect privilege escalation via role assignment
AzureActivity
| where TimeGenerated > ago(1h)
| where OperationNameValue == "Microsoft.Authorization/roleAssignments/write"
| where ActivityStatusValue == "Success"
| where Properties contains "Owner" or Properties contains "Contributor"
| project TimeGenerated, Caller, ResourceGroup, SubscriptionId, Properties
| extend RoleAssigned = tostring(parse_json(Properties).roleDefinitionName)
| where RoleAssigned in ("Owner", "Contributor", "User Access Administrator")

Save as scheduled analytics rule:

  • Frequency: Every 15 minutes
  • Lookback: 1 hour
  • Threshold: Alert on any result
  • Tactics: Privilege Escalation (MITRE ATT&CK)
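
Those scheduling settings map onto the scheduled-rule payload if you deploy via the REST API instead of the portal. A sketch of that shape — field names follow the Microsoft.SecurityInsights alertRules schema as I recall it, so verify against the API version you actually target:

```python
import json

def scheduled_rule_body(display_name, query, severity="Medium",
                        frequency="PT15M", period="PT1H", tactics=None):
    """Sketch of a scheduled analytics rule payload.

    queryFrequency is how often the rule runs; queryPeriod is how far
    back each run looks. Durations are ISO-8601 (PT15M, PT1H, ...).
    """
    return {
        "kind": "Scheduled",
        "properties": {
            "displayName": display_name,
            "enabled": True,
            "severity": severity,
            "query": query,
            "queryFrequency": frequency,
            "queryPeriod": period,
            "triggerOperator": "GreaterThan",
            "triggerThreshold": 0,  # alert on any result
            "tactics": tactics or [],
        },
    }

body = scheduled_rule_body(
    "Privileged role assignment", "AzureActivity | take 1",
    severity="High", tactics=["PrivilegeEscalation"])
print(json.dumps(body, indent=2))
```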

Using Workbooks

Pre-built workbooks for visualization:

# Popular workbooks
# 1. Azure AD Sign-ins and Audit
# 2. Azure Activity
# 3. Office 365
# 4. Threat Intelligence
# 5. Identity & Access

# In Portal:
# Sentinel > Workbooks > Templates > Select workbook > Save

Creating Playbooks (Automation)

Playbooks use Azure Logic Apps for automation:

Example: Auto-block malicious IP in NSG

Using Portal

  1. Sentinel > Automation > Create > Playbook with incident trigger
  2. Name: Block-Malicious-IP-in-NSG
  3. Add steps:
    • Get incident entities (IP addresses)
    • For each IP:
      • Parse JSON (extract IP)
      • Azure NSG - Create security rule (deny inbound from IP)
      • Add comment to incident ("Blocked IP: X.X.X.X")
  4. Save playbook
  5. Grant playbook identity Network Contributor role on NSG
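
The "get incident entities" step boils down to filtering the entity list by kind. A sketch of the parsing logic — the entity shape below only approximates what the incident trigger hands to a Logic App, so treat the field names as illustrative:

```python
def extract_ip_entities(entities):
    """Pull IP addresses out of a Sentinel-style entity list.

    Each entity carries a 'kind' plus kind-specific 'properties';
    only 'Ip' entities have an address to block.
    """
    return [e["properties"]["address"]
            for e in entities if e.get("kind") == "Ip"]

# Illustrative payload, not a real incident
sample = [
    {"kind": "Account", "properties": {"name": "alice"}},
    {"kind": "Ip", "properties": {"address": "203.0.113.7"}},
]
print(extract_ip_entities(sample))  # → ['203.0.113.7']
```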

Terraform Example for Playbook

# Identity of the current Terraform caller (for tenant_id below)
data "azurerm_client_config" "current" {}

# Create Logic App for playbook
resource "azurerm_logic_app_workflow" "block_ip" {
  name                = "playbook-block-malicious-ip"
  location            = azurerm_resource_group.security.location
  resource_group_name = azurerm_resource_group.security.name

  identity {
    type = "SystemAssigned"
  }
}

# Grant playbook permission to modify NSG
# (assumes an azurerm_network_security_group.app resource defined elsewhere)
resource "azurerm_role_assignment" "playbook_nsg" {
  scope                = azurerm_network_security_group.app.id
  role_definition_name = "Network Contributor"
  principal_id         = azurerm_logic_app_workflow.block_ip.identity[0].principal_id
}

# Attach playbook to analytics rule (automation rule)
resource "azurerm_sentinel_automation_rule" "block_on_high_severity" {
  name                       = "e2a1c7d3-0000-4000-8000-000000000002" # placeholder: azurerm expects a GUID here
  log_analytics_workspace_id = azurerm_sentinel_log_analytics_workspace_onboarding.sentinel.workspace_id
  display_name               = "Block IP on High Severity Incidents"
  order                      = 2
  enabled                    = true

  triggers_on   = "Incidents"
  triggers_when = "Created"

  condition_json = jsonencode([
    {
      conditionType = "Property"
      conditionProperties = {
        propertyName   = "IncidentSeverity"
        operator       = "Equals"
        propertyValues = ["High"]
      }
    }
  ])

  action_playbook {
    order        = 1
    logic_app_id = azurerm_logic_app_workflow.block_ip.id
    tenant_id    = data.azurerm_client_config.current.tenant_id
  }
}

CI/CD Integration

Deploy Analytics Rules via GitHub Actions

name: Deploy Sentinel Analytics Rules

on:
  push:
    paths:
      - 'sentinel-rules/**'

jobs:
  deploy-rules:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Azure Login
        uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Deploy Analytics Rules
        run: |
          # az sentinel lives in a CLI extension; flag names vary by
          # extension version, so verify with `az sentinel alert-rule create --help`
          az extension add --name sentinel
          for rule in sentinel-rules/*.json; do
            RULE_NAME=$(basename "$rule" .json)
            az sentinel alert-rule create \
              --resource-group rg-security-prod \
              --workspace-name law-sentinel-prod \
              --rule-name "$RULE_NAME" \
              --rule-file "$rule"
          done

Validate Rules Before Deployment

      - name: Validate KQL Syntax
        run: |
          # Dry-run each query against the workspace before deploying rules
          az rest --method post \
            --url "https://management.azure.com/subscriptions/{sub-id}/resourceGroups/rg-security-prod/providers/Microsoft.OperationalInsights/workspaces/law-sentinel-prod/api/query?api-version=2021-03-01-privatepreview" \
            --body '{"query": "SigninLogs | where TimeGenerated > ago(1h) | take 1"}'
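
Before the pipeline ever calls Azure, I also lint the rule files locally: required fields present, durations in ISO-8601 form. A minimal sketch (the field names and checks are my convention, not an official schema validator):

```python
import re

# Simple ISO-8601 durations like P1D, PT1H, PT15M
ISO_DURATION = re.compile(r"^P(?:\d+D)?(?:T(?:\d+H)?(?:\d+M)?)?$")
REQUIRED = {"displayName", "query", "severity", "queryFrequency", "queryPeriod"}

def lint_rule(rule: dict) -> list[str]:
    """Return a list of problems; an empty list means the rule looks deployable."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - rule.keys())]
    for field in ("queryFrequency", "queryPeriod"):
        value = rule.get(field, "")
        if value and not ISO_DURATION.match(value):
            problems.append(f"{field} is not an ISO-8601 duration: {value}")
    return problems

rule = {"displayName": "Brute force", "query": "SigninLogs | take 1",
        "severity": "High", "queryFrequency": "PT1H", "queryPeriod": "PT1H"}
print(lint_rule(rule))  # → []
```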

Best Practices

1. Data Retention Strategy

Configure appropriate retention:

  • Interactive retention: 90 days (hot, queryable)
  • Archive: Up to 7 years (cold storage, lower cost)

# Set retention
az monitor log-analytics workspace update \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod \
--retention-time 90

# Enable archival for specific table
az monitor log-analytics workspace table update \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod \
--name SigninLogs \
--retention-time 90 \
--total-retention-time 365 # Archive for 1 year

2. Cost Optimization

Reduce ingestion costs:

  • Use Basic Logs for high-volume, low-value data
  • Filter unnecessary data at source
  • Use Commitment Tiers (100GB/day, 200GB/day, etc.)

// Example: ingestion-time transformation (Data Collection Rule) that keeps
// only failed sign-ins and admin accounts; a workspace transformation runs
// against the incoming stream, referenced as `source`, not the table name
source
| where ResultType != "0" or UserPrincipalName contains "admin"

3. Analytics Rule Tuning

Reduce false positives:

  • Start with high-confidence detections
  • Use watchlists for allow-lists (known IPs, approved apps)
  • Tune thresholds based on environment

// Use watchlist to exclude known service accounts
let ServiceAccounts = _GetWatchlist('ServiceAccounts') | project UserPrincipalName;
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"
| where UserPrincipalName !in (ServiceAccounts) // Exclude service accounts
| summarize FailedAttempts = count() by UserPrincipalName

4. MITRE ATT&CK Mapping

Map detections to tactics and techniques:

  • Provides context on attacker behavior
  • Helps prioritize incidents
  • Enables coverage gap analysis

Common mappings:

  • Brute force → CredentialAccess (T1110: Brute Force)
  • Role/privilege abuse → PrivilegeEscalation (T1078: Valid Accounts)
  • Data exfiltration → Exfiltration (T1537: Transfer Data to Cloud Account)
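
The coverage gap analysis mentioned above is just set arithmetic over your rule inventory. A sketch — the rule names and target tactics here are hypothetical:

```python
# Hypothetical inventory: tactic each enabled analytics rule maps to
deployed = {
    "Multiple-Failed-Logins-Brute-Force": "CredentialAccess",
    "Privileged-Role-Assignment": "PrivilegeEscalation",
}

# Tactics the SOC wants at least one detection for
wanted = {"CredentialAccess", "PrivilegeEscalation", "Exfiltration"}

# Tactics with no deployed detection are coverage gaps
covered = set(deployed.values())
gaps = sorted(wanted - covered)
print(gaps)  # → ['Exfiltration']
```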

5. Incident Management Workflow

Standardize investigation process:

  1. Triage: Review alert context, entities involved
  2. Investigate: Use entity timelines, related alerts
  3. Contain: Run playbook to isolate affected resources
  4. Remediate: Remove threat, patch vulnerabilities
  5. Document: Add comments, close incident with classification
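
If you want to enforce that workflow rather than just document it, the stages form a small state machine. A sketch — the stage labels are mine, not Sentinel's incident status values:

```python
# Allowed transitions in the workflow above; closing early is permitted
# from triage and investigate (false positives), otherwise steps are ordered
TRANSITIONS = {
    "triage": {"investigate", "closed"},
    "investigate": {"contain", "closed"},
    "contain": {"remediate"},
    "remediate": {"document"},
    "document": {"closed"},
}

def advance(current: str, target: str) -> str:
    """Move an incident to the next stage, rejecting skipped steps."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot go from {current} to {target}")
    return target

print(advance("triage", "investigate"))  # → investigate
```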

Common Use Cases

1. Detect Suspicious Sign-ins

SigninLogs
| where TimeGenerated > ago(24h)
| where RiskLevelDuringSignIn == "high" or RiskLevelAggregated == "high"
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress, Location, RiskDetail

2. Monitor Privileged Role Assignments

AzureActivity
| where OperationNameValue == "Microsoft.Authorization/roleAssignments/write"
| where Properties contains "Owner" or Properties contains "Contributor"
| extend RoleAssigned = tostring(parse_json(Properties).roleDefinitionName)
| extend Assignee = tostring(parse_json(Properties).principalName)
| project TimeGenerated, Caller, Assignee, RoleAssigned, ResourceGroup

3. Detect Anomalous Resource Creation

AzureActivity
| where OperationNameValue endswith "write"
| where TimeGenerated > ago(1h)
| summarize ResourcesCreated = count() by Caller, bin(TimeGenerated, 15m)
| where ResourcesCreated > 20 // Unusual volume

Things to Avoid

I avoid ingesting all logs without filtering — cost explosion is real, and I've seen engagements where the Sentinel bill doubled in a month because someone connected a noisy data source without thinking through volume. I also avoid creating too many low-confidence analytics rules: alert fatigue is the fastest way to make a SOC less effective.

I never ignore built-in templates. There are 200+ of them and they cover most of what you actually need. I don't reinvent what Microsoft has already built and tuned.

I avoid running playbooks without proper RBAC in place — a playbook with excessive permissions is a lateral movement path waiting to be exploited. I test analytics rules in a non-production workspace before enabling them for production. I never use Sentinel as general-purpose log storage, and I always monitor data connector health: missing data means blind spots, and blind spots look exactly like a clean environment until they don't.

I start with critical data sources: Azure AD, Defender for Cloud, and Azure Activity. I enable built-in detections first, use watchlists for known-good entities, and automate common response actions with playbooks once the detections are tuned. I integrate with ITSM for ticketing, review MITRE ATT&CK coverage regularly, and archive old data for compliance.

Troubleshooting

Data Not Appearing

# Check data connector status
az sentinel data-connector list \
--resource-group rg-security-prod \
--workspace-name law-sentinel-prod

# Check Log Analytics ingestion
az monitor log-analytics query \
--workspace $WORKSPACE_ID \
--analytics-query "Heartbeat | summarize count() by bin(TimeGenerated, 1h)" \
--timespan P1D
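
Once you have those hourly Heartbeat counts, spotting an ingestion gap is trivial: any hour with zero records is a likely broken agent or connector. A sketch over hypothetical query output:

```python
def find_ingestion_gaps(hourly_counts):
    """Flag hours with zero records.

    hourly_counts: list of (hour_label, count) pairs, as you might
    extract from the Heartbeat query above.
    """
    return [hour for hour, count in hourly_counts if count == 0]

# Illustrative data: the 10:00 bucket is empty
counts = [("09:00", 120), ("10:00", 0), ("11:00", 115)]
print(find_ingestion_gaps(counts))  # → ['10:00']
```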

Analytics Rule Not Triggering

  1. Test query manually in Logs blade
  2. Check scheduling: Frequency vs. lookback period
  3. Verify data connector is sending data
  4. Review threshold: Is it too high?
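
Item 2 is the one I see most: if the lookback period is shorter than the run frequency, the rule has a blind spot between runs. A sketch that checks this, parsing only the simple PT-style durations used in this page (helper functions are mine):

```python
import re

def iso_minutes(duration: str) -> int:
    """Convert a simple ISO-8601 duration (P1D, PT1H, PT15M) to minutes."""
    m = re.fullmatch(r"P(?:(\d+)D)?(?:T(?:(\d+)H)?(?:(\d+)M)?)?", duration)
    if not m:
        raise ValueError(f"unsupported duration: {duration}")
    days, hours, minutes = (int(g or 0) for g in m.groups())
    return days * 24 * 60 + hours * 60 + minutes

def has_blind_spot(frequency: str, period: str) -> bool:
    """A rule misses events when it looks back less than it waits between runs."""
    return iso_minutes(period) < iso_minutes(frequency)

print(has_blind_spot("PT1H", "PT30M"))  # → True: a 30-minute gap every run
```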

Playbook Failing

# Check Logic App run history
az logic workflow show \
--resource-group rg-security-prod \
--name playbook-block-malicious-ip

# View run details (in Portal)
# Logic Apps > playbook-block-malicious-ip > Run history

Cost Estimation

Typical monthly costs (East US region, list prices at time of writing — verify against the current pricing page):

  • Log Analytics ingestion: $2.30/GB (after 5GB free)
  • Sentinel ingestion: Additional $2.46/GB
  • Total: ~$4.76/GB ingested

Example scenario (500GB/month):

  • LA ingestion: 500GB × $2.30 = $1,150
  • Sentinel: 500GB × $2.46 = $1,230
  • Total: ~$2,380/month

Optimization: Commitment Tiers (starting at 100GB/day) cut the effective per-GB rate substantially versus pay-as-you-go — check the current pricing page for exact tier rates, as they change.
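
The pay-as-you-go arithmetic above is simple enough to script when modeling budgets. A sketch using this section's list prices — rates change, so plug in current figures before relying on the output:

```python
# List prices from this section (East US, per GB, pay-as-you-go)
LA_PER_GB = 2.30        # Log Analytics ingestion
SENTINEL_PER_GB = 2.46  # Sentinel ingestion surcharge

def monthly_cost_payg(gb_per_month: float) -> float:
    """Combined pay-as-you-go ingestion cost for a month, in dollars."""
    return round(gb_per_month * (LA_PER_GB + SENTINEL_PER_GB), 2)

print(monthly_cost_payg(500))  # → 2380.0
```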