On-Demand Webinar

A Technical Guide to Automating SOC Operations with Anvilogic Blueprints

April 3, 2026, 9:00 AM CST
Online

Every SOC has the same invisible problem: your best analysts are doing the same tedious tasks over and over.

An alert comes in, the analyst enriches it manually, checks three tools, writes up findings, closes or escalates, then repeats that process roughly 200 times a day. Situational expertise is what makes that process work: the analyst knows the specific order of lookups, the thresholds that matter, and the context that separates a real threat from noise.

All of that context exists entirely in that analyst's head. When they leave for the day, or leave altogether, that knowledge leaves with them.

Blueprints is Anvilogic's answer to this problem.

Blueprints is an AI-assisted workflow layer built natively into the Anvilogic AI SOC platform. It lets analysts encode their methods as reusable, automated workflows — called "Blueprints" — that handle triage, detection tuning, investigation, and response around detections. Build a workflow once, run it everywhere, and hand it to every analyst on your team.

This guide walks through four concrete use cases:

  • Alert triage automation
  • Detection tuning at scale
  • Investigation runbooks
  • Response orchestration

We'll show you what the problem looks like before using Blueprints, how to build the workflow, and what it looks like after.

What's Blueprints?

A Blueprint is a structured, reusable workflow that runs inside the Anvilogic AI SOC platform. Think of it as a living runbook. It has triggers (an alert fires, a schedule runs, a user clicks execute), logic steps (enrich this field, query this dataset, evaluate this condition), and outputs (create a ticket, send a notification, suppress the alert, escalate).

What separates Blueprints from traditional SOAR playbooks is how you build them. Instead of writing connector code or navigating a drag-and-drop spaghetti diagram, you describe what you want to automate in natural language, and the AI-assisted builder helps generate a structured workflow that you can review, refine, and deploy.

Blueprints are also data-agnostic. Anvilogic can sit across your SIEM and/or data lake, so a Blueprint you build today works whether your data is in Splunk, Microsoft Sentinel, or Snowflake — all without rewriting the workflow for each platform.

A Blueprint has three parts:

  • Triggers (what starts it): Triggers can be alert-based, schedule-based, or on-demand.
  • Workflows (what it does): Workflows can branch on conditions, loop over results, and call external APIs.
  • Outputs (what it produces): Outputs can be annotations, tickets, notifications, suppression rules, or response actions.
[Diagram: Blueprint key concepts — triggers, workflows, and outputs]
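To make the three-part structure concrete, here is a minimal sketch of it in Python. This is illustrative only: Blueprints are built in the platform's AI-assisted builder, not written as code, and every name below is a hypothetical stand-in rather than an Anvilogic API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical model of the trigger / workflow / output split described
# above. Nothing here reflects an actual Anvilogic SDK.

@dataclass
class Blueprint:
    name: str
    trigger: Callable[[dict], bool]  # alert-based, schedule-based, or on-demand condition
    steps: list = field(default_factory=list)    # enrichment and logic steps
    outputs: list = field(default_factory=list)  # tickets, annotations, notifications

    def run(self, alert: dict) -> dict:
        """Evaluate the trigger, run each step, then emit outputs."""
        if not self.trigger(alert):
            return alert  # out of scope: leave the alert untouched
        context = dict(alert)
        for step in self.steps:
            context.update(step(context))  # each step adds enrichment fields
        for output in self.outputs:
            output(context)
        return context
```

A workflow built this way is just data plus functions, which is what makes it shareable and reviewable rather than locked in one analyst's head.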

Use Case 1: Automated Alert Triage

The Problem

Alert triage is where analysts' time goes to die. When an alert fires, the analyst opens it, copies the source IP into VirusTotal, checks it against internal asset inventory, looks up the user in Active Directory, queries the SIEM for related events in the last 24 hours, and finally makes a judgment call — benign, suspicious, or incident.

That sequence, done manually, takes an average of 8 to 15 minutes per alert. At 100 alerts a day, that's a full-time job just for first-pass triage.

The bigger problem is that the process is inconsistent. Different analysts check different things in different orders, weight evidence differently, and document differently. You get inconsistent outcomes and no repeatable standard.

Building the Blueprint

Here's how you'd build an automated triage Blueprint for a phishing alert use case:

  1. Define the trigger. Set the Blueprint to fire on any alert with detection rule category = "Phishing" or "Suspicious Email Link." This scopes the Blueprint to the right alert type without broad over-triggering.
  2. Pull context automatically. Configure enrichment steps: query your email gateway for the full message headers and attachment hash, check the hash against your threat intel feed, look up the recipient's role and access tier in your identity provider, and pull the last 30 days of email activity for that user.
  3. Apply scoring logic. Add a conditional scoring step: assign points based on what the enrichment returns — known-bad hash (+3), external sender to finance/exec (+2), first-time sender to recipient (+1), link to newly registered domain (+2). Set thresholds: score ≥ 4 auto-escalates to Tier 2, score 1–3 flags for analyst review, score 0 suppresses with audit log.
  4. Define outputs. For auto-escalations: create a ticket in your case management system pre-populated with all enrichment context. For analyst-review cases: annotate the alert with a triage summary card. For suppressions: log the decision with a reason code for later QA.
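The scoring step is the heart of this Blueprint, so here is a small sketch of its logic in Python. The signals, weights, and thresholds mirror the example in step 3; the field names on the enrichment record are hypothetical.

```python
# Sketch of the step-3 scoring logic. Weights and thresholds come from
# the text above; the enrichment field names are assumptions.

def score_phishing_alert(enrichment: dict) -> tuple[int, str]:
    score = 0
    if enrichment.get("hash_known_bad"):
        score += 3  # known-bad attachment hash
    if enrichment.get("external_sender_to_finance_or_exec"):
        score += 2  # external sender targeting finance/exec
    if enrichment.get("first_time_sender"):
        score += 1  # first-time sender to this recipient
    if enrichment.get("link_to_newly_registered_domain"):
        score += 2  # link to a newly registered domain

    if score >= 4:
        return score, "auto_escalate_tier2"
    if score >= 1:
        return score, "analyst_review"
    return score, "suppress_with_audit_log"
```

The value of writing it down like this is that the weights become reviewable artifacts: a detection engineer can adjust one number and know exactly how dispositions shift.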

The Anvilogic Advantage

Triage time for that alert type drops from 8–15 minutes to under 60 seconds. Every analyst follows the same process. Context is pre-populated before they even open the alert. And suppression decisions are auditable, not invisible.

For Detection Engineers

The scoring thresholds in step 3 are tunable. After running for two weeks, pull the suppression log and review the false-negative rate. If too many real threats are scoring below threshold, adjust the weights. The Blueprint is a living document where you own the logic.

[Diagram: triage Blueprint scoring thresholds for detection engineers]

Use Case 2: Detection Tuning at Scale

The Problem

Noisy detections are one of the most stubborn problems in detection engineering. A rule fires 500 times a day, almost always on legitimate activity, and the team quietly learns to ignore it. The rule is still "on," still generating alerts, still consuming SIEM ingest budget, but nobody's looking at it. Even worse, if a real threat ever fires that same rule, it's buried.

Manual tuning is slow. Someone has to pull the false-positive data, identify the common pattern, write an exclusion, test it, push it, and document it. For a team managing hundreds of rules across multiple SIEMs and a data lake, this is a months-long backlog item that never gets prioritized.

Building the Blueprint

A detection tuning Blueprint runs on a schedule — say, weekly — and automatically identifies and processes candidates for tuning:

  1. Identify high-volume, low-fidelity rules. Query your alert data for rules that fired more than 200 times in the past 7 days with a close/suppression rate above 85%. These are your tuning candidates.
  2. Analyze the pattern. For each candidate rule, run a frequency analysis on the alert fields. Which source IPs, user accounts, hostnames, or process names appear in the suppressed alerts most often? The Blueprint extracts the top recurring values automatically.
  3. Generate an exclusion draft. Using the pattern data, the Blueprint drafts a proposed exclusion — for example: "exclude alerts where process_name = 'svchost.exe' AND parent_process = 'services.exe' AND user = 'SYSTEM'." The draft is written in your SIEM's native query language, ready to review.
  4. Route for approval. Instead of auto-applying, the Blueprint creates a review task in your ticketing system with the proposed exclusion, the supporting data, and a one-click approve/reject. This keeps a human in the loop for the actual rule change while eliminating all the manual analysis work.
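Steps 1 and 2 amount to a volume filter plus a frequency analysis, which can be sketched in a few lines. The thresholds match the text (more than 200 fires in 7 days with over 85% closed/suppressed); the shape of the alert records is an assumption for illustration.

```python
from collections import Counter

# Sketch of tuning-candidate discovery: find noisy rules, then surface
# the field values that dominate their suppressed alerts. The alert
# record fields (rule_id, disposition, etc.) are hypothetical.

def tuning_candidates(alerts, min_fires=200, min_suppress_rate=0.85):
    by_rule = {}
    for a in alerts:
        by_rule.setdefault(a["rule_id"], []).append(a)

    candidates = {}
    for rule_id, fired in by_rule.items():
        suppressed = [a for a in fired if a["disposition"] in ("closed", "suppressed")]
        if len(fired) > min_fires and len(suppressed) / len(fired) > min_suppress_rate:
            # Frequency analysis: top recurring values per field among
            # the suppressed alerts are the raw material for an exclusion.
            candidates[rule_id] = {
                f: Counter(a.get(f) for a in suppressed).most_common(3)
                for f in ("src_ip", "user", "hostname", "process_name")
            }
    return candidates
```

The output of this analysis feeds step 3: the top recurring values become the AND-conditions of the drafted exclusion.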

The Anvilogic Advantage

Your detection engineers stop spending time finding tuning candidates and start spending time reviewing and approving them. The analysis that used to take 2–3 hours per rule now takes minutes. And because it runs weekly on a schedule, your rule hygiene stays current instead of becoming a once-a-year project.

Use Case 3: Consistent Investigation Runbooks

The Problem

Ask ten analysts how they investigate a suspected compromised account, and you'll get ten different answers. Some check login history first, some go to VPN logs, some look at email activity. Some document their findings in the ticket, some in a separate doc, some not at all. The inconsistency isn't laziness — it's the natural result of tribal knowledge with no enforcement mechanism.

For compliance-heavy environments like finance, healthcare, or tech, this is also an audit problem. "Show me how you investigated this incident" should have a consistent, documented answer. Without runbooks baked into your process, it never will.

Building the Blueprint

An investigation Blueprint works as an on-demand runbook that analysts launch from an open alert:

  1. Define the investigation scope. When the Blueprint is triggered on a "Compromised Account" alert, it automatically scopes the investigation by pulling the account name, associated device IDs, and alert timestamp from the alert fields.
  2. Run the investigation steps in sequence. The Blueprint executes a structured sequence: (a) pull authentication logs for the account ±4 hours around the alert timestamp, (b) query VPN and remote access logs for the same window, (c) check for new persistence mechanisms like scheduled tasks, registry modifications, or new services on the associated device, (d) pull email activity for forwarding rules or new delegates, (e) check for lateral movement indicators by querying access logs on adjacent systems.
  3. Build the investigation summary. Each step appends its findings to a structured investigation card attached to the alert. The analyst sees a consolidated timeline and evidence summary — not just raw log results, but a readable, structured narrative with relevant findings highlighted.
  4. Present a disposition recommendation. Based on what the investigation steps surface, the Blueprint outputs a recommended disposition: "No malicious activity found — suggest close," "Suspicious activity — escalate to IR," or "Confirmed compromise — initiate containment." The analyst makes the final call, but arrives at it with full context already assembled.
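The runbook above is a fixed sequence of evidence-gathering steps feeding one summary card, which can be sketched generically. Each step implementation below is a placeholder for a real log query; the record shapes and severity labels are assumptions.

```python
# Sketch of steps 2-4: run each investigation step in order, accumulate
# findings on a card, then map findings to a recommended disposition.
# Step functions stand in for real auth/VPN/email/persistence queries.

def run_investigation(alert: dict, steps) -> dict:
    card = {"account": alert["account"], "timeline": [], "findings": []}
    for name, step in steps:
        findings = step(alert)  # e.g. query auth logs ±4h around the alert
        card["timeline"].append({"step": name, "findings": findings})
        card["findings"].extend(findings)

    severities = {f["severity"] for f in card["findings"]}
    if "confirmed" in severities:
        card["recommendation"] = "Confirmed compromise — initiate containment"
    elif "suspicious" in severities:
        card["recommendation"] = "Suspicious activity — escalate to IR"
    else:
        card["recommendation"] = "No malicious activity found — suggest close"
    return card
```

Because the step list is data, adding a new investigation step (say, a cloud audit-log check) extends every future investigation without retraining anyone.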

The Anvilogic Advantage

Every analyst runs the same investigation for every compromised account alert. Investigation time drops because the data gathering is automated — analysts spend their time on judgment, not lookups. Every investigation produces the same structured output, which satisfies audit requirements and makes post-incident review dramatically faster.

For Managers

Investigation Blueprints also give you visibility you didn't have before. Because every investigation follows the same documented steps and produces the same output format, you can measure investigation quality, identify gaps in data coverage, and compare analyst performance against a consistent baseline — instead of each other's subjective process.

Use Case 4: Response Orchestration

The Problem

When containment needs to happen, every minute counts. Disabling a compromised account should take seconds, not the ten minutes it takes to open the identity management portal, find the account, verify it's the right one, and disable it while simultaneously trying to stay on the incident bridge. Blocking a malicious IP at the firewall, revoking a session token, or isolating an endpoint are all 2–5 minute manual tasks that shouldn't require a human hand.

A fully automated response carries risk too. Auto-block the wrong IP and you might cut off a business-critical system. Auto-disable the wrong account and you lock out someone mid-incident response. The challenge is automating the safe, unambiguous actions while keeping humans in the loop for the high-stakes ones.

Building the Blueprint

A response Blueprint uses conditional logic to separate the high-confidence automated actions from the human-review ones:

  1. Define the confidence threshold. Set the Blueprint to trigger only on confirmed incidents (severity = Critical AND analyst-confirmed = true, OR auto-escalation from a triage Blueprint with score ≥ 6). This ensures response actions only fire on cases with sufficient evidence.
  2. Automate the safe actions. For confirmed compromised accounts: trigger actions via integrated identity providers to disable the account, revoke active sessions, and enforce MFA re-enrollment. These actions are unambiguous, reversible, and time-sensitive — good candidates for full automation.
  3. Queue the high-stakes actions for approval. For endpoint isolation or network-level blocks: create an approval request in your ticketing system with the proposed action, the supporting evidence, and a 15-minute SLA. If the approval window expires without a response, the Blueprint escalates to a senior analyst automatically.
  4. Document everything. Every action the Blueprint takes — whether automated or approved — is logged with a timestamp, the analyst or system that authorized it, and the evidence that triggered it. The incident timeline builds itself.
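The safety model in steps 1-3 is a confidence gate plus an allowlist split between auto-executed and approval-gated actions. Here is a minimal sketch of that routing; the incident fields and action names are hypothetical.

```python
# Sketch of the step-1 trigger condition and the step-2/3 action split.
# SAFE_ACTIONS run automatically; APPROVAL_ACTIONS get a 15-minute SLA
# review task. All identifiers here are illustrative assumptions.

SAFE_ACTIONS = {"disable_account", "revoke_sessions", "enforce_mfa_reenroll"}
APPROVAL_ACTIONS = {"isolate_endpoint", "block_ip_at_firewall"}

def should_trigger(incident: dict) -> bool:
    confirmed = (incident.get("severity") == "Critical"
                 and incident.get("analyst_confirmed"))
    auto_escalated = incident.get("triage_score", 0) >= 6
    return bool(confirmed or auto_escalated)

def route_actions(incident: dict, requested: list) -> dict:
    if not should_trigger(incident):
        return {"automated": [], "pending_approval": []}
    return {
        "automated": [a for a in requested if a in SAFE_ACTIONS],
        "pending_approval": [
            {"action": a, "sla_minutes": 15}
            for a in requested if a in APPROVAL_ACTIONS
        ],
    }
```

Keeping the split as an explicit allowlist makes the risk posture auditable: promoting an action from approval-gated to automated is a deliberate, reviewable change, not a side effect.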

The Anvilogic Advantage

Response time for high-confidence, reversible actions drops from minutes to seconds. The high-stakes actions that need human judgment still get it — but they arrive pre-packaged with context and an SLA, instead of waiting in a queue with no deadline. And the incident timeline is complete and auditable without anyone having to reconstruct it after the fact.

Getting Started with Blueprints

If you're already running on the Anvilogic AI SOC platform, Blueprints is available in your environment today. Here's the recommended sequence for teams just getting started:

  1. Start with one triage Blueprint for your highest-volume, lowest-fidelity alert type. This gives you fast ROI and a concrete example to learn from before building out more complex workflows.
  2. Use the AI-assisted builder. Describe the workflow in plain language first, review what it generates, then refine. Most teams find the first usable draft takes under 30 minutes.
  3. Run in audit mode before deploying. Blueprints can run silently for a period, logging what they would have done without actually doing it. Use this to validate logic before going live.
  4. Share Blueprints across your team. Once a Blueprint is working well, publish it as a shared template. Every analyst benefits from the first analyst's work.
  5. Review and iterate weekly. Blueprints are not set-and-forget. Review suppression and escalation logs weekly, especially in the first month, and tune thresholds based on what you see.


David Lugo
Head of Product Marketing

Resources

No items found.

Build Detection You Want,
Where You Want

Build Detection You Want,
Where You Want

April 3, 2026

A Technical Guide to Automating SOC Operations with Anvilogic Blueprints

Content
David Lugo
Head of Product Marketing

Resources

No items found.

Build Detection You Want,
Where You Want

Build Detection You Want,
Where You Want

Product Vision
|
April 3, 2026
|
4 min read

A Technical Guide to Automating SOC Operations with Anvilogic Blueprints

This is some text inside of a div block.

| Author

Every SOC has the same invisible problem: your best analysts are doing the same tedious tasks over and over and over and over.

An alert comes in, the analyst enriches it manually, checks three tools, writes up findings, closes or escalates, then repeats that process like 200 times a day. This situational expertise is really what makes that process work. They're intimately familiar with the specific order of lookups and the thresholds that matter, as well as the context that separates a real threat from noise.

All of that context exists entirely in that analyst's head. When they leave for the day, or leave altogether, that knowledge leaves with them.

Blueprints is Anvilogic's answer to this problem.

Blueprints is an AI-assisted workflow layer built natively into the Anvilogic AI SOC platform that lets analysts encode their methods as reusable, automated workflows — called "Blueprints" — that automate triage, tuning workflows, investigation, and response around detections. You can build it once, run it everywhere, and hand it to every analyst on your team.

This guide walks through four concrete use cases:

  • Alert triage automation
  • Detection tuning at scale
  • Investigation runbooks
  • Response orchestration

We'll show you what the problem looks like before using Blueprints, how to build the workflow, and what it looks like after.

What's Blueprints?

A Blueprint is a structured, reusable workflow that runs inside the Anvilogic AI SOC platform. Think of it as a living runbook. It has triggers (an alert fires, a schedule runs, a user clicks execute), logic steps (enrich this field, query this dataset, evaluate this condition), and outputs (create a ticket, send a notification, suppress the alert, escalate).

What separates Blueprints from traditional SOAR playbooks is how you build them. Instead of writing connector code or navigating a drag-and-drop spaghetti diagram, you describe what you want to automate in natural language, and the AI-assisted builder helps generate a structured workflow that you can review, refine, and deploy.

Blueprints are also data-agnostic. Anvilogic can sit across your SIEM and/or data lake, so a Blueprint you build today works whether your data is in Splunk, Microsoft Sentinel, or Snowflake — all without rewriting the workflow for each platform.

A Blueprint has three parts:

  • Triggers (what starts it): Triggers can be alert-based, schedule-based, or on-demand.
  • Workflows (what it does): Workflows can branch on conditions, loop over results, and call external APIs.
  • Outputs (what it produces): Outputs can be annotations, tickets, notifications, suppression rules, or response actions.
Blueprints key concept diagram
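
The three-part structure above can be sketched as a simple data model. This is purely illustrative — the field names and types are assumptions for explanation, not Anvilogic's actual schema or API:

```python
# Illustrative sketch of the trigger/workflow/output structure.
# All names here are hypothetical, not Anvilogic's real schema.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Blueprint:
    name: str
    # What starts it: e.g. "alert", "schedule", "on_demand"
    triggers: list[str]
    # What it does: ordered logic steps (enrich, query, evaluate)
    steps: list[Callable] = field(default_factory=list)
    # What it produces: e.g. "ticket", "annotation", "suppression"
    outputs: list[str] = field(default_factory=list)
```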

Use Case 1: Automated Alert Triage

The Problem

Alert triage is where analysts' time goes to die. When an alert fires, the analyst opens it, copies the source IP into VirusTotal, checks it against internal asset inventory, looks up the user in Active Directory, queries the SIEM for related events in the last 24 hours, and finally makes a judgment call — benign, suspicious, or incident.

That sequence, done manually, takes an average of 8 to 15 minutes per alert. At 100 alerts a day, that's a full-time job just for first-pass triage.

The bigger problem is that the process is inconsistent. Different analysts check different things in different orders, weight evidence differently, and document differently. You get inconsistent outcomes and no repeatable standard.

Building the Blueprint

Here's how you'd build an automated triage Blueprint for a phishing alert use case:

  1. Define the trigger. Set the Blueprint to fire on any alert with detection rule category = "Phishing" or "Suspicious Email Link." This scopes the Blueprint to the right alert type without broad over-triggering.
  2. Pull context automatically. Configure enrichment steps: query your email gateway for the full message headers and attachment hash, check the hash against your threat intel feed, look up the recipient's role and access tier in your identity provider, and pull the last 30 days of email activity for that user.
  3. Apply scoring logic. Add a conditional scoring step: assign points based on what the enrichment returns — known-bad hash (+3), external sender to finance/exec (+2), first-time sender to recipient (+1), link to newly registered domain (+2). Set thresholds: score ≥ 4 auto-escalates to Tier 2, score 1–3 flags for analyst review, score 0 suppresses with audit log.
  4. Define outputs. For auto-escalations: create a ticket in your case management system pre-populated with all enrichment context. For analyst-review cases: annotate the alert with a triage summary card. For suppressions: log the decision with a reason code for later QA.
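
The scoring logic in step 3 can be sketched in a few lines. This is a minimal illustration of the weights and thresholds described above, with hypothetical field names — not the platform's actual configuration format:

```python
# Sketch of the step-3 scoring logic. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Enrichment:
    known_bad_hash: bool                  # hash matched threat intel
    external_sender_to_sensitive: bool    # external sender to finance/exec
    first_time_sender: bool               # no prior mail to this recipient
    newly_registered_domain: bool         # link to newly registered domain

WEIGHTS = {
    "known_bad_hash": 3,
    "external_sender_to_sensitive": 2,
    "first_time_sender": 1,
    "newly_registered_domain": 2,
}

def triage_score(e: Enrichment) -> int:
    """Sum the weights of every signal the enrichment returned."""
    return sum(w for name, w in WEIGHTS.items() if getattr(e, name))

def disposition(score: int) -> str:
    """Map a score onto the thresholds from step 3."""
    if score >= 4:
        return "escalate"        # auto-escalate to Tier 2
    if score >= 1:
        return "analyst_review"  # flag for analyst review
    return "suppress"            # suppress with audit log
```

Because the weights live in a plain mapping, the tuning pass described below (adjusting weights after reviewing the suppression log) is a one-line change.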

The Anvilogic Advantage

Triage time for that alert type drops from 8–15 minutes to under 60 seconds. Every analyst follows the same process. Context is pre-populated before they even open the alert. And suppression decisions are auditable, not invisible.

For Detection Engineers

The scoring thresholds in step 3 are tunable. After running for two weeks, pull the suppression log and review the false-negative rate. If too many real threats are scoring below threshold, adjust the weights. The Blueprint is a living document where you own the logic.

Detection engineer triage Blueprint scoring thresholds

Use Case 2: Detection Tuning at Scale

The Problem

Noisy detections are one of the most stubborn problems in detection engineering. A rule fires 500 times a day, almost always on legitimate activity, and the team quietly learns to ignore it. The rule is still "on," still generating alerts, still consuming SIEM ingest budget, but nobody's looking at it. Even worse, if a real threat ever fires that same rule, it's buried.

Manual tuning is slow. Someone has to pull the false-positive data, identify the common pattern, write an exclusion, test it, push it, and document it. For a team managing hundreds of rules across multiple SIEMs and a data lake, this is a months-long backlog item that never gets prioritized.

Building the Blueprint

A detection tuning Blueprint runs on a schedule — say, weekly — and automatically identifies and processes candidates for tuning:

  1. Identify high-volume, low-fidelity rules. Query your alert data for rules that fired more than 200 times in the past 7 days with a close/suppression rate above 85%. These are your tuning candidates.
  2. Analyze the pattern. For each candidate rule, run a frequency analysis on the alert fields. Which source IPs, user accounts, hostnames, or process names appear in the suppressed alerts most often? The Blueprint extracts the top recurring values automatically.
  3. Generate an exclusion draft. Using the pattern data, the Blueprint drafts a proposed exclusion — for example: "exclude alerts where process_name = 'svchost.exe' AND parent_process = 'services.exe' AND user = 'SYSTEM'." The draft is written in your SIEM's native query language, ready to review.
  4. Route for approval. Instead of auto-applying, the Blueprint creates a review task in your ticketing system with the proposed exclusion, the supporting data, and a one-click approve/reject. This keeps a human in the loop for the actual rule change while eliminating all the manual analysis work.
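
Steps 1 and 2 amount to a volume/fidelity filter followed by a frequency analysis. A rough sketch, assuming alert data has already been pulled into plain dicts (the field names and thresholds are illustrative):

```python
# Sketch of steps 1-2: find noisy rules, then surface the values that
# dominate their suppressed alerts. Data shapes are assumptions.
from collections import Counter

def tuning_candidates(rule_stats, min_fires=200, min_suppress_rate=0.85):
    """rule_stats: dicts with 'rule', 'fires', 'suppressed' counts
    over the lookback window (e.g. 7 days)."""
    return [
        r["rule"]
        for r in rule_stats
        if r["fires"] > min_fires
        and r["suppressed"] / r["fires"] > min_suppress_rate
    ]

def top_patterns(suppressed_alerts, fields=("src_ip", "user", "process_name"), n=3):
    """Per field, return the n most frequent values across suppressed
    alerts — the raw material for an exclusion draft."""
    return {
        f: Counter(a[f] for a in suppressed_alerts if f in a).most_common(n)
        for f in fields
    }
```

The top recurring values feed step 3, where the exclusion draft is written in the SIEM's native query language for human review.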

The Anvilogic Advantage

Your detection engineers stop spending time finding tuning candidates and start spending time reviewing and approving them. The analysis that used to take 2–3 hours per rule now takes minutes. And because it runs weekly on a schedule, your rule hygiene stays current instead of becoming a once-a-year project.

Use Case 3: Consistent Investigation Runbooks

The Problem

Ask ten analysts how they investigate a suspected compromised account, and you'll get ten different answers. Some check login history first, some go to VPN logs, some look at email activity. Some document their findings in the ticket, some in a separate doc, some not at all. The inconsistency isn't laziness — it's the natural result of tribal knowledge with no enforcement mechanism.

For compliance-heavy environments like finance, healthcare, or tech, this is also an audit problem. "Show me how you investigated this incident" should have a consistent, documented answer. Without runbooks baked into your process, it never will.

Building the Blueprint

An investigation Blueprint works as an on-demand runbook that analysts launch from an open alert:

  1. Define the investigation scope. When the Blueprint is triggered on a "Compromised Account" alert, it automatically scopes the investigation by pulling the account name, associated device IDs, and alert timestamp from the alert fields.
  2. Run the investigation steps in sequence. The Blueprint executes a structured sequence: (a) pull authentication logs for the account ±4 hours around the alert timestamp, (b) query VPN and remote access logs for the same window, (c) check for new persistence mechanisms like scheduled tasks, registry modifications, or new services on the associated device, (d) pull email activity for forwarding rules or new delegates, (e) check for lateral movement indicators by querying access logs on adjacent systems.
  3. Build the investigation summary. Each step appends its findings to a structured investigation card attached to the alert. The analyst sees a consolidated timeline and evidence summary — not just raw log results, but a readable, structured narrative with relevant findings highlighted.
  4. Present a disposition recommendation. Based on what the investigation steps surface, the Blueprint outputs a recommended disposition: "No malicious activity found — suggest close," "Suspicious activity — escalate to IR," or "Confirmed compromise — initiate containment." The analyst makes the final call, but arrives at it with full context already assembled.
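
The sequence above is essentially an ordered pipeline of query steps that append findings to a shared card. A minimal sketch, with placeholder step functions standing in for the real log queries:

```python
# Sketch of steps 2-4: run investigation steps in order, build a
# structured card, and derive a recommendation. The step functions
# and field names are hypothetical placeholders.
def investigate(account, device_id, alert_time, steps):
    """steps: ordered list of (name, fn); each fn queries one source
    and returns a findings dict."""
    card = {"account": account, "device": device_id,
            "alert_time": alert_time, "findings": []}
    for name, fn in steps:
        card["findings"].append(
            {"step": name, "result": fn(account, device_id, alert_time)}
        )
    return card

def recommend(card):
    """Naive disposition: escalate if any step flagged something."""
    flagged = any(f["result"].get("suspicious") for f in card["findings"])
    return "escalate_to_IR" if flagged else "suggest_close"

# Usage with dummy steps standing in for auth-log and VPN-log queries:
steps = [
    ("auth_logs", lambda a, d, t: {"suspicious": False, "events": 12}),
    ("vpn_logs",  lambda a, d, t: {"suspicious": True,  "events": 1}),
]
card = investigate("jdoe", "wks-042", "2026-04-03T09:00Z", steps)
```

The ordered list of steps is the enforcement mechanism: every analyst gets the same sequence and the same card format, regardless of who runs it.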

The Anvilogic Advantage

Every analyst runs the same investigation for every compromised account alert. Investigation time drops because the data gathering is automated — analysts spend their time on judgment, not lookups. Every investigation produces the same structured output, which satisfies audit requirements and makes post-incident review dramatically faster.

For Managers

Investigation Blueprints also give you visibility you didn't have before. Because every investigation follows the same documented steps and produces the same output format, you can measure investigation quality, identify gaps in data coverage, and compare analyst performance against a consistent baseline — instead of each other's subjective process.

Use Case 4: Response Orchestration

The Problem

When containment needs to happen, every minute counts. Disabling a compromised account should take seconds, not the ten minutes it takes to open the identity management portal, find the account, verify it's the right one, and disable it while simultaneously trying to stay on the incident bridge. Blocking a malicious IP at the firewall, revoking a session token, or isolating an endpoint are all 2–5 minute manual tasks that shouldn't require a human hand.

A fully automated response carries risk too. Auto-block the wrong IP and you might cut off a business-critical system. Auto-disable the wrong account and you lock out someone mid-incident response. The challenge is automating the safe, unambiguous actions while keeping humans in the loop for the high-stakes ones.

Building the Blueprint

A response Blueprint uses conditional logic to separate the high-confidence automated actions from the human-review ones:

  1. Define the confidence threshold. Set the Blueprint to trigger only on confirmed incidents (severity = Critical AND analyst-confirmed = true, OR auto-escalation from a triage Blueprint with score ≥ 6). This ensures response actions only fire on cases with sufficient evidence.
  2. Automate the safe actions. For confirmed compromised accounts: trigger actions via integrated identity providers to disable the account, revoke active sessions, and enforce MFA re-enrollment. These actions are unambiguous, reversible, and time-sensitive — good candidates for full automation.
  3. Queue the high-stakes actions for approval. For endpoint isolation or network-level blocks: create an approval request in your ticketing system with the proposed action, the supporting evidence, and a 15-minute SLA. If the approval window expires without a response, the Blueprint escalates to a senior analyst automatically.
  4. Document everything. Every action the Blueprint takes — whether automated or approved — is logged with a timestamp, the analyst or system that authorized it, and the evidence that triggered it. The incident timeline builds itself.
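
The gating logic in steps 1–3 reduces to two decisions: whether to respond at all, and whether a given action is safe to automate. A sketch under the assumptions above (the action names and thresholds are illustrative):

```python
# Sketch of steps 1-3: confidence gate, then safe-vs-approval routing.
# Action names and score thresholds are illustrative assumptions.
SAFE_ACTIONS = {"disable_account", "revoke_sessions", "enforce_mfa_reenroll"}

def should_respond(severity, analyst_confirmed, triage_score=0):
    """Step 1: fire only on confirmed-Critical incidents or
    auto-escalations scoring >= 6."""
    return (severity == "Critical" and analyst_confirmed) or triage_score >= 6

def route_action(action):
    """Steps 2-3: automate unambiguous, reversible actions; queue
    everything else for approval (15-minute SLA, then escalate)."""
    return "automate" if action in SAFE_ACTIONS else "queue_for_approval"
```

Keeping the safe-action set explicit makes the automation boundary auditable: expanding it is a reviewed change, not a side effect of new workflow logic.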

The Anvilogic Advantage

Response time for high-confidence, reversible actions drops from minutes to seconds. The high-stakes actions that need human judgment still get it — but they arrive pre-packaged with context and an SLA, instead of waiting in a queue with no deadline. And the incident timeline is complete and auditable without anyone having to reconstruct it after the fact.

Getting Started with Blueprints

If you're already running on the Anvilogic AI SOC platform, Blueprints is available in your environment today. Here's the recommended sequence for teams just getting started:

  1. Start with one triage Blueprint for your highest-volume, lowest-fidelity alert type. This gives you fast ROI and a concrete example to learn from before building out more complex workflows.
  2. Use the AI-assisted builder. Describe the workflow in plain language first, review what it generates, then refine. Most teams find the first usable draft takes under 30 minutes.
  3. Run in audit mode before deploying. Blueprints can run silently for a period, logging what they would have done without actually doing it. Use this to validate logic before going live.
  4. Share Blueprints across your team. Once a Blueprint is working well, publish it as a shared template. Every analyst benefits from the first analyst's work.
  5. Review and iterate weekly. Blueprints are not set-and-forget. Review suppression and escalation logs weekly, especially in the first month, and tune thresholds based on what you see.
