Workflow Patterns

Common patterns that combine triggers, AI investigation, conditional branching, and actions into complete workflows.

Overview

This page walks through complete workflow patterns that combine multiple node types into real automation. Each pattern shows the full chain from trigger to action, including how to configure each node and how data flows between them.

For individual node configuration details, see the dedicated node pages linked from the Workflows Overview.

Incident automation

Without this workflow, an SRE responding to a monitor alert must manually check the dashboard, search logs for related errors, review recent deployments in GitHub, create a Slack channel for the incident, invite the right responders, and post the initial context — a sequence that can take 10–15 minutes before investigation even begins. This pattern automates that entire sequence.

A teammate investigates the alert and returns structured output. An If/Else node evaluates the output and routes to different action paths based on severity and confidence.

Workflow structure

  1. Start node — Monitor trigger
  2. Teammate node — Investigates the alert, returns JSON
  3. If/Else node — Branches on severity and confidence
  4. Critical path — PagerDuty incident, Slack channel, on-call notification
  5. Low-severity path — Jira backlog ticket, team channel notification

Step 1: Configure the Start node

Set the trigger type to Monitors. Connect a monitor by navigating to the monitor’s notification settings and typing @ to select this workflow.

When the monitor fires, the alert data becomes available as variables for downstream nodes. This includes the monitor name, severity, evaluated metric values, and group-by attributes.
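As a mental model, the trigger payload can be pictured as a small structured object. The sketch below is hypothetical; the exact field names (`monitor_name`, `group_by`, and so on) are assumptions and will vary with your monitor configuration.

```python
# Hypothetical shape of a monitor trigger payload (field names are
# illustrative, not the product's actual schema).
alert = {
    "monitor_name": "checkout-error-rate",
    "severity": "critical",
    "value": 0.12,       # evaluated metric value, e.g. a 12% error rate
    "threshold": 0.05,   # the threshold the monitor compared against
    "group_by": {"service": "checkout", "region": "us-east-1"},
}

# Downstream nodes reference these fields as template variables,
# e.g. the monitor name alone, or the full payload via {{{ toJson data }}}.
print(alert["monitor_name"], alert["severity"])
```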

Step 2: Add a Teammate node

Select a teammate with access to the tools needed for investigation — for example, an SRE teammate connected to Edge Delta, PagerDuty, and GitHub.

Set the Output format to JSON and click Edit JSON Schema to define the fields the teammate must return — for example, severity, affected service, root cause hypothesis, and a recommended action. In the prompt, use {{{ toJson data }}} to pass the trigger payload so the teammate has the full alert context.

The teammate uses its connected tools to investigate, then returns a structured JSON response conforming to your schema. These fields become available to downstream nodes.
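A minimal sketch of what such a schema and a conforming response might look like. The field names, enum values, and example response are illustrative assumptions; adapt them to your own incident taxonomy.

```python
import json

# Hypothetical JSON Schema for the teammate's structured output.
schema = {
    "type": "object",
    "properties": {
        "severity": {"type": "string",
                     "enum": ["critical", "high", "medium", "low"]},
        "affected_service": {"type": "string"},
        "root_cause_hypothesis": {"type": "string"},
        "recommended_action": {"type": "string"},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["severity", "affected_service", "confidence"],
}

# An example response the teammate might return for a fired monitor.
response = json.loads("""{
    "severity": "critical",
    "affected_service": "checkout",
    "root_cause_hypothesis": "Connection pool exhaustion after a deploy",
    "recommended_action": "Roll back the latest release",
    "confidence": 0.85
}""")

# Minimal sanity check: every required field must be present.
missing = [field for field in schema["required"] if field not in response]
assert not missing, f"missing fields: {missing}"
```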

Step 3: Add an If/Else node

Create conditions that evaluate the teammate’s structured output. The field names in your conditions must match the fields defined in your JSON Schema. For example:

  Path       Condition
  Critical   Severity is critical and confidence exceeds your team’s threshold
  High       Severity is high
  Low        true (catch-all)

The Critical path handles confirmed high-impact incidents. The High path handles elevated alerts that may need attention. The Low catch-all handles everything else.
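The routing above can be sketched in plain code. The 0.8 confidence threshold below is an example value, not a product default; the field names assume the schema you defined in the Teammate node.

```python
# Sketch of the If/Else routing over the teammate's structured output.
CONFIDENCE_THRESHOLD = 0.8  # example value; tune to your team's tolerance

def route(output: dict) -> str:
    """Return the name of the output port a result should follow."""
    if (output["severity"] == "critical"
            and output["confidence"] >= CONFIDENCE_THRESHOLD):
        return "Critical"
    if output["severity"] == "high":
        return "High"
    return "Low"  # catch-all

print(route({"severity": "critical", "confidence": 0.9}))  # Critical
print(route({"severity": "critical", "confidence": 0.5}))  # Low (low confidence)
print(route({"severity": "medium", "confidence": 0.9}))    # Low
```

Note that a critical finding with low confidence deliberately falls through to the catch-all, so the workflow only pages when the teammate is confident.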

Step 4: Configure critical path actions

Connect the following action nodes in sequence to the Critical output port:

  1. Create PagerDuty Incident — Set the service, title, and body using variables from the teammate’s output. Set urgency to high.
  2. Create Slack Channel — Name the channel after the incident. Store the result in a result field so downstream nodes can reference the new channel.
  3. Get PagerDuty On-call User — Look up who is on-call for the relevant schedule. Store the result in a result field.
  4. Invite Slack Users — Invite the on-call user to the new channel using the stored result field.
  5. Send Slack Message — Post the investigation summary to the incident channel, mentioning the on-call user.
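Conceptually, the stored result fields act as a shared context that later nodes read from. The sketch below is illustrative only: the function shapes and payloads are invented to show the data flow, not a real API.

```python
# Illustrative data flow: each action stores its output under a named
# result field, and later actions reference those fields.
results = {}

def run_action(result_field, action, **kwargs):
    """Run an action and store its output under a named result field."""
    results[result_field] = action(**kwargs)
    return results[result_field]

# Step 2: Create Slack Channel, storing the new channel for later steps.
run_action("incident_channel",
           lambda name: {"channel_id": "C123", "name": name},
           name="inc-checkout-errors")

# Step 3: Get PagerDuty On-call User, storing the user for the invite.
run_action("oncall",
           lambda schedule: {"user_id": "U456"},
           schedule="sre-primary")

# Step 4: Invite Slack Users reads both stored result fields.
invite = {
    "channel": results["incident_channel"]["channel_id"],
    "users": [results["oncall"]["user_id"]],
}
print(invite)
```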

Step 5: Configure low-severity path actions

Connect the following action nodes to the Low output port:

  1. Create Jira Issue — Create a backlog ticket with the alert details and investigation summary.
  2. Send Slack Message — Post a brief notification to the team channel with the summary.

Result

When a monitor fires:

  • If the teammate assesses critical severity with high confidence, the workflow creates a PagerDuty incident, spins up a dedicated Slack channel, pulls in the on-call engineer, and posts the investigation context.
  • Otherwise, the workflow files a Jira ticket and notifies the team channel. No page, no incident channel.

The same monitor event produces different operational outcomes based on the teammate’s analysis — while the workflow structure guarantees that every action node on the chosen path executes.

Post-deploy verification

This pattern uses a connector trigger to verify service health after a deployment.

Workflow structure

  1. Start node — GitHub connector, deployment status event
  2. Teammate node — Checks service health post-deploy, returns JSON
  3. If/Else node — Branches on whether degradation is detected
  4. Degradation path — Notify deployer via Slack
  5. Healthy path — Log success (optional)

Configuration

Start node: Select the GitHub connector and choose the deployment status event type.

Teammate node: Set output to JSON and define a schema with fields indicating whether degradation was detected and what metrics are affected. Prompt the teammate to check error rates, latency, and key metrics for the deployed service over the last 15 minutes.

If/Else node: Branch on whether the teammate detected degradation. Add a catch-all path for the healthy case.

Degradation path: Send a Slack message to the deployer’s team channel with the affected metrics and the teammate’s recommendation. Optionally create a Jira issue for tracking.

Healthy path: Optionally send a confirmation message or take no action.

Result

Every deployment automatically gets a health check. If the teammate detects degradation correlated with the deploy, the team is notified immediately. No manual monitoring needed after each release.

Scheduled health check

This pattern runs on a cron schedule to produce a daily operations summary.

Workflow structure

  1. Start node — Periodic run (daily at 9:00 AM)
  2. Teammate node — Reviews overnight alerts and system health
  3. Action node — Posts morning summary to Slack

Configuration

Start node: Set the trigger to Periodic run with the cron expression 0 9 * * *.
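As a quick sanity check on the schedule, a minimal cron matcher (supporting only literal values and `*`, which is all this expression needs) confirms that 0 9 * * * fires once a day at 09:00:

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Check a datetime against a 5-field cron expression
    (minute hour day-of-month month day-of-week).
    Supports only literal values and '*'."""
    minute, hour, dom, month, dow = expr.split()
    fields = [
        (minute, dt.minute),
        (hour, dt.hour),
        (dom, dt.day),
        (month, dt.month),
        (dow, dt.isoweekday() % 7),  # cron convention: 0 = Sunday
    ]
    return all(f == "*" or int(f) == actual for f, actual in fields)

# "0 9 * * *": minute 0 of hour 9, every day of every month.
assert cron_matches("0 9 * * *", datetime(2024, 6, 3, 9, 0))
assert not cron_matches("0 9 * * *", datetime(2024, 6, 3, 9, 30))
```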

Teammate node: Set output to Text (since the output goes directly to a Slack message, not through conditional logic). Prompt the teammate to review alerts from the past 12 hours, identify unresolved issues, and summarize risk areas.

Example prompt:

Review all alerts from the past 12 hours. For each unresolved alert, note the
service, severity, and current status. Identify any risk trends or recurring
issues. Format the output as a morning ops summary suitable for posting in Slack.

Send Slack Message: Post the teammate’s text output to your ops channel.

Result

The team gets a daily briefing every morning without manual effort. The teammate reviews overnight activity and surfaces what needs attention.

Periodic AI review

This pattern runs a teammate on a schedule and creates an AI conversation thread for interactive follow-up. Unlike the scheduled health check pattern that posts a static summary to Slack, this pattern creates a full AI-assisted thread where team members can ask follow-up questions.

Workflow structure

  1. Start node — Periodic run (e.g., daily at 9:00 AM)
  2. Teammate node — Reviews overnight alerts and system health, returns text summary
  3. Action node — Start AI Conversation in a designated channel

Configuration

Start node: Set the trigger to Periodic run with the desired cron expression (for example, 0 9 * * * for daily at 9:00 AM).

Teammate node: Set output to Text. Prompt the teammate to review alerts from the past 12 hours, identify unresolved issues, and summarize risk areas.

Start AI Conversation: Select the target channel, set a descriptive thread title (for example, “Morning ops review”), and pass the teammate’s text output as the message content.

Result

The team gets a daily AI-assisted review thread. Unlike a static Slack message, team members can reply in the thread and the AI teammate continues the conversation — answering follow-up questions, running additional queries, and drilling into specific issues raised by the initial review.

This pattern is useful for teams migrating from periodic tasks, as it provides the same scheduled execution with richer interactive capability.