Integrations
Connect Runframe to your existing monitoring, alerting, and communication tools.
Overview
Runframe integrates with the tools you already use to create incidents automatically, sync updates, and streamline incident response. Reduce toil and ensure no alert goes unnoticed.
Why integrate?
| Benefit | Explanation |
|---|---|
| Automatic incident creation | Alerts create incidents without manual intervention |
| Bi-directional sync | Updates in Runframe reflect in your tools |
| Faster response | Page the right people immediately |
| Single pane of glass | Manage incidents from Slack without tool-switching |
Slack
Runframe’s deepest integration is with Slack, your incident command center.
Features
| Feature | Description |
|---|---|
| Slash commands | Create and manage incidents from Slack |
| Incident channels | Dedicated channels for each incident |
| Notifications | Real-time updates in Slack |
| Interactive modals | Rich forms for incident creation |
| Status syncing | Incident status updates post to channels |
Installation
- Visit the Slack App Directory or use the link from your Runframe dashboard
- Click Add to Slack
- Grant Runframe permissions:
  - Create channels
  - Send messages
  - View user information
- Complete installation and sign in to Runframe
Slash commands
Once installed, use these commands from any Slack channel:
| Command | Description |
|---|---|
| /inc create | Create a new incident |
| /inc assign | Assign responders |
| /inc status | Update incident status |
| /inc severity | Change severity |
| /inc update | Post a status update |
| /inc resolve | Mark as resolved |
| /inc close | Archive the channel |
| /inc page | Page the on-call responder |
See the Slash Commands guide for complete documentation.
Webhook configuration
For bi-directional sync, configure the Slack webhook URL in Runframe:
- Navigate to Settings → Integrations → Slack
- Copy the webhook URL provided by Runframe
- Paste into your Slack app configuration
- Test the connection
Incident channel naming
Runframe creates incident channels with a consistent naming pattern:
inc-YYYY-MM-DD-###
Example: inc-2025-01-15-001
Customize the naming pattern in Settings if needed.
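The default pattern can be sketched as a small helper. This is an illustration only, assuming the ### segment is a zero-padded counter that resets daily (the source does not state the reset behavior):

```python
from datetime import date

def incident_channel_name(incident_date: date, seq: int) -> str:
    """Build a channel name following the default inc-YYYY-MM-DD-### pattern.

    `seq` is assumed to be the incident's sequence number for that day,
    zero-padded to three digits.
    """
    return f"inc-{incident_date:%Y-%m-%d}-{seq:03d}"

print(incident_channel_name(date(2025, 1, 15), 1))  # inc-2025-01-15-001
```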
Datadog
Create incidents automatically from Datadog monitors and alerts.
Setup from Runframe
- Navigate to Settings → Integrations → Datadog
- Click Connect Datadog
- Enter your Datadog API key and application key
- Click Test Connection to verify
- Configure which monitors should create incidents
Setup from Datadog
- In Datadog, navigate to Integrations → Webhooks
- Create a new webhook
- Use the Runframe webhook URL: https://api.runframe.io/webhooks/datadog
- Customize the payload to include incident details
- Associate the webhook with your monitors
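One way to customize the payload is to map Datadog's webhook template variables onto the fields Runframe expects. The sketch below is illustrative: the $-prefixed names are standard Datadog webhook template variables, but the exact Runframe-side keys accepted by this endpoint are an assumption based on the custom-webhook format documented later on this page — verify against your integration settings.

```json
{
  "title": "$ALERT_TITLE",
  "description": "$EVENT_MSG",
  "severity": "$PRIORITY",
  "service": "$HOSTNAME",
  "tags": "$TAGS",
  "link": "$LINK"
}
```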
Datadog to Runframe field mapping
| Datadog field | Runframe field |
|---|---|
| Monitor name | Incident title |
| Monitor query | Description |
| Alert priority | Severity (high → P0/P1, medium → P2, low → P3) |
| Hosts and tags | Affected services |
| Alert snapshot | Link to Datadog dashboard |
Severity mapping
Configure how Datadog alert priorities map to Runframe severities:
| Datadog priority | Default Runframe severity |
|---|---|
| Critical | P0 |
| High | P1 |
| Medium | P2 |
| Low | P3 |
Customize severity mapping
Adjust severity mapping per monitor based on your service’s priorities. A “low” priority database alert might be P1 if the database is business-critical.
Bi-directional sync
When enabled, Runframe status updates post back to Datadog:
- Investigating → the alert is acknowledged
- Identified → the alert is annotated with the root cause
- Monitoring → the alert is annotated that a fix is deployed
- Resolved → the alert is resolved
Sentry
Create incidents from Sentry error events and alerts.
Setup
- Navigate to Settings → Integrations → Sentry
- Click Connect Sentry
- Enter your Sentry organization slug and authentication token
- Click Test Connection
- Configure which projects and alert rules should create incidents
Sentry to Runframe field mapping
| Sentry field | Runframe field |
|---|---|
| Issue title | Incident title |
| Issue message | Description |
| Issue level | Severity (error → P1, warning → P2) |
| Project | Affected service |
| First seen | Detection time |
| Event count | Impact indicator |
Severity mapping
| Sentry level | Default Runframe severity |
|---|---|
| Fatal | P0 |
| Error | P1 |
| Warning | P2 |
| Info | P3 |
Bi-directional sync
Enable bi-directional sync to link Runframe incidents to Sentry issues:
- Runframe creates a Sentry issue note when an incident is created
- Status updates in Runframe post to Sentry issue comments
- Resolved Runframe incidents resolve Sentry issues (optional)
Prometheus
Create incidents from Prometheus alert rules via Alertmanager.
Setup
- Navigate to Settings → Integrations → Prometheus
- Copy the Runframe webhook URL
- In Alertmanager, configure a webhook receiver:
  ```yaml
  receivers:
    - name: 'runframe'
      webhook_configs:
        - url: 'https://api.runframe.io/webhooks/prometheus'
          send_resolved: true
  ```
- Associate the receiver with your alert routes
Prometheus to Runframe field mapping
| Prometheus field | Runframe field |
|---|---|
| Alert name | Incident title |
| Alert annotations | Description |
| Alert labels | Service and environment tags |
| Alert severity | Severity (critical → P0, warning → P2) |
| Firing time | Detection time |
Severity mapping
| Prometheus severity | Default Runframe severity |
|---|---|
| Critical | P0 |
| High | P1 |
| Warning | P2 |
| Info | P3 |
Alertmanager configuration example
```yaml
route:
  group_by: ['alertname', 'cluster', 'service']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 12h
  receiver: 'runframe'

receivers:
  - name: 'runframe'
    webhook_configs:
      - url: 'https://api.runframe.io/webhooks/prometheus?api_key=YOUR_KEY'
        send_resolved: true
```
Keep your API key secure
Treat Runframe API keys like passwords. Use Alertmanager templates or secrets management to avoid hardcoding keys in config files.
PagerDuty
Migrate from PagerDuty or run both tools in parallel during transition.
Migration mode
Runframe can receive PagerDuty webhooks to create incidents while you transition:
- Navigate to Settings → Integrations → PagerDuty
- Copy the Runframe webhook URL
- In PagerDuty, create a webhook extension:
  - Go to Service → Integrations → Extensions
  - Add a Generic Webhook extension
  - Paste the Runframe URL
- Configure which services should send webhooks
PagerDuty to Runframe field mapping
| PagerDuty field | Runframe field |
|---|---|
| Incident title | Incident title |
| Incident description | Description |
| Incident urgency | Severity (high → P0/P1, low → P2/P3) |
| Service name | Affected service |
| Assigned to | Responder assignment |
Bi-directional sync
Enable bi-directional sync to:
- Update PagerDuty incidents when Runframe status changes
- Acknowledge PagerDuty alerts from Runframe
- Resolve PagerDuty incidents when Runframe incidents resolve
Run PagerDuty and Runframe in parallel
During migration, run both tools simultaneously. Start with non-critical services, then migrate fully once confident.
Custom webhooks
Create incidents from any tool that supports webhooks.
Setup
- Navigate to Settings → Integrations → Custom Webhooks
- Click New Webhook
- Copy the unique webhook URL
- Configure your tool to POST to this URL
Webhook payload format
Send a JSON payload with these fields:
```json
{
  "title": "Incident title",
  "description": "What's happening",
  "severity": "P1",
  "service": "service-name",
  "affected_services": ["api", "database"],
  "customer_impact": true,
  "tags": {
    "environment": "production",
    "region": "us-east-1"
  }
}
```
Required and optional fields
| Field | Required | Description |
|---|---|---|
| title | Yes | Brief incident summary |
| description | No | What’s happening, symptoms observed |
| severity | No | P0 through P4 (default: P2) |
| service | No | Primary affected service |
| affected_services | No | Array of affected services |
| customer_impact | No | Boolean (default: false) |
| tags | No | Object with key-value pairs |
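The rules in the table above can be sketched as a small payload builder that applies the documented defaults and rejects unknown fields. `build_webhook_payload` is a hypothetical helper name, not part of any Runframe SDK:

```python
def build_webhook_payload(title: str, **optional) -> dict:
    """Assemble a custom-webhook payload, applying the documented defaults.

    `title` is the only required field; severity defaults to P2 and
    customer_impact to false, per the field table above.
    """
    allowed = {"description", "severity", "service",
               "affected_services", "customer_impact", "tags"}
    unknown = set(optional) - allowed
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")

    payload = {"title": title, "severity": "P2", "customer_impact": False}
    payload.update(optional)

    if payload["severity"] not in {"P0", "P1", "P2", "P3", "P4"}:
        raise ValueError("severity must be P0 through P4")
    return payload
```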
Example: Custom monitoring script
```shell
curl -X POST https://api.runframe.io/webhooks/custom/YOUR_WEBHOOK_ID \
  -H "Content-Type: application/json" \
  -d '{
    "title": "High memory usage on production servers",
    "description": "Memory usage above 90% on 3 servers",
    "severity": "P1",
    "service": "backend-api",
    "customer_impact": true
  }'
```
API
Build custom integrations with the Runframe REST API.
Authentication
All API requests require an API key:
```shell
curl https://api.runframe.io/v1/incidents \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"
```
Create an incident
```shell
curl https://api.runframe.io/v1/incidents \
  -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "API latency spike",
    "severity": "P1",
    "description": "API response times > 5s",
    "affected_services": ["api-backend"],
    "customer_impact": true
  }'
```
Update an incident
```shell
curl https://api.runframe.io/v1/incidents/INC-042 \
  -X PATCH \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "identified"
  }'
```
See the API reference for complete documentation.
Testing integrations
After setting up an integration, verify it works correctly.
Test incident creation
- Trigger a test alert from your monitoring tool
- Verify Runframe creates the incident
- Check that field mapping is correct
- Confirm the right people were notified
Test bi-directional sync
- Create an incident in Runframe
- Update the status
- Verify the update reflects in the integrated tool
- Resolve the incident and confirm sync
Monitoring integration health
Runframe tracks integration health:
| Metric | Description |
|---|---|
| Webhook success rate | Percentage of successful webhook deliveries |
| Last successful sync | Timestamp of last bi-directional sync |
| Error logs | Recent integration failures |
View integration health in Settings → Integrations.
Best practices
Integration strategy
Do:
- Start simple – Begin with 1 to 2 critical integrations
- Test thoroughly – Verify integrations work before relying on them
- Document customizations – Keep records of field mappings and severity rules
- Monitor health – Check integration status regularly
Don’t:
- Don’t create duplicate incidents – Deduplicate alerts from multiple sources
- Don’t over-automate – Keep manual control for high-severity incidents
- Don’t ignore errors – Failed webhooks need investigation
- Don’t set and forget – Review integrations quarterly
Reducing alert noise
Integrations can create too many incidents. Reduce noise:
- Aggregate alerts – Group similar alerts into a single incident
- Set thresholds – Only create incidents for alerts above a severity threshold
- Time-based windows – Suppress repeated alerts within a time window
- Smart deduplication – Link duplicate incidents instead of creating new ones
Alert fatigue is real
Too many incidents train teams to ignore notifications. Be selective about which alerts create incidents.
Need more?
- Slash Commands – Complete /inc command reference
- Incidents – Incident lifecycle and severity
- On-Call – Scheduling and rotations
- Webhooks – Custom webhook integrations
- Web Dashboard – Integration management UI