Your agents don’t just wait to be asked
Most AI tools are reactive — you ask, they answer. Scheduled Jobs make your agents proactive. Set a schedule, point it at any agent, and it fires automatically — running queries, checking metrics, scanning for anomalies, and posting results directly to a Slack channel or DM. No dashboards to check. No reports to pull. The insight comes to you.
What you can build
GMV & Revenue Alerts
If GMV drops more than 10% hour-over-hour, DM the CEO immediately with a breakdown by channel and cohort.
Fraud Monitoring
Run a fraud check every hour. If anomalies are detected, post a summary to
#fraud-alerts with transaction IDs and risk scores.
Daily Business Digest
Every morning at 8 AM, post yesterday’s key metrics — bookings, revenue, churn, NPS — to
#analytics.
Weekly Engineering Report
Every Monday, summarize last week’s PRs, incidents, and deploy count. Post to
#engineering.
Inventory Alerts
Check stock levels every 4 hours. DM the ops lead if any SKU drops below threshold.
Support Queue Health
Every 30 minutes during business hours, check ticket queue depth. Alert
#support-ops if SLA is at risk.
How it works
You set up a job
Pick an agent, write a prompt, set a schedule (cron), and choose where results go — a Slack channel or a direct message.
The schedule fires
At the configured time, SlackHive sends your prompt to the agent — exactly like a Slack message, but automated.
The agent runs
The agent executes fully — it can run queries, call APIs, check metrics, compare against thresholds, and reason about the results.
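As a sketch, one scheduled run amounts to: deliver the prompt, let the agent work, post the reply. A minimal illustration in Python (the `run_job`, `agent`, and `post` names here are hypothetical stand-ins, not SlackHive internals):

```python
# Illustrative sketch only: what a single scheduled run amounts to.

def run_job(agent, prompt: str, target: str, post) -> str:
    """Send the prompt to the agent as if it were a Slack message, then deliver the reply."""
    reply = agent(prompt)   # the agent may run queries, call APIs, compare thresholds
    post(target, reply)     # results land in the configured channel or DM
    return reply

# Toy usage with stand-in callables:
sent = []
reply = run_job(lambda p: f"checked: {p}", "error rates last 15 min", "#ops",
                lambda t, m: sent.append((t, m)))
print(reply)  # checked: error rates last 15 min
```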
Any agent, any schedule
Jobs are not limited to boss agents. Assign a job to any agent:
| Agent | Job example |
|---|---|
| @data-analyst | Run revenue query every morning, post to #finance |
| @devops | Check error rates every 15 min, DM on-call if spiking |
| @fraud-bot | Hourly transaction anomaly scan, post to #fraud-alerts |
| @support-agent | Check ticket backlog every 30 min, alert if queue > 50 |
| @boss | Daily summary across all teams, post to #general |
Creating a job
Go to Jobs in the sidebar → New Job.
Choose an agent
Select which agent should run this job. Pick the specialist best suited for the task.
Write the prompt
Write exactly what you’d say to the agent in Slack. Be specific about what to check, what thresholds matter, and what to include in the response.
Set the schedule
Use a cron expression or pick a preset:
| Preset | Cron |
|---|---|
| Every hour | 0 * * * * |
| Every 15 minutes | */15 * * * * |
| Daily at 8 AM | 0 8 * * * |
| Weekdays at 9 AM | 0 9 * * 1-5 |
| Weekly on Monday | 0 9 * * 1 |
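The presets above use standard five-field cron syntax (minute, hour, day of month, month, day of week). To illustrate how those fields are evaluated, here is a minimal matcher covering just the patterns in the table: wildcards, `*/n` steps, ranges, and plain numbers (comma lists are omitted). This is a sketch, not SlackHive's scheduler:

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    """Match one cron field: supports *, */n steps, a-b ranges, and plain numbers."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    if "-" in field:
        lo, hi = map(int, field.split("-"))
        return lo <= value <= hi
    return int(field) == value

def cron_matches(expr: str, dt: datetime) -> bool:
    """True if dt matches a five-field cron expression (minute hour dom month dow)."""
    minute, hour, dom, month, dow = expr.split()
    return (_field_matches(minute, dt.minute)
            and _field_matches(hour, dt.hour)
            and _field_matches(dom, dt.day)
            and _field_matches(month, dt.month)
            and _field_matches(dow, dt.isoweekday() % 7))  # cron convention: 0 = Sunday

# "Weekdays at 9 AM" fires on Monday 2024-01-01 at 09:00 but not on Saturday
print(cron_matches("0 9 * * 1-5", datetime(2024, 1, 1, 9, 0)))  # True
```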
Set the target — Channel or DM
Choose where the agent posts its results.
- Post to a Channel
- Send a DM
Post results publicly to any Slack channel —
#analytics, #fraud-alerts, #engineering, etc.
How to get the channel ID:
- Open Slack and right-click the channel name in the sidebar
- Click Copy link
- The link looks like: https://app.slack.com/client/T.../C08ABCD1234
- The channel ID is the part starting with C — e.g. C08ABCD1234
Make sure the agent’s Slack bot has been invited to the channel first. If not, invite it with /invite @agent-name.
Channel vs DM — when to use which
| Use case | Channel | DM |
|---|---|---|
| Team-wide reports (daily digest, weekly summary) | ✅ | |
| Public monitoring dashboards (#fraud-alerts, #ops) | ✅ | |
| Personal alerts (CEO wants GMV drop notification) | | ✅ |
| On-call paging (engineer gets error spike alert) | | ✅ |
| Sensitive metrics (revenue, churn — exec-only) | | ✅ |
| Cross-team awareness (PM, data, eng all need to know) | ✅ | |
Writing effective job prompts
The prompt is the most important part. A few principles: be specific about what to check, what thresholds matter, and what to include in the response.
Monitoring job runs
Every job run is logged — status, output, duration, and timestamp — visible in Jobs → [job name] → Run History.
| Status | Meaning |
|---|---|
| ✅ Success | Agent ran and posted to Slack |
| ❌ Error | Agent not running, Claude error, or Slack API failure |
| ⏭ Skipped | Agent was offline when the job fired |
If a job fails, the error is logged in run history. Common causes: the agent’s Slack app token expired, the agent wasn’t running, or the MCP server the agent needed was down.
Example: GMV drop alert
Here’s a complete setup for a GMV monitoring job:
| Field | Value |
|---|---|
| Name | Hourly GMV Alert |
| Agent | @data-analyst |
| Schedule | 0 * * * * (every hour) |
| Target | DM → CEO’s Slack user ID |
| Prompt | Compare GMV for the last hour vs the same hour last week. If it’s down more than 15%, send an urgent alert with a breakdown by channel, device type, and acquisition source. If it’s within normal range, send a single line: “GMV on track — Y last week.” |
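The prompt above boils down to one comparison. A sketch of what the agent effectively computes (`gmv_alert` and its inputs are hypothetical; in a real run the agent pulls these numbers from its data source):

```python
def gmv_alert(current: float, same_hour_last_week: float, threshold_pct: float = 15.0) -> str:
    """One-line status: urgent alert if GMV dropped more than threshold_pct week-over-week."""
    change = (current - same_hour_last_week) / same_hour_last_week * 100
    if change < -threshold_pct:
        return f"URGENT: GMV down {abs(change):.1f}% vs same hour last week"
    return f"GMV on track ({change:+.1f}% vs same hour last week)"

print(gmv_alert(80_000, 100_000))  # URGENT: GMV down 20.0% vs same hour last week
```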
Next steps
Creating Agents
Build the specialist agents that power your jobs.
MCP Servers
Connect agents to your data sources so jobs can query real data.