2026-03-27 · 9 min read · Performance · Human-in-the-Loop

Human-in-the-Loop Without the Slowdown

The most common objection to approval workflows is latency. Here's how to design them so they move fast — without sacrificing the safety you added them for.

When you tell an engineering team "we're adding human approval before AI agents execute commands," the first response is usually one word: latency.

It's a fair concern. An agent that pauses for 45 seconds waiting for a reviewer to click "approve" is worse than no agent at all. You've added friction without adding intelligence. The pipeline stalls. People start approving everything without reading. The control degrades into theater.

But the response isn't to remove human oversight — it's to design it well.

Here's how to build approval workflows that are genuinely fast.

The Latency Problem Is Mostly Solved by Whitelisting

The key insight: not every command needs a human. Most commands in a typical agent session are repetitive and predictable. They're the same docker commands, the same git operations, the same log tails.

A well-tuned whitelist means 90%+ of commands execute instantly — no human involved, zero latency. The human reviewer only sees the genuinely novel or risky commands.

Whitelist hit latency: ~5ms
Typical whitelist hit rate (after one week): 91%
Average approval time for novel commands: 8.4s

That 8.4-second approval time is real — from our own internal usage. And novel commands deserve that pause. If an AI agent is doing something it hasn't done before on a production system, a human glancing at it for eight seconds is exactly the right safety net.

The math: If 91% of commands take 5ms and 9% take 8.4s, your average latency per command is about 760ms. For most workflows, that's completely acceptable — and the 9% that get reviewed are the ones you actually want eyes on.
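That arithmetic is easy to sanity-check in a few lines, using the hit rate and timing figures quoted above:

```python
# Expected per-command latency under a two-tier model: most commands
# take the fast path (whitelist hit), the rest wait for a human.
def expected_latency_ms(hit_rate: float, hit_ms: float, review_ms: float) -> float:
    return hit_rate * hit_ms + (1 - hit_rate) * review_ms

avg = expected_latency_ms(hit_rate=0.91, hit_ms=5, review_ms=8400)
# avg ≈ 760 ms, dominated almost entirely by the 9% that go to review
```

Note what the formula implies: the fast path barely matters. Driving the whitelist hit rate up is worth far more than shaving milliseconds off the hit latency.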

Five Techniques That Actually Reduce Approval Latency

1. Start with a tighter whitelist, loosen with data

The temptation is to start broad — whitelist entire command families like "all docker commands" or "all git commands." Resist it. Start with exact-match rules for commands you've actually seen.

After a week of real usage, your whitelist covers 80-90% of what your agents actually do. The remaining 10-20% is genuinely novel, and that is exactly what you want a human to review.

Expacti's AI suggestions engine helps here: after each session, it analyzes your approved commands and suggests regex/glob patterns to cover variations. You see: "We noticed you approved rm /tmp/build-abc, rm /tmp/build-def, and rm /tmp/build-ghi three times this week. Want to add rm /tmp/build-*?" You decide.
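A minimal version of that two-tier matcher is just exact rules plus glob patterns, checked in that order. This sketch uses Python's standard-library fnmatch; the specific rules are hypothetical examples, not a shipped ruleset:

```python
from fnmatch import fnmatch

# Exact-match rules added from commands you've actually seen.
EXACT = {"docker ps", "git status"}
# Glob patterns promoted from repeated near-identical approvals.
PATTERNS = ["rm /tmp/build-*", "docker logs container-*"]

def is_whitelisted(cmd: str) -> bool:
    """Exact match is cheapest; fall back to glob patterns."""
    if cmd in EXACT:
        return True
    return any(fnmatch(cmd, pattern) for pattern in PATTERNS)

is_whitelisted("rm /tmp/build-abc")       # True: covered by the promoted pattern
is_whitelisted("rm -rf /var/lib/pgsql")   # False: goes to human review
```

Starting from `EXACT` and only promoting to `PATTERNS` with evidence is the whole point of "tighter first, loosen with data."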

2. Use risk-gated timeouts

Not all unwhitelisted commands are equal. A docker ps that doesn't match the whitelist should have a short timeout and auto-approve if the reviewer doesn't respond. A rm -rf /var/lib/postgresql should auto-deny.

[policy]
# Low-risk commands: auto-approve if reviewer doesn't respond in 30s
timeout_seconds = 30
timeout_action = "allow"
timeout_min_risk = 0
timeout_max_risk = 25

# Medium risk: deny on timeout
# (configure per-org via API)

This means your reviewer only truly blocks the pipeline on genuinely risky novel commands — everything else flows through within 30 seconds.
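The decision rule behind that config can be sketched as a small function. The risk bands mirror the `[policy]` snippet above (0-25 allows on timeout) and the document's CRITICAL band (76-100); the action names themselves are illustrative:

```python
def timeout_action(risk: int) -> str:
    """What happens if no reviewer responds before the timeout expires."""
    if risk <= 25:
        return "allow"        # low risk: auto-approve after the 30s window
    if risk <= 75:
        return "deny"         # medium/high risk: fail safe, drop the command
    return "deny_and_page"    # critical: deny and alert a human immediately

timeout_action(12)   # "allow" — a stray `docker ps` flows through
timeout_action(72)   # "deny" — the SQL write waits for a human or dies
```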

3. Put reviewers on mobile

Approval latency is mostly reviewer availability latency. If your reviewer is staring at the dashboard when a command arrives, approval is 2-3 seconds. If they're in a meeting, it's 5 minutes.

The solution: mobile push notifications. When a novel command hits the queue, a push notification goes to the reviewer's phone with the command, the risk score, and one-tap approve/deny. They review it while the coffee is brewing.

Expacti's PWA ships with push notifications out of the box. You don't need native iOS/Android apps: the Web Push standard works in all modern mobile browsers, no App Store required.

4. Slack-native approval for ops teams

For teams that live in Slack, routing approvals through a Slack message is dramatically faster than context-switching to a separate tab. A Slack DM arrives, you see the command, you click ✅ or ❌, done.

⚠️ Approval required
Command: psql prod -c "UPDATE users SET..."
Risk: HIGH (SQL write, 72/100)
Agent: ci-deploy@prod-db
Session: #4ab2f8

[✅ Approve]  [❌ Deny]

Average approval time via Slack: 4.2 seconds in our testing. The message arrives in the same flow as your other work — there's no context switch, no tab to open.
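The message above maps naturally onto Slack's Block Kit: a section block for the command details plus an actions block holding the two buttons. This builds the payload only; actually posting it requires a Slack app and a call like `chat.postMessage`:

```python
def approval_blocks(command: str, risk: int, agent: str, session: str) -> list:
    """Build a Slack Block Kit payload with one-tap approve/deny buttons."""
    return [
        {"type": "section",
         "text": {"type": "mrkdwn",
                  "text": (f":warning: *Approval required*\n`{command}`\n"
                           f"Risk: *{risk}/100* · Agent: {agent} · Session: {session}")}},
        {"type": "actions",
         "elements": [
             {"type": "button", "style": "primary", "action_id": "approve",
              "text": {"type": "plain_text", "text": "✅ Approve"}, "value": session},
             {"type": "button", "style": "danger", "action_id": "deny",
              "text": {"type": "plain_text", "text": "❌ Deny"}, "value": session},
         ]},
    ]

payload = approval_blocks('psql prod -c "UPDATE users SET..."', 72,
                          "ci-deploy@prod-db", "#4ab2f8")
```

The `action_id` on each button is what your interactivity endpoint receives when the reviewer taps, so the approve/deny decision comes back as a single structured event.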

5. Backup reviewer chains

Single points of failure destroy uptime. If your primary reviewer is unavailable (sleeping, traveling, in the middle of an incident), the pipeline stalls.

Configure escalation chains: if the primary reviewer doesn't respond within 60 seconds, notify the backup reviewer. If they don't respond within 30 more seconds, page the on-call.

[policy]
reviewer_timeout_secs = 60
escalate_to = "[email protected]"
escalate_timeout_secs = 30
final_timeout_action = "deny"  # safe default

With a two-person chain, your window for pipeline stalls shrinks from "however long until reviewer returns" to a worst-case of ~90 seconds — after which it auto-denies safely.
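The chain reduces to a loop over (reviewer, timeout) pairs with a safe default at the end. This sketch assumes a blocking `wait_for_decision(reviewer, timeout_secs)` helper that returns "allow", "deny", or None on timeout — a hypothetical interface, not a documented API:

```python
# Escalation chain mirroring the config above: primary reviewer for 60s,
# then the backup for 30s, then fail safe.
CHAIN = [("primary-reviewer", 60), ("[email protected]", 30)]

def run_escalation(wait_for_decision) -> str:
    for reviewer, timeout_secs in CHAIN:
        decision = wait_for_decision(reviewer, timeout_secs)
        if decision is not None:
            return decision
    return "deny"  # final_timeout_action: nobody answered in ~90s total

# With an always-silent chain, the command is denied after the full window:
run_escalation(lambda reviewer, timeout_secs: None)  # "deny"
```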

What You Should Never Speed Up

Not everything should be optimized for throughput. Some commands are destructive or irreversible enough that they should always require deliberate human attention.

For these, Expacti's risk scoring will flag them as CRITICAL (76-100 range) automatically. You can configure CRITICAL commands to require explicit confirmation — no auto-approve, no 30-second timeout, requires active reviewer action.

The throughput trap: When everything auto-approves, reviewers stop reading, and the human-in-the-loop becomes a rubber stamp. Protect the critical path: make high-risk commands genuinely require attention, and your reviewers will stay sharp on everything else.

The Psychological Design of Fast Approval UIs

Approval latency isn't just a technical problem — it's a UX problem. A reviewer who feels confident about what they're approving acts faster than one who's uncertain.

A few things that make reviewers act faster:

Context surfacing

Show the reviewer not just the command, but the context: what session is this? What were the last 3 commands? What's the agent trying to accomplish? A command in context is easier to evaluate quickly than an isolated string.

Similarity hints

If a command is similar to one that's been whitelisted before, surface that: "≈ similar to whitelisted: docker logs container-*". This dramatically reduces cognitive load — the reviewer can pattern-match against previous decisions rather than evaluating from scratch.
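One cheap way to produce that hint is a string-similarity ratio against existing whitelist entries. The standard library's difflib is enough for a first pass; the 0.8 threshold and the sample entries are assumptions for illustration:

```python
from difflib import SequenceMatcher

WHITELIST = ["docker logs container-app", "git status", "tail -f /var/log/app.log"]

def nearest_whitelisted(cmd: str, threshold: float = 0.8):
    """Return the most similar whitelisted command, or None if nothing is close."""
    best = max(WHITELIST, key=lambda w: SequenceMatcher(None, cmd, w).ratio())
    if SequenceMatcher(None, cmd, best).ratio() >= threshold:
        return best
    return None

nearest_whitelisted("docker logs container-db")    # hints at the docker logs entry
nearest_whitelisted("rm -rf /var/lib/postgresql")  # None: genuinely novel
```

At dashboard scale you'd precompute or index these comparisons, but the principle is the same: surface the closest precedent so the reviewer pattern-matches instead of parsing from scratch.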

Keyboard-first approval

A to approve, D to deny, K to kill session. No mouse movement needed. For power users, this makes the difference between 2-second and 5-second approvals.

Risk scores, not risk alerts

Show a risk score (0-100) rather than a binary "safe/unsafe." A score of 15 reads differently than a score of 72. Reviewers internalize the scale over time, whereas binary alerts get tuned out the same way generic "caution" warnings do.

Measuring Your Approval Workflow Health

Track these metrics to know if your workflow is healthy:

Metric                          | Healthy | Warning
Whitelist hit rate              | >85%    | <70%
Avg approval latency (novel)    | <30s    | >120s
Timeout auto-deny rate          | <5%     | >15%
Reviewer response rate          | >95%    | <80%
CRITICAL commands auto-approved | 0%      | any

Expacti's analytics dashboard tracks all of these. If your whitelist hit rate drops, you need more rules. If timeout auto-deny is climbing, reviewers are unavailable when agents are running — schedule your agents better or add reviewers.
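If you log how each command was resolved, the first few metrics fall out of a simple count over the event stream. The event names here are hypothetical; the sample distribution matches the 91% hit rate quoted earlier:

```python
from collections import Counter

# One event per command: how it was ultimately resolved.
events = (["whitelist_hit"] * 91 + ["approved"] * 6
          + ["denied"] * 2 + ["timeout_deny"] * 1)

counts = Counter(events)
total = len(events)
novel = total - counts["whitelist_hit"]

hit_rate = counts["whitelist_hit"] / total       # 0.91: healthy (> 85%)
auto_deny_rate = counts["timeout_deny"] / novel  # ≈ 0.11: above healthy, below warning
```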

The Right Mental Model

Here's the reframe: human-in-the-loop isn't a safety tax. It's a risk-tiered access system.

Your agents operate at full speed for everything they've been explicitly cleared to do. The whitelist is the pre-approved zone. Only when an agent steps outside that zone does a human get involved — and that's exactly when a human should be involved.

The question isn't "how do we add human oversight without slowing things down?" The question is "what should agents be able to do without asking, and what requires a human?" Once you've answered that clearly, the latency design follows naturally.

Fast humans are a feature, not a performance optimization. Design the workflow so reviewers can act in 5 seconds when a command deserves it. Reserve that attention for commands that actually warrant it. Let the whitelist handle everything else.

See It In Action

Try the interactive demo — approve and deny commands with realistic risk scores, timing, and context. No signup required.

Open Interactive Demo