AI Agents Orchestration Platform

Wakeel

/wa·keel/ — Arabic for agent, representative

Orchestrate AI Agents at Scale

A self-hosted platform to configure, trigger, and orchestrate AI agents across our delivery pipeline — from code reviews to deployments, all through the tools our team already uses.

GitHub Jira Slack Email Kubernetes
The Approach

The Deterministic Sandwich

Wakeel wraps AI capabilities in predictable, reliable operations, confining AI unpredictability to the steps that genuinely need it.

Before Hooks

DETERMINISTIC

  • Clone repository
  • Create branch
  • Set up workspace
  • Pull context

AI Agent Task

NON-DETERMINISTIC

  • Write code
  • Review changes
  • Generate tests
  • Analyze code

After Hooks

DETERMINISTIC

  • Commit & push
  • Open PR
  • Update Jira
  • Post summary

AI unpredictability is sandwiched between reliable, deterministic operations, so the blast radius is limited to the single step that actually needs AI.
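The sandwich above can be sketched as a simple pipeline in which deterministic hooks wrap one non-deterministic agent step. This is a minimal illustration; `SandwichRun` and the hook shapes are assumptions, not the platform's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# A hook receives the run context and returns an updated context.
Hook = Callable[[Dict], Dict]

@dataclass
class SandwichRun:
    """Deterministic hooks wrap a single non-deterministic agent step."""
    before: List[Hook] = field(default_factory=list)   # clone, branch, workspace, context
    agent_task: Hook = lambda ctx: ctx                 # the only non-deterministic step
    after: List[Hook] = field(default_factory=list)    # commit, PR, Jira, summary

    def execute(self, ctx: Dict) -> Dict:
        for hook in self.before:      # deterministic setup
            ctx = hook(ctx)
        ctx = self.agent_task(ctx)    # AI does its work here, and only here
        for hook in self.after:       # deterministic wrap-up
            ctx = hook(ctx)
        return ctx
```

Because the before/after hooks are plain functions, they can be tested and retried like any other code; only `agent_task` carries AI variance.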

How It Works

Trigger from anywhere.
Respond through the same channels.

Entry Points

@wakeel on GitHub issue
Assign Jira ticket to Wakeel
Add wakeel-qa Jira label
@wakeel in Slack
CC wakeel@almosafer.com
Request review from Wakeel

Wakeel Platform

Event Router → Task Queue → Runner

Self-hosted on our K8s cluster

Outputs

Opens PR with code changes
Posts review comments
Commits test scenarios & automation
Posts summary in Slack
Updates Jira ticket status

No new tools to learn. Wakeel lives where our team already works.
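The entry-point-to-agent flow can be sketched as a small routing table: every incoming event, whatever the channel, is matched to the agent that should handle it. This is a hedged sketch; `Event`, `EventRouter`, and the route keys are illustrative names, not the platform's real schema:

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass(frozen=True)
class Event:
    source: str    # "github", "jira", "slack", "email"
    kind: str      # e.g. "mention", "label_added", "review_requested"
    payload: dict

class EventRouter:
    """Maps (source, kind) pairs to the agent that should be enqueued."""
    def __init__(self) -> None:
        self._routes: Dict[Tuple[str, str], str] = {}

    def register(self, source: str, kind: str, agent: str) -> None:
        self._routes[(source, kind)] = agent

    def route(self, event: Event) -> Optional[str]:
        """Return the agent name to enqueue, or None if nothing matches."""
        return self._routes.get((event.source, event.kind))

router = EventRouter()
router.register("github", "review_requested", "pr-reviewer")
router.register("jira", "label_added", "qa-test-writer")
```

The same table extends to Slack mentions and email CCs by registering more routes, which is what keeps new entry points a config change rather than a code change.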

Design & Safeguards

Built to adapt, built to last

Every concern is addressed at the architecture level — not as an afterthought.

Security

Self-hosted on our K8s cluster. Code context is sent only to the configured model API.

Orchestration, config, and data stay on our infrastructure — only inference calls go external
Per-agent tool allow/deny lists
Scoped repo access per agent
Full audit trail in our own database
Every output goes through human review before merge

Cost Control

Hard ceiling on concurrent runners. Queue absorbs bursts. We control the spend.

Example snapshot: 6/10 runner pods active, 3 runs queued
Queue-based — no runaway scaling
Configurable model per task complexity
Per-run token & cost tracking with dashboard overview
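The ceiling-plus-queue model can be sketched with a fixed worker pool: concurrency is bounded by the number of workers, and bursts simply wait in the queue instead of triggering more pods. Illustrative only; `MAX_RUNNERS` and `execute_run` are assumed names:

```python
import queue
import threading

MAX_RUNNERS = 10  # hard ceiling on concurrent runs (assumption: set per cluster)

task_queue: queue.Queue = queue.Queue()
completed = []

def execute_run(run_id: str) -> None:
    # Placeholder for the real agent run.
    completed.append(run_id)

def worker() -> None:
    while True:
        run_id = task_queue.get()
        try:
            execute_run(run_id)
        finally:
            task_queue.task_done()

# The worker count, not autoscaling, bounds spend: excess work waits in the queue.
for _ in range(MAX_RUNNERS):
    threading.Thread(target=worker, daemon=True).start()
```

A burst of fifty triggers just means a longer queue; at most ten runs, and therefore at most ten runners' worth of model spend, are ever in flight.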

Reliability

Every run is observable, recoverable, and graded by the people who use it.

End users can grade every agent run — feedback drives continuous improvement
Real-time observability — every step streamed to dashboard
Guardrails: max PR size, required tests, kill switch from dashboard
Automatic retries with exponential backoff on transient failures
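The retry behaviour described above can be sketched with a small exponential-backoff helper. Names here (`TransientError`, `with_retries`) are illustrative, not the platform's actual API:

```python
import time

class TransientError(Exception):
    """Raised for failures worth retrying (e.g. network blips, rate limits)."""

def with_retries(fn, max_attempts: int = 4, base_delay: float = 1.0):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

Non-transient errors propagate immediately, so a genuinely broken run fails fast instead of retrying blindly.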

Zero Vendor Lock-in

Every external dependency is behind an adapter. Swap providers without touching core logic.

Local / on-prem models planned — model is a config value, not code
Swap AI provider, harness, or integration via adapter pattern
Jira ↔ Linear, GitHub ↔ GitLab, Slack ↔ Teams — one adapter each
Agent config lives in our DB — new agent = new config, no code changes
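The adapter seam can be sketched as core logic that depends only on a small protocol, with one thin adapter per vendor. `Tracker`, `JiraAdapter`, and the method names are assumptions for illustration:

```python
from typing import List, Protocol, Tuple

class Tracker(Protocol):
    """What the core engine needs from any issue tracker (Jira, Linear, ...)."""
    def add_comment(self, ticket_id: str, body: str) -> None: ...
    def set_status(self, ticket_id: str, status: str) -> None: ...

class JiraAdapter:
    """One vendor-specific adapter; a LinearAdapter would mirror this shape."""
    def __init__(self) -> None:
        self.log: List[Tuple[str, str, str]] = []  # stand-in for real API calls

    def add_comment(self, ticket_id: str, body: str) -> None:
        self.log.append(("comment", ticket_id, body))

    def set_status(self, ticket_id: str, status: str) -> None:
        self.log.append(("status", ticket_id, status))

def finish_run(tracker: Tracker, ticket_id: str) -> None:
    """Core logic depends only on the protocol, never on a vendor SDK."""
    tracker.set_status(ticket_id, "In Review")
    tracker.add_comment(ticket_id, "Wakeel run complete")
```

Swapping Jira for Linear then means writing one new adapter class; `finish_run` and everything above it stay untouched.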

One Platform, Many Agents

Configure any agent. Orchestrate everything.

The same engine powers every agent — only the config changes.

Reviewer

1 Review requested
2 Checkout PR branch
3 Analyze diff + standards
4 Post line comments
5 Approve / request changes
  • Instant first-pass review
  • Catches style & missing tests
  • Review cycle: days → hours

QA

1 Ticket labeled for QA
2 Read ticket + linked PRs
3 Write scenarios + tests
4 Open PR with tests
5 Report coverage to Jira
  • Instant test scenarios on entry
  • Follows our framework & patterns
  • Consistent coverage, no gaps

Reports

1 Scheduled or on-demand
2 Gather data from sources
3 Summarize & format report
4 Deliver via email / Slack
5 Archive for audit trail
  • Sprint & project summaries
  • Stakeholder status updates
  • Dev activity digests

Your Agent Here

Same engine, new config — developer, DevOps, security, documentation, or anything your team needs.
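A "new agent = new config" entry might look roughly like the following. The field names and values are hypothetical, sketched from the triggers, hooks, and tool lists described elsewhere on this page, not the actual DB schema:

```python
# Hypothetical shape of a DB-stored agent config: adding an agent is adding
# a record like this, not writing new code. All field names are illustrative.
reviewer_config = {
    "name": "pr-reviewer",
    "trigger": {"source": "github", "kind": "review_requested"},
    "model": "claude-opus",  # swappable per task complexity
    "tools": {"allow": ["read_file", "post_review"], "deny": ["write_file"]},
    "before_hooks": ["clone_repo", "checkout_pr_branch", "fetch_context"],
    "after_hooks": ["submit_review", "label_pr"],
}
```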

Pilot Agents

What we're shipping first

Two agents, two repos, full end-to-end visibility — expanding based on results.

PR Reviewer

Live — running on staging

Active

Triggers

Jira ticket moved to In Code Review with label wakeel-review
PR opened, reopened, review requested, or ready for review

What happens

1 React with 👀 & post "on it" comment with dashboard link
2 Clone repo & checkout PR branch
3 Fetch PR context (diff, comments, metadata) & Jira ticket for business context
4 AI reviews: correctness, security, maintainability, performance, tests & functional alignment with ticket requirements
5 Submit GitHub review with inline comments (approve / request changes)
6 Ask PR author for feedback & label PR reviewed_by_wakeel

Auto-abort

PR closed or converted to draft → run cancelled
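The auto-abort condition reduces to a small guard the runner can check before each step. A minimal sketch; `should_abort` and the PR-state fields are assumed names:

```python
def should_abort(pr: dict) -> bool:
    """Cancel the run if the PR was closed or converted back to draft.

    Assumes a GitHub-style PR payload with "state" and "draft" fields.
    """
    return pr.get("state") == "closed" or bool(pr.get("draft", False))
```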

Claude 4.6 Opus Context7 MCP Brave Search MCP Read-only tools

QA Test Writer

Up next — planned for next sprint

Planned

Triggers

Jira ticket moved to Ready for QA with label wakeel-qa
PR merged to main on target repos

What will happen

1 Read Jira ticket details & linked PRs / acceptance criteria
2 Clone repo & explore changed modules and existing tests
3 AI generates test scenarios from ticket & acceptance criteria
4 Run scenarios in a real browser to validate flows before writing code
5 Generate Cypress / Playwright automation from validated scenarios
6 Commit, push, and open PR with test suite
7 Post coverage summary as Jira comment — QA reviews and merges

Human gate

QA sign-off required before merge — agent writes, humans approve
Claude 4.6 Opus Cypress E2E skill Context7 MCP Read + Write tools

Live Demo

Let's see it in action

A walkthrough of what's been built so far — the dashboard, agent configuration, and a live agent run.

Dashboard
Agent Config
Agent Run

Wakeel — وكيل — Our AI Representative