PR Review Agent
AI-powered GitHub pull request reviewer that analyzes diffs, retrieves relevant code context (RAG), and posts production-grade feedback directly on PRs.
Walkthrough
Video
End-to-end view of how the workflow behaves in practice.
Architecture
How pieces connect
GitHub webhooks arrive at FastAPI; LangGraph coordinates retrieval, review, and publishing, with durable SQLite checkpoints.
Step 01
GitHub
pull_request webhook: opened, synchronize, reopened.
Step 02
FastAPI
Ingress, validation, and webhook security.
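Webhook security here means verifying GitHub's `X-Hub-Signature-256` header against the raw request body. A minimal stdlib sketch of that check (function name and framing are illustrative, not this project's actual code):

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Validate GitHub's X-Hub-Signature-256 header against the raw body."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking information via timing
    return hmac.compare_digest(expected, signature_header)
```

In FastAPI this would run against the raw body before any JSON parsing, rejecting the request with a 401 when the check fails.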
Step 03
LangGraph
Fetch → validate → chunk diff → RAG → review → merge findings → publish.
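The fetch → validate → review → publish sequence can be sketched as a tiny linear state machine in plain Python. This is an illustrative stand-in for the LangGraph graph, not its API; node names and state keys are assumptions:

```python
from typing import Callable

# Each node takes the shared state dict and returns it updated.
def fetch(state: dict) -> dict:
    state["pr"] = {"number": state["pr_number"], "diff": state.get("diff", "")}
    return state

def validate(state: dict) -> dict:
    state["valid"] = bool(state["pr"]["diff"])
    return state

def review(state: dict) -> dict:
    # Placeholder for per-chunk LLM review
    state["findings"] = ["example finding"] if state["valid"] else []
    return state

def publish(state: dict) -> dict:
    state["published"] = bool(state["findings"])
    return state

PIPELINE: list[Callable[[dict], dict]] = [fetch, validate, review, publish]

def run(state: dict) -> dict:
    for node in PIPELINE:
        state = node(state)
    return state
```

The real graph adds branching (e.g., skip review when validation fails) and checkpointing between nodes, but the shape — named nodes passing one state object forward — is the same idea.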
Step 04
Gemini + FAISS
LLM analysis with semantic code retrieval.
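Under the hood, semantic retrieval ranks code snippets by embedding similarity; FAISS makes that fast at scale, but the core idea is just nearest-neighbor search. A toy stdlib sketch with made-up snippet ids and hand-written vectors:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k snippet ids whose embeddings are closest to the query."""
    ranked = sorted(index, key=lambda sid: cosine(query, index[sid]), reverse=True)
    return ranked[:k]
```

In the real system the vectors come from an embedding model and the retrieved snippets are prepended to the review prompt as context.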
Step 05
Publish
GitHub PR comments; optional Slack digest.
Foundation
SQLite
Stateful checkpoints for long runs and restarts.
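Durable checkpointing can be as simple as one upserted row per (run, step), replayed after a restart. A stdlib `sqlite3` sketch of the idea — the schema and function names are illustrative, not LangGraph's actual checkpointer:

```python
import json
import sqlite3

def open_store(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS checkpoints ("
        "run_id TEXT, step TEXT, state TEXT, "
        "PRIMARY KEY (run_id, step))"
    )
    return conn

def save(conn: sqlite3.Connection, run_id: str, step: str, state: dict) -> None:
    """Persist the pipeline state after a step completes."""
    conn.execute(
        "INSERT OR REPLACE INTO checkpoints VALUES (?, ?, ?)",
        (run_id, step, json.dumps(state)),
    )
    conn.commit()

def load_latest(conn: sqlite3.Connection, run_id: str):
    """Return (step, state) for the most recent checkpoint, or None."""
    row = conn.execute(
        "SELECT step, state FROM checkpoints WHERE run_id = ? "
        "ORDER BY rowid DESC LIMIT 1",
        (run_id,),
    ).fetchone()
    return (row[0], json.loads(row[1])) if row else None
```

On restart, a run resumes from `load_latest` instead of re-fetching the PR and re-reviewing chunks that already finished.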
Problem
What it helps with
Code reviews create bottlenecks and inconsistency. Reviewers spend time on obvious issues and context gathering instead of focusing on architecture and correctness. PRs wait in the queue, quality varies, and production risks slip through.
Behaviors
- Receives GitHub pull_request events (opened, synchronize, reopened) via webhook
- Validates and chunks large diffs to stay efficient and reliable
- Uses RAG to retrieve relevant repository code for better context
- Reviews changes across a 9-dimension framework (critical issues, reliability, database safety, resources, code quality, validation, performance, architecture, production readiness)
- Posts review findings directly on the PR (and optionally sends a Slack summary)
- Gracefully degrades on timeouts/quota issues and posts partial results when needed
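The diff-chunking behavior above can be sketched by splitting a unified diff on file boundaries and packing files into chunks under a size budget. This is a simplification (the real splitter may also split within very large files), with an assumed character budget:

```python
def chunk_diff(diff: str, max_chars: int = 4000) -> list[str]:
    """Split a unified diff into per-file sections, then pack them into chunks."""
    sections, current = [], []
    for line in diff.splitlines(keepends=True):
        # "diff --git" marks the start of each file's section
        if line.startswith("diff --git") and current:
            sections.append("".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("".join(current))

    chunks, buf, size = [], [], 0
    for sec in sections:
        if buf and size + len(sec) > max_chars:
            chunks.append("".join(buf))
            buf, size = [], 0
        buf.append(sec)
        size += len(sec)
    if buf:
        chunks.append("".join(buf))
    return chunks
```

Keeping whole files together in a chunk matters for review quality: the model sees each file's changes in one pass rather than split mid-hunk.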
Flow
How it works
1. GitHub webhook triggers the FastAPI webhook receiver for PR events
2. LangGraph orchestrates the state machine: fetch PR metadata → validate → analyze diff → build RAG index → review chunks → merge + validate findings → publish results
3. FAISS performs semantic retrieval of relevant code context
4. Google Gemini runs the LLM analysis on each chunk
5. Results are published as GitHub PR comments and optionally to Slack; SQLite checkpoints preserve progress across restarts
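Merging findings from reviewed chunks (the merge + validate step above) amounts to flattening per-chunk results, dropping duplicates, and sorting most-severe first. A hypothetical sketch — the `Finding` shape and severity labels are assumptions:

```python
from dataclasses import dataclass

SEVERITY_ORDER = {"critical": 0, "warning": 1, "info": 2}

@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    severity: str
    message: str

def merge_findings(per_chunk: list[list[Finding]]) -> list[Finding]:
    """Flatten chunk results, drop duplicates, and sort most-severe first."""
    seen, merged = set(), []
    for chunk in per_chunk:
        for f in chunk:
            key = (f.file, f.line, f.message)
            if key not in seen:
                seen.add(key)
                merged.append(f)
    return sorted(
        merged,
        key=lambda f: (SEVERITY_ORDER.get(f.severity, 99), f.file, f.line),
    )
```

Deduplication matters because overlapping RAG context can lead two chunks to flag the same line; validation on top of this would drop findings that reference lines outside the diff.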
Want this for your team?
We adapt triggers, approvals, and integrations to your environment, then ship with production discipline, not demos.
Next step
Something similar in your stack?
Tell us which systems and policies matter, and we'll map a pragmatic path to a pilot.