Field notes on AI-assisted delivery.
Short, reproducible write-ups of techniques we've found useful while building AI-assisted code-review and delivery pipelines. Each article is self-contained and stack-agnostic — take what's useful, adapt what isn't.
-
A multi-domain parallel AI review pipeline with dismissal-based merge gating.
How to fan multiple specialized AI reviewers out across a single PR, consolidate their structured findings into one review, and manage the branch-protection merge gate without letting a bot-authored approval leak into your required-approvals count.
-
Per-datum prompt-injection isolation for AI review inputs.
A small, robust pattern for defusing prompt-injection attacks when your CI pipeline feeds untrusted reviewer replies or PR content into an LLM: isolate each datum, declare it untrusted, and encode past the delimiter-stuffing attack surface.
-
Detecting empty-but-blocking AI review output.
A subtle failure mode of structured LLM output:
has_blocking_findings=truewith an emptyfindingsarray — silently blocking a merge with nothing actionable to act on. Here's how to catch it at two layers. -
A bounded, provably-terminating state machine for AI review loops.
AI reviewers will cheerfully re-raise the same finding forever if you let them. A four-action taxonomy plus a round-count escalation trigger gives you a provable upper bound on cycles per thread — no more hand-wavy "eventually converges" review bots.
-
Scope-based merge gating (and closing the severity-downgrade loophole).
Making the merge-gate signal scope-driven rather than severity-driven, with a JSON-Schema constraint that rejects the "raise as
informationalto sidestep the gate" evasion path. Includes the three-waycurrent_diff/required_by_current_diff/outside_difftaxonomy.