Stochastic Macro
Engineering notes

Field notes on AI-assisted delivery.

Short, reproducible write-ups of techniques we've found useful while building AI-assisted code-review and delivery pipelines. Each article is self-contained and stack-agnostic — take what's useful, adapt what isn't.

  1. Review-pipeline architecture

    A multi-domain parallel AI review pipeline with dismissal-based merge gating.

    How to fan multiple specialized AI reviewers out across a single PR, consolidate their structured findings into one review, and manage the branch-protection merge gate without letting a bot-authored approval leak into your required-approvals count.
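A minimal sketch of the fan-out/consolidate shape described above, assuming a thread-pool dispatch and a single consolidated bot review posted as a comment (the reviewer functions and `Finding` fields here are illustrative stand-ins for real LLM calls, not the article's actual interfaces):

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Finding:
    domain: str
    severity: str
    message: str

# Hypothetical domain reviewers standing in for real LLM-backed ones.
def security_reviewer(diff: str) -> list[Finding]:
    return [Finding("security", "blocking", "raw SQL built from user input")]

def style_reviewer(diff: str) -> list[Finding]:
    return []

REVIEWERS = [security_reviewer, style_reviewer]

def run_review(diff: str) -> dict:
    # Fan out: every domain reviewer sees the same diff, in parallel.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda review: review(diff), REVIEWERS)
    findings = [f for batch in batches for f in batch]
    # Consolidate into ONE review object so the bot posts a comment,
    # never an approval that could count toward required approvals.
    return {
        "event": "COMMENT",
        "findings": [vars(f) for f in findings],
    }
```

Posting `COMMENT` rather than `APPROVE` is the key consolidation choice: the merge gate reads the structured findings, while branch protection's required-approvals count stays human-only.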

  2. Security

    Per-datum prompt-injection isolation for AI review inputs.

    A small, robust pattern for defusing prompt-injection attacks when your CI pipeline feeds untrusted reviewer replies or PR content into an LLM: isolate each datum, declare it untrusted, and encode past the delimiter-stuffing attack surface.
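The isolate / declare-untrusted / encode steps can be sketched as follows (the wrapper tag name and attribute layout are assumptions for illustration; the point is that base64 encoding means attacker-supplied text can never contain a working closing delimiter):

```python
import base64
import json

def isolate(datum: str, label: str) -> str:
    """Wrap one untrusted datum for inclusion in an LLM prompt."""
    # Encode the payload so delimiter-stuffing (e.g. a fake closing
    # tag inside a PR comment) cannot terminate the block early:
    # base64 output contains no '<' or '>' at all.
    encoded = base64.b64encode(datum.encode("utf-8")).decode("ascii")
    return (
        f'<untrusted source={json.dumps(label)} encoding="base64">\n'
        f"{encoded}\n"
        f"</untrusted>"
    )
```

Each reviewer reply or PR field gets its own wrapped block (per-datum, not one big blob), so the surrounding prompt can state once that everything inside `<untrusted>` tags is data, never instructions.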

  3. Protocol design

    Detecting empty-but-blocking AI review output.

    A subtle failure mode of structured LLM output: has_blocking_findings=true with an empty findings array — silently blocking a merge with nothing actionable attached. Here's how to catch it at two layers.
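One plausible two-layer guard, assuming draft-07 JSON Schema for the first layer (the exact schema and field names in the article may differ; this is a sketch of the shape):

```python
# Layer 1 (schema): if the blocking flag is set, findings must be
# non-empty. A fragment any draft-07 validator with if/then can enforce.
BLOCKING_CONSISTENCY = {
    "if": {"properties": {"has_blocking_findings": {"const": True}}},
    "then": {"properties": {"findings": {"minItems": 1}}},
}

def check_consistency(review: dict) -> bool:
    # Layer 2 (code): belt-and-braces runtime check before the merge
    # gate consumes the review, in case schema validation was skipped.
    if review.get("has_blocking_findings"):
        return len(review.get("findings", [])) > 0
    return True
```

The runtime layer matters because a schema check can be silently disabled or out of date; an empty-but-blocking review should fail loudly, not park the PR with nothing to fix.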

  4. Convergence

    A bounded, provably-terminating state machine for AI review loops.

    AI reviewers will cheerfully re-raise the same finding forever if you let them. A four-action taxonomy plus a round-count escalation trigger gives you a provable upper bound on cycles per thread — no more hand-wavy "eventually converges" review bots.
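A sketch of the bound, with an illustrative four-action taxonomy (the article's actual action names and round limit may differ). Termination follows because the round counter strictly increases and only one action keeps a thread open:

```python
from enum import Enum

class Action(Enum):
    # Illustrative taxonomy — assumed labels, not the article's.
    FIXED = "fixed"            # author addressed the finding
    WITHDRAWN = "withdrawn"    # reviewer retracts the finding
    RE_RAISED = "re_raised"    # reviewer repeats the same finding
    ESCALATED = "escalated"    # handed to a human decision-maker

MAX_ROUNDS = 3  # assumed bound on re-raises per thread

def next_state(action: Action, round_no: int) -> str:
    if action in (Action.FIXED, Action.WITHDRAWN, Action.ESCALATED):
        return "closed"  # terminal: thread never loops again
    # Escalation trigger: at the round bound, a RE_RAISED thread is
    # forced to a human instead of cycling. Since round_no increases
    # by one per loop, "open" can be returned at most MAX_ROUNDS - 1
    # times — a provable upper bound on cycles per thread.
    if round_no >= MAX_ROUNDS:
        return "escalated_to_human"
    return "open"
```

The provable part is exactly this structure: every transition either closes the thread or increments a bounded counter, so no execution path loops forever.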

  5. Merge-gate design

    Scope-based merge gating (and closing the severity-downgrade loophole).

    Making the merge-gate signal scope-driven rather than severity-driven, with a JSON-Schema constraint that rejects the "raise as informational to sidestep the gate" evasion path. Includes the three-way current_diff / required_by_current_diff / outside_diff taxonomy.
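A sketch of the scope-driven gate and one possible draft-07 encoding of the anti-downgrade constraint (the schema fragment is an assumption about how the article expresses it; the three scope values are from the teaser above):

```python
# One way to encode the constraint in draft-07 JSON Schema: a finding
# whose scope makes it gate-relevant may not be filed as
# "informational", so downgrading severity cannot sidestep the gate.
GATE_CONSTRAINT = {
    "if": {
        "properties": {
            "scope": {"enum": ["current_diff", "required_by_current_diff"]}
        }
    },
    "then": {
        "properties": {"severity": {"not": {"const": "informational"}}}
    },
}

def blocks_merge(finding: dict) -> bool:
    # The gate reads scope, not severity: outside_diff findings inform
    # but never block; in-diff findings block regardless of severity,
    # which is what closes the severity-downgrade loophole.
    return finding["scope"] != "outside_diff"
```

With this split, severity stays useful for triage ordering while having no bearing on whether the merge is held — the gate's signal comes entirely from the three-way scope taxonomy.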