
perf(reaper,cache): batch tx seen #3286

Merged
julienrbrt merged 5 commits into main from julien/less-mutexes on Apr 24, 2026

Conversation

@julienrbrt
Member

@julienrbrt julienrbrt commented Apr 24, 2026

Overview

Mutex optimization on the cache path: acquire the mutex once per batch instead of once per tx.

Summary by CodeRabbit

  • Performance Improvements
    • Optimized transaction validation through efficient batch cache operations, improving throughput when processing multiple transactions
    • Enhanced memory pool draining efficiency with improved bulk transaction processing, reducing pipeline latency
    • Reduced lock contention during batch validation operations, improving overall system responsiveness
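The "once per batch" idea in the overview can be sketched in Go. This is an illustrative snippet only: the `seenCache` type is a stand-in, and its `areSeen`/`setSeenBatch` methods echo the names used in this PR rather than reproducing the repository's actual `generic_cache.go` code.

```go
package main

import (
	"fmt"
	"sync"
)

// seenCache is a hypothetical stand-in for the PR's generic cache.
type seenCache struct {
	mu   sync.RWMutex
	seen map[string]struct{}
}

func newSeenCache() *seenCache {
	return &seenCache{seen: make(map[string]struct{})}
}

// areSeen checks every hash under a single read lock,
// instead of locking once per hash.
func (c *seenCache) areSeen(hashes []string) []bool {
	out := make([]bool, len(hashes))
	c.mu.RLock()
	defer c.mu.RUnlock()
	for i, h := range hashes {
		_, out[i] = c.seen[h]
	}
	return out
}

// setSeenBatch marks every hash as seen under a single write lock.
func (c *seenCache) setSeenBatch(hashes []string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for _, h := range hashes {
		c.seen[h] = struct{}{}
	}
}

func main() {
	c := newSeenCache()
	c.setSeenBatch([]string{"a", "b"})
	fmt.Println(c.areSeen([]string{"a", "b", "c"})) // prints [true true false]
}
```

With N txs per batch this cuts lock acquisitions from N to 1 on both the read and write paths, which is the contention reduction the overview describes.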

@github-actions
Contributor

github-actions Bot commented Apr 24, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build | Format | Lint | Breaking | Updated (UTC)
✅ passed | ⏩ skipped | ✅ passed | ✅ passed | Apr 24, 2026, 5:53 PM

@coderabbitai
Contributor

coderabbitai Bot commented Apr 24, 2026

Warning

Rate limit exceeded

@julienrbrt has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 58 minutes and 38 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 58 minutes and 38 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 22c65570-a2a1-4247-be10-675dde93a3b3

📥 Commits

Reviewing files that changed from the base of the PR and between 7f5251d and a790d8b.

📒 Files selected for processing (2)
  • CHANGELOG.md
  • block/internal/cache/generic_cache.go
📝 Walkthrough

Adds batch cache operations (areSeen, setSeenBatch) to enable efficient multi-hash lookups and updates. Updates the CacheManager interface with an AreTxsSeen method. Optimizes Reaper.drainMempool to replace per-hash filtering with a single bulk cache check and batch submission, reducing lock contention. Adds a benchmark to measure mempool draining throughput.

Changes

Cohort / File(s) | Summary

Cache Batch Operations (block/internal/cache/generic_cache.go, block/internal/cache/manager.go)
Added an areSeen() method for bulk-checking hash presence and a setSeenBatch() method for marking multiple hashes as seen under a single lock. Updated the CacheManager interface with an AreTxsSeen() method and optimized SetTxsSeen() to use batch insertion instead of per-hash loops.

Reaper Optimization (block/internal/reaping/reaper.go)
Refactored the drainMempool() flow: replaced the per-hash filtering pipeline with bulk hash computation and a single cache check via AreTxsSeen(). Submits unseen transactions to the sequencer as a batch, then records only the newly submitted hashes. Enhanced error handling with an additional context wrapper.

Benchmark (block/internal/reaping/bench_test.go)
Added BenchmarkReaperFlow_DrainOnly() to measure mempool-draining performance across batch-size and transaction-size combinations, reporting throughput in txs/sec.

Sequence Diagram(s)

sequenceDiagram
    participant Reaper
    participant Cache
    participant Sequencer
    
    Reaper->>Reaper: Fetch transactions from mempool
    Reaper->>Reaper: Compute hashes for all txs
    Reaper->>Cache: AreTxsSeen(hashes) — bulk check
    Cache-->>Reaper: []bool (presence status)
    Reaper->>Reaper: Filter to unseen indices
    Reaper->>Sequencer: Submit unseen txs (batch)
    Sequencer-->>Reaper: Success
    Reaper->>Cache: SetTxsSeen(unseen hashes)
    Cache-->>Reaper: Done
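The flow in the sequence diagram can be sketched as a self-contained Go program. Every type here (cacheManager, mapCache, the submit callback) is an illustrative stand-in, not the repository's actual reaper, cache, or sequencer signatures.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// cacheManager mirrors the two batch methods the walkthrough mentions.
type cacheManager interface {
	AreTxsSeen(hashes []string) []bool
	SetTxsSeen(hashes []string)
}

// mapCache is a trivial in-memory stand-in implementation.
type mapCache struct{ seen map[string]bool }

func (m *mapCache) AreTxsSeen(hashes []string) []bool {
	out := make([]bool, len(hashes))
	for i, h := range hashes {
		out[i] = m.seen[h]
	}
	return out
}

func (m *mapCache) SetTxsSeen(hashes []string) {
	for _, h := range hashes {
		m.seen[h] = true
	}
}

// drainMempool computes all hashes up front, does one bulk cache check,
// submits only the unseen txs as a batch, then records those hashes.
func drainMempool(txs [][]byte, cache cacheManager, submit func([][]byte) error) error {
	hashes := make([]string, len(txs))
	for i, tx := range txs {
		h := sha256.Sum256(tx)
		hashes[i] = hex.EncodeToString(h[:])
	}
	seen := cache.AreTxsSeen(hashes) // single bulk check
	var unseenTxs [][]byte
	var unseenHashes []string
	for i, s := range seen {
		if !s {
			unseenTxs = append(unseenTxs, txs[i])
			unseenHashes = append(unseenHashes, hashes[i])
		}
	}
	if len(unseenTxs) == 0 {
		return nil
	}
	if err := submit(unseenTxs); err != nil {
		return fmt.Errorf("submitting batch to sequencer: %w", err)
	}
	cache.SetTxsSeen(unseenHashes) // mark only what was actually submitted
	return nil
}

func main() {
	c := &mapCache{seen: map[string]bool{}}
	submit := func(batch [][]byte) error {
		fmt.Printf("submitted %d txs\n", len(batch))
		return nil
	}
	txs := [][]byte{[]byte("tx1"), []byte("tx2")}
	drainMempool(txs, c, submit) // prints "submitted 2 txs"
	drainMempool(txs, c, submit) // prints nothing: everything already seen
}
```

Marking hashes only after a successful submit keeps a failed sequencer call retryable, matching the diagram's ordering (submit first, then SetTxsSeen).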

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 Hop, hop, hop—no more one-by-one!
Batch checks and batch writes, oh what fun!
The reaper now gleams with optimized grace,
Bulk cache lookups keep locks in their place!
Mempool flows swiftly—the throughput has won!

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
Description check | ❓ Inconclusive | The description clearly states the optimization goal (hitting the cache mutex once per batch instead of once per tx) but lacks implementation details, rationale, and linked issues despite the template requesting these. | Expand the description with background context, the specific performance improvements expected, and links to any related issues. Use the full Overview section provided in the template.
✅ Passed checks (3 passed)
Check name | Status | Explanation
Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request.
Title check | ✅ Passed | The title 'perf(reaper,cache): batch tx seen' directly describes the main change: adding batch operations for transaction seen-checking across the reaper and cache components for performance.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@claude
Contributor

claude Bot commented Apr 24, 2026

Claude encountered an error (View job).


I'll analyze this and get back to you.

@julienrbrt julienrbrt requested a review from tac0turtle April 24, 2026 15:31
@julienrbrt julienrbrt changed the title from "perf: reaper" to "perf(reaper,cache): batch tx seen" Apr 24, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@block/internal/reaping/bench_test.go`:
- Around line 269-285: The benchmark reuses the same txs byte slices across
iterations so SetTxsSeen/AreTxsSeen caches make only the first iteration submit;
to fix, regenerate unique tx contents for each b.N iteration (or clear the
seen-tx cache) before calling r.drainMempool so exec.batch contains fresh random
bytes each loop; update the loop that assigns exec.batch (refer to
variables/functions exec.batch, txs, b.N, r.drainMempool, SetTxsSeen/AreTxsSeen,
seq.submitted) to either recreate txs per iteration outside the timed section or
reset the seen-cache between iterations so the txs/sec metric reflects actual
throughput.
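The fix the comment asks for can be sketched as follows. The regenTxs helper is hypothetical; the point is that batch contents are regenerated outside the timed section (between b.StopTimer and b.StartTimer in a real testing.B loop) so the seen-cache cannot short-circuit every iteration after the first.

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// regenTxs builds a fresh batch of random transactions. In the real
// benchmark this would run outside the timed section before each
// drainMempool call, so every iteration submits fresh hashes rather
// than hitting the seen-cache.
func regenTxs(batchSize, txSize int) [][]byte {
	txs := make([][]byte, batchSize)
	for i := range txs {
		txs[i] = make([]byte, txSize)
		rand.Read(txs[i]) // crypto-random bytes: unique hashes per batch
	}
	return txs
}

func main() {
	seen := map[[32]byte]bool{}
	submitted := 0
	// Two "iterations": because each batch is regenerated, the seen-cache
	// never filters everything out after the first pass.
	for iter := 0; iter < 2; iter++ {
		for _, tx := range regenTxs(4, 32) {
			h := sha256.Sum256(tx)
			if !seen[h] {
				seen[h] = true
				submitted++
			}
		}
	}
	fmt.Println(submitted) // 8 (collisions between 32-byte random txs are negligible)
}
```

Without regeneration, the second and later iterations would submit zero txs and the reported txs/sec metric would measure cache hits rather than actual draining throughput.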
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5a5acabe-41ba-49c7-8165-9965ad811cf3

📥 Commits

Reviewing files that changed from the base of the PR and between a793471 and 7f5251d.

📒 Files selected for processing (4)
  • block/internal/cache/generic_cache.go
  • block/internal/cache/manager.go
  • block/internal/reaping/bench_test.go
  • block/internal/reaping/reaper.go

Comment thread on block/internal/reaping/bench_test.go (outdated)
@codecov

codecov Bot commented Apr 24, 2026

Codecov Report

❌ Patch coverage is 51.06383% with 23 lines in your changes missing coverage. Please review.
✅ Project coverage is 62.47%. Comparing base (a793471) to head (a790d8b).
⚠️ Report is 1 commit behind head on main.

Files with missing lines | Patch % | Lines
block/internal/cache/generic_cache.go | 0.00% | 20 Missing ⚠️
block/internal/cache/manager.go | 0.00% | 3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3286      +/-   ##
==========================================
- Coverage   62.62%   62.47%   -0.16%     
==========================================
  Files         122      122              
  Lines       13029    13047      +18     
==========================================
- Hits         8160     8151       -9     
- Misses       3984     4010      +26     
- Partials      885      886       +1     
Flag | Coverage Δ
combined | 62.47% <51.06%> (-0.16%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@julienrbrt julienrbrt added this pull request to the merge queue Apr 24, 2026
@julienrbrt julienrbrt removed this pull request from the merge queue due to a manual request Apr 24, 2026
@julienrbrt julienrbrt merged commit 49ef5c9 into main Apr 24, 2026
12 of 17 checks passed
@julienrbrt julienrbrt deleted the julien/less-mutexes branch April 24, 2026 17:51

2 participants