
refactor: replace context.WithCancel with t.Context #3284

Merged
julienrbrt merged 1 commit into evstack:main from purelualight:main
Apr 23, 2026

Conversation

@purelualight (Contributor) commented Apr 23, 2026

Overview

Inspired by #3183; this replaces all remaining cases.

The addition of the Context method to Go's testing.T provides significant improvements for writing concurrent tests. It allows better management of goroutines, ensuring they properly exit and preventing issues like deadlocks and unfinished processes.

More info: golang/go#18368

Summary by CodeRabbit

  • Tests
    • Updated tests to use Go's native test context instead of manual context creation, improving test reliability and maintainability.

@coderabbitai (Bot, Contributor) commented Apr 23, 2026

📝 Walkthrough

Tests in the submitter module are refactored to use Go's built-in test context (t.Context()) instead of manually creating cancellable background contexts with context.WithCancel(). Two test functions now rely on the test framework's context lifecycle.

Changes

Cohort / File(s): Test Context Refactoring
  • block/internal/submitting/submitter_test.go: Removed manual context.WithCancel() and cancel() calls in TestSubmitter_processDAInclusionLoop_advances and TestSubmitter_CacheClearedOnHeightInclusion, replacing them with t.Context() for context management.

Possibly related PRs

Poem

🐰 No more cancel() calls to make,
t.Context() lighter as we wake,
Tests run cleaner, code less wide,
Go's test framework as our guide!

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)
  • Title check (✅ Passed): the title accurately describes the main change: replacing context.WithCancel with t.Context in tests, which is the core refactoring in the changeset.
  • Description check (✅ Passed): the description explains the motivation and benefits of the change, but the Overview section lacks specific detail about which tests or files are modified.
  • Linked Issues check (✅ Passed): check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai (Bot) left a review comment

🧹 Nitpick comments (2)
block/internal/submitting/submitter_test.go (2)

450-450: Good use of t.Context() for test lifecycle integration.

The change to use t.Context() correctly aligns with the PR objectives and improves test robustness by ensuring the context is automatically cancelled when the test completes.

However, consider using t.Cleanup() to ensure Stop() is always called, even if the test fails or panics before line 529. This pattern is already used in TestSubmitter_daSubmissionLoop at line 396.

🛡️ Recommended: Add t.Cleanup for guaranteed Stop() execution
 func TestSubmitter_CacheClearedOnHeightInclusion(t *testing.T) {
 	ctx := t.Context()
 
 	cm, st := newTestCacheAndStore(t)
 
 	cfg := config.DefaultConfig()
 	cfg.DA.BlockTime.Duration = 5 * time.Millisecond
 	metrics := common.NopMetrics()
 
 	exec := testmocks.NewMockExecutor(t)
 	exec.On("SetFinal", mock.Anything, uint64(1)).Return(nil).Once()
 	exec.On("SetFinal", mock.Anything, uint64(2)).Return(nil).Once()
 
 	daClient := testmocks.NewMockClient(t)
 	daClient.On("GetHeaderNamespace").Return([]byte(cfg.DA.Namespace)).Maybe()
 	daClient.On("GetDataNamespace").Return([]byte(cfg.DA.DataNamespace)).Maybe()
 	daClient.On("GetForcedInclusionNamespace").Return([]byte(nil)).Maybe()
 	daClient.On("HasForcedInclusionNamespace").Return(false).Maybe()
 	daSub := NewDASubmitter(daClient, cfg, genesis.Genesis{}, common.BlockOptions{}, metrics, zerolog.Nop(), nil, nil)
 	s := NewSubmitter(st, exec, cm, metrics, cfg, genesis.Genesis{}, daSub, nil, nil, zerolog.Nop(), nil)
 
 	// Create test blocks
 	h1, d1 := newHeaderAndData("chain", 1, true)
 	h2, d2 := newHeaderAndData("chain", 2, true)
 	h3, d3 := newHeaderAndData("chain", 3, true)
 
 	sig := types.Signature([]byte("sig"))
 
 	// Save blocks to store
 	blocks := []struct {
 		header *types.SignedHeader
 		data   *types.Data
 		height uint64
 	}{
 		{h1, d1, 1},
 		{h2, d2, 2},
 		{h3, d3, 3},
 	}
 
 	for _, block := range blocks {
 		batch, err := st.NewBatch(ctx)
 		require.NoError(t, err)
 		require.NoError(t, batch.SaveBlockData(block.header, block.data, &sig))
 		require.NoError(t, batch.SetHeight(block.height))
 		require.NoError(t, batch.Commit())
 	}
 
 	// Set up cache with headers and data seen for all heights
 	cm.SetHeaderSeen(h1.Hash().String(), 1)
 	cm.SetDataSeen(d1.DACommitment().String(), 1)
 	cm.SetHeaderSeen(h2.Hash().String(), 2)
 	cm.SetDataSeen(d2.DACommitment().String(), 2)
 	cm.SetHeaderSeen(h3.Hash().String(), 3)
 	cm.SetDataSeen(d3.DACommitment().String(), 3)
 
 	// Verify items are seen in cache before processing
 	assert.True(t, cm.IsHeaderSeen(h1.Hash().String()))
 	assert.True(t, cm.IsDataSeen(d1.DACommitment().String()))
 	assert.True(t, cm.IsHeaderSeen(h2.Hash().String()))
 	assert.True(t, cm.IsDataSeen(d2.DACommitment().String()))
 	assert.True(t, cm.IsHeaderSeen(h3.Hash().String()))
 	assert.True(t, cm.IsDataSeen(d3.DACommitment().String()))
 
 	// Set DA inclusion for heights 1 and 2 only (height 3 will remain unprocessed)
 	cm.SetHeaderDAIncluded(h1.Hash().String(), 100, 1)
 	cm.SetDataDAIncluded(d1.DACommitment().String(), 100, 1)
 	cm.SetHeaderDAIncluded(h2.Hash().String(), 101, 2)
 	cm.SetDataDAIncluded(d2.DACommitment().String(), 101, 2)
 
 	require.NoError(t, s.initializeDAIncludedHeight(ctx))
 	require.Equal(t, uint64(0), s.GetDAIncludedHeight())
 
 	// Start submitter to process DA inclusions
 	require.NoError(t, s.Start(ctx))
+	t.Cleanup(func() { _ = s.Stop() })
 
 	// Wait for heights 1 and 2 to be processed
 	require.Eventually(t, func() bool {
 		return s.GetDAIncludedHeight() == 2
 	}, 1*time.Second, 10*time.Millisecond)
 
-	require.NoError(t, s.Stop())

As per coding guidelines: Be mindful of goroutine leaks in Go code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@block/internal/submitting/submitter_test.go` at line 450: add a t.Cleanup
registration to guarantee Stop() is invoked even on test failure: after you
create the submitter instance (the variable used in this test, e.g., s) and
before the test can fail, call t.Cleanup to schedule s.Stop() so the submitter
is always stopped; follow the same pattern used in
TestSubmitter_daSubmissionLoop to place the t.Cleanup call immediately after
submitter setup and before starting goroutines or long-running operations.

238-238: Good use of t.Context() for test lifecycle integration.

The change to use t.Context() correctly aligns with the PR objectives and improves test robustness by ensuring the context is automatically cancelled when the test completes.

However, consider using t.Cleanup() to ensure Stop() is always called, even if the test fails or panics before line 300. This pattern is already used in TestSubmitter_daSubmissionLoop at line 396.

🛡️ Recommended: Add t.Cleanup for guaranteed Stop() execution
 func TestSubmitter_processDAInclusionLoop_advances(t *testing.T) {
 	ctx := t.Context()
 
 	// Clean up any existing visualization server
 	defer server.SetDAVisualizationServer(nil)
 	server.SetDAVisualizationServer(nil)
 
 	cm, st := newTestCacheAndStore(t)
 
 	// small block time to tick quickly
 	cfg := config.DefaultConfig()
 	cfg.DA.BlockTime.Duration = 5 * time.Millisecond
 	cfg.RPC.EnableDAVisualization = false // Ensure visualization is disabled
 	metrics := common.PrometheusMetrics("test")
 
 	exec := testmocks.NewMockExecutor(t)
 	exec.On("SetFinal", mock.Anything, uint64(1)).Return(nil).Once()
 	exec.On("SetFinal", mock.Anything, uint64(2)).Return(nil).Once()
 
 	daClient := testmocks.NewMockClient(t)
 	daClient.On("GetHeaderNamespace").Return([]byte(cfg.DA.Namespace)).Maybe()
 	daClient.On("GetDataNamespace").Return([]byte(cfg.DA.DataNamespace)).Maybe()
 	daClient.On("GetForcedInclusionNamespace").Return([]byte(nil)).Maybe()
 	daClient.On("HasForcedInclusionNamespace").Return(false).Maybe()
 	daSub := NewDASubmitter(daClient, cfg, genesis.Genesis{}, common.BlockOptions{}, metrics, zerolog.Nop(), nil, nil)
 	s := NewSubmitter(st, exec, cm, metrics, cfg, genesis.Genesis{}, daSub, nil, nil, zerolog.Nop(), nil)
 
 	// prepare two consecutive blocks in store with DA included in cache
 	h1, d1 := newHeaderAndData("chain", 1, true)
 	h2, d2 := newHeaderAndData("chain", 2, true)
 	require.NotEqual(t, h1.Hash(), h2.Hash())
 	require.NotEqual(t, d1.DACommitment(), d2.DACommitment())
 
 	sig := types.Signature([]byte("sig"))
 
 	// Save block 1
 	batch1, err := st.NewBatch(ctx)
 	require.NoError(t, err)
 	require.NoError(t, batch1.SaveBlockData(h1, d1, &sig))
 	require.NoError(t, batch1.SetHeight(1))
 	require.NoError(t, batch1.Commit())
 
 	// Save block 2
 	batch2, err := st.NewBatch(ctx)
 	require.NoError(t, err)
 	require.NoError(t, batch2.SaveBlockData(h2, d2, &sig))
 	require.NoError(t, batch2.SetHeight(2))
 	require.NoError(t, batch2.Commit())
 
 	cm.SetHeaderDAIncluded(h1.Hash().String(), 100, 1)
 	cm.SetDataDAIncluded(d1.DACommitment().String(), 100, 1)
 	cm.SetHeaderDAIncluded(h2.Hash().String(), 101, 2)
 	cm.SetDataDAIncluded(d2.DACommitment().String(), 101, 2)
 
 	require.NoError(t, s.initializeDAIncludedHeight(ctx))
 	require.Equal(t, uint64(0), s.GetDAIncludedHeight())
 
 	// when
 	require.NoError(t, s.Start(ctx))
+	t.Cleanup(func() { _ = s.Stop() })
 
 	require.Eventually(t, func() bool {
 		return s.GetDAIncludedHeight() == 2
 	}, 1*time.Second, 10*time.Millisecond)
-	require.NoError(t, s.Stop())

As per coding guidelines: Be mindful of goroutine leaks in Go code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@block/internal/submitting/submitter_test.go` at line 238: the test currently
uses ctx := t.Context() but does not guarantee submitter Stop() runs on failure;
register a t.Cleanup to always call s.Stop() (or the appropriate teardown
method) after the submitter is created so Stop() is invoked even on
panic/failure—locate the submitter instance (e.g., variable s in the test) and
add t.Cleanup(func(){ s.Stop() }) similar to the pattern used in
TestSubmitter_daSubmissionLoop to prevent goroutine leaks.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8093ac73-6447-46b1-b41b-874c1b4914f5

📥 Commits

Reviewing files that changed from the base of the PR and between 3efcf93 and f4d0438.

📒 Files selected for processing (1)
  • block/internal/submitting/submitter_test.go

@julienrbrt julienrbrt enabled auto-merge April 23, 2026 16:13
@codecov (Bot) commented Apr 23, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 62.61%. Comparing base (3efcf93) to head (f4d0438).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3284      +/-   ##
==========================================
+ Coverage   61.84%   62.61%   +0.77%     
==========================================
  Files         122      122              
  Lines       16241    13029    -3212     
==========================================
- Hits        10044     8158    -1886     
+ Misses       5312     3985    -1327     
- Partials      885      886       +1     
Flag Coverage Δ
combined 62.61% <ø> (+0.77%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@julienrbrt julienrbrt added this pull request to the merge queue Apr 23, 2026
@github-merge-queue github-merge-queue Bot removed this pull request from the merge queue due to failed status checks Apr 23, 2026
@julienrbrt julienrbrt added this pull request to the merge queue Apr 23, 2026
Merged via the queue into evstack:main with commit a793471 Apr 23, 2026
24 of 30 checks passed

2 participants