Conversation
📝 Walkthrough

Adds token-level NFT search: new

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant SearchHandler as Search Handler
    participant CollDB as Collections DB
    participant TokenDB as NFT Tokens DB
    participant ERC20DB as ERC-20 DB
    Client->>SearchHandler: POST /search?query=xyz
    SearchHandler->>SearchHandler: Validate query length ≥ 3
    par Parallel Searches
        SearchHandler->>CollDB: Search collections by name (ILIKE with escape)
        SearchHandler->>TokenDB: Search nft_tokens by name (ILIKE with escape, trigram index)
        SearchHandler->>ERC20DB: Search ERC-20 tokens (ILIKE with escape)
    and
        CollDB-->>SearchHandler: Collection results
        TokenDB-->>SearchHandler: Token results (up to 5)
        ERC20DB-->>SearchHandler: ERC-20 results
    end
    SearchHandler->>SearchHandler: Aggregate results into SearchResult variants
    SearchHandler-->>Client: Combined SearchResult array (includes nft items)
```
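The `par` block in the diagram corresponds to fanning out the three searches concurrently, then aggregating the results in a fixed order. A minimal illustration of that fan-out/aggregate shape, using std threads and stub search functions purely for demonstration (the real handler presumably runs async queries against the DB pools):

```rust
use std::thread;

// Stub result type standing in for the real SearchResult variants.
#[derive(Debug, PartialEq)]
enum SearchResult {
    NftCollection(String),
    Nft(String),
    Erc20(String),
}

// Stub searches; the real functions query Postgres with ILIKE patterns.
fn search_nft_collections(q: &str) -> Vec<SearchResult> {
    vec![SearchResult::NftCollection(format!("collection:{q}"))]
}
fn search_nft_tokens(q: &str) -> Vec<SearchResult> {
    vec![SearchResult::Nft(format!("token:{q}"))]
}
fn search_erc20_tokens(q: &str) -> Vec<SearchResult> {
    vec![SearchResult::Erc20(format!("erc20:{q}"))]
}

fn search_all(query: &str) -> Vec<SearchResult> {
    let (q1, q2, q3) = (query.to_string(), query.to_string(), query.to_string());
    // Fan out the three searches in parallel...
    let colls = thread::spawn(move || search_nft_collections(&q1));
    let tokens = thread::spawn(move || search_nft_tokens(&q2));
    let erc20 = thread::spawn(move || search_erc20_tokens(&q3));
    // ...then aggregate into one combined result list.
    let mut results = colls.join().unwrap();
    results.extend(tokens.join().unwrap());
    results.extend(erc20.join().unwrap());
    results
}

fn main() {
    let results = search_all("ape");
    assert_eq!(results.len(), 3);
    println!("{results:?}");
}
```

In the actual async handler the same shape would more likely use joined futures rather than OS threads; the point here is only the concurrent fan-out followed by a single aggregation step.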
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 2
🧹 Nitpick comments (2)
backend/crates/atlas-server/src/api/handlers/search.rs (1)
210-226: Missing unit test coverage for the new `search_nft_tokens` path.

No `#[cfg(test)] mod tests` block was added for the new logic (struct shape, enum serde tag `"nft"`, and helper behavior). Even a serialization smoke test that asserts `SearchResult::Nft(...)` serializes with `"type":"nft"` and the expected field set would guard against frontend-contract regressions, since the PR description relies on the existing frontend already consuming this shape.

As per coding guidelines: "Add unit tests for new logic in a `#[cfg(test)] mod tests` block in the same file".

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/api/handlers/search.rs` around lines 210-226: add a unit test module (`#[cfg(test)] mod tests`) in this file that covers the new `search_nft_tokens` path and serialization shape: create a small `Vec<NftTokenResult>` (or construct via the `search_nft_tokens` helper), wrap it into the `SearchResult::Nft` variant, serialize it with `serde_json`, and assert the output contains the expected `"type":"nft"` tag and the relevant fields (`contract_address`, `token_id`, `name`, `image_url`); ensure the test uses the same struct/enum names (`NftTokenResult`, `SearchResult::Nft`) so future changes to the frontend contract are caught.

backend/migrations/20260424000001_nft_tokens_name_trgm.sql (1)
1-2: LGTM — appropriate trigram index for the new ILIKE search path.

`gin_trgm_ops` is the right choice for the `ILIKE '%query%'` pattern used in `search_nft_tokens`, and the partial predicate (`WHERE name IS NOT NULL`) keeps the index tight since `name` is nullable in the schema.

One operational note: `CREATE INDEX` blocks writes to `nft_tokens` for the duration of the build. If this table is already sizable in production, consider running with `CREATE INDEX CONCURRENTLY` in a separate out-of-transaction migration to avoid blocking the indexer. If your migration runner wraps each file in a transaction, `CONCURRENTLY` won't work here — in that case, it's fine to leave as-is.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@backend/migrations/20260424000001_nft_tokens_name_trgm.sql` around lines 1-2: the migration creates `idx_nft_tokens_name_trgm` on `nft_tokens` with `gin_trgm_ops`, which is correct for `search_nft_tokens`, but building the index blocks writes; if `nft_tokens` is large in production, create the index with `CREATE INDEX CONCURRENTLY` instead and run that migration outside of a transaction (or as a separate non-transactional migration file) so the `CONCURRENTLY` option is permitted; if your migration runner wraps files in a transaction and cannot run non-transactional files, leave the current migration as-is.
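For concreteness, a sketch of the concurrent variant, assuming the migration runner can execute this file outside a transaction (index and table names are taken from the review comment above):

```sql
-- Must run outside a transaction block.
-- CONCURRENTLY builds the index without blocking writes to nft_tokens;
-- if the build fails it leaves an INVALID index that must be dropped
-- before retrying.
CREATE INDEX CONCURRENTLY idx_nft_tokens_name_trgm
    ON nft_tokens USING gin (name gin_trgm_ops)
    WHERE name IS NOT NULL;
```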
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/crates/atlas-server/src/api/handlers/search.rs`:
- Around line 210-226: The ILIKE pattern currently uses format!("%{}%", query)
which lets user-supplied '%' and '_' act as wildcards and degrade trigram index
effectiveness; create a shared helper (e.g., escape_like_pattern) and use it
from search_nft_tokens, search_nft_collections, and search_erc20_tokens to
escape '%' and '_' in the input by prefixing them with the Postgres default
escape '\\', then wrap the escaped string with '%' on both sides before binding
to the ILIKE parameter so the query uses a literal match with surrounding
wildcards.
- Around line 104-111: The text-search gate currently uses query.len() >= 2
which allows 2-char queries and defeats the pg_trgm index; change the
conditional that guards the token/collection/ERC-20 parallel searches (the block
that calls search_nft_collections, search_nft_tokens, and search_erc20_tokens
where results.is_empty() && query.len() >= 2) to require a minimum length of 3
(query.len() >= 3) so short 2-character queries skip the expensive trigram-based
text searches.
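The two inline findings above can be sketched as a pair of small helpers. The names `escape_like_pattern`, `ilike_contains`, and `should_run_text_search` are illustrative assumptions from the prompts, not the actual handler code, and the char-based length check is a variation on the prompt's byte-based `query.len() >= 3`:

```rust
/// Escape LIKE/ILIKE metacharacters so user input matches literally.
/// Backslash is replaced first; otherwise the `\` inserted for `%`
/// would itself get re-escaped on a later pass.
fn escape_like_pattern(input: &str) -> String {
    input
        .replace('\\', "\\\\")
        .replace('%', "\\%")
        .replace('_', "\\_")
}

/// Wrap the escaped query in `%...%` for a contains-style ILIKE bind value.
fn ilike_contains(query: &str) -> String {
    format!("%{}%", escape_like_pattern(query))
}

/// Gate the trigram-backed text searches behind a minimum query length.
/// pg_trgm works on 3-character trigrams, so shorter queries cannot use
/// the GIN index. `chars().count()` counts multi-byte characters once,
/// where the byte-based `len()` from the prompt would over-count them.
fn should_run_text_search(query: &str) -> bool {
    query.chars().count() >= 3
}

fn main() {
    // User-supplied wildcards become literals in the bound pattern.
    assert_eq!(ilike_contains("100%_off"), "%100\\%\\_off%");
    // Two-character queries skip the expensive ILIKE searches.
    assert!(!should_run_text_search("ab"));
    assert!(should_run_text_search("abc"));
    println!("pattern = {}", ilike_contains("100%_off"));
}
```

The escaped string is then bound as the `$1` parameter of each `ILIKE` query; `\` is the Postgres default `LIKE` escape character, so no explicit `ESCAPE` clause is needed.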
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f6902436-63a7-40da-a09f-d3d4d7d71525
📒 Files selected for processing (2)
- backend/crates/atlas-server/src/api/handlers/search.rs
- backend/migrations/20260424000001_nft_tokens_name_trgm.sql
@coderabbitai review
✅ Actions performed

Review triggered.
🧹 Nitpick comments (2)
backend/crates/atlas-server/src/api/handlers/search.rs (2)
193-198: Add unit tests for `like_escape` and consider extracting a pattern builder.

`like_escape` is a pure, easily testable function and no `#[cfg(test)] mod tests` block exists in this file. A few cases are worth locking down so a later refactor doesn't silently regress wildcard safety: plain input, input containing `%`/`_`, input containing `\`, and combinations (e.g. `\_foo%`). The escape order matters — backslash must be escaped first (as it already is) or `%` → `\%` would get re-escaped to `\\%` on the subsequent pass.

Also, `format!("%{}%", like_escape(query))` is now repeated in three call sites; a small helper would DRY it up.

♻️ Proposed refactor

```diff
 fn like_escape(input: &str) -> String {
     input
         .replace('\\', "\\\\")
         .replace('%', "\\%")
         .replace('_', "\\_")
 }
+
+fn ilike_contains_pattern(query: &str) -> String {
+    format!("%{}%", like_escape(query))
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn like_escape_passes_through_plain_text() {
+        assert_eq!(like_escape("abc"), "abc");
+    }
+
+    #[test]
+    fn like_escape_escapes_wildcards() {
+        assert_eq!(like_escape("a%b_c"), "a\\%b\\_c");
+    }
+
+    #[test]
+    fn like_escape_escapes_backslash_first() {
+        // Must not double-escape: `\_` input becomes `\\\_`, not `\\\\_`.
+        assert_eq!(like_escape("\\_"), "\\\\\\_");
+    }
+
+    #[test]
+    fn ilike_contains_pattern_wraps_escaped_value() {
+        assert_eq!(ilike_contains_pattern("50%"), "%50\\%%");
+    }
+}
```

As per coding guidelines: "Add unit tests for new logic in a `#[cfg(test)] mod tests` block in the same file".

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/api/handlers/search.rs` around lines 193-198: add a `#[cfg(test)] mod tests` block in this file and add unit tests for `like_escape` that cover plain strings, strings with `%`, `_` and `\`, and mixed cases (e.g. `\_foo%`) to lock down escape order; also extract the repeated `format!("%{}%", like_escape(query))` into a small helper (e.g. `like_pattern` or `build_like_pattern`) and update the three call sites to use that helper, keeping `like_escape` unchanged but ensuring tests assert exact returned strings for all cases.
218-234: Consider returning collection name/symbol, and drop the unreachable `NULLS LAST`.

Two observations on `search_nft_tokens`:

- `NftTokenResult` exposes only `contract_address` for the parent collection. In a global search dropdown, a hit like `"CoolApe#123"` without the collection name/symbol forces the frontend to do a follow-up lookup (or render just an address), which is a worse UX than the sibling `NftCollection` results that carry `name`/`symbol`. A small `JOIN nft_contracts` would let the token result include collection `name`/`symbol` in a single round-trip.
- Per `backend/migrations/20240101000001_initial.sql` (lines 113–128), `nft_tokens.last_transfer_block` is `BIGINT NOT NULL`, so `NULLS LAST` on that column is a no-op. Minor, but it's misleading to readers who may assume the column is nullable.

♻️ Proposed shape (illustrative)

```diff
 #[derive(Debug, Clone, Serialize, Deserialize, sqlx::FromRow)]
 pub struct NftTokenResult {
     pub contract_address: String,
     pub token_id: String,
     pub name: Option<String>,
     pub image_url: Option<String>,
+    pub collection_name: Option<String>,
+    pub collection_symbol: Option<String>,
 }
@@
-    sqlx::query_as(
-        "SELECT contract_address, token_id::text AS token_id, name, image_url
-         FROM nft_tokens
-         WHERE name ILIKE $1
-         ORDER BY last_transfer_block DESC NULLS LAST
-         LIMIT 5",
-    )
+    sqlx::query_as(
+        "SELECT t.contract_address,
+                t.token_id::text AS token_id,
+                t.name,
+                t.image_url,
+                c.name AS collection_name,
+                c.symbol AS collection_symbol
+         FROM nft_tokens t
+         JOIN nft_contracts c ON c.address = t.contract_address
+         WHERE t.name ILIKE $1
+         ORDER BY t.last_transfer_block DESC
+         LIMIT 5",
+    )
```

Please confirm the frontend's `nft` result renderer actually has the collection context it needs from just `contract_address`, or whether it expects `name`/`symbol` alongside (similar to the `nft_collection` payload).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/api/handlers/search.rs` around lines 218-234: `search_nft_tokens` currently only returns `contract_address` and uses a misleading `NULLS LAST` on a non-nullable column; update the SQL in `search_nft_tokens` to `JOIN nft_contracts` to include collection name and symbol in the SELECT, remove `NULLS LAST` from the ORDER BY, and update the `NftTokenResult` type (and any places that construct/consume it) to include the new `collection_name` and `collection_symbol` fields so the handler returns those values in one round-trip.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 91dcc91f-e7c6-4dc5-87d9-df3e489141ab
📒 Files selected for processing (1)
backend/crates/atlas-server/src/api/handlers/search.rs
Summary
Extends the search API to query `nft_tokens.name` via `ILIKE`, returning up to 5 matching tokens as `nft`-typed results. Adds a GIN trigram index migration for fast name searches. The frontend already supports the `nft` result type end-to-end.

Summary by CodeRabbit

New Features
Bug Fixes
Chores