Comprehensive Guide to Stockfish Search Development

Core Search Algorithm Architecture

  1. Search is aspiration-windowed PVS with unsound tricks - The core algorithm is principal variation search with aspiration windows, enhanced by numerous heuristics that are technically unsound but work well in practice. These "tricks" gain Elo despite violating theoretical search principles (a minimal sketch of the aspiration loop follows this list).
  2. Branching factor is extremely low (~1.35-1.7) - Due to aggressive pruning and reductions, the effective branching factor is far below chess's theoretical ~36. This means most time is spent at very low depths, making shallow search optimization crucial.
  3. Depth distribution is highly skewed - Approximately 55% of evaluations occur at depths 0-1, 15% at depths 2-3, 5% at depths ≥4, and 25% at negative depths (qsearch). This distribution should guide where to apply expensive operations.
  4. Search and qsearch have fundamentally different purposes - Main search handles strategic decisions while qsearch resolves tactical sequences. Qsearch only generates captures, promotions, and checks (at depth 0), making it unsuitable for general position evaluation.
  5. Fail-soft search provides ~30 Elo - Stockfish returns values outside alpha-beta bounds, enabling more accurate position assessment. This affects statistics collection throughout search and allows better move ordering decisions.
  6. Search depth vs actual depth diverge significantly - The "depth" parameter in iterative deepening is just an iteration counter. With extensions, actual search depth can be much deeper (typically 2x nominal depth), affecting performance measurements and depth-dependent heuristics.
  7. Search instability is normal and expected - A search failing high on [-1,1] then failing low on [1,4] is not a bug but a natural consequence of TT reuse, iterative deepening, and extensions. The same nominal depth can yield different results due to extensions, reductions, and TT knowledge.
  8. Search is self-correcting through iteration - Iterative deepening and transposition tables provide natural error correction for many search mistakes. This self-correcting nature means handling every edge case perfectly is less important than overall behavior.
  9. Asymmetry is fundamental to modern search - Modern search is inherently asymmetric due to optimism, history tables, and NNUE evaluation. Forcing symmetry often fails as the engine benefits from treating positions differently based on context.
  10. Node distribution guides optimization - With most time at low depths, optimize shallow search aggressively while being conservative at high depths where accuracy matters more. This principle affects all search modifications.
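
As a concrete reference for items 1 and 5, here is a minimal sketch of an iterative-deepening loop with aspiration windows around the previous score and a fail-soft search. Everything is a simplified stand-in rather than Stockfish's actual code: search() is a dummy, and the initial half-window of 11 and the delta += delta / 4 + 5 growth step are one historical snapshot of tuned constants, not authoritative values.

    #include <algorithm>
    #include <cstdlib>

    constexpr int VALUE_INFINITE = 32001;

    // Dummy fail-soft search: may return values outside [alpha, beta].
    int search(int alpha, int beta, int depth) {
        (void)depth;
        return std::clamp(std::rand() % 200 - 100, alpha - 50, beta + 50);
    }

    int iterative_deepening(int maxDepth) {
        int score = 0;
        for (int depth = 1; depth <= maxDepth; ++depth) {
            int delta = 11;                                  // initial half-window
            int alpha = std::max(score - delta, -VALUE_INFINITE);
            int beta  = std::min(score + delta,  VALUE_INFINITE);
            while (true) {
                score = search(alpha, beta, depth);
                if (score <= alpha) {                        // fail low: widen downward
                    beta  = (alpha + beta) / 2;
                    alpha = std::max(score - delta, -VALUE_INFINITE);
                } else if (score >= beta)                    // fail high: widen upward
                    beta = std::min(score + delta, VALUE_INFINITE);
                else
                    break;                                   // score inside the window
                delta += delta / 4 + 5;                      // grow window each re-search
            }
        }
        return score;
    }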

Node Types and Search Windows

  1. Only PV nodes spawn PV nodes - In the search tree, only PV nodes create other PV nodes as children. This property can be used to simplify conditional logic since (ss-1)->ttPv is always true at PV nodes.
  2. Non-PV nodes use zero-window search - By definition, non-PV nodes have alpha + 1 == beta, making them zero-window searches. Avoid writing complex bounds like -(beta - 1) when simple zero-window calls suffice (see the move-loop sketch after this list).
  3. Cut nodes vs all nodes have different characteristics - Cut nodes have 88% LMR success rate and twice the fail-high rate on first moves compared to all nodes (97% LMR success). Yet cut nodes receive larger reductions, showing counter-intuitive optimization works.
  4. Root nodes are always PV nodes - The root position is always searched as a PV node, affecting how search modifications behave at the root. Root-specific logic often needs special handling, and root node pruning is controversial as it can pollute training data.
  5. PV nodes require different search behavior - PV nodes get full-window searches after LMR, different history updates, and special handling for extensions and reductions that non-PV nodes don't receive. PV updates should only occur when value > alpha, never when value == alpha.
  6. cutNode represents expected behavior - cutNode represents an "expected cut node" that predicts whether a node will likely fail high, not whether it actually did. This affects many pruning decisions and node type alternation prevents same-color repeated pruning.
  7. All cutNodes are non-PV but not vice versa - There are also allNodes, so conditions like !PvNode are broader than cutNode. Different node types scale differently with search techniques often behaving differently between PV and non-PV nodes.
  8. Zero-window searches in non-PV nodes - Since alpha + 1 == beta in non-PV nodes, many complex window calculations can be simplified to zero-window searches. The choice between (alpha-1, alpha) vs (alpha, alpha+1) significantly impacts performance.
  9. MultiPV search is ~200 Elo weaker - MultiPV 2 costs significant strength compared to single PV search due to the overhead of maintaining multiple lines. Each PV line needs separate storage and management, with excluded moves preventing line interference.
  10. Node type affects extension eligibility - Extensions like singular extensions consider node type with different thresholds for PV vs non-PV contexts. Different threads can use different search parameters based on node types to explore the search space more diversely.
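
The node-type properties in items 1, 2, 5, and 8 all fall out of the standard PVS move loop. Below is a minimal, self-contained sketch of that loop; Position, evaluate(), and legal_moves() are stand-ins, and make/unmake is elided, so this illustrates the window logic only, not Stockfish's actual code.

    #include <algorithm>
    #include <vector>

    struct Position {};                                        // stand-in
    int evaluate(Position&) { return 0; }                      // stand-in static eval
    std::vector<int> legal_moves(Position&) { return {}; }     // stand-in generator

    int search(Position& pos, int alpha, int beta, int depth, bool pvNode) {
        if (depth <= 0)
            return evaluate(pos);
        int bestValue = -32001, moveCount = 0;
        for (int m : legal_moves(pos)) {
            (void)m;                                           // make/unmake elided
            ++moveCount;
            int value;
            if (pvNode && moveCount == 1)
                // First move of a PV node: full window, spawning a PV child (item 1).
                value = -search(pos, -beta, -alpha, depth - 1, true);
            else {
                // Zero-window probe; at non-PV nodes alpha + 1 == beta already (item 2).
                value = -search(pos, -(alpha + 1), -alpha, depth - 1, false);
                if (pvNode && value > alpha && value < beta)
                    // Only PV nodes re-open the full window, which is why
                    // only PV nodes create further PV nodes.
                    value = -search(pos, -beta, -alpha, depth - 1, true);
            }
            bestValue = std::max(bestValue, value);
            if (value > alpha) {            // PV updates only when value > alpha (item 5)
                alpha = value;
                if (alpha >= beta)
                    break;                  // fail high
            }
        }
        return bestValue;                   // fail-soft: can fall outside [alpha, beta]
    }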

Transposition Table Management

  1. TT entries can be overwritten between probe and use - In multi-threaded search, TT entries may be replaced by other positions between initial probe and later access. Always verify tt->key() matches expected position before using cached pointers.
  2. TT depth encoding handles negative values - Depths are offset by DEPTH_OFFSET (-7) to handle negative qsearch depths in uint8_t storage. This offset must be considered when comparing or storing depths (see the entry sketch after this list).
  3. Extensions are NOT included in TT depth - Always write base search depth to TT regardless of extensions applied. Attempts to store actual extended depths have consistently performed poorly, and TT depth saving affects extension behavior in subsequent searches.
  4. TT replacement considers multiple factors - The replacement algorithm weighs depth, bound type, PV status, and generation in a complex formula. It's not just simple depth comparison, with TB scores stored with depth + 6 to boost replacement priority.
  5. TT moves bypass normal move generation - In both main search and qsearch, TT moves are tried first and can be any move type, even if that stage wouldn't normally generate such moves. MovePicker explicitly filters out ttMove with *cur != ttMove.
  6. Hash collisions can produce illegal moves - Always validate TT moves with pos.pseudo_legal() and pos.legal() before use. Invalid moves can corrupt history tables and cause crashes, requiring move.is_ok() checks before indexing.
  7. TT cutoffs mainly benefit move ordering - The primary value of transposition table is memorizing good moves for future searches (7:1 ratio), not direct cutoffs. TT move agreement decreases with depth from ~85% at depth 1 to ~69% at depth 20+.
  8. Different bound types have specific meanings - BOUND_UPPER means real value ≤ stored value, BOUND_LOWER means real value ≥ stored value, and BOUND_EXACT means equality. Guard against using inappropriate bounds for specific pruning or cutoff decisions.
  9. TT cutoff conditions require careful depth handling - TT cutoffs should only occur when ttDepth > depth and bounds align properly. The condition ttData.depth > depth - (ttData.value <= beta) exists specifically because it passed testing.
  10. TT mate scores require special handling - Mate scores from TT cannot be trusted for scoring purposes due to hash collisions. Always include proper mate range checks (VALUE_TB_WIN_IN_MAX_PLY) when using TT values, and distinguish TB wins from regular wins.
  11. TT generation tracking is crucial for reproducibility - The generation counter must be properly reset to ensure deterministic search behavior across runs. Refreshing TT generation on reads can interfere with replacement strategy.
  12. TT collisions can cause tactical blunders - Hash collisions with smaller hash sizes can lead to fake TT results, particularly in qsearch where stand-pat cutoffs may use incorrect values from colliding positions.
  13. TT move exclusion optimization - When excluding moves in singular search, filter out excludedMove at TT lookup, move generation, and probcut stages to skip unnecessary legality checks and improve performance.
  14. TT is the primary shared structure - Threads communicate mainly through the transposition table, with atomic operations covering the little other shared data such as node counts. This creates inherent race conditions requiring careful handling.
  15. Empty TT entry handling - Ensure empty entries (depth == 0) are prioritized in replacement over entries with useful data. The TT can store mate scores from aborted searches without proper guards.
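
A sketch of the depth packing from item 2 and the bound semantics from item 8. The field layout here is a simplified stand-in modeled on the descriptions above; the real entry also stores the move, PV flag, static eval, and a richer replacement formula.

    #include <cassert>
    #include <cstdint>

    constexpr int DEPTH_OFFSET = -7;   // lets negative qsearch depths fit a uint8_t (item 2)

    enum Bound : uint8_t { BOUND_NONE = 0, BOUND_UPPER = 1, BOUND_LOWER = 2,
                           BOUND_EXACT = BOUND_UPPER | BOUND_LOWER };

    struct TTEntry {
        uint16_t key16;     // low key bits; verify before trusting the entry (item 1)
        uint8_t  depth8;    // stored as depth - DEPTH_OFFSET
        uint8_t  genBound8; // generation in the high bits, bound in the low two
        int16_t  value16;

        void save(uint64_t key, int depth, Bound b, int value, uint8_t generation8) {
            assert(depth > DEPTH_OFFSET && depth - DEPTH_OFFSET < 256);
            key16     = uint16_t(key);
            depth8    = uint8_t(depth - DEPTH_OFFSET);
            genBound8 = uint8_t(generation8 | b);   // generation8 is a multiple of 4
            value16   = int16_t(value);
        }
        int   depth() const { return int(depth8) + DEPTH_OFFSET; }
        Bound bound() const { return Bound(genBound8 & 0x3); }
    };

    // Item 8: a stored bound constrains the true value v as
    //   BOUND_UPPER: v <= value16,  BOUND_LOWER: v >= value16,  BOUND_EXACT: both.
    // A cutoff against beta is therefore only sound with the matching bound:
    bool bound_allows_cutoff(const TTEntry& e, int beta) {
        return e.bound() & (e.value16 >= beta ? BOUND_LOWER : BOUND_UPPER);
    }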

Depth Management and Calculations

  1. Qsearch uses negative depths - Depth 0 searches captures+checks, depths -1 to -5 search captures only, and depths < -5 search only recaptures on the previous move square. Each stage has different move generation.
  2. Depth in reduction formulas must stay positive - Always ensure depth > 0 in reduction calculations. Use std::max(depth, 1) guards when aggressive reductions might push depth to zero, as this triggers assertions.
  3. Singular extension depth must be at least 1 - When depth = 1, ensure singularDepth >= 1 using std::max(). Qsearch lacks machinery for excluded moves and will behave incorrectly with singularDepth <= 0.
  4. Null move can create extreme negative depths - With R = 10 (4 base + 6 from formula), null move search can dive to depth -10 or lower. This is normal behavior that goes directly into deep qsearch, with verification search depth negative ~70% of the time.
  5. Root depth can reach ~250 in forced sequences - In positions with forced moves or high rule50 counters, rootDepth can reach extreme values. Formulas using rootDepth must account for this upper bound, and depth can exceed 200 causing sign flip.
  6. Search explosion is prevented by depth guards - Extensions use ss->ply > 2 * rootDepth or depth < 2 * thisThread->rootDepth to prevent runaway search. Multiple safeguards exist as single mechanisms proved insufficient, though current limits "don't work at all" for preventing explosions.
  7. Depth thresholds are time-control sensitive - Conditions like depth >= 6 for extensions often need different values for different time controls. What works at STC may fail at LTC due to exponential tree growth.
  8. Depth reduction formulas should scale - Flat reductions (like constant 4 plies) don't scale well from STC to LTC, while proportional formulas (like depth/3) tend to scale better. This affects all reduction-based techniques.
  9. Depth increment affects extensions - Skipping depths by incrementing by 2 instead of 1 reduces singular extensions because TT depth conditions are triggered less frequently. This creates depth gaps that break scaling.
  10. Always guard against negative depths - Negative depth normally just drops into qsearch, but patches that reduce depth aggressively tend to scale poorly to longer time controls. Use clamping formulas like std::max(1, std::min(newDepth - r, newDepth + 1)), as in the sketch after this list.
  11. Depth-dependent conditions scale non-linearly - Conditions like depth >= 18 rarely trigger at STC but become relevant at longer time controls, affecting scaling behavior. This makes depth-based feature switching common.
  12. Depth bounds checking is critical - Always ensure depth cannot go below 1 in search functions using std::max(depth, 1) to prevent undefined behavior in move loops. Off-by-one errors in depth conditions can cause significant regressions.
  13. Depth reporting must be consistent - Distinguish between rootDepth (current iteration) and completedDepth (fully completed iterations) to avoid reporting issues. Ensure completedDepth >= 1 before relying on thread results.
  14. Depth guards prevent mate tracking issues - Most pruning techniques should have depth bounds (e.g., depth <= 7) because unbounded pruning can cause infinite loops or mate issues. Limits like lmrDepth < 12 are primarily for mate tracking.
  15. Stack depth calculations need consistency - When accessing (ss-N), ensure N doesn't exceed available stack depth. Relationship between MAX_PLY and stack size must be maintained, with stack overflow occurring around depth 95-97.
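
The clamping guards from items 2, 3, 10, and 12 condense into two small helpers. This is a sketch of the pattern, not Stockfish's exact code; the singular-depth formula is one illustrative variant.

    #include <algorithm>

    // Items 2, 10, 12: clamp a reduced depth so it can neither collapse to
    // zero (triggering assertions or unintended qsearch entry) nor extend
    // beyond newDepth + 1.
    int clamped_depth(int newDepth, int r) {
        // r may be negative (an extension), so both bounds are needed;
        // std::min comes first because newDepth - r can exceed newDepth + 1.
        return std::max(1, std::min(newDepth - r, newDepth + 1));
    }

    // Item 3: singular searches must never reach depth 0, since qsearch
    // has no excluded-move machinery.
    int singular_depth(int depth) { return std::max((depth - 1) / 2, 1); }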

Move Ordering Implementation

  1. Move ordering affects entire search tree - Better move ordering enables more aggressive pruning and reductions throughout the tree. Even small improvements can yield significant Elo gains, and improvements tend to scale well across time controls.
  2. Different search phases use different ordering - Main search uses full history tables including continuation history, while qsearch uses basic capture ordering. Probcut generates only captures with specific ordering.
  3. Partial insertion sort is well-optimized - The current implementation performs well against more sophisticated sorting algorithms. Changes should be made cautiously as this is performance-critical code.
  4. Good captures are pushed ahead of quiet moves - Except for TT moves, good captures come before quiet moves (scored with a large offset on the order of 1 << 28 in evasion scoring), while bad captures (like QxP with the pawn defended by a bishop) are deferred until after quiet moves, reflecting their lower success probability.
  5. Killer moves have specific success rates - Killer[0] fails high ~45% of time, killer[1] ~15%, countermove ~18%. The optimal order is killer0→countermove→killer1, with update order critical for effectiveness.
  6. Refutation moves need duplicate filtering - Killers and countermoves can appear in both capture and quiet arrays if not properly filtered, causing incorrect statistics updates and move ordering corruption.
  7. History values can reach ~100,000 - Statistics-based scores are much larger than evaluation (~1,000), allowing aggressive pruning decisions. Total history values can reach ~184k, so scale appropriately when using in search decisions.
  8. Queen promotions appear in captures stage - Even quiet queen promotions are processed during captures, while under-promotions are treated as quiet moves since they're rarely useful.
  9. Staged move generation reduces work - The staged approach allows delaying expensive move generation until needed. Early cutoffs can save significant computation by avoiding later stages.
  10. Move ordering uses partial sorting - Stockfish uses partial insertion sort with a depth-dependent threshold (quiet_threshold(depth) = -3560 * depth) rather than full sorting for efficiency. Full quiet move sorting loses substantial Elo. See the sketch after this list.
  11. Captures scored differently in evasions - In check, captures are sorted by captured piece value minus moving piece type, with fewer heuristics applied than normal search. Evasion generation is expensive making early cutoffs valuable.
  12. Move ordering affects pruning effectiveness - The interaction between move ordering (via history tables) and pruning decisions creates feedback loops where improving one affects the other's effectiveness.
  13. TT move ordering supersedes other systems - The TT move is always tried first, regardless of history or other ordering heuristics. TT moves bypass most pruning heuristics and get special treatment throughout.
  14. Quiet moves use multiple history types - Main history, continuation histories, and piece-square bonuses combine for quiet move ordering. Different history types serve distinct purposes and shouldn't be conflated.
  15. Capture ordering beyond MVV-LVA - Modern engines enhance basic Most Valuable Victim - Least Valuable Aggressor with history and other signals. SEE ordering in qsearch is crucial for performance.
  16. Killer slots have specific update patterns - Killer[0] gets new moves, killer[1] gets previous killer[0], maintaining recency ordering. Killer moves are stored at ss+1 and ss+2 in search stack.
  17. Countermove tables are piece-type specific - Storing countermoves by piece type and target square provides more specific refutation information. Countermove effectiveness follows the established hierarchy.
  18. Distance to king valuable for ordering - Move ordering bonuses based on distance to enemy king provide effective guidance. Discovered attacks detectable efficiently using blockersForKing().
  19. Recapture bonuses require infrastructure changes - When adding special move bonuses like recapture bonuses, you must modify movepicker.h constructors and all search.cpp call sites to pass prevSq.
  20. Move generation stage optimization critical - The staged system exists because you often don't need all moves - early cutoffs save significant computation. Different stages have different counts affecting pruning differently.
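
The partial sorting from items 3 and 10 looks like the sketch below, closely mirroring Stockfish's partial_insertion_sort: only moves scoring at or above the limit are insertion-sorted to the front in descending order; the tail stays unsorted. The ExtMove struct is simplified and the threshold constant is the one quoted in item 10.

    struct ExtMove { int move; int value; };

    void partial_insertion_sort(ExtMove* begin, ExtMove* end, int limit) {
        for (ExtMove *sortedEnd = begin, *p = begin + 1; p < end; ++p)
            if (p->value >= limit) {
                ExtMove tmp = *p;
                *p = *++sortedEnd;                      // swap into the sorted prefix
                ExtMove* q = sortedEnd;
                for (; q != begin && (q - 1)->value < tmp.value; --q)
                    *q = *(q - 1);                      // insertion step
                *q = tmp;
            }
    }

    // Item 10: the limit is depth-dependent, so deeper searches sort more moves.
    int quiet_threshold(int depth) { return -3560 * depth; }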

Pruning Techniques

  1. Futility pruning is evaluation-dependent - As NNUE evaluation improved, futility pruning became more aggressive with higher depth limits (8→9→11). Better evaluation enables more confident static cutoffs.
  2. Futility margins use depth-dependent formulas - Implement as Value(factor * (depth - improving)) rather than fixed constants. Complex formulas incorporate depth, improving, and oppWorsening with different multipliers for node types (see the sketch after this list).
  3. Razoring uses counter-intuitive bounds - The implementation uses (alpha-1, alpha) instead of logical (alpha, alpha+1), but fixing this performs worse. Sometimes bugs that pass testing should remain.
  4. Probcut searches for opponent blunders - This ~100 line technique yields only ~4 Elo by checking if opponent might blunder a piece. The value proposition is narrow but measurable.
  5. SEE pruning thresholds vary by context - Captures have different SEE thresholds (like -200 maximum) and pruning behavior differs between regular search and qsearch. Consider position improvement when setting thresholds.
  6. Move count pruning becomes self-limiting - The threshold grows quadratically with depth, making aggressive pruning automatically less impactful where it matters most. The formula is (3 + depth * depth) / 2 for non-improving positions, with the halving dropped when improving.
  7. History-based pruning must be depth-dependent - Constant thresholds perform poorly because positions repeatedly fail low. Use scaling like history < -threshold * depth to avoid repeated pruning of same positions.
  8. Different pruning techniques have different accuracy - Razoring achieves 99%+ accuracy, null move ~99.5%, futility ~94-95%. More accurate techniques can be applied more aggressively.
  9. Pruning before extensions is crucial - The historical change from "pruning after extensions" to "pruning before extensions" was significant for correctness. Maintain this ordering.
  10. Root node pruning is controversial - Pruning at root can pollute training data and make behavior harder to reason about. Very shallow futility pruning might be acceptable with safeguards.
  11. Futility pruning depth limits serve mate tracking - Depth limits like lmrDepth < 12 are primarily for mate tracking, not just performance. They prevent pruning moves that could lead to or escape from mate sequences.
  12. bestValue adjustment for pruned moves is powerful - When pruning moves, updating bestValue to the futility value (if better) can gain significant Elo, as the pruned move may beat bestValue even if it can't beat alpha.
  13. Null move pruning requires eval >= beta - This fundamental condition ensures giving the opponent an extra move is only tested when position is already good. The formula R = std::min(int(eval - beta) / 152, 6) + depth / 3 + 4 requires this guard.
  14. Pruning conditions often combine multiple history types - Modern pruning uses continuation history, pawn history, and other heuristics together, requiring careful balancing of their relative contributions.
  15. Different node types need different pruning thresholds - PV nodes, cut nodes, and all nodes benefit from different pruning aggressiveness. Conditions like beta - alpha == 1 can be more precise than !PvNode.
  16. Reverse futility pruning has high success rates - Well-implemented RFP can achieve 88%+ success rates when properly guarded. It uses highly engineered formulas with multiple parameters heavily optimized over time.
  17. Move count pruning should cache conditions - Once moveCountPruning is true for a position, it remains true for all subsequent moves, so cache the result rather than recalculating.
  18. Pruning should guard against in-check positions - Add && !ss->inCheck && !(ss-1)->inCheck when implementing pruning based on static eval, since no evaluation is computed in check and ss->staticEval holds the sentinel VALUE_NONE (32002).
  19. Capture futility mirrors quiet futility - Same principles apply to both, including history blending in margins and bestValue adjustment techniques. Futility base can be <= alpha with bonuses.
  20. Different multipliers for node types - Futility margin calculation applies different multipliers based on noTtCutNode and other node properties. Position-dependent futility multipliers affect aggressiveness.
  21. Guard against VALUE_TB_WIN_IN_MAX_PLY - Futility pruning should use guards like eval < 24923 to prevent incorrect pruning in tablebase positions. Consistency between search and qsearch futility important.
  22. Skip futility if ttMove has bad history - This helps with mate finding, as moves with bad history may still be critical in tactical sequences. Linear margins hurt mate finding unless combined with depth guards.
  23. Razoring logic often inverted - If a position fails low in razoring, the previous quiet move was likely good for opponent, so update opponent statistics positively. Razoring fails in singular search since alpha is typically much lower than ttValue.
  24. Avoid redundant work with razoring - Don't waste time on razoring in singular search since it will likely just return TT cutoff immediately. Razoring effectiveness improved by history information.
  25. Prior capture conditions affect razoring - The interaction between capture sequences and razoring decisions is crucial for effectiveness. Razoring can be applied at root with proper aspiration window handling.
  26. Razoring threshold placement matters - Razoring should be attempted before other pruning techniques to save time, using significantly larger thresholds. TT moves affect razoring decisions.
  27. Razoring introduces mate score bugs - Can cause incorrect mate announcements because qsearch only searches captures and may miss quiet move refutations. Historical razoring issues suggest removal.
  28. SEE margins combined with history - When using SEE pruning with history adjustments, clamp history bonus to not exceed base SEE margin. Different thresholds used in different contexts.
  29. SEE pruning requires careful tuning - The threshold for pruning captures with negative SEE varies by depth and position characteristics. SEE pruning in qsearch critical for performance.
  30. Pruning heuristics heavily overlap - When removing one pruning mechanism, others often catch the same moves, so individual pruning elo measurements are misleading. Multi-stage pruning is effective.
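
A sketch combining the quiet-move futility pattern from items 2, 11, 12, 18, and 21. The margin multiplier, the base offset, and the tablebase bound are illustrative stand-ins, not Stockfish's tuned values; the shape of the logic is what matters.

    #include <algorithm>

    constexpr int VALUE_TB_WIN_IN_MAX_PLY = 30000;   // stand-in, not the real constant

    // Item 2: margin grows with depth and shrinks when improving.
    int futility_margin(int depth, bool improving) { return 118 * (depth - improving); }

    // Called in the move loop before searching a quiet move. Returns true if
    // the move can be skipped; may raise bestValue as a side effect (item 12).
    bool futility_prune_quiet(int& bestValue, int alpha, int staticEval, int lmrDepth,
                              bool inCheck, bool givesCheck, bool improving) {
        if (inCheck || givesCheck)       // item 18: static eval is unusable in check
            return false;
        if (lmrDepth >= 12)              // item 11: depth limit protects mate tracking
            return false;
        int futilityValue = staticEval + 100 + futility_margin(lmrDepth, improving);
        if (futilityValue <= alpha && futilityValue < VALUE_TB_WIN_IN_MAX_PLY) { // item 21
            // Item 12: the pruned move may beat bestValue even though it
            // cannot beat alpha, so fold its optimistic value into bestValue.
            bestValue = std::max(bestValue, futilityValue);
            return true;
        }
        return false;
    }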

Late Move Reductions (LMR)

  1. LMR is more like "Late Move Modifications" - Modern LMR includes extensions (negative reductions), post-LMR extensions, and complex logic beyond simple reductions. It's a general move search adjustment mechanism.
  2. LMR can be applied almost universally - Stockfish applies LMR to nearly every move, with 98%+ satisfying traditional conditions. The !captureOrPromotion condition is nearly obsolete.
  3. LMR formulas affect multiple contexts - The same reduction calculation is used in main search, singular extensions, and other places. Changes cascade through multiple search components.
  4. First move in non-PV nodes shouldn't be reduced - Allowing LMR for the first move is typically very bad, as this move usually deserves full depth search. Always exclude with appropriate conditions.
  5. Node type affects reduction amounts - PV nodes use different (lower) reduction formulas than non-PV nodes. Cut nodes get r += 2 despite higher fail-high rates due to re-search guarantees.
  6. Reduction tables include thread count - Tables incorporate std::log(Threads.size()) / 2 to counteract lazy SMP search widening effects, maintaining similar tree shapes across thread counts.
  7. Stat bonuses can be negative at low depths - The stat_bonus function returns negative values at depth 1, discovered through trial and error. Linear alternatives consistently fail at LTC.
  8. Post-LMR extensions affect even PV nodes - The doDeeperSearch and doEvenDeeperSearch mechanisms can extend PV nodes, contributing to high selDepth values beyond normal extensions.
  9. LMR failure rate is expected to be high - LMR fails in ~98% of cases and truncates searches of almost all moves, which is the intended behavior for efficient search. The high failure rate is a feature, not a bug.
  10. LMR formulas use complex depth and history interactions - Base reductions include thread-scaling components and are modified by history values via r -= ss->statScore / 14767. Large divisors require bounds checking (see the sketch after this list).
  11. Post-LMR history updates require careful timing - History updates should be delayed until after the full search completes. If the zero-window search succeeds but the full search fails low, giving an immediate bonus is counterproductive.
  12. TT moves have special LMR handling - The moveCount condition in LMR is always 1 for TT moves since they're tried first. TT moves can still be reduced but with different parameters, typically r = std::max(0, r - 2).
  13. Reduction clamping prevents invalid depths - Late move reductions must be clamped to ensure reduced_depth >= 1 using formulas like std::max(1, std::min(newDepth - r, newDepth + 1)) to handle cases where max < min.
  14. LMR depth calculations can overflow - When implementing reduction formulas, be careful with integer arithmetic. Division toward zero can cause different results for positive vs negative values, making seemingly equivalent formulas actually different.
  15. History affects LMR through statScore - The statScore combines multiple history tables and significantly impacts reduction amounts. Changes to history mechanisms cascade through LMR effectiveness.
  16. PV vs non-PV node LMR differs significantly - PV nodes get full-window searches after LMR, while non-PV nodes use zero-window searches. This fundamental difference affects how reductions should be applied.
  17. Singular extension interaction with LMR - When implementing reductions based on singular extensions, reduce OTHER moves when the singular move got extended, not extend them. In reduction code, r++ means extra reduction, not extension.
  18. LMR re-search patterns are predictable - The probability that an LMR re-search fails increases predictably with previous failures at the same node, so probabilistic re-search patterns can be exploited.
  19. Move count influences reduction formulas - Many reduction and pruning decisions depend on move count, with coefficients carefully tuned for interaction effects. First few moves need special handling.
  20. Reduction persistence across re-searches - When LMR fails high, the re-search depth affects whether time was saved or wasted. Statistical reductions supplement formula-based reductions.
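
The single-move LMR flow from items 5, 10, 13, and 16 is sketched below. reduction() is an illustrative placeholder for the real log-based table, the statScore divisor is the one quoted in item 10, and search() is a trivial stub so the sketch compiles; callers are assumed to skip this path for the first move at non-PV nodes (item 4).

    #include <algorithm>

    int search(int alpha, int beta, int depth, bool pvNode);              // stand-in
    int reduction(int depth, int moveCount) { return (depth + moveCount) / 8; } // illustrative

    int lmr_move(int alpha, int beta, int newDepth, int moveCount,
                 int statScore, bool pvNode, bool cutNode) {
        int r = reduction(newDepth, moveCount);
        if (cutNode)
            r += 2;                                 // item 5: extra reduction at cut nodes
        r -= statScore / 14767;                     // item 10: history adjusts r either way
        // Item 13: clamp so the reduced depth is >= 1 and at most newDepth + 1.
        int d = std::max(1, std::min(newDepth - r, newDepth + 1));

        int value = -search(-(alpha + 1), -alpha, d, false);   // zero-window probe
        if (value > alpha && d < newDepth)
            // Reduced probe failed high: redo at full depth, still zero window.
            value = -search(-(alpha + 1), -alpha, newDepth, false);
        if (pvNode && value > alpha)
            // Item 16: PV nodes follow up with a full-window search.
            value = -search(-beta, -alpha, newDepth, true);
        return value;
    }

    int search(int alpha, int, int, bool) { return alpha; }    // stub so it compiles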

Extensions Implementation

  1. Singular extensions test for "only good move" - When TT value significantly exceeds other moves at reduced depth, the position likely has only one good move worth exploring deeper. About 59% of singular searches actually result in extensions.
  2. Extension explosion requires multiple safeguards - Track with ss->doubleExtensions and limit with depth < 2 * rootDepth. Single protection mechanisms proved insufficient historically, though current limits "don't work at all" for preventing explosions.
  3. Check extensions need depth thresholds - Implemented as givesCheck && depth > 9, they only trigger at deeper searches. They help avoid horizon effects in tactical endgame sequences and are worth roughly 10 Elo.
  4. Extensions reshape search tree structure - More extensions create "tunnel vision" where search goes deeper in promising lines but has lower average depth. Works well for deep but poorly for shallow searches.
  5. Double extensions use specific counting - Track via ss->doubleExtensions = (ss-1)->doubleExtensions + (extension == 2). Modifications must preserve this mechanism to prevent explosion.
  6. Singular extension margin scales with depth - Formulas like ttValue - (82 + 65 * (ss->ttPv && !PvNode)) * depth / 64 show different components may have different time control scaling behaviors.
  7. Extensions range from -3 to +2 - Negative extensions (reductions in special cases) go down to -3, while positive extensions cap at +2. Guards prevent exceeding these limits.
  8. Extensions generally scale positively with time control - Patches that increase extensions tend to perform better at longer time controls compared to shorter ones. However, extension patches often show "Fail STC Pass LTC Fail VLTC" behavior.
  9. Singular extension depth is particularly problematic - Avoid tweaking singularDepth as it "behaves god only knows how" and has unpredictable scaling behavior. Conservative depth requirements (>= 6 or 7) work better.
  10. Double/triple/quadruple extensions require careful conditions - Multiple extensions can be effective, but conditions triggering them need precise tuning. Use formulas like depth += (depth < 7) rather than unconditional depth += 2.
  11. Extension scaling varies dramatically by time control - Extensions that work well at short time controls may scale poorly at longer ones. Higher singularBeta values (closer to ttValue) generally scale better.
  12. The !ttCapture condition is crucial - For triple extensions, this specific condition is worth >20 Elo and prevents extensions on tactical positions where they may be less beneficial.
  13. Extension depth calculations must handle overflow - When depth can exceed 200, the sign can flip. Use formulas like (MAX_PLY - depth) * factor / MAX_PLY to avoid arithmetic overflow.
  14. PV node extension strategy differs - Consider extending PV nodes when encountering TT moves in qsearch by "resurfacing" to regular search rather than staying in qsearch.
  15. Singular extensions at low depths are problematic - Allowing singular extensions at depth 2 causes singular search depth to become 0, dropping into qsearch which lacks excluded move machinery.
  16. Singular beta calculations are complex - Use formulas like singularBeta = ttValue - (65 + 52 * (ss->ttPv && !PvNode)) * depth / 63, where PV status affects the beta margin (see the sketch after this list).
  17. TT bounds predict singular search results - If TT entry has singular bound information and sufficient depth, you can skip expensive singular search with high accuracy (89%+ correlation).
  18. Singular extensions disable probcut - Probcut is never triggered in singular search because the depth and value conditions are always true in singular extension contexts.
  19. Move count corruption after singular search - ss->moveCount gets corrupted after singular search because it reflects the excluded search's move count rather than the current search.
  20. Singular margin calculation is critical - Formulas like singularBeta = ttValue - depth_factor determine extension triggers. Small changes cause large performance swings across time controls.
  21. Singular extensions avoid self-recursion - The !excludedMove guard prevents recursive singular searches in SMP where TT moves can be reassigned by other threads.
  22. Singular depth requirements are strict - Requires specific TT depth relationships and proper bounds checking. The "depth - 3" threshold relates to probcut calculations.
  23. Different singular components scale differently - Search window, extension decision, and multicut components within singular logic may have different time control scaling behaviors.
  24. Singular test uses a zero window - The singular search probes with the zero window (singularBeta - 1, singularBeta), not a full window, which is sufficient to determine move singularity and keeps the verification cheap.
  25. Check extensions are most valuable in endgames - They help avoid horizon effects in tactical sequences, particularly with limited piece mobility. Less useful in complex middlegames.
  26. Killer move extensions need careful validation - Extensions should verify the move matches both killer and TT move, with appropriate history thresholds to avoid explosion.
  27. Passed pawn extensions have declined - Modern engines rely less on specific positional features for extensions, favoring statistical approaches instead.
  28. Recapture extensions prevent horizon effect - Extending recaptures helps complete tactical sequences, though modern qsearch handles much of this automatically.
  29. Threat-based extensions are position-dependent - Extensions based on threats work better when combined with proper threat detection from move patterns.
  30. Extension conditions should combine multiple indicators - Effective extensions use complex boolean expressions combining depth, node type, statScore, moveCount, and evaluation conditions for graduated responses.
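
A sketch of the singular decision from items 1, 2, 15, 16, and 24. The margin constants are the historical snapshot quoted in item 16, search_excluding() is a stand-in for a search that skips the excluded move at every stage, and the +1/-1/0 return convention is purely illustrative.

    #include <algorithm>

    int search_excluding(int alpha, int beta, int depth, int excluded);  // stand-in

    // Returns +1 to extend the TT move, -1 to signal a multicut fail-high
    // (illustrative convention), 0 for neither.
    int singular_decision(bool ttPv, bool pvNode, int ttMove, int ttValue,
                          int beta, int depth, int ply, int rootDepth) {
        // Item 16: PV status widens the margin below ttValue.
        int singularBeta  = ttValue - (65 + 52 * (ttPv && !pvNode)) * depth / 63;
        // Item 15: roughly half depth, but never reaching qsearch.
        int singularDepth = std::max((depth - 1) / 2, 1);

        // Item 24: a zero window just below singularBeta, not a full window.
        int value = search_excluding(singularBeta - 1, singularBeta,
                                     singularDepth, ttMove);
        if (value < singularBeta)
            // No other move comes close: the TT move is singular. The ply
            // limit is one of the explosion safeguards from item 2.
            return ply < 2 * rootDepth ? +1 : 0;
        if (value >= beta)
            return -1;  // multicut: even without the TT move the node fails high
        return 0;
    }

    int search_excluding(int alpha, int, int, int) { return alpha; }     // stub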

Null Move Pruning

  1. Null move requires careful state management - Variables like currentMove and continuationHistory must be properly set. Use do_null_move() and undo_null_move() which aren't inverses of each other.
  2. Verification search targets zugzwang - Though Elo-neutral, verification prevents permanent tactical holes in zugzwang positions that depth cannot overcome. It's a safety mechanism, not optimization.
  3. Null move eval calculation differs - After null moves, qsearch uses -ss->staticEval while main search calls evaluate(), creating inconsistency due to built-in tempo bonuses.
  4. Consecutive null moves need prevention - The guard (ss-1)->currentMove != MOVE_NULL exists for good reason, though it can be removed if search naturally terminates through other means.
  5. Null move depth can be extreme - Maximum reduction went from 3 to 7 over time. With R = 4 + 6, searches can dive to depth -10, which is normal behavior entering deep qsearch.
  6. Beta clamping prevents mate score issues - Clamp returned values with std::min(nullValue, VALUE_TB_WIN_IN_MAX_PLY-1) to avoid returning unproven mate scores from shallow searches.
  7. Excluded moves conflict with null move - The !excludedMove condition exists because null move is antagonistic to singular extension searches - testing move singularity doesn't benefit from giving opponent free moves.
  8. Null move verification prevents zugzwang - Verification search catches zugzwang positions where every move worsens the position. The thisThread->nmpMinPly mechanism prevents recursive verification.
  9. Null move pruning in endgames needs care - Special handling for pawn-only endgames exists because standard null move pruning may need different reduction values or conditions.
  10. Null move cut node logic is complex - The cutNode flag sequence can be disrupted by null moves since the same side plays twice - consider whether to pass cutNode or !cutNode.
  11. Tempo adjustment after null move - After null move pruning, static evaluation can be adjusted by a tempo value to account for side-to-move advantage, providing small but measurable gains.
  12. Null move depth reduction formula - The pattern depth - R can produce negative values, requiring careful bounds checking to prevent accessing invalid data structures.
  13. Sophisticated null move implementations - Modern implementations allow null move search even in TB win score ranges but don't return speculative values - this "nmp lite" approach scores positions in TT.
  14. statScore condition appears weak but is crucial - The statScore condition in null move pruning has only 1% hit rate but simplifying it consistently fails, suggesting subtle interaction effects.
  15. Null move pruning depth logic is complex - The nmpMinPly mechanism disables null move pruning in verification searches at high depths, with formulas controlling when to re-enable it.
  16. Don't return fake tablebase scores - Clamp the returned value from null move search to avoid propagating incorrect TB scores up the search tree, but still save the TT entry.
  17. Verification depth threshold is mathematical - Verification only runs when depth >= 16; since the maximum reduction R is 15, this guarantees depth - R >= 1 for the verification search. Verification uses recursive depth logic with thisThread->nmpMinPly (see the sketch after this list).
  18. Verification search uses cutNode=true - When doing verification, you're searching current node again, so cutNode should reflect current status. Verification is quality-of-life feature for analysis.
  19. Verification search depth negative often - Depth-R is negative ~70% of the time, causing falls into qsearch rather than full search. Verification overhead is acceptable for tactical reliability.
  20. Null move handling in history - Always check move.is_ok() before using moves to index history tables to avoid null move assertions. The null move encodes side-to-move changes requiring proper state management.
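
A sketch tying together the null-move flow from items 6, 8, 15, 17, and 19, plus the eval >= beta precondition and R formula from the Pruning section. The tablebase bound, the NO_CUTOFF sentinel, and all helpers are simplified stand-ins; the nmpMinPly formula is one historical snapshot.

    #include <algorithm>

    constexpr int VALUE_TB_WIN_IN_MAX_PLY = 30000;            // stand-in constant
    constexpr int NO_CUTOFF = -32001;                         // illustrative sentinel

    struct Thread { int nmpMinPly = 0; };
    int  search(int alpha, int beta, int depth, bool cutNode); // stand-in
    void do_null_move();                                       // stand-in
    void undo_null_move();                                     // stand-in

    int try_null_move(Thread& th, int ply, int eval, int beta, int depth) {
        if (eval < beta || th.nmpMinPly > ply)    // precondition + verification guard
            return NO_CUTOFF;
        int R = std::min((eval - beta) / 152, 6) + depth / 3 + 4;

        do_null_move();
        int nullValue = -search(-beta, -beta + 1, depth - R, false); // often depth < 0 (item 19)
        undo_null_move();

        if (nullValue >= beta) {
            // Items 6, 16: never propagate unproven mate/TB scores upward.
            nullValue = std::min(nullValue, VALUE_TB_WIN_IN_MAX_PLY - 1);
            if (th.nmpMinPly || depth < 16)
                return nullValue;                  // item 17: shallow, trust the cutoff
            // Items 8, 15: verification with null move disabled below nmpMinPly,
            // catching zugzwang.
            th.nmpMinPly = ply + 3 * (depth - R) / 4;
            int v = search(beta - 1, beta, depth - R, false);
            th.nmpMinPly = 0;
            if (v >= beta)
                return nullValue;
        }
        return NO_CUTOFF;
    }

    int  search(int alpha, int, int, bool) { return alpha; }   // stubs so it compiles
    void do_null_move() {}
    void undo_null_move() {}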

Search Stack Management

  1. Search stack is contiguous for performance - The array layout enables cache-friendly access patterns. Modifications must respect bounds checking and initialization requirements.
  2. Stack depth calculations need consistency - When accessing (ss-N), ensure N doesn't exceed available stack depth. Relationship between MAX_PLY and stack size must be maintained.
  3. Different ply variables serve different purposes - ss->ply measures distance from root, depth indicates remaining search depth. Don't confuse these in search modifications.
  4. Stack variables require proper initialization - Before recursive search, always set ss->currentMove and ss->continuationHistory. Uninitialized access causes crashes and undefined behavior.
  5. Search stack indexing maintains color consistency - When using stack offsets, ss-2 is always "our move" regardless of null moves, preserving color relationships through the stack.
  6. Continuation history needs adequate depth - Main search uses up to conthist[3], qsearch uses conthist[1], but movepicker needs conthist[5]. Allocate stack accordingly.
  7. Stack state can become "dirty" - Singular extensions passing ss to recursive search can leave incorrect values like moveCount that don't match actual position. This intentional inconsistency sometimes helps.
  8. Search stack provides critical context - The stack carries forward search state including ply count, static eval, and move history. Access patterns from ss-7 to ss+2 are safe with proper initialization (see the padding sketch after this list).
  9. Stack variable initialization prevents bugs - New search stack variables must be properly initialized in null move search, probcut, and qsearch. The memset in iterative deepening only covers base stack entries.
  10. opponentWorsening uses different temporal perspective - While improving compares (ss-2) to ss, opponentWorsening compares (ss-1) to ss, representing different perspectives for pruning decisions.
  11. Search stack offset meanings matter - Using (ss + 2)->cutoffCnt instead of (ss + 1) gathers information from same-ply distant nodes (siblings' children), providing more general tree information.
  12. VALUE_NONE handling requires guards - When accessing (ss-2)->staticEval or similar, always check for VALUE_NONE to prevent using the sentinel value (32002) in calculations.
  13. Stack overflow occurs around depth 95-97 - The default 8MB stack size should handle MAX_PLY, but deep searches may require increasing stack size with ulimit -s 65536.
  14. Continuation history depth adjustment needed - When adding continuation history plies, decrease MAX_PLY by the number of added stack positions to prevent accessing uninitialized memory.
  15. Search stack is accessed asymmetrically - Different search components access different ranges of the stack, with continuation history needing up to ss-7 while other components may only need ss-1.
  16. Killer move storage uses search stack - Killer moves are stored at ss+1 and ss+2, requiring proper initialization and management across search iterations.
  17. Static eval propagation through stack - Static evaluation values propagate through the search stack and must be carefully managed to avoid using stale or invalid values.
  18. Initialize properly with goto - When using goto statements in search, ensure all variables are properly initialized before the goto target to avoid undefined behavior.
  19. Memory access patterns matter - Accessing search stack in predictable patterns improves cache performance. The contiguous layout is critical for performance.
  20. Search stack alignment affects NNUE - Search structures like StateInfo must be properly aligned for NNUE cache line efficiency, affecting overall performance.
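
The padding scheme from items 6, 8, and 16 is easiest to see in code. The field set, MAX_PLY value, and sentinel history are illustrative stand-ins; the point is the seven entries before the root and two after it.

    constexpr int MAX_PLY = 246;                 // stand-in value

    struct Stack {
        int  ply = 0;
        int  currentMove = 0;
        int  staticEval = 0;
        int  moveCount = 0;
        int* continuationHistory = nullptr;      // stand-in for the real table pointer
    };

    int sentinelHist[1];                         // safe "empty" history target

    void setup_search_stack() {
        // Seven padded entries before the root because continuation history
        // looks back as far as (ss - 7), and two after for the (ss + 1) and
        // (ss + 2) killer-style writes (items 6, 8, 16).
        Stack stack[MAX_PLY + 10] = {};
        Stack* ss = stack + 7;
        for (int i = 7; i > 0; --i)
            (ss - i)->continuationHistory = sentinelHist;  // never null (item 4)
        // Item 4: before every recursive call the child context must be set:
        //   ss->currentMove = move;  ss->continuationHistory = &table[...];
        (void)ss;
    }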

Move Generation and Validation

  1. Move legality requires two checks - Use both pos.pseudo_legal(move) AND pos.legal(move). The former accepts any input but the latter only accepts pseudo-legal moves (see the sketch after this list).
  2. Qsearch TT moves can be quiet - Although qsearch generates only non-quiet moves, the TT move it tries can be of any type, including quiet moves. These can be pruned if they already produce bestValue.
  3. Move validation prevents history corruption - Always check is_ok(move) before updating history tables. Invalid moves like MOVE_NULL or MOVE_NONE corrupt statistics and cause crashes.
  4. Capture classification inconsistencies exist - The pos.capture() function returns false for promotions, causing queen promotions to appear in both capture and quiet stages when they're killers.
  5. Different positions have different move limits - Legal positions have at most 218 moves, but the 256 buffer accounts for potential illegal positions during search.
  6. Move generation stages have specific purposes - Good captures, killers, quiets, and bad captures are generated in stages. Pruning must account for which stages are affected.
  7. Pseudo-legal generation is preferred - Only checking legality for moves that are actually searched provides better performance than generating all legal moves upfront.
  8. Root node moves are always legal - The search code distinguishes between rootNode and regular nodes for move legality, with root moves being pre-validated.
  9. Move legality checking context matters - Root nodes have all moves pre-validated legal while other nodes need runtime checking, affecting pruning and search decisions.
  10. En passant requires special handling - En passant captures evaluate to zero value in futility pruning, creating potential pruning inaccuracies that need compensation.
  11. Promotions need value adjustment - Promotion moves don't automatically account for the promotion piece value in futility calculations, requiring special handling.
  12. Move representation consistency crucial - When implementing features referencing previous moves, ensure consistent square representation for history updates.
  13. Specialized recapture generation possible - Recapture move generation can be effective but must avoid double-searching moves and integrate properly with move picker stages.
  14. Move generation stage optimization critical - The staged system exists because you often don't need all moves - early cutoffs save significant computation.
  15. Evasion move splitting has limited benefit - Separating evasion captures from quiets provides minimal gain because evasions are rare and we don't skip quiets in check.
  16. Move validation in special cases - When implementing move ordering based on special conditions, ensure moves are actually legal and properly validated.
  17. Castling corrupts history tables - Encoded as "king captures rook", castling corrupts continuation histories and countermoves with incorrect squares.
  18. Promotion handling inconsistent - Capturing piece may be NO_PIECE for non-capturing promotions, but this typically doesn't break stats since 0/7 remains 0.
  19. Recapture extensions need proper naming - Extensions checking to_sq(move) == prevSq aren't necessarily true recaptures - they trigger for any capture of just-moved piece.
  20. Special moves need consistent handling - Castling, en passant, and promotions should be handled consistently across all search components to avoid subtle bugs.
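
The validation order from items 1 and 3 is a three-step funnel, sketched here with hypothetical Move and Position types modeled on Stockfish's API (the 65 in is_ok() mirrors Stockfish's null-move encoding; the method bodies are stand-ins).

    struct Move {
        int data = 0;
        bool is_ok() const { return data != 0 && data != 65; }  // not MOVE_NONE/MOVE_NULL
    };

    struct Position {
        // Accepts arbitrary input; hash collisions can hand us garbage (item 1).
        bool pseudo_legal(Move) const { return true; }   // stand-in body
        // Precondition: the move is already pseudo-legal.
        bool legal(Move) const { return true; }          // stand-in body
    };

    bool usable_tt_move(const Position& pos, Move ttMove) {
        // Order matters: legal() may only see pseudo-legal moves, and history
        // tables may only be indexed by moves passing is_ok() (item 3).
        return ttMove.is_ok() && pos.pseudo_legal(ttMove) && pos.legal(ttMove);
    }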

History Tables and Statistics

  1. History values use running average formula - Updates follow entry += bonus - entry * abs(bonus) / D, a running average in which entries near the bound resist further growth, so values saturate at ±D (see the sketch after this list).
  2. Continuation history indexing is critical - Arrays use patterns like (2,2,1,1,1) where each position corresponds to specific plies. Aligning with movepicker patterns historically loses Elo.
  3. History tables are per-thread - Each thread maintains separate history tables, consuming significant memory but avoiding synchronization overhead. Updates don't cross thread boundaries.
  4. Different history types have different values - Main history contributes ~11 Elo, continuation histories ~63 Elo, making some optimizations more worthwhile than others.
  5. History scaling prevents overflow - When values approach limits, scale by constants like 1024 internally to maintain precision without integer overflow when divided by large numbers.
  6. Negative history indicates bad moves - Moves that repeatedly fail low accumulate negative history. This information is as valuable as positive history for move ordering.
  7. History updates need careful timing - Only update continuation history when LMR fails high, not on fail-low. Since ~97% of LMR searches fail low, unconditional updates make all histories negative.
  8. History updates should verify move legality - TT hash collisions can cause history updates for impossible piece-square combinations. Always check move.is_ok() before indexing history tables.
  9. Different history types serve distinct purposes - Main history (from-to), continuation history (piece-to pairs), and capture history each optimize different aspects and shouldn't be conflated.
  10. History heuristics behave differently by context - History values that work well in move ordering can "fail so freaking bad" when used in pruning, indicating context-specific optimization needs.
  11. Castling move encoding corrupts history - Castling encoded as "king captures rook" corrupts history tables since to_sq() returns the rook square rather than king's actual destination.
  12. History applies to all moves, not just quiets - Variables named quietMoveBonus/quietMoveMalus are actually used for all move types, representing a historical naming inconsistency.
  13. History table sizing should use power-of-2 - This enables efficient modulo operations and better cache alignment for history table indexing.
  14. Thread-local history prevents synchronization overhead - Pawn history and similar tables should be thread-local to avoid synchronization overhead and enable proper per-thread move ordering.
  15. History accumulation can reach extreme values - Total history values can reach ~184k, so thresholds must account for these large magnitudes when implementing history-based decisions.
  16. Low ply history extension is viable - Extending history tracking beyond the traditional 2-ply limit (e.g., to 4 plies) can provide search improvements without significant overhead.
  17. History initialization affects search behavior - If you change history initialization values, you must tune related search parameters accordingly as they're calibrated for specific average values.
  18. History table importance ranking - Main history significantly outweighs continuation histories. Prioritize main hist > continuation hist 0 > everything else when implementing improvements.
  19. History bucketing needs selective dimensions - Not all history dimensions are beneficial; tune from 0 rather than assuming all help. Some history mechanisms historically fail despite intuitive appeal.
  20. Statistical scoring systems dwarf evaluation - Statistics can reach ~100,000 vs eval's ~1,000 range, enabling very aggressive statistical pruning decisions.
  21. Statistical scores accumulate over time - Moves repeatedly failing low accumulate large negative scores, providing strong signals for pruning decisions.
  22. Different statistics have different scales - Main history, continuation history, and capture history use different ranges and scaling factors requiring careful normalization.
  23. Statistical decay prevents saturation - The running average formula with decay factor prevents statistics from saturating at maximum values too quickly.
  24. Statistics provide time-independent signals - Unlike evaluation which can change with new networks, statistics reflect actual search tree behavior patterns.
  25. Policy bonus asymmetry works - Using asymmetric scaling (bonus when positive, bonus/4 when negative) keeps average near zero while maintaining guidance.
  26. Second-order history approximations limited - First-order history updates are sufficient; second-order approximations don't provide significant improvements.
  27. History persistence enables learning - History tables persisting between moves allow the engine to learn from previous searches.
  28. Gradient considerations don't apply - Current history formulas aren't gradient-based, having their own convergence properties.
  29. Statistical confidence improves with samples - History table entries with more updates provide more reliable move ordering signals.
  30. Cross-position statistics work - History tables successfully generalize move patterns across different but related positions.
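
The update rule from item 1 and the saturation behavior from items 5 and 23 can be demonstrated directly. The D value below is illustrative, and the wrapper is a simplified take on a StatsEntry-style template; the asserts encode the invariant that the entry can never leave [-D, D].

    #include <cassert>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    template<int D>
    struct StatsEntry {
        int16_t entry = 0;
        void update(int bonus) {
            assert(std::abs(bonus) <= D);            // keep the step within range
            entry += bonus - entry * std::abs(bonus) / D;
            assert(std::abs(entry) <= D);            // |entry| can never exceed D
        }
        operator int() const { return entry; }
    };

    int main() {
        StatsEntry<7183> h;                          // D is illustrative
        for (int i = 0; i < 50; ++i)
            h.update(1500);                          // repeated bonuses...
        std::printf("%d\n", int(h));                 // ...converge on D, no overflow
        for (int i = 0; i < 50; ++i)
            h.update(-1500);                         // maluses pull it back down
        std::printf("%d\n", int(h));
    }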

Quiescence Search (Qsearch)

  1. Qsearch depth is essentially boolean - The actual depth value in qsearch doesn't matter much; it's either 0 (inherited from search) or non-zero. Depth loses meaning in qsearch context.
  2. Qsearch mate scores can be incorrect - Qsearch only searches captures and checks, so if all generated moves lead to mate, this doesn't guarantee actual mate since quiet moves weren't tried.
  3. Stand pat is the default in qsearch - When staticEval >= beta, qsearch immediately returns. This happens frequently, making qsearch typically brief despite tactical complexity.
  4. Qsearch doesn't probe tablebases - This can lead to static evaluation maximum values being returned as PV scores, potentially causing move ordering issues.
  5. No depth limit exists in qsearch - Search continues until captures are exhausted and no TT moves exist. There's no hard depth limit despite extreme depths being possible.
  6. Futility pruning in qsearch uses static eval - Unlike main search which might use TT eval, qsearch futility should use static eval for consistency and logical clarity.
  7. Stand pat logic differs for PV vs non-PV nodes - Using bestValue = (alpha + beta) / 2 for PV nodes when bestValue >= beta while non-PV nodes return immediately is an effective heuristic.
  8. Qsearch pruning mechanisms are well-tuned - The "amount of pruning in qs is simply perfect," so attempts to increase pruning are likely counterproductive. Quiet checks in QS provide minimal benefit (~0.7 Elo).
  9. TT moves bypass qsearch restrictions - Even at depths where only recaptures are generated, TT moves of any type are searched. This exception is crucial for search effectiveness.
  10. Continuation history pruning only applies to non-captures - History-based pruning in quiescence search specifically excludes captures and uses thresholds around 5228 for quiet moves.
  11. Qsearch lacks excluded move handling - If implementing singular extensions at low depths that drop to qsearch, you must write machinery for excluded moves or use std::max(newDepth / 2, 1).
  12. PV consistency in qsearch matters - When calling qsearch from search, ensure PV node types are handled correctly to avoid update bugs in the principal variation.
  13. Qsearch can reuse static eval differently - Qsearch can reuse previous static eval after null moves, but main search cannot due to eval asymmetry between white and black perspectives.
  14. Capture futility pruning mirrors quiet pruning - The same principles that work for quiet move futility pruning (including history blending in margins) also apply to capture futility pruning.
  15. Qsearch PV initialization differs from main search - Qsearch initializes (ss+1)->pv at beginning while main search does it after LMR. Mixing these patterns can cause crashes.
  16. TT move handling in qsearch requires care - Most engines either skip TT moves in qsearch entirely or guard them with SEE checks. Trying TT moves without proper guards can lose significant Elo.
  17. SEE ordering in qsearch is crucial - Since qsearch processes many captures, efficient SEE-based ordering significantly impacts performance.
  18. Qsearch delta pruning filters bad captures - Captures that can't raise alpha even with optimistic assumptions (a futility base plus the captured piece's value) are skipped entirely. The margin is simplistic but effective (see the sketch after this list).
  19. Recapture-only mode prevents explosion - At very negative depths, limiting to recaptures on the last moved-to square contains search explosion.
  20. Check generation at depth 0 is selective - Not all checks are generated, only "safe" checks that are unlikely to lose material.
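
Stand pat (item 3) and capture futility (item 18) fit in a short fail-soft loop. The margin constant and all helpers below are stand-ins, and make/unmake is elided, so this shows only the control flow.

    #include <algorithm>
    #include <vector>

    int  evaluate() { return 0; }                          // stand-in static eval
    int  captured_value(int) { return 100; }               // stand-in piece value
    std::vector<int> generate_captures() { return {}; }    // stand-in generator
    int  qsearch(int alpha, int beta);                     // forward declaration

    int qsearch_node(int alpha, int beta) {
        int bestValue = evaluate();
        if (bestValue >= beta)
            return bestValue;                // item 3: stand pat, fail-soft
        alpha = std::max(alpha, bestValue);

        int futilityBase = bestValue + 280;  // margin constant is illustrative
        for (int move : generate_captures()) {
            // Item 18: skip captures that cannot raise alpha even when
            // credited with the full value of the captured piece.
            int futilityValue = futilityBase + captured_value(move);
            if (futilityValue <= alpha) {
                bestValue = std::max(bestValue, futilityValue);
                continue;
            }
            int value = -qsearch(-beta, -alpha);   // make/unmake elided
            bestValue = std::max(bestValue, value);
            alpha = std::max(alpha, value);
            if (value >= beta)
                break;                       // beta cutoff
        }
        return bestValue;
    }

    int qsearch(int alpha, int beta) { return qsearch_node(alpha, beta); }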

Probcut Implementation

  1. ProbCut performs redundant qsearch - The algorithm does qsearch twice on the same position, which seems wasteful, but removing this redundancy consistently fails (see the sketch after this list).
  2. ProbCut beta calculation needs bounds - Ensure probCutBeta = beta + 220 - howImproving / 8 never becomes less than beta, as this causes fail-highs returning values ≤ alpha.
  3. ProbCut in check targets blunders - Small probcut works when in check with TT captures: if TT move captures far above beta, opponent likely blundered by giving check.
  4. History mechanisms fail in probcut - History-based improvements that work in regular search often fail in probcut for unknown reasons, requiring separate testing.
  5. ProbCut depth conditions are unstable - Conditions have alternated between < 3 and < 4 across tunings, with neither direction consistently gaining when changed.
  6. Probcut complexity is poorly understood - Probcut performs double qsearch for unclear reasons and has complex interactions with move ordering. Modifications require extreme caution.
  7. Probcut stores depth-3 in TT - This accounts for the move being made before the search, so effective depth is depth-3 even though search depth is depth-4.
  8. Probcut and promotions interaction - Probcut may skip promotion moves when pcBeta - eval > 0, which could be a significant oversight in current implementation.
  9. Probcut beta handling varies - Probcut with beta cutoff is preferred because you only need one move to cause a beta cutoff, creating intentional asymmetry.
  10. Probcut requires specific TT conditions - Checks include !(ttData.depth >= depth - 3 && ttData.value != VALUE_NONE && ttData.value < probCutBeta) for proper operation.
  11. IIR must precede probcut - Internal Iterative Reduction must come before probcut in search order, and their interactions are extremely rare but critical.
  12. Probcut TT optimization possible - In singular search contexts, moves that previously failed probcut will likely get immediate TT cutoffs, allowing you to skip expensive operations.
  13. Probcut excludedMove check necessary - The check cannot be removed despite low hit rates because it can trigger in edge cases with very deep searches or future modifications.
  14. Guard probcut against excluded moves - During singular search, prevent incorrect pruning of the move being tested for singularity.
  15. Probcut history context differs - Probcut doesn't need all history tables - it doesn't use pawn history, so passing unnecessary history pointers creates complexity.
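
A sketch of the probcut loop from items 1, 2, 7, and 14, using the beta formula quoted in item 2. The sentinel, the SEE-filtered generator, and the stub searches are illustrative stand-ins, not Stockfish's actual code.

    #include <algorithm>
    #include <vector>

    constexpr int NO_CUTOFF = -32001;                          // illustrative sentinel

    int qsearch(int alpha, int beta);                          // stand-ins
    int search(int alpha, int beta, int depth, bool cutNode);
    std::vector<int> generate_captures_above(int seeThreshold);

    int probcut(int beta, int depth, int staticEval, int howImproving, int excludedMove) {
        // Item 2: keep probCutBeta strictly above beta.
        int probCutBeta = std::max(beta + 220 - howImproving / 8, beta + 1);
        // Only captures whose SEE clears the gap to probCutBeta are candidates.
        for (int move : generate_captures_above(probCutBeta - staticEval)) {
            if (move == excludedMove)          // item 14: guard singular search
                continue;
            // Item 1: a qsearch probe first (the "redundant" qsearch)...
            int value = -qsearch(-probCutBeta, -probCutBeta + 1);
            // ...then a reduced search at depth - 4 if the probe holds.
            if (value >= probCutBeta)
                value = -search(-probCutBeta, -probCutBeta + 1, depth - 4, false);
            if (value >= probCutBeta)
                return value;                  // cutoff; TT stores depth - 3 (item 7)
        }
        return NO_CUTOFF;
    }

    int qsearch(int alpha, int) { return alpha; }              // stubs so it compiles
    int search(int alpha, int, int, bool) { return alpha; }
    std::vector<int> generate_captures_above(int) { return {}; }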

Value and Score Management

  1. Alpha can only increase in search - The invariant bestValue > alpha immediately triggers alpha = bestValue. Conditions checking bestValue < alpha are meaningful.
  2. Maintain alpha < beta invariant - Search code must preserve this through assertions. Violations indicate fundamental algorithmic errors causing search failures.
  3. Aspiration windows interact with mate scores - When implementing aspiration windows, ensure proper bounds are maintained throughout search, especially with mate scores.
  4. Zero-window searches in non-PV nodes - Since alpha + 1 == beta in non-PV nodes, many complex window calculations can be simplified to zero-window searches.
  5. Fail-high resolution affects time controls differently - Resolving fail-highs improves precision (helps LTC) but costs depth (hurts STC), explaining differential scaling.
  6. Return value tweaks can be highly effective - Modifying return values in different search contexts can yield significant Elo gains, but requires careful implementation to avoid breaking mate detection.
  7. Always guard mate scores - When implementing return value modifications, check that both the original and modified values are outside mate/TB win ranges to prevent false mate reports (see the first sketch after this list).
  8. Use int for overflow-prone calculations - The Value type (int16_t) can overflow during arithmetic like beta - alpha + (1<<28), so do the computation in int and cast back to Value (see the second sketch after this list).
  9. Mate scoring requires depth consistency - TT mate scoring requires consistency between stored depth and search depth to avoid bugs in mate distance calculations.
  10. Static eval clamping is crucial - Static evaluations are clamped between TB loss/win boundaries, and this relationship is crucial for proper score interpretation.
  11. Alpha-beta window edge cases - When using (alpha + beta) / 2, ensure the midpoint stays strictly inside the window: when alpha and beta differ by 1, the midpoint collides with a bound and can trip assertions.
  12. Draw value randomization helps - Adjusting alpha when detecting draws helps ensure proper move ordering and prevents re-search after fail-lows in drawn positions.
  13. Value initialization pitfalls exist - Variables like value can contain garbage from previous moves - always initialize or check validity before using in conditions.
  14. Fail-soft vs fail-hard matters - Search behavior depends on failure modes - modifications that work in one engine may not transfer due to different failure handling.
  15. TT cutoff return values can be modified - Similar to other contexts, TT cutoffs can benefit from return value modifications like (ttValue * 3 + beta) / 4.
  16. VALUE_NONE is a flag, not a value - Never perform arithmetic on VALUE_NONE. It indicates absence of evaluation, not a score that can be manipulated.
  17. Guard against tablebase score corruption - When assigning evaluations, clamp values using std::clamp(v, VALUE_TB_LOSS_IN_MAX_PLY + 1, VALUE_TB_WIN_IN_MAX_PLY - 1).
  18. VALUE_KNOWN_WIN guards partially obsolete - With NNUE, output never reaches VALUE_KNOWN_WIN range, making some guards obsolete while others remain critical for TB handling.
  19. Use ss->ply for mate calculations - Always use ss->ply (distance from root) rather than game ply when calculating mate scores or tablebase win distances.
  20. Clamp qsearch mate scores - Consider clamping qsearch return values to avoid mate scores being returned when they may not be accurate.
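
A hedged sketch for items 6-7, using the (ttValue * 3 + beta) / 4 shape from item 15 as the example tweak. The boundary constant mirrors Stockfish's naming but its numeric value here is illustrative.

```cpp
#include <cstdlib>

constexpr int VALUE_TB_WIN_IN_MAX_PLY = 31754;  // illustrative boundary

// Apply a return-value tweak only when both the original and tweaked values
// are outside the mate/TB-win range, so no false mate score can leak out.
int guardedReturn(int value, int beta) {
    int tweaked = (value * 3 + beta) / 4;
    bool safe = std::abs(value)   < VALUE_TB_WIN_IN_MAX_PLY
             && std::abs(tweaked) < VALUE_TB_WIN_IN_MAX_PLY;
    return safe ? tweaked : value;  // near mate, keep the untweaked score
}
```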
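And a sketch combining items 8 and 17: widen to int before arithmetic, clamp inside the TB boundaries from item 17, then narrow back. The boundary values are again illustrative assumptions.

```cpp
#include <algorithm>
#include <cstdint>

using Value = int16_t;  // the storage width described in item 8

constexpr int VALUE_TB_LOSS_IN_MAX_PLY = -31754;  // illustrative boundaries
constexpr int VALUE_TB_WIN_IN_MAX_PLY  =  31754;

// Do the math in int so int16_t cannot overflow, clamp, then narrow.
Value adjustedEval(Value rawEval, int bonus) {
    int v = int(rawEval) + bonus;                    // widen before arithmetic
    v = std::clamp(v, VALUE_TB_LOSS_IN_MAX_PLY + 1,
                      VALUE_TB_WIN_IN_MAX_PLY - 1);  // item 17's clamp
    return Value(v);                                 // narrow only after clamping
}
```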

Search Termination and Time Management

  1. Forced moves still require search - Even obvious only-moves go through minimal search to populate PV and verify legality.
  2. Search can't terminate mid-node - Once a node starts searching, it completes even if time expires, preventing inconsistent state.
  3. Node limits are precise termination - Unlike time-based termination, node limits stop the search exactly when reached, enabling reproducible testing.
  4. Aborted searches produce unreliable mate scores - When search stops prematurely, mate-in scores from incomplete iterations cannot be trusted and should fall back to previous iterations.
  5. Distinguish completed vs aborted searches - Add flags like Threads.abortedSearch to track early termination and avoid reporting unproven mate scores.
  6. Ensure at least depth 1 completion - For very low node counts, guarantee the main thread finishes at least depth 1 to avoid returning bestmove (none).
  7. Thread state synchronization critical - Changes like threads.stop = threads.abortedSearch = true must set both flags together; forgetting either one makes the patch ineffective (see the sketch after this list).
  8. Search termination affects PV consistency - When modifying how search terminates, ensure the principal variation remains consistent with the reported score.
  9. VALUE_ZERO return is dummy for aborted search - The actual logic depends on checking the Threads.stop flag rather than the value returned from an aborted search.
  10. Depth completion requirements for mate scores - Ensure completedDepth >= 1 before relying on thread results, as incomplete iterations cannot provide reliable mate information.
  11. Search abortion handling in TT - The TT can store mate scores from aborted searches without proper guards, requiring careful handling to prevent incorrect retrieval.
  12. Multi-threading termination coordination - Critical thread state changes must be properly synchronized across all threads to ensure consistent termination behavior.
  13. Mate detection bounds are crucial - Only trust exact bounds for mate reporting to avoid false mate announcements from lowerbound/upperbound values.
  14. Time checks happen at specific points - Checking time too frequently hurts performance; checks occur at predetermined node count intervals.
  15. Search instability affects time allocation - Unstable search with frequent best move changes triggers different time management behavior.
  16. Complexity influences time usage - Complex positions receive more time based on search tree characteristics and evaluation complexity.
  17. Best move stability is tracked - The bestMoveChanges counter directly influences whether search continues or terminates.
  18. Time checks avoid critical sections - Time polling happens at safe points to prevent mid-operation termination.
  19. Iterative deepening provides time flexibility - Each iteration can be abandoned if time runs low, providing natural time control adaptation.
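
A minimal sketch for items 5, 7 and 10. The flag names mirror those mentioned above, but the surrounding structure is illustrative, not Stockfish's actual ThreadPool.

```cpp
#include <atomic>

struct Threads {
    std::atomic<bool> stop{false};
    std::atomic<bool> abortedSearch{false};
};

// Both flags must be set together (item 7): stop alone would end the search
// but leave mate scores from the incomplete iteration looking trustworthy.
void abortSearch(Threads& threads) {
    threads.abortedSearch = true;
    threads.stop          = true;
}

// Item 10: only trust a thread's result once depth 1 has completed and the
// final iteration was not aborted; otherwise fall back to the previous one.
bool resultTrustworthy(const Threads& threads, int completedDepth) {
    return completedDepth >= 1 && !threads.abortedSearch;
}
```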

Move Loop Implementation

  1. Prefetch after pruning decisions - Only prefetch data for moves that will actually be searched; prefetching before pruning wastes memory bandwidth (see the sketch after this list).
  2. Value variable persists across moves - The value variable inherits from previous moves via value = bestValue and is never reset, affecting conditions using it.
  3. Move counting affects multiple decisions - Track moveCount carefully as it influences LMR, pruning, and extension decisions throughout the move loop.
  4. First move gets special treatment - Many heuristics check moveCount == 1 to handle the first move differently, typically with less aggressive reductions.
  5. Excluded moves affect move counting - Excluded searches cause ss->moveCount to behave differently than local moveCount, creating subtle LMR interactions.
  6. Best move existence changes code paths - Branches like else if (bestMove) mean the code after them is only reachable when no best move exists, affecting logic flow.
  7. Move loop continues after cutoff - Even after finding a move that causes cutoff, the loop may continue for statistics updates or other bookkeeping.
  8. Late move pruning increases with move count - The later a move appears in ordering, the more aggressively it can be pruned or reduced.
  9. Move validation timing matters - Validate moves before expensive operations like prefetch or history lookups to avoid wasted computation.
  10. Special move handling in loop - Castling, promotions, and en passant require special handling within the move loop for correct behavior.
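
A skeleton of the ordering from items 1 and 4. Every name here is an illustrative stand-in, not Stockfish's move-loop code, and __builtin_prefetch is the GCC/Clang intrinsic.

```cpp
#include <vector>

using Move = int;

bool        pruneShallow(Move, int moveCount) { return moveCount > 8; }      // stub gate
const void* ttEntryFor(Move)                  { static int slot; return &slot; }
int         searchMove(Move, bool firstMove)  { return firstMove ? 1 : 0; }  // stub

int moveLoop(const std::vector<Move>& moves) {
    int moveCount = 0;
    int bestValue = -32001;
    for (Move m : moves) {
        ++moveCount;
        if (pruneShallow(m, moveCount))
            continue;                       // pruned: no prefetch, no make-move
        __builtin_prefetch(ttEntryFor(m));  // prefetch only for searched moves
        bool first = (moveCount == 1);      // item 4: first move, gentler treatment
        int value  = searchMove(m, first);
        if (value > bestValue)
            bestValue = value;              // fail-soft: track best value seen
    }
    return bestValue;
}
```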

Implementation Best Practices

  1. Avoid floating-point in hot paths - Use integer arithmetic and scale by powers of 2 for performance. Convert expressions like 1.5 * improving to 3 * improving / 2 (see the first sketch after this list).
  2. Prefer integer arithmetic for scaling - Use pre-computed tables and integer operations. For logarithmic terms, msb(depth) serves as an integer log2 and is essentially equivalent to log() for this purpose.
  3. Use std::clamp carefully - Using std::clamp with extensions can cause undefined behavior when upper bound becomes less than lower bound, leading to compiler-dependent results.
  4. Operator overloading masks issues - The Value enum has overloaded operators that promote to int during arithmetic but can hide overflow problems when truncated back.
  5. Condition ordering affects performance - Place most restrictive conditions first in reduction logic to avoid unnecessary computation. Order-dependent conditions matter for correctness.
  6. Parentheses placement is critical - In complex pruning conditions, parentheses placement can accidentally change logic while still compiling correctly.
  7. Template vs runtime parameters - Consider templating frequently-checked boolean parameters like cutNode for compile-time optimization, balanced against instruction cache pressure.
  8. Variable declaration placement matters - While declaring variables at function start aids readability, consider declaring them closer to usage for better code organization.
  9. Assert placement requires care - Be careful with assertions that assume specific conditions as they may not hold in all search contexts, especially at high depths.
  10. Template parameters enable optimization - PvNode as a template parameter creates two function versions, allowing aggressive compiler optimization of conditional branches (see the second sketch after this list).
  11. Integer division by powers of 2 is optimal - Use /512 instead of /500 as compilers optimize powers of 2 to bit shifts, providing measurable performance gains.
  12. Floating point is isolated to initialization - The hot search path is entirely integer-based. Float operations only appear in reduction table setup and time management.
  13. Group conditions by logic, not operation - When writing LMR conditions, group by logical relationship (e.g., keep node type conditions together) rather than by increase/decrease effect.
  14. Organize search steps by complexity - Structure functions from simplest to most complex operations, similar to main search's numbered steps, improving readability.
  15. Check for reachable conditions - Bugs like PROBCUT_TT + 2 ≠ GOOD_CAPTURES show how conditions can never be true due to incorrect constants.
  16. Integer division precedence matters critically - (3 + 2*x) / 2 is NOT equivalent to 3/2 + x due to integer truncation. This breaks reduction formulas.
  17. Depth comparisons need correct operators - Mixing > and < conventions in depth conditions invites off-by-one errors; pick one direction and apply it consistently.
  18. Branch prediction matters more than arithmetic - Complex conditional branches causing CPU misprediction are often worse than simple integer division. Optimize for predictable branches.
  19. Hot path operations need careful optimization - Avoid expensive operations like sqrt() or complex bitboard calculations in frequently called code paths.
  20. Cache locality affects search performance - Search modifications increasing stack usage can degrade performance due to cache misses despite logical correctness.
  21. Move sorting performance is critical - MovePicker is performance-sensitive despite being "almost unreadable". Small changes can significantly impact overall speed.
  22. Compiler optimizations affect search code - Modern compilers optimize boolean arithmetic to conditional moves. Manual "branchless" code is often unnecessary.
  23. Inline functions for hot paths - Frequently called small functions should be inlined to reduce function call overhead.
  24. Precomputed tables beat runtime calculation - Mathematical operations should be precomputed and stored in lookup tables rather than calculated during search.
  25. Quantization preferred over floating point - Converting floating-point parameters to integers typically performs better in frequently called functions.
  26. Code organization follows strict patterns - The numbered search steps use complementary conditions requiring careful analysis when modifying.
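
A sketch of items 1, 2 and 11 combined. msb() is a minimal stand-in for Stockfish's bit-scan helper, and the reduction formula itself is invented purely for illustration.

```cpp
#include <cstdint>

// Index of the highest set bit; requires b != 0 (here depth >= 1).
inline int msb(uint64_t b) { return 63 - __builtin_clzll(b); }

int reductionSketch(int depth, int improving, int history) {
    int r = 3 * improving / 2;   // item 1: 1.5 * improving without floats
    r += msb(uint64_t(depth));   // item 2: msb(depth) as a cheap log2(depth)
    r -= history / 512;          // item 11: power-of-2 divisor, shift-friendly
    return r;
}
```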
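And a sketch of items 7 and 10: a boolean template parameter yields two compiled copies of search(), so PV-only branches vanish entirely from the non-PV version. Position and Stack are empty placeholders, and the body is a stub.

```cpp
struct Position {};
struct Stack {};

template<bool PvNode>
int search(Position& pos, Stack* ss, int alpha, int beta, int depth) {
    if constexpr (!PvNode) {
        // Non-PV instantiation: alpha + 1 == beta holds by definition, and
        // the compiler drops every branch guarded by PvNode.
    }
    (void)pos; (void)ss; (void)depth;
    return alpha < beta ? alpha : beta;  // stub body
}

// Two distinct instantiations the optimizer specializes independently:
// search<true >(pos, ss, alpha, beta,      depth);  // PV node
// search<false>(pos, ss, alpha, alpha + 1, depth);  // zero-window node
```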

Edge Cases and Algorithm Limits

  1. Horizon effect is real - Best static eval after search doesn't always match best search move. Don't assume evaluation ordering matches search ordering.
  2. Search tree shape affects strength - Wider shallow trees vs narrower deeper trees have different strengths. Balance through pruning and extension parameters.
  3. Branching factor changes reveal improvements - Different algorithms fundamentally alter tree shape. Lower branching at low depths but higher at deep levels indicates successful selective search.
  4. Integer division truncation matters - Division toward zero causes different results for positive vs negative values in reduction formulas.
  5. Undefined behavior with bounds - Clamping operations where the upper bound can fall below the lower bound invoke undefined behavior (see the sketch after this list).
  6. Depth can exceed expected ranges - When depth grows past roughly 200, depth-dependent terms can overflow and flip sign, so formulas must be written to avoid arithmetic overflow.
  7. History value overflow prevention - History values reaching ~184k require careful clamping to prevent overflow in calculations.
  8. Move validity in edge cases - Functions like givesCheck() require valid moves and will assert on null/invalid moves.
  9. Alpha-beta equality edge cases - When alpha and beta differ by 1, special handling needed to avoid division issues or assertions.
  10. Negative depth handling varies - Different search components handle negative depths differently, some dropping to qsearch, others asserting.
  11. MAX_PLY boundary conditions - Search near MAX_PLY requires special handling to avoid accessing beyond allocated stack space.
  12. Value type boundaries critical - int16_t Value type can overflow requiring int for intermediate calculations before casting back.
  13. Depth 245 is the practical maximum - Search reaches this limit in forced sequences, so depth-dependent parameters must stay well-behaved at this bound.
  14. Exponential growth is fundamental - No pruning technique completely eliminates exponential complexity of game tree search.
  15. Perfect ordering remains impossible - Even optimal move ordering can't prevent examining multiple moves per position.
  16. Tactical horizon always exists - Some tactical sequences extend beyond any practical search depth.
  17. Search can't fix all eval errors - Fundamental evaluation mistakes may persist regardless of search depth.
  18. Move generation buffers need headroom - The 256-move buffer handles pathological positions beyond normal chess requirements.
  19. Edge cases matter less than expected - Due to self-correcting nature, handling every edge case perfectly is less important than overall behavior.
  20. Position key updates affect detection - Any changes to how position keys are calculated can affect repetition detection accuracy.
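
A sketch covering items 4-5. safeClamp() avoids the undefined behavior of std::clamp(v, lo, hi) when extensions push lo above hi; the bound-ordering policy here is an assumption, not Stockfish's actual fix.

```cpp
#include <algorithm>
#include <utility>

int safeClamp(int v, int lo, int hi) {
    if (lo > hi)
        std::swap(lo, hi);       // restore lo <= hi before clamping (avoids UB)
    return std::clamp(v, lo, hi);
}

// Item 4: integer division truncates toward zero, so a reduction formula
// behaves asymmetrically around zero:  3 / 2 == 1  but  -3 / 2 == -1,
// whereas an arithmetic shift floors instead:  -3 >> 1 == -2.
```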

Special Position Handling

  1. Stalemate detection when no legal moves - When no legal moves exist, search checks whether the side to move is in check to distinguish mate (return a mate score) from stalemate (return 0); see the first sketch after this list.
  2. Repetition handling was simplified - Two-fold repetitions are no longer scored as draws, which surprisingly gained Elo when removed from the codebase.
  3. Rule50 affects search behavior - High rule50 values cause unusual search patterns and extreme depths. Consider this when implementing termination logic.
  4. Illegal positions need graceful handling - While optimizing for legal positions, ensure illegal positions with excess pieces don't cause buffer overflows.
  5. Check positions require special logic - Positions in check have different evaluation, pruning, and extension behaviors consistently applied across all heuristics.
  6. Cycle detection affects search correctness - The has_game_cycle() function detects upcoming repetitions and "no progress" situations requiring careful implementation.
  7. Game cycle detection must check piece ownership - Incorrectly detecting cycles with opponent pieces can block legitimate search paths creating false positives.
  8. Never artificially boost alpha in cycles - Raising alpha without ensuring returned value meets it leads to truncated PVs and incorrect search results.
  9. Repetition detection can reduce shuffling - Adding reduction when same move repeated can improve search efficiency by discouraging repetitive play.
  10. Fast repetition detection trades accuracy - Can be ~94% accurate compared to full verification, providing good performance-correctness tradeoff.
  11. 50-move rule interacts with pruning - Creates issues with TT cutoffs and history-based decisions requiring explicit handling via rule50 count checks.
  12. Cycle detection in null move context - Special care needed when detecting cycles after null moves due to side-to-move considerations.
  13. Three-fold repetition vs two-fold - Search typically uses two-fold repetition detection for efficiency though rules specify three-fold.
  14. Repetition tables need careful bounds - Accessing repetition history requires proper bounds checking to avoid accessing invalid memory.
  15. Graph History Interaction requires guards - The 50-move rule creates issues with history-based pruning since positions can repeat with different history contexts.
  16. Material imbalance changes search character - Positions with unusual material distribution may benefit from different search parameters.
  17. King safety influences search decisions - Dangerous king positions might disable certain pruning to ensure tactical accuracy.
  18. Endgame positions allow different techniques - With fewer pieces, certain pruning methods become safer while others lose effectiveness.
  19. Pawn structure affects history validity - Pawn moves change the structure, so the previous position's state must be stored to avoid assigning bonuses to stale references.
  20. Complex positions need different parameters - Tactical complexity affects pruning safety - in highly tactical positions, aggressive pruning is more dangerous.
  21. Mate distance pruning prevents longer mate searches - Once a shorter mate is found, the window is tightened so search only pursues still-shorter mates rather than exploring longer sequences (see the second sketch after this list).
  22. Zugzwang positions require special care - Null move pruning fails in zugzwang where any move worsens position. Verification search specifically targets these cases.
  23. Pawn endgames benefit from modifications - Special handling for pawn-only endgames exists because standard techniques may need different parameters.
  24. Fortress positions challenge search - Positions where one side can hold despite material deficit require special handling to avoid incorrect evaluations.
  25. Perpetual check detection limited - Qsearch check extensions find perpetual checks reliably, but only when both sides are giving checks.
  26. Simple eval serves as safety mechanism - The shuffling coefficient in simple eval prevents getting stuck in positions where material advantage can't be converted.
  27. Phase-based corrections historically fail - Despite intuitive appeal, adjusting corrections based on game phase rarely succeeds in practice.
  28. Position complexity affects search stability - Complex positions with many candidate moves show more search instability between iterations.
  29. Endgame patterns need special handling - Pawn endgames and other special endgames benefit from modified search parameters.
  30. Position evaluation caching reduces computation - Storing eval in search stack avoids recalculation when positions repeat through transpositions, though eval perspective must be tracked.
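
A minimal sketch of item 1. mated_in() and VALUE_DRAW mirror Stockfish's conventions, though the constant values here are illustrative.

```cpp
constexpr int VALUE_MATE = 32000;
constexpr int VALUE_DRAW = 0;
inline int mated_in(int ply) { return -VALUE_MATE + ply; }

int scoreWhenNoLegalMoves(bool inCheck, int ply) {
    return inCheck ? mated_in(ply)  // checkmate: mate score adjusted by ply from root
                   : VALUE_DRAW;    // stalemate: draw
}
```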
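And a sketch of item 21 following the standard mate-distance-pruning formulation: tighten the window with the best possible mate scores at this ply so lines longer than a mate already found are cut off. Constants are again illustrative.

```cpp
#include <algorithm>

constexpr int VALUE_MATE = 32000;
inline int mated_in(int ply) { return -VALUE_MATE + ply; }
inline int mate_in(int ply)  { return  VALUE_MATE - ply; }

// Returns true when the node can return immediately with the tightened alpha.
bool mateDistancePrune(int& alpha, int& beta, int ply) {
    alpha = std::max(mated_in(ply), alpha);    // at best, we are mated at this ply
    beta  = std::min(mate_in(ply + 1), beta);  // at best, we mate at the next ply
    return alpha >= beta;
}
```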