Analysis of ProbCut Implementation
Based on example 15 in the historical patches, I'll focus on improving the ProbCut technique in the current Stockfish search code.
Current Implementation Analysis
The current ProbCut implementation works as follows (see the sketch after this list):
- Calculates a threshold value `probCutBeta = beta + 215 - 60 * improving`
- For each capture/promotion move:
  - Makes the move
  - Performs a preliminary qsearch with a null window around probCutBeta
  - If the qsearch passes (value >= probCutBeta), conducts a deeper verification search with a fixed depth reduction of 5 ply (`depth - 5`)
  - If the verification search also passes, returns the value, cutting this node off early
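A simplified sketch of that flow, modeled on the ProbCut block in recent Stockfish search.cpp (details trimmed; variable names follow Stockfish conventions, so treat this as illustrative rather than verbatim source):

```cpp
// ProbCut: if we have a good enough capture and a reduced search confirms
// value >= probCutBeta, we can (almost) safely cut off this node.
probCutBeta = beta + 215 - 60 * improving;

if (!PvNode && depth > 4 && abs(beta) < VALUE_TB_WIN_IN_MAX_PLY)
{
    MovePicker mp(pos, ttMove, probCutBeta - ss->staticEval, &captureHistory);

    while ((move = mp.next_move()) != MOVE_NONE)
        if (move != excludedMove && pos.legal(move))
        {
            pos.do_move(move, st);

            // Preliminary qsearch with a null window around probCutBeta
            value = -qsearch<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1);

            // If the qsearch held, verify with a regular search at a
            // fixed reduction of 5 ply
            if (value >= probCutBeta)
                value = -search<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1,
                                       depth - 5, !cutNode);

            pos.undo_move(move);

            if (value >= probCutBeta)
                return value;  // fail-high: prune the rest of this node
        }
}
```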
The key aspect I want to improve is the fixed 5-ply depth reduction in the ProbCut verification search. A fixed reduction doesn't account for how decisive the position is: when the static evaluation is significantly better than beta, we can afford a more aggressive reduction, since we're more confident the ProbCut result will hold.
Critical Analysis for Improvement
The fixed depth reduction of 5 ply is suboptimal because:
- At deeper search depths (LTC), the evaluation is more accurate, so we can safely increase reduction when the position is clearly winning
- When the static evaluation is much better than beta, the ProbCut is more likely to succeed, so we can reduce more aggressively
- Conversely, when the position is close to beta, we should be more conservative with reductions to avoid false positives
I noticed that historical patches (like examples 4 and 5) often adjust search depth dynamically based on position characteristics. However, none of them adjust ProbCut depth based on the static evaluation margin.
The optimal approach should:
- Scale with depth for LTC performance (making it a true "scaler")
- Be more aggressive when the position is clearly winning (larger margin above beta)
- Remain conservative when the position is close to beta
- Avoid introducing search explosions or tactical blind spots
Proposed Patch
I'll implement a dynamic depth reduction that scales with the evaluation margin above beta. In pawn units, with the exact constants left to tuning, the formula is:
`probCutDepth = depth - 5 - (ss->staticEval - beta) / (3 * PawnValue)`
This means (see the numeric check after this list):
- For every 3 pawn advantage in static evaluation, we reduce 1 ply more
- When the position is 6 pawns better than beta, we reduce by 7 ply instead of 5
- When the position is worse than beta, we reduce less (or even increase depth)
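To sanity-check the arithmetic in those bullets, here is a small standalone snippet (PawnValue = 100 is an illustrative centipawn scale and depth = 12 an arbitrary example; neither is taken from the engine):

```cpp
#include <algorithm>
#include <cstdio>
#include <initializer_list>

int main() {
    const int PawnValue = 100;  // illustrative centipawn scale, not the engine's internal value
    const int depth     = 12;   // arbitrary example depth

    for (int marginPawns : {-3, 0, 3, 6}) {
        int margin       = marginPawns * PawnValue;  // staticEval - beta
        int probCutDepth = std::max(depth - 5 - margin / (3 * PawnValue), 1);
        std::printf("margin %+d pawns -> verification depth %2d (reduction %d)\n",
                    marginPawns, probCutDepth, depth - probCutDepth);
    }
    // Output: -3 pawns -> reduction 4; 0 -> 5; +3 -> 6; +6 -> 7,
    // matching the bullets above.
    return 0;
}
```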
This approach is a true scaler because:
- It becomes more effective at deeper depths where evaluation is more accurate
- It maintains safety by reducing less when the position is close to beta
- It saves nodes in clearly winning positions, allowing more resources for critical lines
search.cpp:
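The patch only touches the verification search inside the ProbCut block sketched earlier. The relevant lines today (illustrative, not verbatim source):

```cpp
// Current: fixed 5-ply reduction for the ProbCut verification search
if (value >= probCutBeta)
    value = -search<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1,
                           depth - 5, !cutNode);
```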
Improved search.cpp:
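A minimal sketch of the proposed change, assuming `probCutDepth` as a new local and `PawnValue` as one pawn in the engine's internal units; constants are placeholders for tuning:

```cpp
// Improved: scale the reduction with the static-eval margin above beta.
// One extra ply of reduction per 3 pawns of margin; less reduction (or
// even extra depth) when the position is close to or below beta. The
// lower clamp guards against non-positive depth; an upper clamp near
// depth could also be added to rule out search explosions.
Depth probCutDepth = std::max(depth - 5 - (ss->staticEval - beta) / (3 * PawnValue), 1);

if (value >= probCutBeta)
    value = -search<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1,
                           probCutDepth, !cutNode);
```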
Branch name: dynamic_prob_cut_depth_scaling