Analysis of ProbCut Implementation

Based on example 15 in the historical patches, I'll focus on improving the ProbCut technique in the current Stockfish search code.

Current Implementation Analysis

The current ProbCut implementation works as follows (a minimal code sketch follows the list):

  1. Calculates a threshold value probCutBeta = beta + 215 - 60 * improving
  2. For each capture/promotion move:
    • Makes the move
    • Performs a preliminary qsearch with a null window around probCutBeta
    • If the qsearch passes (value >= probCutBeta), conducts a regular search with a fixed depth reduction of 5 ply (depth - 5)
    • If the deeper search also passes, stores the result in the transposition table and cuts off the node early
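
For reference, here is a minimal sketch of the two quantities from the list above, using only the constants quoted there (the exact expressions in current master may differ slightly):

// ProbCut threshold: a verification search must return a value at or above this raised bound
Value probCutBeta  = beta + 215 - 60 * improving;

// Fixed reduction: the verification search always runs 5 plies shallower than the current depth
Depth probCutDepth = std::max(depth - 5, 0);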

The key aspect I want to improve is the fixed depth reduction of 5 ply for the ProbCut verification search. This fixed reduction doesn't account for how decisive the position is: when the static evaluation is significantly better than beta, we can afford a more aggressive depth reduction, since we are more confident in the ProbCut result.

Critical Analysis for Improvement

The fixed depth reduction of 5 ply is suboptimal because:

  1. At deeper search depths (LTC), the evaluation is more accurate, so we can safely increase reduction when the position is clearly winning
  2. When the static evaluation is much better than beta, the ProbCut is more likely to succeed, so we can reduce more aggressively
  3. Conversely, when the position is close to beta, we should be more conservative with reductions to avoid false positives

I noticed that historical patches (like examples 4 and 5) often adjust search depth dynamically based on position characteristics. However, none of them adjust ProbCut depth based on the static evaluation margin.

The optimal approach should:

  • Scale with depth for LTC performance (making it a true "scaler")
  • Be more aggressive when the position is clearly winning (larger margin above beta)
  • Remain conservative when the position is close to beta
  • Avoid introducing search explosions or tactical blind spots

Proposed Patch

I'll implement a dynamic depth reduction that scales with the evaluation margin above beta. The formula will be:

dynamicReduction = (ss->staticEval - beta) / 300
probCutDepth = max(depth - 5 - dynamicReduction, 0)

This means (a small worked check follows this list):

  • For roughly every 3 pawns of advantage over beta in the static evaluation, we reduce by 1 more ply
  • When the position is about 6 pawns better than beta, we reduce by 7 ply instead of 5
  • When the static evaluation is below beta, we reduce less, so the verification search actually runs deeper than the usual depth - 5
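
As a quick sanity check of the arithmetic, here is a small stand-alone snippet (purely illustrative, not part of the patch) that assumes one pawn is roughly 100 internal evaluation units; Stockfish's real scale differs somewhat, so the "3 pawns per ply" statement is approximate:

#include <algorithm>
#include <iostream>

int main() {
    const int depth = 12;                      // hypothetical remaining depth at this node
    for (int margin : {-300, 0, 300, 600})     // margin = ss->staticEval - beta
    {
        int dynamicReduction = margin / 300;   // integer division, truncates toward zero
        int probCutDepth     = std::max(depth - 5 - dynamicReduction, 0);
        std::cout << "margin " << margin << " -> probCutDepth " << probCutDepth << '\n';
    }
    // Prints 8, 7, 6 and 5: a margin of 600 turns the usual depth - 5 into depth - 7,
    // while a margin of -300 makes the verification search one ply deeper.
}

Because of the integer division, margins smaller in magnitude than 300 leave the reduction unchanged, so the formula only starts to bite once the static evaluation is well above (or well below) beta.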

This approach is a true scaler because:

  1. It becomes more effective at deeper depths where evaluation is more accurate
  2. It maintains safety by reducing less when the position is close to beta
  3. It saves nodes in clearly winning positions, allowing more resources for critical lines

search.cpp:

Depth      probCutDepth = std::max(depth - 5, 0);

while ((move = mp.next_move()) != Move::none())
{
    assert(move.is_ok());

    if (move == excludedMove || !pos.legal(move))
        continue;

    assert(pos.capture_stage(move));

    movedPiece = pos.moved_piece(move);

    do_move(pos, move, st, ss);

    // Perform a preliminary qsearch to verify that the move holds
    value = -qsearch<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1);

    // If the qsearch held, perform the regular search
    if (value >= probCutBeta && probCutDepth > 0)
        value = -search<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1, probCutDepth,
                               !cutNode);

    undo_move(pos, move);

    if (value >= probCutBeta)
    {
        // Save ProbCut data into transposition table
        ttWriter.write(posKey, value_to_tt(value, ss->ply), ss->ttPv, BOUND_LOWER,
                       probCutDepth + 1, move, unadjustedStaticEval, tt.generation());

        if (!is_decisive(value))
            return value - (probCutBeta - beta);
    }
}

Improved search.cpp:

Depth dynamicReduction = (ss->staticEval - beta) / 300;
Depth probCutDepth = std::max(depth - 5 - dynamicReduction, 0);

while ((move = mp.next_move()) != Move::none())
{
    assert(move.is_ok());

    if (move == excludedMove || !pos.legal(move))
        continue;

    assert(pos.capture_stage(move));

    movedPiece = pos.moved_piece(move);

    do_move(pos, move, st, ss);

    // Perform a preliminary qsearch to verify that the move holds
    value = -qsearch<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1);

    // If the qsearch held, perform the regular search
    if (value >= probCutBeta && probCutDepth > 0)
        value = -search<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1, probCutDepth,
                               !cutNode);

    undo_move(pos, move);

    if (value >= probCutBeta)
    {
        // Save ProbCut data into transposition table
        ttWriter.write(posKey, value_to_tt(value, ss->ply), ss->ttPv, BOUND_LOWER,
                       probCutDepth + 1, move, unadjustedStaticEval, tt.generation());

        if (!is_decisive(value))
            return value - (probCutBeta - beta);
    }
}

Branch name: dynamic_prob_cut_depth_scaling
