Let me analyze the task carefully. I need to create a new patch for Stockfish's search code, focusing on a technique similar to example 15, which involves ProbCut. I'll follow the workflow:

  1. Select roughly the same search-technique step as in example 15
  2. Describe what the current implementation does
  3. Analyze potential improvements
  4. Create a patch that is more than a parameter tweak, preferably a scaler that works well at long time controls (LTC)

First, let's identify where ProbCut is implemented in the current code:

// Step 11. ProbCut
// If we have a good enough capture (or queen promotion) and a reduced search
// returns a value much above beta, we can (almost) safely prune the previous move.
probCutBeta = beta + 215 - 60 * improving;
if (depth >= 3
    && !is_decisive(beta)
    // If value from transposition table is lower than probCutBeta, don't attempt
    // probCut there
    && !(is_valid(ttData.value) && ttData.value < probCutBeta))
{
    assert(probCutBeta < VALUE_INFINITE && probCutBeta > beta);

    MovePicker mp(pos, ttData.move, probCutBeta - ss->staticEval, &captureHistory);
    Depth      probCutDepth = std::max(depth - 5, 0);

    while ((move = mp.next_move()) != Move::none())
    {
        assert(move.is_ok());

        if (move == excludedMove || !pos.legal(move))
            continue;

        assert(pos.capture_stage(move));

        movedPiece = pos.moved_piece(move);

        do_move(pos, move, st, ss);

        // Perform a preliminary qsearch to verify that the move holds
        value = -qsearch<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1);

        // If the qsearch held, perform the regular search
        if (value >= probCutBeta && probCutDepth > 0)
            value = -search<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1, probCutDepth,
                                   !cutNode);

        undo_move(pos, move);

        if (value >= probCutBeta)
        {
            // Save ProbCut data into transposition table
            ttWriter.write(posKey, value_to_tt(value, ss->ply), ss->ttPv, BOUND_LOWER,
                           probCutDepth + 1, move, unadjustedStaticEval, tt.generation());

            if (!is_decisive(value))
                return value - (probCutBeta - beta);
        }
    }
}

This is the ProbCut implementation in the current code. Let's compare it with example 15 from the historical examples:

Example 15:

// Perform a preliminary qsearch to verify that the move holds
value = -qsearch<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1);

// If the qsearch held, perform the regular search
if (value >= probCutBeta)
    value = -search<NonPV>(pos, ss + 1, -probCutBeta - 1, -probCutBeta, depth - 4,
                           !cutNode);

The historical example changed the verification search window from (-probCutBeta, -probCutBeta + 1) to (-probCutBeta - 1, -probCutBeta). Both are null (width-one) windows; the change shifts the window by one point, so the verification search is tested against a slightly different bound around probCutBeta rather than against a genuinely wider window.

Let me analyze the current ProbCut implementation:

  1. ProbCut is a technique that tries to prune the move that led to the current node by checking if there's a capture/promotion that leads to a position with a value much better than beta.
  2. It first calculates probCutBeta = beta + 215 - 60 * improving.
  3. It then iterates over captures and queen promotions using MovePicker.
  4. For each move:
    a. It makes the move
    b. Does a preliminary qsearch with a null window around probCutBeta: (-probCutBeta, -probCutBeta + 1)
    c. If the qsearch passes (value >= probCutBeta), it does a deeper search with depth reduction: depth - 5
    d. If the deeper search also passes (value >= probCutBeta), it returns value - (probCutBeta - beta), effectively pruning the original move

Historical example 15 therefore shifted the null window used by the deeper verification search by one point around probCutBeta rather than widening it.

Now, let's think about potential improvements to this technique:

  1. Window adjustment: Similar to example 15, we could adjust the search window for the deeper ProbCut search. Instead of using a null window, we could use a slightly wider window to get more accurate results.
  2. Depth adjustment: The current code uses probCutDepth = std::max(depth - 5, 0). We could make this more dynamic, perhaps based on how much better the position is (based on static eval or other factors).
  3. Selective ProbCut: We could be more selective about which captures to try for ProbCut. For example, we could prioritize captures with high SEE (Static Exchange Evaluation) scores.
  4. History-based ProbCut: We could use history scores to prioritize which captures to try first for ProbCut.
  5. Scaling with depth: We could make the ProbCut beta margin scale with depth to improve LTC performance.
  6. Improving ProbCut verification: The current implementation does a qsearch followed by a deeper search. We could add more verification steps to reduce false positives (a rough sketch follows this list).
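For idea 6, a rough sketch of one extra verification step (the margin of 30 and the +1 depth are illustrative placeholders, not tested values) would be to re-verify once when the first reduced search only barely clears probCutBeta, placed right after the existing verification search inside the ProbCut move loop:

// Sketch only: if the reduced verification search cleared probCutBeta by less
// than a small margin, confirm once more at slightly higher depth before
// trusting the cutoff. The 30 and the +1 are placeholder values.
if (value >= probCutBeta && value < probCutBeta + 30 && probCutDepth > 0)
    value = -search<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1,
                           probCutDepth + 1, !cutNode);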

Let me focus on #5 - scaling with depth. This would be a "scaler" as requested, meaning it should work well at long time controls.

Looking at the current ProbCut beta calculation:

probCutBeta = beta + 215 - 60 * improving;

This is a fixed margin. For long time controls, we might want this margin to scale with depth, since deeper searches provide more accurate evaluations.

For example, we could change it to:

probCutBeta = beta + 215 - 60 * improving + 10 * depth;

Let's think carefully about the direction of this change before settling on the sign of the depth term. At deeper depths the search is more accurate, so a false ProbCut cutoff is more costly, which argues for a less aggressive ProbCut there. To keep the directions straight:

  • probCutBeta = beta + margin
  • Larger margin = higher probCutBeta = harder to satisfy value >= probCutBeta = less aggressive ProbCut
  • Smaller margin = lower probCutBeta = easier to satisfy value >= probCutBeta = more aggressive ProbCut

At deeper depths, we want less aggressive ProbCut (to avoid missing good moves), so we want a larger margin. So we should add a positive term with depth.

But looking at the current code: probCutBeta = beta + 215 - 60 * improving;

To make the margin larger with depth, we would do:
probCutBeta = beta + 215 - 60 * improving + 15 * depth;

This would make ProbCut less aggressive at deeper depths, which is what we want for LTC.

However, I'm not sure if this is the best approach. Let me think about example 1 from the historical examples:

Value singularBeta  = ttData.value - (55 + 81 * (ss->ttPv && !PvNode)) * depth / 58;

was changed to:

Value singularBeta  = ttData.value - (55 + 81 * (ss->ttPv && !PvNode) - (ttData.depth > depth) * 25) * depth / 58;

This is adjusting the singularBeta margin based on whether the TT depth is greater than current depth.

Another approach could be to make the ProbCut margin scale with depth, possibly non-linearly, in the spirit of example 1's "* depth / 58" term.
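For instance (purely as a shape illustration; the 40 and 8 are placeholder constants, not tested values), a saturating depth term could look like:

// Sketch: let the ProbCut margin grow with depth but saturate, so very deep
// searches don't push probCutBeta arbitrarily far above beta.
probCutBeta = beta + 215 - 60 * improving + 40 * depth / (depth + 8);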

Let me consider example 3:

value < singularBeta - 93

was changed to:

value < singularBeta - 73 - 40 * cutNode

This is making the margin smaller (more aggressive) for cut nodes.

Similarly, for ProbCut, we could make the margin dependent on whether we're in a cut node or not.
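A minimal sketch of that idea (the 30 is a placeholder constant, echoing example 3's "- 40 * cutNode" term):

// Sketch: shrink the margin slightly at expected cut nodes, making ProbCut a
// bit more aggressive there, since cut nodes are expected to fail high anyway.
probCutBeta = beta + 215 - 60 * improving - 30 * cutNode;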

Actually, looking more carefully at the current ProbCut code, I see:

probCutBeta = beta + 215 - 60 * improving;

This already has a term that depends on improving, which is a measure of whether the position is improving.

Let me think about another angle. In example 15, they shifted the null window used for the deeper ProbCut verification search by one point. This suggests that the exact bound used for that verification matters, and that tuning it (for instance, demanding a slightly stronger confirmation) could reduce false positives.

So, instead of:

value = -search<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1, probCutDepth, !cutNode);

We could use:

value = -search<NonPV>(pos, ss + 1, -probCutBeta - 1, -probCutBeta, probCutDepth, !cutNode);

This is exactly what example 15 did. But I need to create a new patch, not just copy example 15.

Let me think of a variation on this idea. Instead of always using a window of size 1, we could make the window size depend on depth or other factors.

For example:

int windowSize = 1 + (depth > 8);
value = -search<NonPV>(pos, ss + 1, -probCutBeta - windowSize, -probCutBeta, probCutDepth, !cutNode);

This would use a slightly wider window at deeper depths, which might reduce false positives for LTC.

Another idea: instead of always using the same window, we could base it on how decisive the beta value is:

int windowSize = is_decisive(beta) ? 1 : 2;
value = -search<NonPV>(pos, ss + 1, -probCutBeta - windowSize, -probCutBeta, probCutDepth, !cutNode);

If beta is already decisive (close to mate score), we can use a narrower window because we're more certain. If beta is not decisive, we use a wider window to get a more accurate result.

Let me check the definition of is_decisive in the code; I'd expect something along these lines:

bool is_decisive(Value v) {
  return abs(v) >= VALUE_MATE_IN_MAX_PLY - MAX_PLY;
}

Actually, the definition itself is not in the provided code, but it is used in checks like !is_decisive(beta) in the ProbCut condition.

Let me look at the ProbCut condition again:

if (depth >= 3
    && !is_decisive(beta)
    // If value from transposition table is lower than probCutBeta, don't attempt
    // probCut there
    && !(is_valid(ttData.value) && ttData.value < probCutBeta))

So there is an is_decisive function, but it's not shown in the provided code. Based on context, it probably checks if the value is close to a mate score.

So my idea would be to use a wider window when beta is not decisive, because we need more accuracy in those cases.

However, this is still very similar to example 15. Let me think of something more innovative.

Another idea: the current ProbCut does a qsearch followed by a deeper search. We could add a condition to skip the deeper search if the qsearch result is significantly above probCutBeta, because then we're more confident in the result.

For example:

// Perform a preliminary qsearch to verify that the move holds
value = -qsearch<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1);

// If the qsearch held only barely, confirm it with the regular reduced search;
// if it cleared probCutBeta by a healthy margin (50 here), trust the qsearch
// result and skip the deeper verification search.
if (value >= probCutBeta && value < probCutBeta + 50 && probCutDepth > 0)
    value = -search<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1, probCutDepth, !cutNode);

This would save time by skipping the deeper search when the qsearch result is already significantly above probCutBeta.

But this might increase false positives, which would hurt strength.

Let me think about the trade-off. In ProbCut, we're trying to prove that there's a capture that leads to a position with value > beta + margin. If the qsearch already shows value > beta + margin + 50, then it's very likely true, so skipping the deeper search might be safe.

This could be a good scaler for LTC because it reduces unnecessary deeper searches, focusing resources on more critical parts of the tree.

Let me refine this idea. Instead of a fixed margin of 50, we could make it dependent on depth:

Value margin = 50 + 10 * depth;
// Skip the deeper verification search only when the qsearch cleared
// probCutBeta by at least `margin`; otherwise confirm with the reduced search.
if (value >= probCutBeta && value < probCutBeta + margin && probCutDepth > 0)
    value = -search<NonPV>(pos, ss + 1, -probCutBeta, -probCutBeta + 1, probCutDepth, !cutNode);

This makes the margin larger at deeper depths, which is good because deeper searches should be more accurate, so we need a larger margin to be confident.

Actually, wait. At deeper depths, we want to be more careful, so we should require a larger margin to skip the deeper search. Yes, that makes sense.

But probCutBeta = beta + 215 - 60 * improving, so probCutBeta + margin would be beta + 215 - 60 * improving + margin.

Let me check the values. Beta is the current lower bound. If we have a value > beta + 215 + margin, that's a very strong indication.

Actually, margin of 50 + 10 * depth might be too large. Let's think about typical values.

In centipawns, 50 is half a pawn. At depth 10, margin would be 150, which is 1.5 pawns. That seems reasonable.

But let's look at example 1 for reference:

Value singularBeta  = ttData.value - (55 + 81 * (ss->ttPv && !PvNode)) * depth / 58;

They're scaling with depth / 58, which is quite slow.

Maybe a better approach would be:

Value margin = 50 + depth * depth / 10;

This would grow quadratically but slowly.

However, I'm not sure this is the best approach. Let me think of something else.

Another idea: in the current code, ProbCut uses a fixed depth reduction of 5 (probCutDepth = std::max(depth - 5, 0)). We could make this reduction dependent on how much better the position is.

For example:

Depth reduction = 5 + (value - probCutBeta) / 100;
Depth probCutDepth = std::max(depth - reduction, 0);

But we don't have value yet because we're about to do the qsearch.

Wait, we could use the static evaluation difference:

Value evalDiff = ss->staticEval - beta;
Depth baseReduction = 5;
Depth dynamicReduction = std::min(evalDiff / 200, 3); // Cap at 3 to avoid excessive reduction
Depth probCutDepth = std::max(depth - (baseReduction + dynamicReduction), 0);

If the static evaluation is much better than beta, we can afford a larger depth reduction for ProbCut because we're more confident.

This could be a good scaler because at deeper depths (LTC), the static evaluation is more accurate, so this dynamic reduction would be more reliable.

Let me check if this makes sense. If evalDiff is large, meaning the position looks much better than beta, then ProbCut is more likely to succeed, so we can use a larger depth reduction to save time.

Yes, that seems reasonable.

Actually, let's refine this. We want to be more aggressive with ProbCut when the position looks strong, so larger depth reduction makes sense.

But we need to be careful not to make the depth negative.

Let me look at the current code:

Depth      probCutDepth = std::max(depth - 5, 0);

So we're reducing by 5. Let's make it:

Value evalMargin = ss->staticEval - beta;
Depth dynamicReduction = std::min(evalMargin / 250, 4); // Cap at 4 to avoid excessive reduction
Depth probCutDepth = std::max(depth - 5 - dynamicReduction, 0);

If the static evaluation is much better than beta, we can reduce more.

But wait, ss->staticEval is the evaluation of the current position, before making the ProbCut move. The ProbCut move is a capture, so it should improve the evaluation.

Actually, probCutBeta = beta + 215 - 60 * improving, so we're looking for a move that pushes the score at least 215 - 60 * improving centipawns above beta.

So maybe:

Value potentialGain = 215 - 60 * improving;
Value evalMargin = ss->staticEval + potentialGain - beta;
Depth dynamicReduction = std::min(evalMargin / 250, 4);
Depth probCutDepth = std::max(depth - 5 - dynamicReduction, 0);

This is getting complicated. Let's simplify.

The key insight is: if the current position is already much better than beta, then ProbCut is more likely to succeed, so we can be more aggressive with the depth reduction.

So:

Value evalMargin = ss->staticEval - beta;
Depth dynamicReduction = std::min(evalMargin / 200, 3);
Depth probCutDepth = std::max(depth - 5 - dynamicReduction, 0);

This is simpler and should work well.

Let's check if evalMargin could be negative. If ss->staticEval < beta, then evalMargin is negative, and dynamicReduction would be negative, meaning we'd reduce less. That makes sense because if the position isn't better than beta, ProbCut is less likely to succeed, so we should be more careful (less depth reduction).

Actually, the condition for ProbCut is !is_decisive(beta), but ss->staticEval could still be less than beta.

To be clear: if ss->staticEval < beta, the position isn't better than beta, so ProbCut is unlikely to succeed, but we still try it because captures can change the evaluation significantly.

So if evalMargin is negative, we want to reduce less (be more careful), so dynamicReduction should be negative, meaning we subtract a negative number, which is adding.

Let's work through an example:

  • depth = 10
  • evalMargin = -100 (position is 1 pawn worse than beta)
  • dynamicReduction = min(-100 / 200, 3) = min(-0.5, 3) = -0.5, but since it's Depth (int), it would be 0 or -1?

In C++, integer division of negative numbers truncates towards zero, so -100 / 200 = 0.
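A quick standalone check of that behaviour (plain C++, not Stockfish code):

#include <cassert>

int main() {
    assert(-100 / 200 == 0);   // -0.5 truncates toward zero, giving 0
    assert(-250 / 200 == -1);  // -1.25 truncates toward zero, giving -1
    assert(100 / 200 == 0);    // same on the positive side
    return 0;
}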

That's not what we want. We want negative evalMargin to result in negative dynamicReduction.

Let's redefine:

Value evalMargin = beta - ss->staticEval; // How much worse the position is than beta
Depth dynamicReduction = -std::min(evalMargin / 200, 3); // Negative if position is worse
Depth probCutDepth = std::max(depth - 5 - dynamicReduction, 0);

If evalMargin is negative (position better than beta), dynamicReduction is positive, so we reduce more.
If evalMargin is positive (position worse than beta), dynamicReduction is negative, so we reduce less.

But let's simplify further. Let's just use:

Depth dynamicReduction = (ss->staticEval - beta) / 200;
Depth probCutDepth = std::max(depth - 5 - dynamicReduction, 0);

If ss->staticEval > beta, dynamicReduction is positive, so we reduce more.
If ss->staticEval < beta, dynamicReduction is negative, so we reduce less.

Yes, that's what we want.

Let's test with examples:

  1. ss->staticEval = beta + 200 (position 2 pawns better than beta)
    dynamicReduction = 200 / 200 = 1
    probCutDepth = depth - 5 - 1 = depth - 6
  2. ss->staticEval = beta - 200 (position 2 pawns worse than beta)
    dynamicReduction = -200 / 200 = -1
    probCutDepth = depth - 5 - (-1) = depth - 4

This makes sense: if the position is better, we can reduce more; if worse, we reduce less.

But depth - 6 could be negative. The current code uses std::max(depth - 5, 0), so we should keep that.

Actually, the current code already has std::max(depth - 5, 0), so our version would be:

Depth dynamicReduction = (ss->staticEval - beta) / 200;
Depth probCutDepth = std::max(depth - 5 - dynamicReduction, 0);

This could result in negative depth if dynamicReduction is large positive, but std::max(..., 0) takes care of that.

Actually, if ss->staticEval is much larger than beta, dynamicReduction could be large, making probCutDepth negative. But std::max(..., 0) ensures it's at least 0.

This seems reasonable.

Let me check if there's a similar approach in the historical examples.

Example 4:

value = search<NonPV>(pos, ss, rBeta - 1, rBeta, depth / 2, cutNode);

was changed to:

value = search<NonPV>(pos, ss, rBeta - 1, rBeta, depth / 3, cutNode);

They reduced the depth for the singular search.

Example 5:

value = search<NonPV>(pos, ss, rBeta - 1, rBeta, depth / 2, cutNode, true);

was changed to:

value = search<NonPV>(pos, ss, rBeta - 1, rBeta, sdepth / 2, cutNode, true);

where sdepth = depth - (!givesCheck) * ONE_PLY;

So they're adjusting the depth based on whether the move gives check.

This is similar to what I'm proposing, but based on static eval difference rather than whether the move gives check.

I think my proposal is novel and could be a good scaler for LTC.

Let me refine the constant. 200 might not be the best value. Looking at example 1, they divide by 58. But this is a different context.

Actually, let's look at the ProbCut beta calculation:

probCutBeta = beta + 215 - 60 * improving;

215 is about 2 pawns. So a reasonable scale would be in the range of pawns.

200 is 2 pawns, which seems reasonable.

But let's check example 14 for reference:

bonus2 = bestValue > beta ? std::min(bonus1, stat_bonus(depth) + bestValue - beta) // larger bonus
                          : stat_bonus(depth);                                     // smaller bonus

They're using bestValue - beta, which is the margin above beta.

In our case, ss->staticEval - beta is similar.

I think 200 is a reasonable constant, but to be more precise, let's use 250, which is a bit more conservative.

So:

Depth dynamicReduction = (ss->staticEval - beta) / 250;
Depth probCutDepth = std::max(depth - 5 - dynamicReduction, 0);

This means for every 2.5 pawns better the position is, we reduce depth by 1 more.

Let me check if this could cause issues with negative depth. The current code has std::max(depth - 5, 0), so our version with the extra reduction could make it negative, but std::max(..., 0) takes care of that.

Actually, the current code is:

Depth      probCutDepth = std::max(depth - 5, 0);

So we'd change it to:

Depth dynamicReduction = (ss->staticEval - beta) / 250;
Depth probCutDepth = std::max(depth - 5 - dynamicReduction, 0);

But wait, ss->staticEval could be VALUE_NONE if we're in check. Let me check the code:

// Step 6. Static evaluation of the position
Value      unadjustedStaticEval = VALUE_NONE;
const auto correctionValue      = correction_value(*this, pos, ss);
if (ss->inCheck)
{
    // Skip early pruning when in check
    ss->staticEval = eval = (ss - 2)->staticEval;
    improving             = false;
    goto moves_loop;
}

So if we're in check, we skip to moves_loop, which means ProbCut wouldn't be executed for positions in check. So ss->staticEval should always be valid in ProbCut.

Yes, because ProbCut is before the "moves_loop" label, and if ss->inCheck is true, we goto moves_loop, skipping ProbCut.

So ss->staticEval is always valid in ProbCut.

This seems safe.

Let me double-check the code flow:

// Step 6. Static evaluation of the position
Value      unadjustedStaticEval = VALUE_NONE;
const auto correctionValue      = correction_value(*this, pos, ss);
if (ss->inCheck)
{
    // Skip early pruning when in check
    ss->staticEval = eval = (ss - 2)->staticEval;
    improving             = false;
    goto moves_loop;
}

Then later:

// Step 11. ProbCut
// If we have a good enough capture (or queen promotion) and a reduced search
// returns a value much above beta, we can (almost) safely prune the previous move.
probCutBeta = beta + 215 - 60 * improving;
if (depth >= 3
    && !is_decisive(beta)
    // If value from transposition table is lower than probCutBeta, don't attempt
    // probCut there
    && !(is_valid(ttData.value) && ttData.value < probCutBeta))
{

So yes, if ss->inCheck is true, we skip ProbCut, so ss->staticEval is always valid.

Great, so my patch is safe.

Let me write the final patch:

Current code:

Depth      probCutDepth = std::max(depth - 5, 0);

while ((move = mp.next_move()) != Move::none())
{

Proposed change:

Depth dynamicReduction = (ss->staticEval - beta) / 250;
Depth probCutDepth = std::max(depth - 5 - dynamicReduction, 0);

while ((move = mp.next_move()) != Move::none())
{

This is more than just changing a parameter; it's adding a dynamic adjustment based on the position's evaluation.

It should scale well for LTC because at deeper depths, the static evaluation is more accurate, so this dynamic reduction will be more reliable.

Let me check if there's a similar patch in the historical examples.

Example 5 adjusted the depth for singular search based on whether the move gives check.

Example 4 changed the depth division from /2 to /3.

My patch is different because it's based on the evaluation difference, not a simple division or binary condition.

I think this is a good candidate for a new patch.

One more refinement: the division by 250 might be too sensitive. Let's look at the values.

In centipawns:

  • 250 is 2.5 pawns
  • A difference of 2.5 pawns would give a reduction of 1

This seems reasonable, but to be more conservative, let's use 300.

So:

Depth dynamicReduction = (ss->staticEval - beta) / 300;
Depth probCutDepth = std::max(depth - 5 - dynamicReduction, 0);

This means for every 3 pawns better the position is, we reduce depth by 1 more.

I think this is a good balance.

Let me write the final patch.
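A minimal sketch of the change as reasoned above (untested; the divisor 300 is the value settled on here, and the variable names follow the code quoted earlier):

// Step 11. ProbCut: scale the ProbCut search depth with how far the static
// eval already sits above beta. Reduce more when the position looks
// comfortably above beta, less when it doesn't.
Depth dynamicReduction = (ss->staticEval - beta) / 300;
Depth probCutDepth     = std::max(depth - 5 - dynamicReduction, 0);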
