Generative Reward Models

Paper: https://www.synthlabs.ai/pdf/Generative_Reward_Models.pdf
arXiv: https://arxiv.org/abs/2410.12832
Official SynthLabs blog post: https://www.synthlabs.ai/research/generative-reward-models

introduction

synthlabs proposes Generative Reward Models (GenRM): instead of training a separate scalar reward head (e.g., Bradley–Terry), they use an LLM itself as the reward model—prompted to generate a decision token (and optionally a chain of thought) that selects the preferred response. they introduce two variants: GenRM (direct classifier via an answer indicator) and CoT-GenRM (produce reasoning, then the indicator). trained with STaR-style bootstrapping and a DPO objective (STaR-DPO), the judge matches classical reward models in-distribution and generalizes better out-of-distribution, with the strongest OOD gains coming from the reasoning-based STaR-DPO setup. (arXiv)

why it matters: this collapses the “policy vs. reward” model split, lets you scale alignment with more inference-time compute and reasoning (self-consistency / majority vote), and reduces reliance on bespoke reward heads—while improving OOD robustness and safety-related judgments.
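
to make the indicator-token idea concrete, here is a minimal sketch of the GenRM decision step, assuming a generic Hugging Face causal LM: prompt the judge with $(x, y_1, y_2)$ and read the next-token probability mass on the two indicator tokens. the prompt template and the `judge_model_name` checkpoint id are illustrative placeholders, not the paper's exact format.

```python
# minimal GenRM decision sketch (prompt template and model name are placeholders)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

JUDGE_PROMPT = (
    "Question: {x}\n\n"
    "Response A: {y1}\n\n"
    "Response B: {y2}\n\n"
    "Which response is better? Answer with a single token, A or B.\nAnswer:"
)

def genrm_preference(model, tokenizer, x, y1, y2):
    """Return P(y1 preferred) from the judge's next-token distribution over A/B."""
    prompt = JUDGE_PROMPT.format(x=x, y1=y1, y2=y2)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]          # next-token logits
    id_a = tokenizer(" A", add_special_tokens=False).input_ids[-1]
    id_b = tokenizer(" B", add_special_tokens=False).input_ids[-1]
    probs = torch.softmax(logits[[id_a, id_b]], dim=-1)
    return probs[0].item()                              # P(indicator == A) = P(y1 wins)

if __name__ == "__main__":
    name = "judge_model_name"                           # hypothetical checkpoint id
    tok = AutoTokenizer.from_pretrained(name)
    lm = AutoModelForCausalLM.from_pretrained(name)
    print(genrm_preference(lm, tok, "What is 2+2?", "4", "5"))
```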

Paper: https://www.synthlabs.ai/pdf/Generative_Reward_Models.pdf

  • core idea. treat the LLM as a generative judge. For a prompt $x$ and two answers $y_1, y_2$, the model outputs either: (a) an answer indicator token choosing the winner (GenRM), or (b) CoT reasoning $r$ then the indicator (CoT-GenRM). No explicit BT head required.
  • training recipes.

    • GenRM (no CoT): SFT to predict the indicator given $(x, y_1, y_2)$.
    • CoT-GenRM (Rationalization): generate/post-rationalize chains $r$ and train with maximum-likelihood on $(x, y_1, y_2, r)$.
    • CoT-GenRM (STaR): bootstrap correct reasoning chains; filter with self-consistency; train via STaR-SFT or STaR-DPO (DPO over reasoning that leads to wins vs. losses; see the loss sketch after this list).
  • results (headline).

    • zero-shot LLM-as-a-judge underperforms trained BT-style reward models by 9–36% in-distribution; GenRM after training matches BT in-dist and beats it OOD by 10–45% (task-dependent). (arXiv)
    • prompting the judge with CoT lifts zero-shot accuracy from 52.25%→67.75% (UltraFeedback) and 60.60%→75.18% (RewardBench). Still, trained models (BT/PairRM/GenRM) sit at ≈73–74% in-dist; STaR-DPO matches them in-dist (73.9%) and wins OOD (see below).
    • OOD: STaR-DPO reaches 81.9%, beating the base prior (77.8%) and trained GenRM (~78.9%). Reasoning-based setups especially help Safety.
  • compute helps. Majority-vote self-consistency of the judge improves accuracy as samples increase; using CoT in the evaluator adds further gains.
  • bootstrapping choices. Higher-capability sources (e.g., GPT-4) can improve UltraFeedback but may hurt OOD unless combined with on-policy critic training—supporting STaR-DPO over pure STaR-SFT.
  • takeaway. if you need robustness beyond the training domain—or stronger safety judgments—use CoT-GenRM + STaR-DPO, and budget inference-time votes for the judge.
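
a hedged sketch of the STaR-DPO objective over reasoning chains (referenced in the STaR bullet above): this is the standard DPO loss, where "chosen" is a sampled chain whose final indicator agrees with the preference label and "rejected" is one that picks the other answer; the per-sequence log-probs are assumed to be precomputed by summing token log-probs over the reasoning + indicator tokens.

```python
# STaR-DPO loss sketch: DPO applied to the judge's reasoning chains.
# "chosen" = chain whose final indicator matches the preference label,
# "rejected" = chain that selects the other answer; log-probs precomputed.
import torch
import torch.nn.functional as F

def star_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                  ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over (correct vs. incorrect) reasoning chains."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# toy usage with made-up sequence log-probabilities
loss = star_dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.8]),
                     torch.tensor([-13.0]), torch.tensor([-15.2]))
print(loss.item())
```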

arXiv: https://arxiv.org/abs/2410.12832

  • authors & claim. Mahan, Phung, Rafailov, et al. propose an RLHF↔RLAIF unification where a fine-tuned LLM-judge trained on self-generated reasoning reaches BT-like accuracy in-dist and outperforms OOD; LLM-as-judge zero-shot trails BT; GenRM beats LLM-as-judge both in-dist and OOD. (arXiv)
  • numbers (from abstract). in-dist: LLM-as-judge under BT by 9–36%; OOD: GenRM +10–45% vs. BT baselines; GenRM +9–31% vs. LLM-as-judge in-dist and +2–6% OOD. (arXiv)

Official SynthLabs blog post: https://www.synthlabs.ai/research/generative-reward-models

  • positioning. “a unified approach to RLHF & RLAIF,” emphasizing reasoning-centric GenRM/CoT-GenRM and reporting up to ~45% gains on OOD tasks—consistent with the paper’s abstract. (SynthLabs)

Perplexity -- Explain SynthLabs work on Generative Reward Models:

highlights: (1) LLM-judge replaces BT head, (2) CoT-GenRM + STaR-DPO drives OOD gains, (3) majority-vote/self-consistency lifts judge accuracy.

Expected emphasis: GenRM as classifier (indicator tokens) vs CoT-GenRM (reason then decide); comparisons against BT/PairRM/LLM-as-judge on UltraFeedback, UltraInteract, RewardBench.

Perplexity Pages:

Expected coverage: the training losses (SFT vs. Rationalization vs. STaR-DPO) and the majority-vote effect.

Grok explains Generative Reward Models

A correct explanation should stress how reasoning traces make the evaluator more human-aligned than zero-shot judges, and why that helps on novel/OOD prompts.

Claude DeepResearch


quick mental model

  1. replace the scalar reward head with a generative judge. prompt the LLM with $(x, y_1, y_2)$ to output the indicator $I_1$ or $I_2$; optionally have it think first (CoT), then commit.
  2. train with STaR-DPO. bootstrap correct reasoning chains; apply DPO over reasoning that leads to correct preferences. This yields the best OOD generalization.
  3. spend inference compute on evaluation. do multi-sample self-consistency/majority-vote for the judge; accuracy climbs with vote count (sketch below).
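
a small sketch of step 3, majority-vote self-consistency for the judge; `sample_judgment` is a hypothetical callback that samples one CoT trace plus a final "A"/"B" verdict (any chat API at temperature > 0 would do).

```python
# majority-vote self-consistency over CoT judgments (sample_judgment is hypothetical)
from collections import Counter

def judge_with_self_consistency(sample_judgment, x, y1, y2, n_votes=16):
    """Sample n_votes CoT judgments and return the majority-vote winner ("A" or "B")."""
    votes = [v for v in (sample_judgment(x, y1, y2) for _ in range(n_votes))
             if v in ("A", "B")]                 # drop unparseable verdicts
    if not votes:
        return None                              # judge never committed to A or B
    winner, _count = Counter(votes).most_common(1)[0]
    return winner
```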

Pub: 29 Aug 2025 05:10 UTC
