Generative Reward Models
Paper: https://www.synthlabs.ai/pdf/Generative_Reward_Models.pdf
arXiv: https://arxiv.org/abs/2410.12832
Official SynthLabs blog post: https://www.synthlabs.ai/research/generative-reward-models
introduction
synthlabs proposes Generative Reward Models (GenRM): instead of training a separate scalar reward head (e.g., Bradley–Terry), they use an LLM itself as the reward model—prompted to generate a decision token (and optionally a chain of thought) that selects the preferred response. they introduce two variants: GenRM (direct classifier via an answer indicator) and CoT-GenRM (produce reasoning, then the indicator). trained with STaR-style bootstrapping and a DPO objective (STaR-DPO), the judge matches classical reward models in-distribution and generalizes better out-of-distribution, with the strongest OOD gains coming from the reasoning-based STaR-DPO setup. (arXiv)
why it matters: this collapses the “policy vs. reward” model split, lets you scale alignment with more inference-time compute and reasoning (self-consistency / majority vote), and reduces reliance on bespoke reward heads—while improving OOD robustness and safety-related judgments.
Paper: https://www.synthlabs.ai/pdf/Generative_Reward_Models.pdf
- core idea. treat the LLM as a generative judge. For a prompt $x$ and two answers $y_1, y_2$, the model outputs either: (a) an answer-indicator token choosing the winner (GenRM), or (b) CoT reasoning $r$ then the indicator (CoT-GenRM). No explicit BT head required (see the judge sketch after this list).
- training recipes.
- GenRM (no CoT): SFT to predict the indicator given $(x, y_1, y_2)$.
- CoT-GenRM (Rationalization): generate/post-rationalize chains $r$ and train with maximum-likelihood on $(x, y_1, y_2, r)$.
- CoT-GenRM (STaR): bootstrap correct reasoning chains; filter with self-consistency; train via STaR-SFT or STaR-DPO (DPO over reasoning that leads to wins vs. losses; objective sketched after this list).
- results (headline).
- zero-shot LLM-as-a-judge underperforms trained BT-style reward models by 9–36% in-distribution; GenRM after training matches BT in-dist and beats it OOD by 10–45% (task-dependent). (arXiv)
- prompting the judge with CoT lifts zero-shot accuracy from 52.25%→67.75% (UltraFeedback) and 60.60%→75.18% (RewardBench). Still, trained models (BT/PairRM/GenRM) sit at ≈73–74% in-dist. STaR-DPO matches in-dist (73.9%) and wins OOD (see below).
- OOD: STaR-DPO ≈ 81.9%, beating the base prior (77.8%) and trained GenRM (≈78.9%). Reasoning-based setups especially help Safety.
- compute helps. Majority-vote self-consistency of the judge improves accuracy as samples increase; using CoT in the evaluator adds further gains.
- bootstrapping choices. Higher-capability sources (e.g., GPT-4) can improve UltraFeedback but may hurt OOD unless combined with on-policy critic training—supporting STaR-DPO over pure STaR-SFT.
- takeaway. if you need robustness beyond the training domain—or stronger safety judgments—use CoT-GenRM + STaR-DPO, and budget inference-time votes for the judge.
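a minimal sketch of the indicator-token decision (GenRM, no CoT), assuming a Hugging Face causal LM as the judge; the model name, prompt template, and single-token indicators are my assumptions, not the paper's exact setup:

```python
# Score a response pair with a GenRM-style judge by reading the next-token
# logits of the two indicator tokens and softmaxing over just those two.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative judge, not the paper's
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

def genrm_preference(x: str, y1: str, y2: str) -> float:
    """Return P(y1 preferred over y2) from the indicator-token logits."""
    prompt = (
        f"Prompt:\n{x}\n\nAnswer 1:\n{y1}\n\nAnswer 2:\n{y2}\n\n"
        "Which answer is better? Reply with 1 or 2.\nAnswer: "
    )
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids=ids).logits[0, -1]  # logits for the next token
    # assumes "1" and "2" each map to a single token in this tokenizer
    id1 = tok.encode("1", add_special_tokens=False)[0]
    id2 = tok.encode("2", add_special_tokens=False)[0]
    pair = torch.softmax(logits[[id1, id2]].float(), dim=0)
    return pair[0].item()
```

training the no-CoT GenRM is then just SFT on the indicator token for the preferred side.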
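and a sketch of the STaR-DPO objective as I read it (notation mine, not copied from the paper): sample reasoning chains plus indicators from the judge, pair a generation $(r^{+}, I^{+})$ that picks the preferred answer with one $(r^{-}, I^{-})$ that picks the rejected one, then apply standard DPO over those generations:

$$
\mathcal{L}_{\text{STaR-DPO}}(\theta) = -\,\mathbb{E}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(r^{+}, I^{+} \mid x, y_1, y_2)}{\pi_{\mathrm{ref}}(r^{+}, I^{+} \mid x, y_1, y_2)} - \beta \log \frac{\pi_\theta(r^{-}, I^{-} \mid x, y_1, y_2)}{\pi_{\mathrm{ref}}(r^{-}, I^{-} \mid x, y_1, y_2)}\right)\right]
$$

STaR-SFT instead does maximum-likelihood only on the filtered $(r^{+}, I^{+})$ generations.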
arXiv: https://arxiv.org/abs/2410.12832
- authors & claim. Mahan, Phung, Rafailov, et al. propose an RLHF↔RLAIF unification where a fine-tuned LLM-judge trained on self-generated reasoning reaches BT-like accuracy in-dist and outperforms OOD; LLM-as-judge zero-shot trails BT; GenRM beats LLM-as-judge both in-dist and OOD. (arXiv)
- numbers (from abstract). in-dist: LLM-as-judge trails BT by 9–36%; OOD: GenRM +10–45% vs. BT baselines; GenRM +9–31% vs. LLM-as-judge in-dist and +2–6% OOD. (arXiv)
Official SynthLabs blog post: https://www.synthlabs.ai/research/generative-reward-models
- positioning. “a unified approach to RLHF & RLAIF,” emphasizing reasoning-centric GenRM/CoT-GenRM and reporting up to ~45% gains on OOD tasks, consistent with the paper’s abstract. (SynthLabs)
Perplexity -- Explain SynthLabs work on Generative Reward Models:
highlights: (1) LLM-judge replaces BT head, (2) CoT-GenRM + STaR-DPO drives OOD gains, (3) majority-vote/self-consistency lifts judge accuracy.
Expected emphasis: GenRM as classifier (indicator tokens) vs CoT-GenRM (reason then decide); comparisons against BT/PairRM/LLM-as-judge on UltraFeedback, UltraInteract, RewardBench.
Also expected: coverage of the training losses (SFT vs. Rationalization vs. STaR-DPO) and the majority-vote effect.
Grok explains Generative Reward Models
A correct explanation should stress how reasoning traces make the evaluator more human-aligned than zero-shot judges, and why that helps on novel/OOD prompts.
Claude DeepResearch
quick mental model
- replace the scalar reward head with a generative judge. prompt the LLM with $(x, y_1, y_2)$ to output I₁/I₂; optionally have it think first (CoT), then commit.
- train with STaR-DPO. bootstrap correct reasoning chains; apply DPO over reasoning that leads to correct vs. incorrect preferences. This yields the best OOD generalization.
- spend inference compute on evaluation. do multi-sample self-consistency/majority-vote for the judge; accuracy climbs with the vote count (see the sketch below).
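a minimal sketch of that majority-vote step, assuming a sampling CoT judge is already available as a callable (`sample_judgment` below is hypothetical; sample at temperature > 0 so chains differ across draws):

```python
# Self-consistency wrapper: sample k CoT judgments and return the majority indicator.
from collections import Counter
from typing import Callable

def majority_vote_judge(
    x: str,
    y1: str,
    y2: str,
    sample_judgment: Callable[[str, str, str], int],  # hypothetical: returns 1 or 2
    k: int = 15,
) -> int:
    votes = Counter(sample_judgment(x, y1, y2) for _ in range(k))
    return votes.most_common(1)[0][0]  # indicator with the most votes
```

odd k avoids ties; cost scales linearly with k, which is the inference-compute knob behind the self-consistency gains noted above.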
Perplexity -- Explain SynthLabs work on Generative Reward Models:
- https://www.perplexity.ai/search/explain-synthlabs-work-on-gene-2Yo7a.AxTLaX0Mpr0Dd8Xw
- https://www.perplexity.ai/search/explain-synthlabs-work-on-gene-tPetvCuISG20z.5yO.TtEQ
Perplexity Pages:
- https://www.perplexity.ai/page/synthlabs-generative-reward-mo-BGhA4apfQd.DQ5UNH7KRxQ
- https://www.perplexity.ai/page/synthlabs-generative-reward-mo-_qp5kTDTQtqV8bwBTkNRkA
Grok explains Generative Reward Models
- https://grok.com/share/bGVnYWN5LWNvcHk%3D_169a5c5e-b6e0-48d2-ad7f-8bd4f6e341d7
- https://grok.com/share/bGVnYWN5LWNvcHk%3D_a74795d4-0bb1-4272-9f37-824cea188f44