
Online-DPO-R1

Updated Feb 28

This collection gathers the models, datasets, and paper from the Online-DPO-R1 project.


  • RLHFlow/Qwen2.5-7B-PPO-Zero

    Updated Feb 17 • 42 • 2

  • RLHFlow/Qwen2.5-7B-DPO-Zero

    Updated Feb 17 • 12

  • RLHFlow/Qwen2.5-7B-DPO-NLL-Zero

    Updated Feb 17 • 2

  • RLHFlow/Qwen2.5-7B-RAFT-Zero

    Updated Feb 17 • 8

  • RLHFlow/numia_prompt_ppo

    Viewer • Updated Feb 13 • 404k • 55 • 1

  • RLHFlow/numia_prompt_dpo1

    Viewer • Updated Feb 11 • 20k • 156

  • RLHFlow/Qwen2.5-7B-DPO

    Updated Feb 17 • 41

  • RLHFlow/Qwen2.5-7B-SFT

    Updated Feb 17 • 7

  • RLHFlow/qwq_gen_sft_15k

    Viewer • Updated Feb 17 • 15k • 22

  • RLHFlow/numia_prompt_dpo2

    Viewer • Updated Feb 11 • 20k • 55

  • RLHFlow/numia_prompt_dpo3

    Viewer • Updated Feb 11 • 20k • 34

  • Self-rewarding correction for mathematical reasoning

    Paper • arXiv:2502.19613 • Published Feb 26 • 84