user | created_at | body | issue_number | __index_level_0__
---|---|---|---|---|
albertsokol | 2025-02-12T23:47:15 | @winglian Thanks very much, yes, I was able to confirm that the training proceeds correctly when I use your PR.
| 2,725 | 1,312 |
albertsokol | 2025-02-12T23:51:58 | @qgallouedec As for my training code, I'm running the following script with `python train.py`. I found that trying to use accelerate causes OOM in all my experiments, but with plain `python` I can now train a 14B model with just 2x GPUs (though only if I use cuda:0 for vLLM generation; otherwise I get an error).
```python
# (Imports added for completeness; create_huggingface_dataset, RewardModelAdapter,
# reward_model, args, log_dir and output_dir are defined elsewhere.)
import torch
from accelerate import Accelerator
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM
from trl import GRPOConfig, GRPOTrainer

model = AutoModelForCausalLM.from_pretrained(
"Qwen-2.5-14B-Instruct",
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
dataset = create_huggingface_dataset(...)
train_args = GRPOConfig(
# Training hyperparameters
num_train_epochs=10,
per_device_train_batch_size=8,
gradient_accumulation_steps=4,
warmup_ratio=0.005,
learning_rate=3e-6,
# Logging and saving
logging_dir=str(log_dir),
report_to=["tensorboard"],
logging_steps=5,
eval_strategy="no",
output_dir=str(output_dir),
save_steps=50,
# Device options
gradient_checkpointing=False,
bf16=True,
use_vllm=True,
vllm_gpu_memory_utilization=0.38,
vllm_max_model_len=args.max_seq_length,
vllm_device="cuda:0",
# GRPO/generation hyperparameters
num_generations=8,
max_completion_length=args.max_completion_length,
beta=0.04,
temperature=0.7,
)
lora_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
init_lora_weights="gaussian",
target_modules=[
"q_proj",
"k_proj",
"v_proj",
"o_proj",
"gate_proj",
"up_proj",
"down_proj",
],
)
accelerator = Accelerator()
reward_model_adapter = RewardModelAdapter(
reward_model=reward_model,
accelerator=accelerator,
log_dir=log_dir,
)
model.gradient_checkpointing_enable()
model.enable_input_require_grads()
model = get_peft_model(model, lora_config)
trainer = GRPOTrainer(
model=model,
args=train_args,
train_dataset=dataset,
reward_funcs=[reward_model_adapter],
peft_config=lora_config if args.use_lora else None,
)
trainer.train()
```
Output of `trl env`:
```
- Platform: Linux-5.10.0-33-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.10.16
- PyTorch version: 2.5.1
- CUDA device(s): NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB
- Transformers version: 4.48.2
- Accelerate version: 1.3.0
- Accelerate config: not found
- Datasets version: 3.2.0
- HF Hub version: 0.28.1
- TRL version: 0.15.0.dev0 (from commit 81221661c6f864bd7cdb7c461e881bbe03414be8, the latest in `main`)
- bitsandbytes version: not installed
- DeepSpeed version: not installed
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: 1.59.7
- PEFT version: 0.14.0
``` | 2,725 | 1,313 |
winglian | 2025-02-12T23:52:29 | here's a repro https://gist.github.com/winglian/b534238c44833b4478f7518ebd0f2598
| 2,725 | 1,314 |
qgallouedec | 2025-02-13T13:33:04 | > here's a repro https://gist.github.com/winglian/b534238c44833b4478f7518ebd0f2598
Thanks, I was able to reproduce, the error comes from
```diff
- get_peft_model(model, lora_config)
+ model = get_peft_model(model, lora_config)
```
I've added the missing test in your branch, added some comments for clarity, and I think we're good to merge.
| 2,725 | 1,315 |
TweedBeetle | 2025-02-13T15:02:56 | @albertsokol, would you perhaps be willing to share what max_seq_length you are working with?
Are you also getting a ``The PEFT config's `base_model_name_or_path` was renamed from '<model_name>' to 'None'`` warning? | 2,725 | 1,316 |
mnoukhov | 2025-02-02T20:42:01 | After digging into it, it's pretty difficult to implement something that efficiently does online RL with either a reward model or with verifiable rewards (RLVR).
I've implemented the former, RLOO with a single reward model, efficiently in `rloov2_trainer.py`.
For the latter, I think it makes more sense for a first attempt to have a separate RLVR trainer. This way we can do generation in vLLM and rewarding with the Callable entirely in text space, without constantly tokenizing and detokenizing. Combining the two can be done later. I've started to work on an `rloo_rlvr_trainer.py` that will leverage vLLM. | 2,724 | 1,317 |
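A minimal sketch of the "generate and reward entirely in text space" idea described above (the `rollout_and_score` function and `reward_fn` here are illustrative placeholders, not an existing trainer API):

```python
from vllm import LLM, SamplingParams


def rollout_and_score(llm: LLM, prompts: list[str], params: SamplingParams, reward_fn) -> list[float]:
    """Generate completions as strings and score them with a plain Python callable,
    so no tokenize/detokenize round-trips happen inside the reward step."""
    outputs = llm.generate(prompts, params)
    completions = [out.outputs[0].text for out in outputs]
    return [reward_fn(prompt, completion) for prompt, completion in zip(prompts, completions)]
```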
accupham | 2025-01-31T21:38:22 | So hear me out. I think the current style of function/tool calling as popularized by OpenAI does not lend itself to the expressiveness and fluidity of thought that R1-like models capitalize on and excel at. The feedback loop is too long.
Instead, I think the best way is to express function calls as mixed code and language. Let the user decide the calling convention, and inject the results inline.
Here's an example of what I mean. This is QwQ hooked up directly to a Python interpreter to do neurosymbolic reasoning. Every line entered is fed directly into the REPL, and the results are fed back immediately to the LLM. I told it to express its thoughts as Python comments.
```python
Python 3.12.5 (main, Aug 14 2024, 04:32:18) [Clang 18.1.8 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> # Okay, let's tackle this step by step. The problem says:
>>> # If the equations x^2 - a = p * y and x^2 - b = p * y have integer solutions,
>>> # then x^2 - ab = p * y also has integer solutions. p is prime, and a, b are not divisible by p.
>>>
>>> from sympy import symbols, Eq, solveset, S
>>>
>>> # Let's define the symbols we need. I'll start with x, y, p, a, and b.
>>> x, y, p, a, b = symbols('x y p a b', integer=True)
>>>
>>> # Alright, I'll write out the first two equations.
>>> eq1 = Eq(x**2 - a, p * y) # This is x^2 - a = p * y
>>> eq2 = Eq(x**2 - b, p * y) # This is x^2 - b = p * y
>>>
>>> # The claim is that if these two equations have solutions,
>>> # then x^2 - ab = p * y should also have a solution. I'll write that equation too.
>>> eq3 = Eq(x**2 - a * b, p * y) # This is x^2 - ab = p * y
>>>
>>># Okay, let's check if we can find integer solutions for the first two equations.
>>># I'm going to use `solveset` to see if x has solutions in the integers for eq1 and eq2.
>>>sol_x1 = solveset(eq1, x, domain=S.Integers) # Solve eq1 for x
>>>sol_x1 # Let's see what we get here.
ConditionSet(x, Eq(x**2 - a, p*y), Integers)
>>>
>>> # Hmm, that's a generic solution for x in terms of the parameters.
>>> # Makes sense because we don't have concrete values for p, a, or y yet.
>>>
>>> sol_x2 = solveset(eq2, x, domain=S.Integers) # Solve eq2 for x
>>> sol_x2 # Alright, let me look at this one too.
>>> # >>> ConditionSet(x, Eq(x**2 - b, p*y), Integers)
>>>
>>> # Okay, so I have the solution sets for x in eq1 and eq2.
>>> # Now, let's think about what needs to happen for eq3 to have integer solutions.
>>>
>>> # eq3 implies x^2 ≡ ab (mod p) when reduced modulo p.
>>> # If a and b are quadratic residues modulo p, then their product ab should also be.
>>> # I know there's a property about the product of quadratic residues modulo a prime.
>>>
>>> # But let me explicitly check if solving eq3 gives integer solutions.
>>> sol_x3 = solveset(eq3, x, domain=S.Integers) # Solve eq3 for x
>>> sol_x3 # Does this give us something valid?
ConditionSet(x, Eq(x**2 - a*b, p*y), Integers)
>>>
>>> # Hmm, this matches what I expected! x^2 ≡ ab (mod p) implies solutions exist
>>> # as long as ab is a quadratic residue mod p.
>>>
>>> # To confirm, I need to remember why a and b being residues implies ab is a residue.
>>> # Ah, right! The Legendre symbol tells us (a/p) * (b/p) = (ab/p),
>>> # and both (a/p) and (b/p) are 1 because a and b are residues.
>>>
>>> # So, if eq1 and eq2 have solutions, a and b are residues mod p,
>>> # and their product ab is also a residue mod p. That ensures eq3 has solutions too.
>>>
>>> # Final confirmation: we have integer solutions for x in eq3 if integer solutions exist for eq1 and eq2.
>>> # That settles it!
>>>
>>> sol_x1, sol_x2, sol_x3 # Let's print all these solutions for the final check.
(ConditionSet(x, Eq(x**2 - a, p*y), Integers),
ConditionSet(x, Eq(x**2 - b, p*y), Integers),
ConditionSet(x, Eq(x**2 - a*b, p*y), Integers))
>>> # Solutions are correct.
>>> submit_final_answer(sol_x1, sol_x2, sol_x3)
```
The fast feedback loop affords us a few crucial things:
- It's able to recover from errors (such as syntax/import/unexpected result) in real time
- It can learn from its mistakes and recover mid-trajectory
- Each line execution can be used as a reward or punishment (e.g., -0.3 for a syntax error)
---
Another example of more traditional agentic function calling:
```python
def weather(city: str) -> str:
return "The weather in Seattle is 40 deg F and sunny all day."
```
```
User: Should I pack an umbrella today?
Assistant: <think>Ok, so the user wants to know if it's going to rain or not. I should look up the weather using the `weather()` tool. But wait, I don't know where the user is currently located. I should ask before using the tool. </think><answer>What city are you currently in?</answer>
User: Seattle
Assistant: <think>I'll look up the weather. <fn>weather("Seattle")</fn>
<fn_error>
WeatherLookupError: Must specify a two-letter state.
</fn_error>
<fn>weather("Seattle, WA")</fn>
<fn_results>
The weather in Seattle is 40 deg F and sunny all day.
</fn_results>
Ah, so according to the results, it's unlikely the user will need an umbrella because it will not rain.
<answer>
The weather is nice and sunny today, no need to pack an umbrella today. Can I assist with anything else?
</answer>
``` | 2,723 | 1,318 |
accupham | 2025-01-31T21:52:49 | My opinion is to standardize around vLLM's LLM API. We should pass in a user-defined `RolloutSampler` class, which takes in a vLLM LLM class, and let the user figure out how to do function calling during rollout sampling. If they want to do it the standard way, they could use the `LLM.chat()` API with tools and call it the traditional way. If they wanted to do something more interactive and real-time, they could stream tokens, detect llama-3.2-style function calls with some sort of regex hook, and inject the results into the output stream. | 2,723 | 1,319 |
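A minimal sketch of what such a user-defined hook could look like (the `RolloutSampler` protocol and `ChatToolSampler` names here are hypothetical, not an existing TRL API, and it assumes a vLLM version whose `LLM.chat()` accepts `tools`):

```python
from typing import Protocol

from vllm import LLM, SamplingParams


class RolloutSampler(Protocol):
    """Hypothetical user-supplied rollout hook, as proposed above."""

    def __call__(self, llm: LLM, prompts: list[str], params: SamplingParams) -> list[str]: ...


class ChatToolSampler:
    """One possible implementation: the 'standard way' via LLM.chat() with tools."""

    def __init__(self, tools: list[dict]):
        self.tools = tools

    def __call__(self, llm: LLM, prompts: list[str], params: SamplingParams) -> list[str]:
        conversations = [[{"role": "user", "content": p}] for p in prompts]
        outputs = llm.chat(conversations, sampling_params=params, tools=self.tools)
        return [out.outputs[0].text for out in outputs]
```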
August-murr | 2025-02-01T05:58:54 | @accupham I think it's better to take things step by step.
Let's build a minimal prototype that works, and then we can focus on different ideas and ways to improve it. | 2,723 | 1,320 |
xiangjjj | 2025-02-09T04:26:43 | Any considerations for the observation tokens from tool use? I don't think we should compute KL for those tokens. | 2,723 | 1,321 |
willccbb | 2025-02-11T18:49:39 | This PR (https://github.com/huggingface/trl/pull/2810) addresses @accupham's suggestion to allow user-defined rollout logic which wraps vLLM. I'd be curious to hear any comments about whether this is sufficient for what people have in mind for now.
The protocol here could potentially be extended to allow user-defined masks (for tool calls), and to allow rewards to be computed at this stage as well. | 2,723 | 1,322 |
jlia0 | 2025-02-12T16:01:52 | > So hear me out. I think the current style of function/tool calling as popularized by OpenAI does not lend itself to the expressiveness and fluidity of thought that R1-like models capitalize on and excel at. The feedback loop is too long.
>
> [...]
This is exactly what I have been thinking about and tinkering with as well. I wonder how you made QwQ do "neurosymbolic reasoning" / "inline function calls" like in the example? | 2,723 | 1,323 |
accupham | 2025-02-12T16:32:31 | > This is exactly what I have been thinking and tinkering as well. I wonder how did you make QwQ to do "neurosymbolic reasoning" / "inline function call" like the example?
The system prompt was quite simple:
```
You are now operating as a stateful Python REPL environment. You can use it as memory buffer and scratch pad as a goal-seeking agent.
```
Then you set the prefill to the default python REPL intro text:
```
Python 3.13.1 (main, Dec 3 2024, 17:59:52) [Clang 16.0.0 (clang-1600.0.26.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
At this point we just do a while-loop with stop token set to "\n". Feed the LLM text into a REPL or some stateful code execution environment, and concatenate the results to the end of the prefill, plus ">>>". Feed that prefill into another LLM call and continue completion from there.
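For anyone curious, here is a minimal sketch of that loop (unsandboxed and simplified to single-line statements; `generate(prompt, stop)` is a placeholder for whatever completion API you use, not a real client):

```python
import contextlib
import io


def run_repl_line(line: str, env: dict) -> str:
    """Execute one line in a persistent namespace and capture what a REPL would print."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        try:
            try:
                result = eval(line, env)  # bare expressions echo their value, like a real REPL
                if result is not None:
                    print(repr(result))
            except SyntaxError:
                exec(line, env)  # statements: imports, assignments, comments, ...
        except Exception as exc:  # feed errors back so the model can recover mid-trajectory
            print(f"{type(exc).__name__}: {exc}")
    return buf.getvalue()


def repl_rollout(generate, prefill: str, max_turns: int = 50) -> str:
    """`generate(prompt, stop)` is a placeholder for whatever completion API you use."""
    env: dict = {}
    transcript = prefill + " "
    for _ in range(max_turns):
        line = generate(transcript, stop="\n")  # the model writes exactly one REPL line
        transcript += line + "\n"
        transcript += run_repl_line(line, env) + ">>> "
    return transcript
```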
---
I think a Jupyter-notebook-like environment might be more appropriate next time; it's easier to sandbox. | 2,723 | 1,324 |
brucecai-2001 | 2025-02-17T09:15:48 | > My opinion is to standardize around vLLM's LLM api. We should pass in a user defined `RolloutSampler` class, which takes in a vLLM LLM class, and let the user figure out how to do function calling during rollout sampling. If they want to do it the standard way, they could use the `LLM.chat()` api with tools and call it the traditional way. If they wanted do something more interactive and realtime, they could stream tokens and detect llama-3.2 style function calls with some sort of regex hook, and inject in results into the output stream.
Thank you for sharing. May I ask whether you paused the model's streaming output after detecting the function-calling token and then concatenated the observation to the previous sequence? How is this implemented? Are there any reference materials available? Thank you. | 2,723 | 1,325 |
willccbb | 2025-02-17T15:12:39 | this repo now supports multi-step agent training with batched vLLM rollouts (using LLM.chat()): https://github.com/willccbb/verifiers
requires open PR fork for TRL: https://github.com/huggingface/trl/pull/2810 | 2,723 | 1,326 |
vladrad | 2025-02-17T17:29:26 | 👀 I am in the same boat, implementing a hacky solution with what's there already.
My approach right now, just for testing purposes, is gross but seems to show some promise. Essentially I do it in loops of training: I take 200 examples and score the first set, then I go back and redo the examples with the outputs and continue like that. It showed a lot of promise, but I need to redo my score functions. I was just looking at @willccbb's PR. | 2,723 | 1,327 |
HuggingFaceDocBuilderDev | 2025-01-31T14:47:39 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2721). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,721 | 1,328 |
HuggingFaceDocBuilderDev | 2025-02-04T20:47:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2720). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,720 | 1,329 |
Superskyyy | 2025-01-31T14:55:03 | It's in fact normal, since you are feeding 4 models into the GPUs, and assuming you only used one A100. If you can't get more compute, try RLOO or GRPO, which remove the value model and potentially the reward model if you use a reward function. Also, lowering the maximum input/output tokens will significantly reduce GPU memory usage. | 2,719 | 1,330 |
BenasdTW | 2025-02-01T09:41:34 | This is normal. Full-parameter fine-tuning of a 7B model with PPO might even require hundreds of GBs of VRAM. You should try using LoRA; it might help you a lot. | 2,719 | 1,331 |
JohnConnor123 | 2025-02-01T17:02:00 | > This is normal. Full parameter fine-tuning a 7B model with PPO might even require hundreds of GBs of VRAM
Okay, I agree that VRAM consumption will clearly exceed the weight of one model. But I don’t have the 1 to 4 ratio. For a 0.5B model weighing 970 megabytes, I expect a consumption of 4-5 gigabytes - well, a maximum of 10 gigabytes. But I get a consumption of 25 gigabytes, i.e. instead of a 4 to 1 ratio I get 25 to 1.
And how to use LoRA in PPO training? Do I need to use the peft_config parameter? If so, then I need to pass it... what to pass? Or is it enough to change the local_rank parameter from the default value of -1 to a positive integer and LoRA will be automatically applied? | 2,719 | 1,332 |
BenasdTW | 2025-02-01T18:13:45 |
> Okay, I agree that VRAM consumption will clearly exceed the weight of one model. But I don’t have the 1 to 4 ratio. For a 0.5B model weighing 970 megabytes, I expect a consumption of 4-5 gigabytes
Most of the VRAM usage comes from the optimizer states, gradients, and activations. The model parameters are just a small proportion of the total VRAM consumption.
[This](https://modal.com/blog/how-much-vram-need-fine-tuning) provides a rough estimation of the VRAM consumption of SFT. PPO requires much more VRAM than SFT, since you only load one model in SFT.
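As a rough back-of-the-envelope estimate (mixed-precision AdamW, ignoring activations and framework overhead): a 0.5B-parameter trainable model needs about 1 GB for bf16 weights, 1 GB for bf16 gradients, 2 GB for fp32 master weights, and 4 GB for the fp32 Adam moments, i.e. roughly 8 GB before any activations; PPO additionally keeps the reference, reward, and value models resident, so 25 GB for a 0.5B run is plausible.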
> And how to use LoRA in PPO training? Do I need to use the peft_config parameter? If so, then I need to pass it... what to pass? Or is it enough to change the local_rank parameter from the default value of -1 to a positive integer and LoRA will be automatically applied?
According to [get_peft_config](https://github.com/huggingface/trl/blob/main/trl/trainer/utils.py#L899), I think you can pass these parameters:
`use_peft` `lora_task_type` `lora_target_modules` `lora_alpha` `lora_dropout`
```
--use_peft true \
--lora_task_type "CAUSAL_LM" \
--lora_r 8 \
--lora_alpha 16 \
--lora_dropout 0.1 \
--lora_target_modules "q_proj,k_proj,v_proj" \
```
If it doesn't work, try using `use_peft_lora` instead of `use_peft`.
Some additional optimizations you can experiment with: `--use_unsloth True`, `--use_flash_attn True`
You can take a look at [this](https://github.com/huggingface/peft/blob/main/examples/sft/run_unsloth_peft.sh).
Some of them may work, while others may not—it depends on your hardware and model.
I haven't tried using these scripts myself, so I may be wrong here.
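For reference, a rough Python-level equivalent of those flags (a sketch only; whether your trainer version accepts it through a `peft_config` argument may vary):

```python
from peft import LoraConfig

# Hypothetical programmatic equivalent of the CLI flags above.
peft_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj"],
)
```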
| 2,719 | 1,333 |
Superskyyy | 2025-02-01T19:08:49 | Just a note that you can easily use up the HBM even on 8x A100s if you want a somewhat larger batch size/completion length for a 0.5B model. In general I would recommend PEFT on limited resources.
A 7B model without PEFT requires at least 2 nodes of A100s if you use bf16 + ZeRO-3 and aim to train on real-world datasets. | 2,719 | 1,334 |
JohnConnor123 | 2025-02-06T15:49:30 | Thanks for the clarification! | 2,719 | 1,335 |
JWQZ | 2025-03-17T03:36:08 | I have a similar problem: AttributeError: 'AutoModelForCausalLMWithValueHead' object has no attribute 'generation_config'. Have you solved it?
TRL=0.15.2, transformers=4.49.0 | 2,718 | 1,336 |
rahulgithub | 2025-03-20T07:20:47 | TRL V 15.2
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[29], [line 3](vscode-notebook-cell:?execution_count=29&line=3)
[1](vscode-notebook-cell:?execution_count=29&line=1) from transformers import GenerationConfig
[2](vscode-notebook-cell:?execution_count=29&line=2) gpt2_model.generation_config = GenerationConfig()
----> [3](vscode-notebook-cell:?execution_count=29&line=3) ppo_trainer = PPOTrainer(
[4](vscode-notebook-cell:?execution_count=29&line=4) config, gpt2_tokenizer,model=gpt2_model, ref_model=gpt2_ref_model, train_dataset=dataset, reward_model= reward_model, data_collator=collator
[5](vscode-notebook-cell:?execution_count=29&line=5) )
File ~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:222, in PPOTrainer.__init__(self, args, processing_class, model, ref_model, reward_model, train_dataset, value_model, data_collator, eval_dataset, optimizers, callbacks, peft_config)
[220](https://file+.vscode-resource.vscode-cdn.net/Users/manu/Documents/work/rlhf/examples/~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:220) if module is not None:
[221](https://file+.vscode-resource.vscode-cdn.net/Users/manu/Documents/work/rlhf/examples/~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:221) disable_dropout_in_model(module)
--> [222](https://file+.vscode-resource.vscode-cdn.net/Users/manu/Documents/work/rlhf/examples/~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:222) self.model = PolicyAndValueWrapper(self.policy_model, self.value_model)
[223](https://file+.vscode-resource.vscode-cdn.net/Users/manu/Documents/work/rlhf/examples/~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:223) self.model.config = self.policy_model.config # needed for pushing to hub
[224](https://file+.vscode-resource.vscode-cdn.net/Users/manu/Documents/work/rlhf/examples/~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:224) self.create_optimizer_and_scheduler(
[225](https://file+.vscode-resource.vscode-cdn.net/Users/manu/Documents/work/rlhf/examples/~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:225) num_training_steps=args.num_total_batches
[226](https://file+.vscode-resource.vscode-cdn.net/Users/manu/Documents/work/rlhf/examples/~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:226) ) # note that we are calling `self.lr_scheduler.step()` manually only at the batch level
File ~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:89, in PolicyAndValueWrapper.__init__(self, policy, value_model)
[87](https://file+.vscode-resource.vscode-cdn.net/Users/manu/Documents/work/rlhf/examples/~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:87) self.policy = policy
[88](https://file+.vscode-resource.vscode-cdn.net/Users/manu/Documents/work/rlhf/examples/~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:88) self.value_model = value_model
---> [89](https://file+.vscode-resource.vscode-cdn.net/Users/manu/Documents/work/rlhf/examples/~/Documents/work/rlhf/.venv/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py:89) self.critic_backbone = getattr(value_model, value_model.base_model_prefix)
AttributeError: 'NoneType' object has no attribute 'base_model_prefix' | 2,718 | 1,337 |
HuggingFaceDocBuilderDev | 2025-01-31T09:45:18 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2717). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,717 | 1,338 |
August-murr | 2025-01-31T19:43:53 | #2723 | 2,715 | 1,339 |
HuggingFaceDocBuilderDev | 2025-01-31T08:58:22 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2713). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,713 | 1,340 |
percent4 | 2025-03-04T15:15:12 | +1. I want to use tools, such as Python code, to solve math problems with GRPO, but there is no support for this. I hope TRL can support GRPO with tool use. | 2,712 | 1,341 |
big-nico | 2025-03-07T03:56:10 | +1. Other RL training libraries like VeRL support this. Would be great to be able to use TRL for this. Right now the lack of multi-step agentic behavior really limits the agentic work you can do. | 2,712 | 1,342 |
pietergbosma | 2025-03-08T11:16:53 | I created this with Unsloth, check out https://discord.com/channels/1179035537009545276/1343968352875319336
| 2,712 | 1,343 |
cfpark00 | 2025-03-16T16:09:52 | @big-nico Verl supports this? can you show me where? Thank you! | 2,712 | 1,344 |
willccbb | 2025-03-17T12:42:10 | TRL-based project which supports tool calling + code execution: https://github.com/willccbb/verifiers | 2,712 | 1,345 |
fkxie | 2025-01-31T04:38:29 | Same here, still suffering from OOM when running a 7B model. | 2,709 | 1,346 |
tgaddair | 2025-01-31T05:42:57 | We were also noticing bottlenecks due to the number of generations. What we found was causing issues was the forward pass to compute logits for the reference model. One workaround is to break apart the forward pass into a series of smaller forward calls and then concatenate.
So going from:
```
def get_per_token_logps(model, input_ids, num_logits_to_keep):
# We add 1 to `num_logits_to_keep` because the last logits of the sequence is later excluded
logits = model(input_ids, num_logits_to_keep=num_logits_to_keep + 1).logits # (B, L, V)
logits = logits[:, :-1, :] # (B, L-1, V), exclude the last logit: it corresponds to the next token pred
# Compute the log probabilities for the input tokens. Use a loop to reduce memory peak.
per_token_logps = []
for logits_row, input_ids_row in zip(logits, input_ids[:, -num_logits_to_keep:]):
log_probs = logits_row.log_softmax(dim=-1)
token_log_prob = torch.gather(log_probs, dim=1, index=input_ids_row.unsqueeze(1)).squeeze(1)
per_token_logps.append(token_log_prob)
return torch.stack(per_token_logps)
```
To:
```
def get_per_token_logps(model, input_ids, num_logits_to_keep):
# Process input_ids in mini-batches of size 4
batch_size = input_ids.size(0)
    mini_batch_size = 4  # whatever the max you can handle; this could be made configurable
all_logits = []
for i in range(0, batch_size, mini_batch_size):
batch_end = min(i + mini_batch_size, batch_size)
mini_batch = input_ids[i:batch_end]
# We add 1 to `num_logits_to_keep` because the last logits of the sequence is later excluded
mini_batch_logits = model(mini_batch, num_logits_to_keep=num_logits_to_keep + 1).logits # (B, L, V)
all_logits.append(mini_batch_logits)
# Concatenate all mini-batch results
logits = torch.cat(all_logits, dim=0)
logits = logits[:, :-1, :] # (B, L-1, V), exclude the last logit: it corresponds to the next token pred
# Compute the log probabilities for the input tokens. Use a loop to reduce memory peak.
per_token_logps = []
for logits_row, input_ids_row in zip(logits, input_ids[:, -num_logits_to_keep:]):
log_probs = logits_row.log_softmax(dim=-1)
token_log_prob = torch.gather(log_probs, dim=1, index=input_ids_row.unsqueeze(1)).squeeze(1)
per_token_logps.append(token_log_prob)
return torch.stack(per_token_logps)
``` | 2,709 | 1,347 |
zaddy6 | 2025-01-31T09:19:36 | same issue | 2,709 | 1,348 |
zaddy6 | 2025-01-31T09:37:14 | > We were also noticing bottlenecks due to the number of generations. What we found was causing issues was the forward pass to compute logits for the reference model. One workaround is to break apart the forward pass into a series of smaller forward calls and then concatenate.
>
> [...]
This didn't seem to work for me on 8xH100 with a 3B model; still OOM. | 2,709 | 1,349 |
qgallouedec | 2025-01-31T10:06:27 | > I don't think that looping over the samples in the post-inference forward pass really solves this:
It doesn't solve it, but having this loop avoids the peak that you would otherwise have here.
> One workaround is to break apart the forward pass into a series of smaller forward calls and then concatenate.
It might avoid the big decoding peak indeed. Let me try to profile this.
--
Another option is grad checkpointing. See #2671
| 2,709 | 1,350 |
qgallouedec | 2025-01-31T10:09:38 | You can probably merge the two loops btw:
```python
def get_per_token_logps(model, input_ids, num_logits_to_keep):
batch_size = input_ids.size(0)
mini_batch_size = 4 # This could be made configurable
per_token_logps = []
for i in range(0, batch_size, mini_batch_size):
batch_end = min(i + mini_batch_size, batch_size)
mini_batch = input_ids[i:batch_end]
# Compute logits with an extra token
mini_batch_logits = model(mini_batch, num_logits_to_keep=num_logits_to_keep + 1).logits # (B, L, V)
# Exclude the last logit
mini_batch_logits = mini_batch_logits[:, :-1, :] # (B, L-1, V)
# Compute log probabilities
log_probs = mini_batch_logits.log_softmax(dim=-1)
# Select the relevant tokens
input_ids_trimmed = mini_batch[:, -num_logits_to_keep:]
token_log_probs = torch.gather(log_probs, dim=2, index=input_ids_trimmed.unsqueeze(2)).squeeze(2)
per_token_logps.append(token_log_probs)
return torch.cat(per_token_logps, dim=0)
``` | 2,709 | 1,351 |
qgallouedec | 2025-01-31T12:30:47 | Some profiling of `get_per_token_logps`. Here we use minibatches for the forward pass to reduce the memory peak.
```python
import torch
import time
from transformers import AutoModelForCausalLM
def get_per_token_logps_old(model, input_ids, num_logits_to_keep):
# We add 1 to `num_logits_to_keep` because the last logits of the sequence is later excluded
logits = model(input_ids, num_logits_to_keep=num_logits_to_keep + 1).logits # (B, L, V)
logits = logits[:, :-1, :] # (B, L-1, V), exclude the last logit: it corresponds to the next token pred
# Compute the log probabilities for the input tokens. Use a loop to reduce memory peak.
per_token_logps = []
for logits_row, input_ids_row in zip(logits, input_ids[:, -num_logits_to_keep:]):
log_probs = logits_row.log_softmax(dim=-1)
token_log_prob = torch.gather(log_probs, dim=1, index=input_ids_row.unsqueeze(1)).squeeze(1)
per_token_logps.append(token_log_prob)
return torch.stack(per_token_logps)
def get_per_token_logps_new(model, input_ids, num_logits_to_keep, mini_batch_size=4):
per_token_logps = []
batch_size = input_ids.size(0)
for i in range(0, batch_size, mini_batch_size):
batch_end = min(i + mini_batch_size, batch_size)
mini_input_ids = input_ids[i:batch_end]
mini_logits = model(mini_input_ids, num_logits_to_keep=num_logits_to_keep + 1).logits
mini_logits = mini_logits[:, :-1, :] # exclude the last logit: it corresponds to the next token pred
# Compute the log probabilities for the input tokens
log_probs = mini_logits.log_softmax(dim=-1)
labels = mini_input_ids[:, -num_logits_to_keep:].unsqueeze(2)
token_log_prob = torch.gather(log_probs, dim=2, index=labels).squeeze(2)
per_token_logps.append(token_log_prob)
return torch.cat(per_token_logps, dim=0)
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B").to("cuda")
# Simulate a batch of 8 sequences of length 512, where half is the prompt and half is the completion
input_ids = torch.randint(10, 100, (8, 512), device="cuda")
num_logits_to_keep = 256
# Call the old method
times = []
per_token_logps = get_per_token_logps_old(model, input_ids, num_logits_to_keep) # Warmup
for _ in range(10):
start = time.time()
per_token_logps = get_per_token_logps_old(model, input_ids, num_logits_to_keep)
times.append(time.time() - start)
print("Time taken (get_per_token_logps_old):", sum(times) / len(times))
# Call the new method
times = []
per_token_logps = get_per_token_logps_new(model, input_ids, num_logits_to_keep) # Warmup
for _ in range(10):
start = time.time()
per_token_logps = get_per_token_logps_new(model, input_ids, num_logits_to_keep)
times.append(time.time() - start)
print("Time taken (get_per_token_logps_new):", sum(times) / len(times))
```
We also profile the memory usage of the two methods.
```python
from pynvml import *
def print_gpu_utilization():
nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(handle)
print(f"GPU memory occupied: {info.used/1024**3:.2f} GB.")
per_token_logps = get_per_token_logps_old(model, input_ids, num_logits_to_keep)
print_gpu_utilization()
```
```python
import matplotlib.pyplot as plt
time_take = [0.205, 0.121, 0.102, 0.090]
ref_time = 0.088
memory = [15.2, 15.97, 17.14, 17.07]
ref_memory = 15.92
minibatch_sizes = [1, 2, 4, 8]
# Plot with 2 y axis: time and memory:
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.set_xlabel('minibatch_size')
ax1.set_ylabel('time', color=color)
ax1.plot(minibatch_sizes, time_take, color=color)
ax1.set_ylim(0, 0.3)
ax1.hlines(ref_time, 1, 8, colors='r', linestyles='dashed', label='ref_time')
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel('memory', color=color)
ax2.plot(minibatch_sizes, memory, color=color)
ax2.hlines(ref_memory, 1, 8, colors='b', linestyles='dashed', label='ref_memory')
ax2.set_ylim(0, 18)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout()
fig.savefig('plot.png')
```

Observation: the memory peak of the new method is bigger as soon as you use a minibatch size of 2. This probably means that the peak is not related to the decoder, but to the softmax.
| 2,709 | 1,352 |
qgallouedec | 2025-01-31T12:49:31 | I tried with this:
```python
def get_per_token_logps_new2(model, input_ids, num_logits_to_keep, mini_batch_size):
per_token_logps = []
batch_size = input_ids.size(0)
for i in range(0, batch_size, mini_batch_size):
batch_end = min(i + mini_batch_size, batch_size)
mini_input_ids = input_ids[i:batch_end]
mini_logits = model(mini_input_ids, num_logits_to_keep=num_logits_to_keep + 1).logits
mini_logits = mini_logits[:, :-1, :] # exclude the last logit: it corresponds to the next token pred
for logits_row, input_ids_row in zip(mini_logits, mini_input_ids[:, -num_logits_to_keep:]):
log_probs = logits_row.log_softmax(dim=-1)
token_log_prob = torch.gather(log_probs, dim=1, index=input_ids_row.unsqueeze(1)).squeeze(1)
per_token_logps.append(token_log_prob)
return torch.cat(per_token_logps, dim=0)
```
Not much better:

```python
import matplotlib.pyplot as plt
time_take1 = [0.205, 0.121, 0.102, 0.090]
time_take2 = [0.217, 0.120, 0.101, 0.089]
ref_time = 0.088
memory1 = [15.2, 15.97, 17.14, 17.07]
memory_2 = [15.13, 15.75, 16.65, 16.65]
ref_memory = 15.92
minibatch_sizes = [1, 2, 4, 8]
# Split the figure into two plots, one above the other
fig, ax = plt.subplots(2, 1)
# Set the size
fig.set_size_inches(4, 7)
colors = ["blue", "orange", "green", "red"]
ax[0].set_xlabel("Minibatch size")
ax[0].set_ylabel("time")
ax[0].plot(minibatch_sizes, time_take1, label="New 1")
ax[0].plot(minibatch_sizes, time_take2, label="New 2")
ax[0].set_ylim(0, 0.3)
ax[0].hlines(ref_time, 1, 8, linestyles="dashed", label="Old")
ax[0].tick_params(axis="y")
ax[0].set_title("Time (lower is better)")
ax[0].legend(loc="lower right")
ax[1].set_ylabel("memory")
ax[1].plot(minibatch_sizes, memory1, label="New 1")
ax[1].plot(minibatch_sizes, memory_2, label="New 2")
ax[1].hlines(ref_memory, 1, 8, linestyles="dashed", label="Old")
ax[1].set_ylim(0, 18)
ax[1].tick_params(axis="y")
ax[1].set_title("Memory (lower is better)")
ax[1].legend(loc="lower right")
fig.tight_layout()
fig.savefig("plot2.png")``` | 2,709 | 1,353 |
tgaddair | 2025-02-01T06:48:28 | Hey @qgallouedec, thanks for digging into this. We recently started using a larger model and did indeed start to run into OOMs during the `log_softmax` operation. However, this implementation (I believe the second one you tried) resolved the issue for us:
```
def get_per_token_logps(model, input_ids, num_logits_to_keep):
# Process input_ids in mini-batches of size 1 and compute log probs
batch_size = input_ids.size(0)
mini_batch_size = 1
per_token_logps = []
for i in range(0, batch_size, mini_batch_size):
batch_end = min(i + mini_batch_size, batch_size)
mini_batch = input_ids[i:batch_end]
# We add 1 to `num_logits_to_keep` because the last logits of the sequence is later excluded
mini_batch_logits = model(mini_batch, num_logits_to_keep=num_logits_to_keep + 1).logits # (B, L, V)
logits = mini_batch_logits[:, :-1, :] # (B, L-1, V), exclude the last logit
# Compute log probs for this mini-batch
log_probs = logits.log_softmax(dim=-1)
mini_batch_ids = mini_batch[:, -num_logits_to_keep:]
token_log_prob = torch.gather(log_probs, dim=2,
index=mini_batch_ids.unsqueeze(2)).squeeze(2)
per_token_logps.append(token_log_prob)
return torch.cat(per_token_logps, dim=0)
```
The memory benchmarking results look a little surprising to me. Are you running with `CUDA_LAUNCH_BLOCKING=1`? If not, I would suspect that the async execution might be throwing things off if you read the memory usage before execution has completed. Another way to work around that would be to call `torch.cuda.synchronize()` before calculating memory usage. It also looks like you're capturing final memory, not peak memory, which might be misleading if the `log_softmax` peaks higher in memory usage than the final output.
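For what it's worth, a small sketch of reading peak (rather than final) memory with PyTorch's built-in counters, reusing the names from the snippets above:

```python
import torch

torch.cuda.reset_peak_memory_stats()
per_token_logps = get_per_token_logps(model, input_ids, num_logits_to_keep)
torch.cuda.synchronize()  # wait for async kernels before reading the counter
print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```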
All that to say: I think this is a good change that would potentially benefit a number of people running into OOMs. Thanks for digging into it. | 2,709 | 1,354 |
andyl98 | 2025-02-04T04:04:10 | Agree with @tgaddair that this fix should indeed help resolve some OOM issues. I'll make a PR to integrate that change as well (flag-controlled). | 2,709 | 1,355 |
willccbb | 2025-02-12T00:43:47 | Closing as resolved by https://github.com/huggingface/trl/pull/2776
Much more elegant than what I was originally thinking with "nested grad accum", but can be used to basically achieve the same thing (for multi-GPU training at least).
Am now able to train 7B models at >4k context, thanks to this + several other recent optimizations.
Nice work @qgallouedec :) | 2,709 | 1,356 |
aboros98 | 2025-01-31T09:38:52 | The current implementation computes `per_token_logps` without masking the values corresponding to the padding tokens. This means padding tokens are included in the KL divergence calculation, which could slightly skew the results since padding isn't meaningful content.
The current code:
```python
per_token_logps = []
for logits_row, input_ids_row in zip(logits, input_ids[:, -num_logits_to_keep:]):
log_probs = logits_row.log_softmax(dim=-1)
token_log_prob = torch.gather(log_probs, dim=1, index=input_ids_row.unsqueeze(1)).squeeze(1)
per_token_logps.append(token_log_prob)
return torch.stack(per_token_logps)
```
Suggested change to mask out padding tokens:
```python
per_token_logps = []
for logits_row, input_ids_row in zip(logits, input_ids[:, -num_logits_to_keep:]):
log_probs = logits_row.log_softmax(dim=-1)
token_log_prob = torch.gather(log_probs, dim=1, index=input_ids_row.unsqueeze(1)).squeeze(1)
    mask = (input_ids_row != self.processing_class.pad_token_id)
    token_log_prob = token_log_prob * mask
per_token_logps.append(token_log_prob)
return torch.stack(per_token_logps)
```
This comment is being added here since it relates to KL divergence calculation, which is within the scope of this PR. If this understanding is incorrect, I would be very happy for someone to explain why 😄 | 2,708 | 1,357 |
qgallouedec | 2025-01-31T09:49:59 | thanks @andyl98, that's a very good point indeed. Can you build the attention mask out of the function instead? and not deduce it from the content of input_ids? in some edge cases, input ids may contain pad_token in the middle of the text. | 2,708 | 1,358 |
qgallouedec | 2025-01-31T09:52:51 | move this part
```python
# Mask everything after the first EOS token
is_eos = completion_ids == self.processing_class.eos_token_id
eos_idx = torch.full((is_eos.size(0),), is_eos.size(1), dtype=torch.long, device=device)
eos_idx[is_eos.any(dim=1)] = is_eos.int().argmax(dim=1)[is_eos.any(dim=1)]
sequence_indices = torch.arange(is_eos.size(1), device=device).expand(is_eos.size(0), -1)
completion_mask = (sequence_indices <= eos_idx.unsqueeze(1)).int()
```
before `get_per_token_logps`, then concat `prompt_inputs["attention_mask"]` and `completion_mask`
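i.e., roughly along these lines (a sketch; the variable names follow the trainer snippets quoted in this thread):

```python
# Build the completion mask first (code block above), then form the full attention
# mask by concatenating the prompt mask with it before computing per-token logps.
attention_mask = torch.cat([prompt_inputs["attention_mask"], completion_mask], dim=1)
```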
| 2,708 | 1,359 |
qgallouedec | 2025-01-31T09:55:13 | > This means padding tokens are included in the KL divergence calculation
No, because we mask here:
https://github.com/huggingface/trl/blob/265663af6a64c884e8cb4ec27530039748e61f9e/trl/trainer/grpo_trainer.py#L511-L512 | 2,708 | 1,360 |
andyl98 | 2025-01-31T16:43:29 | Thanks @qgallouedec. Fixed as you suggested.
Also, I changed the `completion_mask` dtype from `int` to `long`, as this is the default dtype for `attention_mask` (plus some variable names for clarity). Feel free to edit and merge. | 2,708 | 1,361 |
qgallouedec | 2025-01-31T18:55:35 | Btw, can you run small experiments so that we know the impact of not having this attention mask? If you don't have the compute/time I can handle it. 🙂 | 2,708 | 1,362 |
andyl98 | 2025-01-31T18:58:08 | It would be great if you can test it as I need to work on some other stuff rn, appreciate the reviews :) | 2,708 | 1,363 |
qgallouedec | 2025-01-31T19:16:23 | I will, thanks a lot for spotting this issue! | 2,708 | 1,364 |
HuggingFaceDocBuilderDev | 2025-01-31T19:21:06 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2708). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,708 | 1,365 |
qgallouedec | 2025-02-02T19:17:06 | very nice finding (green is this PR, red is main)
<img width="1064" alt="Screenshot 2025-02-02 at 20 16 30" src="https://github.com/user-attachments/assets/5d343f24-1b5f-4f8d-89a4-e7a11fff6414" />
| 2,708 | 1,366 |
kooryan | 2025-02-06T19:05:50 | Is anyone able to comment on this? | 2,707 | 1,367 |
Superskyyy | 2025-01-30T20:24:24 | Just a question: does the current vLLM-enabled trainer work on multi-node with DeepSpeed?
If we want to specify where to host it, the config should preferably allow appointing a dedicated node for pure inference. This is consistent with disaggregated training infrastructures. | 2,706 | 1,368 |
RicardoDominguez | 2025-02-12T14:20:00 | Splitting examples across multiple GPUs at generation time (i.e., data parallel) would be incredibly useful. Is there some bottleneck for getting this running? Happy to contribute. | 2,706 | 1,369 |
qgallouedec | 2025-02-12T15:08:55 | The current question is how to target the right devices for vLLM. Feel free to give a try, I wasn't able to make it work. | 2,706 | 1,370 |
cfpark00 | 2025-02-12T15:24:42 | Should we at least do this:
https://github.com/huggingface/trl/issues/2775
Or does vllm does this automatically? | 2,706 | 1,371 |
cfpark00 | 2025-02-12T15:26:15 | The relevant issue in vllm:
https://github.com/vllm-project/vllm/issues/3012
it seems like it wasn't straightforward to make it just a python option like:
devices=["cuda:2", "cuda:3"] | 2,706 | 1,372 |
qgallouedec | 2025-02-12T15:28:39 | > Or does vllm does this automatically?
I _think_ it should handle this automatically. | 2,706 | 1,373 |
qgallouedec | 2025-01-30T18:33:05 | Interesting question!
The answer is in the math. If you calculate the value of the loss (ignore the gradient), you'll see that it's equal to $\beta \mathrm{KL}$. That's why it starts at 0 and that's why it's increasing. | 2,703 | 1,374 |
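Making that explicit (a sketch of the loss value only, not its gradient, using the per-token loss form quoted later in this thread):

$$
\mathcal{L} = -\left( e^{\log \pi_\theta - \operatorname{sg}(\log \pi_\theta)}\,\hat{A} - \beta\,\mathrm{KL} \right) = -\hat{A} + \beta\,\mathrm{KL},
$$

because the ratio factor always evaluates to $e^{0} = 1$ numerically (only its gradient matters). Since the group-normalized advantages $\hat{A}$ average to approximately zero, the reported loss value reduces to roughly $\beta\,\mathrm{KL}$: zero at the first step, and growing as the policy drifts away from the reference model.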
arnavgarg1 | 2025-01-30T18:41:59 | Thanks for the prompt response @qgallouedec!
Does this mean that the loss itself is not a reliable indicator of training progression and we should primarily rely on KL and reward trends instead? | 2,703 | 1,375 |
qgallouedec | 2025-01-30T18:56:29 | You should rely mostly on the reward. And keep an eye on the generations (risk of reward hacking) | 2,703 | 1,376 |
NickyDark1 | 2025-02-02T03:24:30 | I trained this model:
https://huggingface.co/NickyNicky/Llama-1B-base-GRPO-miniThinky_v1
I have these training metrics:

I see that it is not necessary to wait long for it to come out of a value of 0. I also observe sudden changes when the generation range goes from 300 to 500 tokens; if it increases further, say to 1000 tokens to generate, that change alone can bring the rewards back to zero, where they stay. | 2,703 | 1,377 |
NickyDark1 | 2025-02-02T03:28:50 | Another model I trained:


I think token-generation changes affect it. | 2,703 | 1,378 |
XiaofengZHOU | 2025-02-05T07:04:31 | > > Interesting question!
> > The answer is in the math. If you calculate the value of the loss (ignore the gradient), you'll see that it's equal to β KL . That's why it starts at 0 and that's why it's increasing.
>
> Hi. I am a bit confused. From the implementation of `per_token_loss`, it should be `advantages`, as `torch.exp(0)=1`. So how could it be `\beta KL` for a number of steps? 🤔
>
> [trl/trainer/grpo_trainer.py, line 567 at commit af4ad47](https://github.com/huggingface/trl/blob/af4ad47035529164799be10f3fe558ee642a9880/trl/trainer/grpo_trainer.py#L567):
>
> `per_token_loss = torch.exp(per_token_logps - per_token_logps.detach()) * advantages.unsqueeze(1)`
I think it's not the same as the original GRPO algorithm (it's missing the ratio and the clamp). | 2,703 | 1,379 |
qgallouedec | 2025-02-05T07:35:49 | > I think it's not the same as the original GRPO algorithm(missing ration and clamp)
It is the same, since we do 1 optimization step | 2,703 | 1,380 |
XiaofengZHOU | 2025-02-05T08:51:56 | > > I think it's not the same as the original GRPO algorithm(missing ration and clamp)
>
> It is the same, since we do 1 optimization step
According to the equation, the loss == βKL, which would mean the bigger the KL, the better the performance?
So how does the reward work?
For example:
rewards = torch.tensor([0, 1, 0], dtype=torch.float32)
per_token_logps1 = [[-0.4, -0.3], [-0.6, -0.5], [-1, -1]]
per_token_logps2 = [[-0.6, -0.5], [-0.4, -0.3], [-1, -1]]
The losses calculated are both tensor(0.0062).
Can you point out where my problem is?
| 2,703 | 1,381 |
zhangsheng377 | 2025-02-11T16:36:51 | @qgallouedec Now I understand why the loss of grpo start from 0. But there is another question, why can training still proceed after the loss starts from 0? Is it because the GPU accumulates computational errors, resulting in KL not being 0, thus allowing the formal training to begin? | 2,703 | 1,382 |
littttttlebird | 2025-02-21T04:13:59 | > Interesting question!
>
> The answer is in the math. If you calculate the value of the loss (ignore the gradient), you'll see that it's equal to β KL . That's why it starts at 0 and that's why it's increasing.
The `num_generations` outputs o_1 ~ o_G have different lengths, so each A_{i,t} is accumulated a different number of times; the loss is therefore not exactly equal to β KL. | 2,703 | 1,383 |
qgallouedec | 2025-01-30T18:25:58 | More control on the generation does make sense. A reasonable way to allow for more control is probably to add more generation args in the GRPOConfig.
Are you willing to contribute? | 2,702 | 1,384 |
Superskyyy | 2025-01-30T19:30:11 | > More control on the generation does make sense. A reasonable way to allow for more control is probably to add more generation args in the GRPOConfig. Are you willing to contribute?
Yes I will contribute this feature. | 2,702 | 1,385 |
Benjoyo | 2025-01-31T06:30:51 | > > More control on the generation does make sense. A reasonable way to allow for more control is probably to add more generation args in the GRPOConfig. Are you willing to contribute?
>
> Yes I will contribute this feature.
Please add stop_strings or stopping criteria :)
Although I don't see why we wouldn't expose the full generation config, to avoid the next issue of this type in a few weeks. | 2,702 | 1,386 |
Superskyyy | 2025-01-31T14:14:16 | > > > More control on the generation does make sense. A reasonable way to allow for more control is probably to add more generation args in the GRPOConfig. Are you willing to contribute?
> >
> > Yes I will contribute this feature.
>
> Please add stop_strings or stopping criteria :)
>
> Although I don’t see why not exposing the full generation config to avoid the next issue of this type in a few weeks.
@qgallouedec wdyt? Maybe we should just directly expose the entire generation config, because there are all kinds of tricks that people might want to tune there. | 2,702 | 1,387 |
xzuyn | 2025-02-04T19:15:06 | Would love to see this in Online DPO as well. Currently it's hard-coded to [`do_sample=True, top_k=50, top_p=1`](https://github.com/huggingface/trl/blob/main/trl/trainer/online_dpo_trainer.py#L252C13-L287C14). | 2,702 | 1,388 |
Superskyyy | 2025-02-06T23:05:01 | Since we are doing vLLM now, this might need some extra design. Either we expose one of them (HF preferably) and convert it automatically into a vLLM config, or we allow two mutually exclusive configs to be supplied. But I guess explicit is better than implicit. | 2,702 | 1,389 |
qgallouedec | 2025-01-30T18:22:11 | If it can allow us to have more metrics, then it could make sense. Are you willing to contribute? | 2,701 | 1,390 |
qgallouedec | 2025-01-30T18:23:28 | Btw, the reported tok/sec would probably be misleading as it would not account for the generations | 2,701 | 1,391 |
Superskyyy | 2025-01-30T19:02:34 | > Btw, the reported tok/sec would probably be misleading as it would not account for the generations
Oh, that's correct. To avoid having to override the inner training loop where the metrics are calculated, do you think it can be done elsewhere, or can we use callbacks to patch the metrics up (i.e., maybe multiply by rollout counts) before they are reported? | 2,701 | 1,392 |
qgallouedec | 2025-01-30T19:10:28 | Right now, I don't know how to do that while keeping it simple | 2,701 | 1,393 |
Superskyyy | 2025-01-30T19:29:41 | > Right now, I don't know how to do that while keeping it simple
Ok I will try to figure it out. Thanks! | 2,701 | 1,394 |
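One possible shape for the callback-based patch-up discussed above (a sketch only: the `train_tokens_per_second` key and the scaling by `num_generations` are assumptions, and the Trainer only reports token throughput when `include_tokens_per_second=True`):

```python
from transformers import TrainerCallback


class RolloutThroughputCallback(TrainerCallback):
    """Rescale reported token throughput by the number of rollouts per prompt."""

    def __init__(self, num_generations: int):
        self.num_generations = num_generations

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs and "train_tokens_per_second" in logs:
            logs["train_tokens_per_second"] *= self.num_generations
```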
qgallouedec | 2025-01-30T17:34:54 | Nice! Can you try locally with Multi GPU / DeepSpeed ZeRO 1/2/3? If you don't have the hardware, I can do it.
| 2,700 | 1,395 |
qgallouedec | 2025-01-30T17:36:27 | In the DeepSeek-R1 paper, I think they sync the ref after each epoch, no? | 2,700 | 1,396 |
HuggingFaceDocBuilderDev | 2025-01-30T17:38:13 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2700). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,700 | 1,397 |
shirinyamani | 2025-01-30T17:44:23 | @qgallouedec
I do not have access to multi-GPU at the moment, unfortunately!
I can request access, but it might take a long time for them to assign a GPU to me.
As for the update, I think they do the update after one complete iteration (epoch), but I am not sure, because this way there might be a conflict: the default `ref_model_sync_steps` is 64, meaning the update of the `ref_model` will happen after that many steps, but one epoch will probably be a lot more than this (i.e., for the one-epoch scenario we would have to set `ref_model_sync_steps` to the number of steps in an entire epoch).
Maybe I am misunderstanding?
<img width="822" alt="Screenshot 2025-01-30 at 10 38 58 AM" src="https://github.com/user-attachments/assets/81593ed9-ed1a-459b-9df3-1b587f46c0a9" />
| 2,700 | 1,398 |
shirinyamani | 2025-01-30T17:58:25 | Note that this algorithm and the _ref_update_ discussion is from the [DeepSeekMath](https://arxiv.org/pdf/2402.03300) paper where they discussed the grpo math. but the question still remains!🤔 | 2,700 | 1,399 |
qgallouedec | 2025-01-30T18:18:40 | Don't bother with multi gpu, I'm go a test myself
I think we understand similarly. I'm wondering what the user would expect.
This soft update as implemented gives probably better results. But it doesn't match the paper.
Let me make some tests. I'll come back to you.
| 2,700 | 1,400 |
shirinyamani | 2025-02-03T15:41:45 | @qgallouedec Did you get to test this by any chance ? 🤔 | 2,700 | 1,401 |
qgallouedec | 2025-02-03T15:51:09 | Not yet, will do asap | 2,700 | 1,402 |
qgallouedec | 2025-02-04T20:40:45 | Actually I don't have time to test unfortunately, but I think it's really worth:
1. adding a param `ref_model_sync_epochs` to allow user reproduce precisely the method describe in the paper. Ideally, allow this value to be both int and and float in (0, 1)
2. run some experiments to check if
- the current default values make sense
- does it give significantly different results
Do you want to handle 1. @shirinyamani?
In the meantime I'll merge this one.
| 2,700 | 1,403 |
shirinyamani | 2025-02-04T22:20:48 | > Actually I don't have time to test unfortunately, but I think it's really worth:
>
> 1. adding a param `ref_model_sync_epochs` to allow the user to reproduce precisely the method described in the paper. Ideally, allow this value to be either an int or a float in (0, 1)
> Do you want to handle 1. @shirinyamani?

@qgallouedec
Sure, for brainstorming purposes let's break down our options:

**Goal:** have a param like `ref_model_sync_epochs` (an int, or a float in (0, 1) for a fraction of an epoch, e.g. 0.2) that lets the user update the ref_model once every X epochs?
**How to build:**
**Option 1:** override the current `on_step_end` method in the `SyncRefModelCallback` class to reflect what we want; it currently looks like this:
```python
class SyncRefModelCallback(TrainerCallback):
    def __init__(
        self,
        ref_model: Union[PreTrainedModel, torch.nn.Module],
        accelerator: Optional[Accelerator],
    ):
        self.accelerator = accelerator
        self.ref_model = ref_model

    @staticmethod
    def _sync_target_model(model, target_model, alpha):
        for target_param, copy_param in zip(target_model.parameters(), model.parameters()):
            target_param.data.mul_(1.0 - alpha).add_(copy_param.data, alpha=alpha)

    @staticmethod
    def sync_target_model(model, target_model, alpha):
        deepspeed_plugin = AcceleratorState().deepspeed_plugin
        if deepspeed_plugin is not None and deepspeed_plugin.zero_stage == 3:
            with deepspeed.zero.GatheredParameters(
                list(model.parameters()) + list(target_model.parameters()), modifier_rank=0
            ):
                if deepspeed.comm.get_rank() == 0:
                    SyncRefModelCallback._sync_target_model(model, target_model, alpha)
        else:
            SyncRefModelCallback._sync_target_model(model, target_model, alpha)

    def on_step_end(self, args, state, control, **kwargs):
        model: PreTrainedModel = kwargs["model"]
        if self.ref_model is not None and state.global_step % args.ref_model_sync_steps == 0:
            if self.accelerator:
                model = self.accelerator.unwrap_model(model)
            self.sync_target_model(model, self.ref_model, args.ref_model_mixup_alpha)
```
With the changes, it would be something like this:
```python
    def on_step_end(self, args, state, control, **kwargs):
        model: PreTrainedModel = kwargs["model"]

        # Steps per epoch (num_train_epochs may be a float, so cast to int)
        steps_per_epoch = int(state.max_steps // args.num_train_epochs)

        # Determine how often to sync, reading ref_model_sync_epochs from args
        # (same pattern as ref_model_sync_steps above)
        if isinstance(args.ref_model_sync_epochs, int):
            # Sync every `ref_model_sync_epochs` full epochs
            sync_every = args.ref_model_sync_epochs * steps_per_epoch
        elif isinstance(args.ref_model_sync_epochs, float):
            # A float in (0, 1) means a fraction of an epoch, e.g. 0.2 -> 5 syncs per epoch
            sync_every = max(1, int(args.ref_model_sync_epochs * steps_per_epoch))
        else:
            raise ValueError("ref_model_sync_epochs must be an int or a float")

        should_sync = state.global_step > 0 and state.global_step % sync_every == 0

        if self.ref_model is not None and should_sync:
            if self.accelerator:
                model = self.accelerator.unwrap_model(model)
            self.sync_target_model(model, self.ref_model, args.ref_model_mixup_alpha)
```
This might work if my understanding of `on_step_end` is correct! Since there was no docstring, my understanding of this method is that it checks whether the reference model should be synchronized based on `ref_model_sync_steps` and performs the synchronization if necessary.
**Option 2:** add `ref_model_sync_epochs` locally to the GRPO training loss update?
Thoughts? 💭
| 2,700 | 1,404 |
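For reference, if Option 1 landed, usage might look roughly like this (`ref_model_sync_epochs` is the proposed, hypothetical parameter; the other knobs are the ones this PR wires up):

```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="out",
    sync_ref_model=True,            # enable the SyncRefModelCallback
    ref_model_mixup_alpha=1.0,      # 1.0 = hard copy instead of a soft/EMA update
    ref_model_sync_epochs=0.5,      # hypothetical: resync the reference twice per epoch
)
```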
shirinyamani | 2025-02-05T00:16:14 | One more Question for you; @qgallouedec
If the ref_model is getting updated so frequently, would it be same as not having ref_model at all ? 🤔
| 2,700 | 1,405 |
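One way to see it, using the per-token KL estimator from the DeepSeekMath paper (the same estimator the GRPO loss uses):

```latex
\mathbb{D}_{\mathrm{KL}}\!\left[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right]
= \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}
- \log \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}
- 1
\;\longrightarrow\; 1 - 0 - 1 = 0
\quad \text{as } \pi_{\mathrm{ref}} \to \pi_\theta .
```

So in the limit of syncing every step (with `ref_model_mixup_alpha` close to 1), the KL penalty effectively vanishes and only the advantage term drives the update, which is exactly why the sync frequency matters.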
HuggingFaceDocBuilderDev | 2025-01-31T08:33:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2699). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,699 | 1,406 |
deter3 | 2025-01-31T06:22:18 | I am having similar problem without lora . As long as using vllm for generation , it might have such problem . My config is
```
training_args = GRPOConfig(
output_dir=output_dir,
run_name=run_name,
learning_rate=1e-6,
adam_beta1 = 0.9,
adam_beta2 = 0.99,
weight_decay = 0.1,
warmup_ratio = 0.1,
lr_scheduler_type='cosine',
logging_steps=1,
bf16=True,
per_device_train_batch_size=1,
gradient_accumulation_steps=2,
num_generations=6,
max_prompt_length=1500,
max_completion_length=256,# #786,
num_train_epochs=3,
save_steps=50,
save_strategy="steps",
max_grad_norm=0.1,
report_to="wandb",
log_on_each_node=False,
# Add your ds config here:
deepspeed="ds_config.json",
use_vllm=True,
vllm_device="cuda:0",
vllm_gpu_memory_utilization=0.4
#gradient_checkpointing=True
)
```
The error is
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/workspace/train_grpo.py", line 263, in <module>
[rank0]: trainer = GRPOTrainer(
[rank0]: ^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/trl/trainer/grpo_trainer.py", line 313, in __init__
[rank0]: self.llm = LLM(
[rank0]: ^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/utils.py", line 986, in inner
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/entrypoints/llm.py", line 230, in __init__
[rank0]: self.llm_engine = self.engine_class.from_engine_args(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/engine/llm_engine.py", line 517, in from_engine_args
[rank0]: engine = cls(
[rank0]: ^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
[rank0]: self.model_executor = executor_class(vllm_config=vllm_config, )
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/executor/executor_base.py", line 36, in __init__
[rank0]: self._init_executor()
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/executor/gpu_executor.py", line 35, in _init_executor
[rank0]: self.driver_worker.load_model()
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/worker/worker.py", line 155, in load_model
[rank0]: self.model_runner.load_model()
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/worker/model_runner.py", line 1096, in load_model
[rank0]: self.model = get_model(vllm_config=self.vllm_config)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
[rank0]: return loader.load_model(vllm_config=vllm_config)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/model_executor/model_loader/loader.py", line 366, in load_model
[rank0]: loaded_weights = model.load_weights(
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/model_executor/models/llama.py", line 594, in load_weights
[rank0]: return loader.load_weights(
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/model_executor/models/utils.py", line 237, in load_weights
[rank0]: autoloaded_weights = set(self._load_module("", self.module, weights))
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/vllm/model_executor/models/utils.py", line 226, in _load_module
[rank0]: raise ValueError(msg)
[rank0]: ValueError: There is no module or parameter named 'embed_tokens' in LlamaForCausalLM
```
| 2,698 | 1,407 |
zaddy6 | 2025-01-31T09:13:11 | same issue | 2,698 | 1,408 |
tchang1997 | 2025-02-04T18:47:14 | I've been experiencing something similar. I suspect the underlying issue is that when we LoRA-ify the model, the underlying model technically isn't the same model class as whatever we're training, but `PeftModel`. This can yield various mismatches when we try to actually load the weights via vLLM as-is, since the `state_dict` isn't in the expected format. So we need to merge the adapter weights for generation, then un-merge afterwards to continue training.
I'll note that I don't hit this error on the first training epoch; I hit this error on subsequent epochs when I try to load the partially-trained model into vLLM.
As a starting point, I've tried adding this check when we load the model into vLLM, and manually filtering out keys from the `state_dict`. I no longer get a ValueError about `base_model`, but different keys mismatch. Specifically on a Llama 3.1-8B model, **I get a KeyError on `layers.0.self_attn.qkv_proj.base_layer.weight`.**
**Full traceback:**
```
Traceback (most recent call last):
File "/home/ctrenton/repo_name/rl.py", line 183, in <module>
trainer.train()
File "/data2/ctrenton/uv/llm_server/lib/python3.12/site-packages/transformers/trainer.py", line 2171, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/data2/ctrenton/uv/llm_server/lib/python3.12/site-packages/transformers/trainer.py", line 2531, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data2/ctrenton/uv/llm_server/lib/python3.12/site-packages/transformers/trainer.py", line 3669, in training_step
inputs = self._prepare_inputs(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ctrenton/repo_name/grpo_trainer_patch.py", line 51, in _prepare_inputs
llm_model.load_weights(state_dict.items())
File "/data2/ctrenton/uv/llm_server/lib/python3.12/site-packages/vllm/model_executor/models/llama.py", line 565, in load_weights
return loader.load_weights(
^^^^^^^^^^^^^^^^^^^^
File "/data2/ctrenton/uv/llm_server/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 233, in load_weights
autoloaded_weights = set(self._load_module("", self.module, weights))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data2/ctrenton/uv/llm_server/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 194, in _load_module
yield from self._load_module(prefix,
File "/data2/ctrenton/uv/llm_server/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 171, in _load_module
loaded_params = module_load_weights(weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data2/ctrenton/uv/llm_server/lib/python3.12/site-packages/vllm/model_executor/models/llama.py", line 423, in load_weights
param = params_dict[name]
~~~~~~~~~~~^^^^^^
KeyError: 'layers.0.self_attn.qkv_proj.base_layer.weight'
```
Note that my `grpo_trainer_patch.py` file in the stack trace defines a subclass of `GRPOTrainer` with the change I describe below, and all other functions should be identical.
The error makes sense — there's no `base_layer` attribute in the corresponding parameter (I think vLLM's Llama model expects a key of the form `layers.0.self_attn.ATTN_COMPONENT.weight`, +/- some custom weight merging on vLLM's end?) — but I'm not completely sure if we can patch GRPOTrainer to avoid this.
Here's my attempt in case it's useful for debugging (you can copy-paste into the `if self.args.use_vllm` block in `_prepare_inputs`):
**`grpo_trainer_patch.py`:**
```
def _prepare_inputs(self, inputs: dict[str, Union[torch.Tensor, Any]]) -> dict[str, Union[torch.Tensor, Any]]:
    ....
    if self.args.use_vllm:
        # First, have main process load weights if needed
        if self.state.global_step != self._last_loaded_step:
            with unwrap_model_for_generation(model, self.accelerator) as unwrapped_model:
                if isinstance(unwrapped_model, PeftModel):
                    print("PeftModel detected -- merging first")
                    # check if LoRA adapter has learned anything (remove later)
                    unwrapped_model.merge_adapter()
                    raw_state_dict = unwrapped_model.base_model.model.state_dict()
                    state_dict = {}
                    for name, param in raw_state_dict.items():
                        if "lora_" not in name:
                            state_dict[name] = param
                else:
                    state_dict = unwrapped_model.state_dict()
            if self.accelerator.is_main_process:
                llm_model = self.llm.llm_engine.model_executor.driver_worker.model_runner.model
                # check weight shapes and change state_dict names?
                llm_model.load_weights(state_dict.items())
            with unwrap_model_for_generation(model, self.accelerator) as unwrapped_model:
                if isinstance(unwrapped_model, PeftModel):
                    unwrapped_model.unmerge_adapter()  # restore LoRA for further training
            self._last_loaded_step = self.state.global_step
```
When I inspect `unwrapped_model` after `merge_adapter()`, it ostensibly contains the right set of weights — for example, `model.layers[0].self_attn.o_proj.base_layer.weight` matches the vLLM model's `model.layers[0].self_attn.o_proj.weight`:
```
(Pdb) unwrapped_model.base_model.model.state_dict()["model.layers.0.self_attn.o_proj.base_layer.weight"]
tensor([[ 0.0093, -0.0042, -0.0165, ..., -0.0008, -0.0162, 0.0051],
[-0.0056, 0.0273, 0.0119, ..., -0.0070, -0.0036, 0.0067],
[-0.0048, -0.0005, 0.0055, ..., -0.0074, -0.0034, -0.0011],
...,
[ 0.0115, -0.0045, -0.0649, ..., 0.0181, -0.0013, 0.0061],
[-0.0159, 0.0058, -0.0059, ..., -0.0017, -0.0041, -0.0043],
[-0.0157, 0.0042, -0.0065, ..., -0.0004, -0.0004, 0.0014]],
device='cuda:0', dtype=torch.bfloat16)
...
(Pdb) self.llm.llm_engine.model_executor.driver_worker.model_runner.model.model.layers[0].self_attn.o_proj.weight
# same output as above
```
and the `PeftConfig` returns what I expect:
```
(Pdb) unwrapped_model.peft_config
{'default': LoraConfig(task_type='CAUSAL_LM', peft_type=<PeftType.LORA: 'LORA'>, auto_mapping=None, base_model_name_or_path='deepseek-ai/DeepSeek-R1-Distill-Llama-8B', revision=None, inference_mode=False, r=16, target_modules={'k_proj', 'down_proj', 'o_proj', 'up_proj', 'gate_proj', 'q_proj', 'v_proj'}, exclude_modules=None, lora_alpha=64, lora_dropout=0.05, fan_in_fan_out=False, bias='none', use_rslora=False, modules_to_save=None, init_lora_weights=True, layers_to_transform=None, layers_pattern=None, rank_pattern={}, alpha_pattern={}, megatron_config=None, megatron_core='megatron.core', loftq_config={}, eva_config=None, use_dora=False, layer_replication=None, runtime_config=LoraRuntimeConfig(ephemeral_gpu_offload=False), lora_bias=False)}
(Pdb) unwrapped_model.active_adapters
['default']
```
— but it appears that some LoRA modules aren't getting merged when I inspect the model layers:
```
(Pdb) unwrapped_model.base_model.model.model.layers[0].self_attn.o_proj
lora.Linear(
(base_layer): Linear(in_features=4096, out_features=4096, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=16, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=16, out_features=4096, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
```
I'm pretty new to `peft`, so it's possible I'm doing something very naive.
Dependency info:
* VLLM 0.7.1
* transformers 4.48.2
* peft 0.14.0
* trl built from source (checked this on SHA `bbdd6db17c49db813695d0a8bc0da7bf6b1bb88e` AND `1f344c9377d87cd348d92b78f27afea8e66563d7`). Note that on `1f344c9377d87cd348d92b78f27afea8e66563d7`, the vLLM loading logic is in `_prepare_inputs`; prior to that it was in `compute_loss`.
| 2,698 | 1,409 |
zaddy6 | 2025-02-06T00:11:09 | still same issue | 2,698 | 1,410 |
1JZER | 2025-02-08T02:46:30 | Same issue | 2,698 | 1,411 |