repo: stringclasses (1 value)
number: int64 (min 1, max 25.3k)
state: stringclasses (2 values)
title: stringlengths (min 1, max 487)
body: stringlengths (min 0, max 234k)
created_at: stringlengths (min 19, max 19)
closed_at: stringlengths (min 19, max 19)
comments: stringlengths (min 0, max 293k)
transformers
18,658
closed
Update run_clm_flax.py from single TPU worker to multiple TPU workers
Current code only works on one TPU worker. If there are multiple TPU workers, the data needs to be split across the workers first and then sharded to the local devices. The same issue applies to T5 language modeling with Flax: https://github.com/google/flax/discussions/2017 # What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-17-2022 00:50:41
08-17-2022 00:50:41
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18658). All of your documentation changes will be reflected on that endpoint.<|||||>WDYT @sanchit-gandhi?<|||||>Hey @congyingxia and sorry for the delay! We try to keep these examples as simple as possible. In that spirit, we have limited the example scripts to single host training (v3-8). Unfortunately, scaling up to multi-host training is non-trivial: it requires another driver VM to keep the TPU hosts synced and execute commands across TPUs in parallel. For this reason, we have currently omitted multi-host training/inference, and instead focussed on single host training. If you're interested in running training/inference on a pod, I would suggest looking at the repo https://github.com/huggingface/bloom-jax-inference, which details how inference for LLM's can be scaled up to an arbitrary number of TPU devices with MP + DP.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
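A minimal sketch of the data-splitting idea raised in this thread, assuming a JAX multi-host TPU setup; the `shard` helper is the one used in the Flax examples, while the function and batch-size names are illustrative:

```python
import jax
from flax.training.common_utils import shard

def get_host_batch(global_batch, per_host_batch_size):
    # Each host keeps only its own slice of the global batch, and only then
    # shards that slice across its local devices (the example scripts currently
    # only do the second step, which is why they assume a single TPU worker).
    start = jax.process_index() * per_host_batch_size
    host_batch = {k: v[start : start + per_host_batch_size] for k, v in global_batch.items()}
    return shard(host_batch)  # reshapes the leading dim to (local_device_count, ...)
```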
transformers
18,657
closed
Fix for issue #12182 to ensure that the tutorial for zero shot distillation works
# What does this PR do? The code for training models via zero-shot distillation was breaking because the `.map()` function was removing the _labels_ field from the dataset object. This PR fixes the issue by changing the way the tokenizer is called via the `map()` function. Fixes [https://github.com/huggingface/transformers/issues/12182](https://github.com/huggingface/transformers/issues/12182) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @patil-suraj @VictorSanh
08-16-2022 21:17:28
08-16-2022 21:17:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18657). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,656
closed
BigBird inference: same input data gives different outputs
### System Info - pytorch==1.10.2 - transformers==4.20.1 - python=3.9.7 - ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` from transformers import (BigBirdForQuestionAnswering, BigBirdTokenizer) import torch device = "cuda:0" if torch.cuda.is_available() else "cpu" tokenizer = BigBirdTokenizer.from_pretrained("model path ") model = BigBirdForQuestionAnswering.from_pretrained("model path") def question_answer(model,tokenizer,text, question, handle_impossible_answer=False): encoded_inputs = tokenizer(question, text, return_tensors="pt").to(device) start_positions = torch.tensor([1]).to((device), non_blocking=True) end_positions = torch.tensor([3]).to((device), non_blocking=True) bb_model = model.to((device), non_blocking=True) with torch.no_grad(): outputs = bb_model( **encoded_inputs, start_positions=start_positions, end_positions=end_positions, output_attentions=True) print(outputs) ##note this is a sample input chunks: chunks = ["Scikit-learn is a free software machine learning library for the Python programming language.", "It features various classification, regression and clustering algorithms including support-vector machines", "sklearn is a lib"] question = "what is sklearn?" for chunk in chunks: print(question_answer(chunk, question,handle_impossible_answer=False)) print("-----------------------------------------------------") print(question_answer(chunk, question,handle_impossible_answer=False)) print("------xxxxxxx-----------------------------------------------") ``` ### Expected behavior Every time for the exact same input data, the output tensors are the same for the first time. However, in the 2nd, 3rd runs and so on the tensors vary from the 1st run. run1 : ``` BigBirdForQuestionAnsweringModelOutput(loss=tensor(1000012.9375), start_logits=tensor([[-1.3296e+00, -1.0000e+06, -1.0000e+06, ..., -1.1439e+01, -1.0714e+01, -1.0375e+01]]), end_logits=tensor([[-3.0393e+00, -1.0000e+06, -1.0000e+06, ..., -8.3824e+00, -7.1354e+00, -5.7355e+00]]), pooler_output=None, hidden_states=None, attentions=(tensor([[[[1.7520e-01, 3.3716e-03, 1.7215e-03, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [3.4175e-01, 4.7108e-02, 2.2092e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [6.5068e-02, 6.3478e-01, 1.1994e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ..., [3.0409e-03, 6.9234e-05, 1.2098e-04, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [6.5139e-03, 5.8579e-05, 4.8735e-05, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [1.9445e-02, 2.0696e-04, 2.2188e-04, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00]], [[2.0401e-02, 9.3263e-04, 1.9891e-03, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [1.7520e-02, 1.4839e-02, 1.2475e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [7.2214e-02, 3.1718e-02, 1.0765e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ..., [2.3348e-03, 6.1570e-04, 8.4744e-04, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ... 
[9.8621e-02, 1.7792e-06, 4.8031e-06, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00]]]]))) ``` run 2: ``` BigBirdForQuestionAnsweringModelOutput(loss=tensor(1000012.9375), start_logits=tensor([[-1.3296e+00, -1.0000e+06, -1.0000e+06, ..., -1.1439e+01, -1.0714e+01, -1.0375e+01]]), end_logits=tensor([[-3.0393e+00, -1.0000e+06, -1.0000e+06, ..., -8.3824e+00, -7.1354e+00, -5.7355e+00]]), pooler_output=None, hidden_states=None, attentions=(tensor([[[[1.7520e-01, 3.3716e-03, 1.7215e-03, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [3.4175e-01, 4.7108e-02, 2.2092e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [6.5068e-02, 6.3478e-01, 1.1994e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ..., [3.0409e-03, 6.9234e-05, 1.2098e-04, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [6.5139e-03, 5.8579e-05, 4.8735e-05, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [1.9445e-02, 2.0696e-04, 2.2188e-04, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00]], [[2.0401e-02, 9.3263e-04, 1.9891e-03, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [1.7520e-02, 1.4839e-02, 1.2475e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [7.2214e-02, 3.1718e-02, 1.0765e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ..., [2.3348e-03, 6.1570e-04, 8.4744e-04, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ... [1.1374e-05, 2.7021e-06, 4.4213e-07, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00]]]]))) ``` run 3: ``` BigBirdForQuestionAnsweringModelOutput(loss=tensor(1000012.9375), start_logits=tensor([[-1.3296e+00, -1.0000e+06, -1.0000e+06, ..., -1.1439e+01, -1.0714e+01, -1.0375e+01]]), end_logits=tensor([[-3.0393e+00, -1.0000e+06, -1.0000e+06, ..., -8.3824e+00, -7.1354e+00, -5.7355e+00]]), pooler_output=None, hidden_states=None, attentions=(tensor([[[[1.7520e-01, 3.3716e-03, 1.7215e-03, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [3.4175e-01, 4.7108e-02, 2.2092e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [6.5068e-02, 6.3478e-01, 1.1994e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ..., [3.0409e-03, 6.9234e-05, 1.2098e-04, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [6.5139e-03, 5.8579e-05, 4.8735e-05, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [1.9445e-02, 2.0696e-04, 2.2188e-04, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00]], [[2.0401e-02, 9.3263e-04, 1.9891e-03, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [1.7520e-02, 1.4839e-02, 1.2475e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [7.2214e-02, 3.1718e-02, 1.0765e-02, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ..., [2.3348e-03, 6.1570e-04, 8.4744e-04, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ... [1.1374e-05, 2.7021e-06, 4.4213e-07, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00]]]]))) ``` Can you please look into this @ydshieh?
08-16-2022 16:11:39
08-16-2022 16:11:39
@NautiyalAmit Sure!<|||||>@NautiyalAmit The provided code snippet has several issues that it fails to run. The "model path " is not a valid model name on the Hub, and I don't know which one you tried. The call `print(question_answer(chunk, question,handle_impossible_answer=False))` has missing arguments which gives ```bash Traceback (most recent call last): File "/home/yih_dar_huggingface_co/transformers/run_bigbird.py", line 31, in <module> print(question_answer(chunk, question,handle_impossible_answer=False)) TypeError: question_answer() missing 2 required positional arguments: 'text' and 'question' ``` Could you fix the code snippet, please 🙏 . Thank you.<|||||>Please find the code with the correct args: ``` from transformers import (BigBirdForQuestionAnswering, BigBirdTokenizer) import torch device = "cuda:0" if torch.cuda.is_available() else "cpu" tokenizer = BigBirdTokenizer.from_pretrained("model path ") model = BigBirdForQuestionAnswering.from_pretrained("model path") device = "cuda:0" if torch.cuda.is_available() else "cpu" model = model.to((device), non_blocking=True) def question_answer(text, question, handle_impossible_answer=False): encoded_inputs = tokenizer(question, text, return_tensors="pt").to(device) start_positions = torch.tensor([1]).to((device), non_blocking=True) end_positions = torch.tensor([3]).to((device), non_blocking=True) # self.model.eval() bbmodel = model.to((device), non_blocking=True) with torch.no_grad(): # reduce memory consumption outputs = bbmodel( **encoded_inputs, start_positions=start_positions, end_positions=end_positions, output_attentions=True) print(outputs) ##note this is a sample input chunks: chunks = ["Scikit-learn is a free software machine learning library for the Python programming language.", "It features various classification, regression and clustering algorithms including support-vector machines", "sklearn is a lib"] question = "what is sklearn?" for chunk in chunks: print(question_answer(chunk, question,handle_impossible_answer=False)) print("-----------------------------------------------------") print(question_answer(chunk, question,handle_impossible_answer=False)) `print("------xxxxxxx-----------------------------------------------")` <|||||>Thanks @NautiyalAmit ! The following 2 lines would still fail . Could you specify the exact checkpoint name you used? Thanks. ```python tokenizer = BigBirdTokenizer.from_pretrained("model path ") model = BigBirdForQuestionAnswering.from_pretrained("model path") ```<|||||>Hi @ydshieh , you can check on base checkpoint 0 from: https://console.cloud.google.com/storage/browser/bigbird-transformer/pretrain/bigbr_base?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false<|||||>@NautiyalAmit I would like to help if the code snippet is self-contained, which means it should be able to run directly. (In some special case, I agree some manual actions might be necessary). Here the code snippet is incomplete (missing `model path `). Even the provided GCS link contains TF checkpoint files, which could not be loaded with the `.from_pretrained` method in `transformers` models. As you found the issue, you must have something that could run on your side. Please try to help us debug more easily in order to investigate the issue you encountered. I would guess you have used a model from [HuggingFace Hub](https://huggingface.co/models), with probably an official bigbird model checkpoint. 
But it would still be very nice if you can specify explicitly in the code snippet. Thank you. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
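A hedged debugging sketch for reports like this one: BigBird can be switched from block-sparse to full attention via its `attention_type` config field, which helps rule the sparse random-block pattern in or out as the source of run-to-run differences in the attention tensors (the logits in the report are identical across runs). The checkpoint name below is a public placeholder, not the reporter's local model path:

```python
import torch
from transformers import BigBirdForQuestionAnswering

torch.manual_seed(0)  # general reproducibility hygiene; does not change model weights

model = BigBirdForQuestionAnswering.from_pretrained(
    "google/bigbird-base-trivia-itc",  # placeholder public checkpoint
    attention_type="original_full",    # bypass block-sparse attention for comparison
)
model.eval()  # make sure dropout is disabled during the comparison
```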
transformers
18,655
closed
Generate: deprecate the use of model `config` as a source of defaults
EDIT: Updated with the discussion up to [2022/08/20](https://github.com/huggingface/transformers/issues/18655#issuecomment-1221047772) ## Why? A confusing part of `generate` is how the defaults are set. When a certain argument is not specified, we attempt to fetch it from the model `config` file. This makes `generate` unpredictable and hard to fully document (the default values change for each model), as well as a major source of issues :hocho: ## How? We have the following requirements: 1️⃣ The existing behavior can’t be removed, i.e., we must be able to use the model `config.json` as a source of generation parameters by default; 2️⃣ We do need per-model defaults -- some models are designed to do a certain thing (e.g. summarization), which requires a specific generation configuration. 3️⃣ Users must have full control over generate, with minimal hidden behavior. Ideally, we also want to: 4️⃣ Have separation of concerns and use a new `generate_config.json` to parameterize generation; A TL;DR of the plan consists in changing the paradigm from “non-specified `generate` arguments are overridden by the [model] configuration file” to “`generate` arguments will override the [generate] configuration file, which is always used”. With proper documentation changes and logging/warnings, the user will be aware of what's being set for `generate`. ### Step 1: Define a new generate config file and class Similar to the model config, we want a `.json` file to store the generation defaults. The class itself can be a very simplified version of `PretrainedConfig`, also with functionality to load/store from the hub. ### Step 2: Integrate loading generate config file in `.from_pretrained()` The generation configuration file should be loaded when initializing the model with a `from_pretrained()` method. A couple of things to keep in mind: 1. There will be a new `kwarg` in `from_pretrained`, `generate_config` (or `generation_config`? Leaning toward the former as it has the same name as the function); 2. It will default to `generate_config.json` (contrarily to the model `config`, which defaults to `None`). This will allow users to set this argument to `None`, to load a model with an empty generate config. Some users have requested a feature like this; 3. Because the argument can take a path, it means that users can store/load multiple generate configs if they wish to do so (e.g. to use the same model for summarization, creative generation, factual question-answering, etc) 🚀 5. Only models that can run `generate` will attempt to load it; 6. If there is no `generate_config.json` in the repo, it will attempt to initialize the generate configuration from the model `config.json`. This means that this solution will not change any `generate` behavior and will NOT need a major release 👼 7. To keep the user in the loop, log ALL parameters set when loading the generation config file. Something like the snippet below. 8. Because this happens at `from_pretrained()` time, logging will only happen at most once and will not be verbose. ``` `facebook/opt-1.3b` generate configuration loaded from `generate_config.json`. The following generation defaults were set: - max_length: 20 - foo: bar - baz: qux ``` ### Step 3: Generate uses the generate config class internally Instead of using the configuration to override arguments when they are not set, overwrite a copy of the generation config at `generate` time. I.e. instead of: ``` arg = arg if arg is not None else self.config.arg ... 
``` do ``` generate_config = self.generate_config.copy() generate_config.arg = arg if arg is not None ... ``` This change has three main benefits: 1. We can improve the readability of the code, as we gain the ability to pass configs around. E.g. [this function](https://github.com/huggingface/transformers/blob/30992ef0d911bdeca425969d210771118a5cd1ac/src/transformers/generation_utils.py#L674) won't need to take a large list of arguments nor to bother with their initialization. 2. Building `generate` argument validation *for each type of generation* can be built in simple functions that don't need ~30 arguments as input 🙃 3. The three frameworks (PT/TF/FLAX) can share functionality like argument validation, decreasing maintenance burden. ### Step 4: Document and open PRs with the generation config file Rewrite part of the documentation to explain that a generation config is ALWAYS used (regardless of having defaults loaded from the hub or not). Open Hub PRs to pull generate-specific parameters from `config.json` to `generate_config.json` ## Pros/Cons Pros: - Better awareness -- any `generate` default will be logged to the screen when loading a generate-compatible model; - Full control -- the users can choose NOT to load generation parameters or easily load a set of options from an arbitrary file; - Enables more readable `generate` code; - Enables sharing `generate`-related code across frameworks; - Doesn't need a major release. Cons: - Pulling the generate parameters into their own files won't happen everywhere, as merging the changes described in step 4 is not feasible for all models (e.g. due to unresponsive model owners); - Logging loaded defaults may not be enough to stop issues related to the default values, as the logs can be ignored; - Another config file (and related code) to maintain.
08-16-2022 15:31:25
08-16-2022 15:31:25
cc @patrickvonplaten <|||||>I like the idea of using a `use_config_defaults` a lot - think that's a great additional safety mechanism to ensure it's possible to keep backward compatibility. Also we were thinking about the idea of having a `generation_config.json` file that can optionally be passed to `generate` by the user and that includes all the default values that are set in the config at the moment. This would also make it easier to possible have multiple different generation configs. Some models like `bart-large`: https://huggingface.co/facebook/bart-large/blob/main/config.json#L45 always have certain generation parameters enabled by default and IMO it would be a smoother transition to help the user extract a `generation_config.json` from `config.json` and then always pass this config if present in the repo to `generate(...)` **instead** of forcing the user to always pass all those arguments to generate. With the config, we could do something like the following automatically: - User runs model repo with `generate`. We detect that no `generation_config.json` is present and that default generation params are used from `config.json` - We throw a warning that states "no generation config detected, we strongly advise you to run the following code snippet on your repo to create a `generate_config.json` file - We keep all the generation params in `config.json` though to keep backwards compatibility with `use_config_defaults` - However if a `generation_config.py` is present we always use this and do not look into the config - We have to make an exception with `max_length=20` because it's always set and we don't want to create a `generation_config.py` for all models Also happy to jump on a call to brainstorm about this a bit!<|||||>Fair point! 👍 From the comment above, let's consider the updated requirements: 1. Until `v5`, the default behavior can’t change, i.e., we will use the model `config.json` as a source of defaults; 2. From `v5` onwards, the default behavior is to use `generate_config.json` as a source of defaults; 3. The transition should be as smooth as possible — the users should be able to anticipate this transition, so nothing changes when we release the new major version; 4. We want to use defaults (many models are designed to do a certain thing) while also enabling power users to have full control over `generate`. ______________________ A solution that fits all requirements is the ability to specify where the defaults should be loaded from, with default paths controlled by us. With the aid of a script to create the new generation config file from the existing model config file, the transition should be smooth and users can anticipate any change. E.g. if we have a `generation_config_file` flag, defaulting to `None` and where a path in the model repo can be specified, then we could: - Set `generation_config_file="config.json"`, which would mimic the existing default behavior (and would be the default behavior for now); - Set `generation_config_file="generation_config.json"`, which would use the new config file for generation (which would be the default behavior in the future); - Set `generation_config_file` to ANY generation config path, so that power users can have multiple configurations for the same model; - Set `generation_config_file=False` (or other non-`None` falsy value) to not use any configuration at all. We seem to need two warnings ⚠️ : 1. 
[Needed because in `v5` we will be defaulting to a new config file, which may not exist in a user's model repo, and the model may have generation parameters in its config] If the configuration file does not exist, fall back to `config.json` and warn about it. We can quickly scan `config.json` to avoid raising a warning if it doesn't contain any generation argument; 2. [Needed because the default behavior will still be to use values from a config, and many users are not aware of it] If `generation_config_file` is not specifically set by the user, a warning should be raised if the config replaces any argument. Many configs don't replace any value. Both warnings can be avoided by specifying the `generation_config_file` argument. They may be a bit verbose, but I think verbosity (which can be shut down easily) is preferable to magic confusing behavior. The `max_length=20` default (and other similar defaults) can be easily added -- `max_length = max_length if max_length is not None else 20` after attempting to load the configs. We can add them to the argument's documentation (see below). __________________________________ 🤔 The only issue I have with this approach is that it is hell to document (similar to the current approach). Having "this argument defaults to X or to `config.argument`" for all arguments' documentation line is verbose and confusing, and users need to be aware that the configuration files play an important role. My suggestion here would be to make `generation_config_file` the second argument of `generate` (after `input_ids`), so that it becomes immediately clear that `generate` argument defaults can be set through a file. Then, I would remove further references to the config in the docs, relying on the warnings to remind the user of what's going on. I think it is clear by now that long docs don't avoid simple issues :( WDYT? (P.S.: will edit the issue after we settle on an approach :) )<|||||>Cool, I think this is going into a very nice direction! A couple more questions to think about: - Do we really want a `generate_config_file` keyword argument for `generate(...)` ? For me it would be much more intuitive to just have `config: Optional[Dict]` as an argument. This would then mean it requires the user to do one more step for a specific config: ```python generate_config = # load generation config from path model.generate(input_ids, config=generate_config) ``` - We could add a `config` argument to the init of `GenerationMixin` which would make backwards compatibility then very easy: - `from_pretrained(...)` would load either a `generation_config.json` or if not present a `config.json` and then set it as `self.generation_config = config` => then every generation model would have access to `self.generation_config` . In `generate` would could then add a `self.generate_config = config if config is not None else self.generate_config (the default one)` and then overwrite `self.generate_config` once more with if the user passes generate args into `generate` directly (e.g. model.generate(top_k=top_k)` - Overall I think we cannot really come around the fact the we need to store a config inside the model somewhere because it'd be a bit to me to load a config **upon calling generate**. E.g. `model.generate(..., generate_config="config.json")` would have to load a config which opens too many problems with internet connection etc.... - What type should `generation_config` be? Just a `dict` or let's maybe create a class for it (similar to `BloomConfig`). 
Creating its own class probably also helps with documentation -> What do you think? <|||||>@patrickvonplaten Agreed, the argument name is a bit too long 😅 However, if we decide to go the `GenerationMixin.__init__` route, we can't pick `config` -- `PreTrainedModel`, which inherits from `GenerationMixin`, uses a `config` argument for the model config. Perhaps `generation_config`? We could then do `.from_pretrained(foo, generation_config=bar)`. I love the ideas you gave around the config: 1. if it is part of the `__init__` and if we always attempt to load the new file format before falling back to the original config, it actually means we don't need to do a major release to build the final version of this updated configuration handling! No need to change defaults with a new release at all ❤️ ; 2. The idea of "arguments write into a config that is always used" as opposed to "config is used when no arguments are passed" is much clearer to explain. We gain the ability to pass config files around (as opposed to tens of arguments), and it also opens the door to exporting generation configurations; 3. Despite the above, we need to be careful with the overwrites: if a user calls `model.generate(top_k=top_k)` and then `model.generate(temperature=temperature)`, `top_k` should be the original config's `top_k`. Copies of objects are needed; 4. Agreed, having all downloads/file paths in the same place is helpful. Regarding `dict` vs `class` -- I'd go with `class` (or perhaps a simpler `dataclass`). Much easier to document and enforce correctness, e.g. check if the right arguments are being used with a certain generation type. __________________________ It seems like we are in agreement. Are there more issues we can anticipate?<|||||>Very nice summary @gante thanks for writing this all down - I agree with all the above point! @LysandreJik @sgugger and maybe @thomwolf could you take a quick look here? I think @gante and I have now an actionable plan for `generate()` and would be ready to open a PR. Before starting the PR, it would be nice if you could check if you generally agree with our comments here so that we're not totally on a different page before opening such a big PR. The PR will then take some time and require discussion, but I think we have a clear vision of what we want now<|||||>@patrickvonplaten @LysandreJik @sgugger @thomwolf -- I took the liberty of updating the issue at the top with the plan that originated from the discussion here (and also to structure the whole thing in my head better) :)<|||||>Thanks for the write-up! I think this is a much welcome change that will tremendously improve the way we use `generate`. Writing down some thoughts below. - Very personal, but I think `generation_config` sounds more explicit. `generate` is very understandable by us because we know what is the "generate method", but "generation config" sounds so much clearer to me than "generate config". - Would the generate config class be able to load from other models? i.e., could we load a generation config specific to `bart-large-cnn` in `t5-large`? Would we enforce model-specificity, or would we allow this to work? How would we handle model-specific attributes (maybe there aren't any, there seems to be only RAG that has its own `generate` method)? - Could we store multiple generation configs in the same repo? How would you handle a model that can have several generation configuration, for example a model such as a prefix-LM that could do both translation and summarization with the same checkpoint? 
The biggest work here will likely be education & documentation. I think this will already make things much clearer, but I suppose the much awaited generate method doc rework will be an absolute requirement after this refactor!<|||||>Agreed, the biggest issue is and will be education and documentation. Hopefully, this will make the process easier 🙏 - Regarding one or multiple generation config classes: there are two arguments in `generate` that are used with a limited number of (encoder-decoder) models, `forced_bos_token_id` and `forced_eos_token_id`. Additionally, there is one argument, `encoder_no_repeat_ngram_size`, that is only used in encoder-decoder models (and have no effect on decoder-only). The remaining **36** arguments are usable by all models. IMO, having a single class would make documentation (the key issue) much simpler, and model<>arguments verification can be done in the function (as it is done in the present). - Regarding multiple configs in the same repo: Yes that would be doable. According to the plan above, through the specification of a different `generation_config` files. But @LysandreJik raised a good point, as the name of the files containing the defaults for different tasks may not be immediately obvious to the users, which implies more documentation pain. Perhaps we can take the chance here to approximate `generate` to `pipeline`, which we know is user-friendly -- in the `pipeline`, [specific config parameters are loaded for each task](https://github.com/huggingface/transformers/blob/051311ff66e7b23bfcfc42bc514c969517323ce9/src/transformers/pipelines/base.py#L783) (here's an [example of a config with task-specific parameters](https://huggingface.co/facebook/bart-large-cnn/blob/main/config.json#L55)). We could use the exact same structure with the new `generation_config` files, where all task-specific arguments can be set this way, and `generate` could gain a new `task` argument. That way, there would be a higher incentive to set task-specific parameters that would work across the HF ecosystem (`generate` and `pipeline` for now, perhaps more in the future). ```python # with the `task` parameter, it is trivial to share the parameters for some desired behavior # When loading the model, the existence of task-specific options would be logged to the user. model = AutoModelForSeq2SeqLM.from_pretrained("...") input_prompt = ... task_tokens = model.generate(**input_prompt, task="my_task") # There would be an exception if `my_task` is not specified in the generation config file. ```<|||||>The plan looks good to me, but the devil will be in the details ;-) Looking forward to the PRs actioning this!<|||||>Closing -- `generation_config` is now the source of defaults for `.generate()`
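The closing comment refers to the `GenerationConfig` class that eventually shipped; a short illustration of that end state, with arbitrary parameter values and checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Generation defaults now live in their own config object / generation_config.json,
# and generate() arguments override a copy of it rather than the model config.
gen_config = GenerationConfig(max_new_tokens=20, do_sample=True, top_k=50)

inputs = tokenizer("A confusing part of generate is", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=gen_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```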
transformers
18,654
closed
Update TF fine-tuning docs
This PR updates the fine-tuning sidebar tutorial with modern TF methods that were added in the most recent release.
08-16-2022 15:16:23
08-16-2022 15:16:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>@stevhliu is there a better way to format a link to the `prepare_tf_dataset` docs [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18654/en/training#loading-data-as-a-tfdatadataset) than the way I did it here? It prints the whole `TFPreTrainedModel.prepare_tf_dataset` text, which looks a bit ugly on the page!<|||||>Yes, I believe you can just squiggly it! [`~TFPreTrainedModel.prepare_tf_dataset`]<|||||>@stevhliu Thank you for the suggestions! I'll make more edits to incorporate the other bits and ping you again for a final look.<|||||>@sgugger @stevhliu I finished incorporating your edits and did some other cleanup. I also replaced `to_tf_dataset` in the other fine-tuning pages. I didn't touch the translations, though - should I edit those too?<|||||>@sgugger I tried those edits but it looked a little odd because there was no separate header for the PyTorch section. I added a `Train` header to the whole thing with a brief intro, and then a `Train with PyTorch Trainer` header inside that block, which I think works a little better and makes it easier for people to find what they want in the sidebar. Let me know what you think!<|||||>@sgugger :facepalm: I knew I'd regret pinging you before waiting for the job to finish and checking it myself. Fixed!
transformers
18,653
closed
Generate: validate `model_kwargs` on FLAX (and catch typos in generate arguments)
# What does this PR do? FLAX version of https://github.com/huggingface/transformers/pull/18261 Adds model_kwargs validation to FLAX generate, which also catches typos in the arguments. See the PR above for more details and an example of the error message users will see.
08-16-2022 14:42:31
08-16-2022 14:42:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,652
closed
Add cross entropy loss with stable custom gradient
# What does this PR do? This PR adds Flax code for a cross entropy loss calculation with an additional term to stabilize gradients for bfloat16 training. The loss function is authored by the T5X authors (https://github.com/google-research/t5x/blob/90d74fa703075d8b9808ae572602bc48759f8bcc/t5x/losses.py#L25). It also adds `z_loss` as a training argument to the T5 Flax pre-training script. If z_loss > 0, then an auxiliary loss equal to z_loss * log(z)^2 will be added to the cross entropy loss (z = softmax normalization constant). The two uses of z_loss are: 1. To keep the logits from drifting too far from zero, which can cause unacceptable roundoff errors in bfloat16. 2. To encourage the logits to be normalized log-probabilities. While the z_loss function is only added to the T5 Flax pre-training script, this loss function might be interesting for other Flax pre-training scripts. I did not test this. Finally, there is a (currently unused) function `compute_weighted_cross_entropy` with z_loss and label smoothing, which might be useful for other Flax training scripts as well. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? No, but I tested with the T5 Flax Norwegian script, bfloat16 and z_loss set to 1e-4, the setting used in the T5 gin configs. In my own tests I've seen the following consistently: without z_loss and bfloat16, the loss will either diverge or converge on a higher plateau than training with float32. With z_loss and bfloat16 set to 1e-4, loss curves almost match curves with float32 training. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten, @patil-suraj
08-16-2022 14:32:43
08-16-2022 14:32:43
float32 compared with bfloat16 without and with z_loss ![image](https://user-images.githubusercontent.com/3098618/184906406-88786977-7a52-4f59-8319-c818c7c85050.png) <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18652). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @yhavinga, Thanks a lot for your PR. Could we maybe add this to the `examples/research_folder` instead of the official examples? The reason is that we won't have time to maintain this example and we would have need to check those loss curves on more than just Norwegian. Would that be ok for you? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Apologies for the long delay. In the meantime I've noticed that term added to the loss (z_loss * jax.lax.square(log_z)) might in fact be (similar to) L2 regularization, and that this kind of regularization might in fact already be available through Optax Adafactors weight_decay_rate parameter. I currently do not have access to TRC so cannot test this, but thought it might be interesting to others training with bfloat16 and run_t5_mlm_flax.py if they might hit this page.
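A minimal JAX sketch of the z_loss term described in this PR, following the T5X formulation it links to; shapes and the function name are illustrative:

```python
import jax
import jax.numpy as jnp

def cross_entropy_with_z_loss(logits, labels_onehot, z_loss=1e-4):
    # log_z is the log of the softmax normalization constant; penalizing
    # z_loss * log_z**2 keeps the logits from drifting away from zero,
    # which reduces bfloat16 roundoff error as discussed above.
    log_z = jax.scipy.special.logsumexp(logits, axis=-1)
    log_softmax = logits - log_z[..., None]
    cross_entropy = -jnp.sum(labels_onehot * log_softmax, axis=-1)
    return cross_entropy + z_loss * jnp.square(log_z)
```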
transformers
18,651
closed
Generate: validate `model_kwargs` on TF (and catch typos in generate arguments)
# What does this PR do? TF version of https://github.com/huggingface/transformers/pull/18261 Adds `model_kwargs` validation to TF `generate`, which also catches typos in the arguments. See the PR above for more details and an example of the error message users will see. Since TF had no dedicated file for `generate` tests, I took the liberty to create it and move some existing tests there (>70% of the diff is due to moving things around :) ). The test for this new check was also added there.
08-16-2022 13:28:04
08-16-2022 13:28:04
_The documentation is not available anymore as the PR was closed or merged._
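An illustration of what the validation in this family of PRs catches; the argument typo is deliberate and the error text is paraphrased:

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello", return_tensors="tf")

# "max_new_tokns" is a typo: instead of being silently ignored, it now raises
# something like: ValueError: The following `model_kwargs` are not used by the model: ['max_new_tokns']
model.generate(**inputs, max_new_tokns=16)
```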
transformers
18,650
closed
Allow users to force TF availability
We have a user report that with custom Tensorflow builds and package names that `_tf_available` can return `False` even if `import tensorflow` succeeds, because the user's package name isn't in the [allowed list](https://github.com/huggingface/transformers/blob/02b176c4ce14340d26d42825523f406959c6c202/src/transformers/utils/import_utils.py#L63L75). This is quite niche, so I don't want to do anything that could affect other users and workflows, but I added a `FORCE_TF_AVAILABLE` envvar that will skip version checks and just make sure TF is treated as available. @sgugger WDYT, or is there a better solution? Fixed #18642
08-16-2022 12:28:29
08-16-2022 12:28:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>Ah, additional nit: I'd document this somewhere.<|||||>@LysandreJik I think a lot of these envvars aren't documented anywhere - I can't see any documentation for USE_TF or USE_TORCH! Maybe we should make a separate docs PR with a list of envvars that `transformers` uses?<|||||>That would be fantastic :) Thanks for your contribution, merging!
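A small usage sketch of the escape hatch this PR adds; the variable has to be set before `transformers` is imported:

```python
import os

os.environ["FORCE_TF_AVAILABLE"] = "1"  # skip the TF package-name/version checks

import transformers  # TF is now treated as available, e.g. for custom TensorFlow builds
```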
transformers
18,649
closed
When resuming from checkpoint with Trainer using a streamed dataset, use the Datasets API to skip
### Feature request Huggingface Datasets has a feature where you can instantiate datasets in streaming mode, so you don't have to download the whole thing onto your machine. The API has a skip function. The Transformers Trainer doesn't use this, it just iterates through all the batches to be skipped. I propose Trainer checks whether the given dataset is a Datasets one in streaming mode, and if so, it uses the skip function. ### Motivation I've been using the C4 dataset in streaming mode because of its size. Whenever I resume from a checkpoint, it takes a long time to skip. Around an hour for 200k batches. With this change, it should be effectively instant, which would save me a lot of time. ### Your contribution I can make the change if no one else wants to. In which case, I'd like to be assured that the change will be reviewed and merged in a reasonable timeframe rather than being lost in the sea of pull requests.
08-16-2022 10:48:22
08-16-2022 10:48:22
WDYT @lhoestq?<|||||>Unfortunately the `skip` function does the same. Though if you call `skip` before `map` it won't apply the preprocessing on the examples to skip and save time. Therefore I don't think it using `skip` would have a big impact here<|||||>Thanks for your insight @lhoestq . Is there a technical reason the function is implemented that way? I assume random access is supported, given that shuffling of enormous datasets is allowed and fast.<|||||>Random access is not supported for streaming datasets. Shuffling is approximate by shuffling the dataset shards order, and using a shuffle buffer, see the documentation here: https://huggingface.co/docs/datasets/v2.4.0/en/stream#shuffle<|||||>Would it be possible to skip whole shards when a large number of samples need to be skipped?<|||||>In the general case we don't know in advance how many examples there are per shard (it depends on the data file format). In many cases we would need some extra metadata somewhere that says how many examples each shard contain. For example the C4 dataset is made out gzipped JSON Lines files - you don't know in advance how many examples each shard contain, because you need to uncompress the data and count the EOL. For certain file formats like Parquet or Arrow however, the number of examples is known for free, as metadata included in the file itself. So maybe for those specific formats we could do something<|||||>Ah, this needs deeper changes than I thought then. Still, the metadata would be nice to have. It could even be generated lazily when people stream the dataset, to avoid a bit upfront cost. Alternatively, the streamed dataset client could locally keep a cache of how many samples each shard it's iterated through contains. I'd prefer the former, so it doesn't matter if the cache is cleared or you switch machines.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
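For context, the `datasets` API discussed above looks like the sketch below; as noted in the thread, `skip` on a streaming dataset still iterates under the hood, it just avoids re-running `.map()` preprocessing on the skipped examples when called before `map`:

```python
from datasets import load_dataset

ds = load_dataset("allenai/c4", "en", split="train", streaming=True)
ds = ds.skip(200_000)  # call skip() before map() so preprocessing is not applied to skipped rows
ds = ds.map(lambda example: {"n_chars": len(example["text"])})
```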
transformers
18,648
closed
TF: Fix generation repetition penalty with XLA
# What does this PR do? There was a dynamic shape being fetched as a static shape, causing issues from the 2nd generation iteration. Fixes #18630
08-16-2022 10:34:17
08-16-2022 10:34:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,647
closed
Fix cost condition in DetrHungarianMatcher and YolosHungarianMatcher to allow zero-cost
# What does this PR do? Fixes costs condition in DetrHungarianMatcher and YolosHungarianMatcher. In https://github.com/huggingface/transformers/pull/16720 a bug was introduced while switching from asserts to conditions. Currently, any zero-cost will result in a ValueError: ```python if class_cost == 0 or bbox_cost == 0 or giou_cost == 0: raise ValueError("All costs of the Matcher can't be 0") ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik @sgugger based on reviewers of previous PR https://github.com/huggingface/transformers/pull/16720.
08-16-2022 09:41:41
08-16-2022 09:41:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>This looks reasonable given the error message! cc @NielsRogge
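A hedged sketch of the fix: the guard should only fire when every cost is zero, i.e. `and` instead of `or`:

```python
class_cost, bbox_cost, giou_cost = 1.0, 0.0, 5.0  # a zero for a single term is legitimate

# Buggy check introduced in #16720: any single zero cost raised
# if class_cost == 0 or bbox_cost == 0 or giou_cost == 0: ...

# Corrected condition: only reject the degenerate all-zero case
if class_cost == 0 and bbox_cost == 0 and giou_cost == 0:
    raise ValueError("All costs of the Matcher can't be 0")
```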
transformers
18,646
closed
[bnb] Small improvements on utils
# What does this PR do? Fixes a small typo in `bitsandbytes.py`, should address https://github.com/huggingface/blog/pull/463#discussion_r946067141 I will have to test it first and mark it as ready for review!
08-16-2022 09:07:26
08-16-2022 09:07:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>Can confirm the tests pass!<|||||>so will there always be just one module not to convert? won't it be safer to have modules instead and work with the list?<|||||>I have proposed a small refactoring that includes: - checking the list of modules to not convert instead of a single value. - changing an error message as it confused some user. Check: https://github.com/TimDettmers/bitsandbytes/issues/10 The bnb slow tests are passing with this fix!<|||||>From https://github.com/huggingface/transformers/issues/18660 I also just added a commit to support having a custom list of the keys to ignore <|||||>Thanks a lot @stas00 ! There is no rush at all for this PR, we can definitely wait for @sgugger before moving forward with it <|||||>Can confirm the bnb slow tests are passing with the proposed fixes! Would love to have a final round of review 💪 cc @sgugger @stas00 <|||||>Can confirm the slow tests pass after rebasing on `main`, will merge once it's green! 🟢
transformers
18,645
closed
[BLOOM] Update doc with more details
# What does this PR do? Addressing https://huggingface.co/bigscience/bloom/discussions/86 I think we should add the full list of trained languages to the documentation so that we can refer to it whenever a user has a question about the languages the model was trained on. cc @ydshieh @Muennighoff
08-16-2022 09:05:00
08-16-2022 09:05:00
cc @cakiki <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I would remove it, as it doesn't matter for the architecture, which these docs explain afaict? It's just an artefact of the data the models are trained on, hence it's arldy on the pretrained model & dataset READMEs. If someone were to release a BLOOM architecture model trained on different languages, this would be confusing imo. Also should probably say `Pre-trained BLOOM models were officially released in the following sizes:`, as theoretically it's available in whatever version/size someone wants, just need to train it from scratch <|||||>Not sure if the model doc is restricted to the architecture. - `T5` has `## Example scripts` section - Some models include `training` and `generation` sections, for example, `t5` or `m2m_100` - `Marian` has a `## Naming` section But I agree that we can probably just have something similar to `Marian` doc: ``` Note: The list of languages used in training can be found [here](link to model card page). ```<|||||>Okay I agree with both points! IMO we can just specify that this list of languages is only relevant for bloom models stated on the doc, and for custom bloom models one should refer to the corresponding model card. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,644
closed
Change scheduled CIs to use torch 1.12.1
# What does this PR do? To align with CircleCI tests.
08-16-2022 07:59:48
08-16-2022 07:59:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,643
closed
`AttributeError: 'BertTokenizer' object has no attribute 'tokens_trie'`
### System Info Google Colab ### Who can help? Anyone ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, I have trained a model with `transformers==4.9.1`. Now I want to run this saved model with the new version of transformers and I get this error: `AttributeError: 'BertTokenizer' object has no attribute 'tokens_trie'`. Looking at the transformers code, `tokens_trie` has been added, which was not in previous versions if I am correct. How can I solve this problem on my side? And is there any possibility that this compatibility problem can be handled in newer versions of transformers? ### Expected behavior None
08-16-2022 07:33:54
08-16-2022 07:33:54
@Narsil is the one who knows this best. This was added in PR #13220<|||||>@roetezadi How did you solve this?
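A hedged workaround sketch for this kind of version mismatch: re-create the tokenizer from its saved files with the currently installed `transformers` version instead of restoring an object pickled under 4.9.1 (the path is a placeholder):

```python
from transformers import BertTokenizer

# Loading from a save_pretrained() directory rebuilds the tokenizer object
# (including the tokens_trie attribute) with the installed transformers code.
tokenizer = BertTokenizer.from_pretrained("path/to/saved_tokenizer")
```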
transformers
18,642
closed
_tf_available for customized built tensorflow
### System Info n/a ### Who can help? @Rocketknight1 ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` File "virtualenv_mlu/lib/python3.8/site-packages/transformers/pipelines/base.py", line 212, in infer_framework_load_model raise RuntimeError( RuntimeError: At least one of TensorFlow 2.0 or PyTorch should be installed. To install TensorFlow 2.0, read the instructions at https://www.tensorflow.org/install/ To install PyTorch, read the instructions at https://pytorch.or ``` ### Expected behavior https://github.com/huggingface/transformers/blob/02b176c4ce14340d26d42825523f406959c6c202/src/transformers/utils/import_utils.py#L63L75 I built a tensorflow-xxu for our in-house accelerator and tried to run the transformer example. I got the RuntimeError indicating TF is not available. Currently `_tf_available` has a hard-coded candidate list. I'm not sure if extending the existing list is a good idea. Maybe adding some runtime flexibility would be better? Thanks Kevin
08-16-2022 07:31:33
08-16-2022 07:31:33
Hi @kevint324 I think it (extending the list) is fine if you work with a specific `transformers` version. But it would be a bit tedious if you want to use newer versions constantly. cc @Rocketknight1 for the idea regarding `adding some runtime flexibility`.<|||||>Changed the tag to `Feature request` instead :-)<|||||>Hi @kevint324, I filed a PR that might resolve this, but I want to check with other maintainers that it's okay before I merge it. In the meantime, can you try it out? Just run `pip install git+https://github.com/huggingface/transformers.git@allow_force_tf_availability`, then set the environment variable `FORCE_TF_AVAILABLE=1` before running your code, and it should skip those checks now.<|||||>Yes, it works. Thanks for the quick fix.
transformers
18,641
open
Add TF VideoMAE
### Feature request Add the [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) model in TensorFlow. ### Motivation There's an evident scarcity of SoTA and easy-to-use video models in TensorFlow. I believe having VideoMAE in TensorFlow would greatly benefit the community. ### Your contribution I am willing to contribute the model. Please assign it to me. @amyeroberts possible to assign this to me?
08-16-2022 01:49:01
08-16-2022 01:49:01
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Please reopen the issue as I am working on it. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>It's taking more time as I am in a job switch. Please reopen it. I apologise for the inconvenience.
transformers
18,640
closed
Finetune guide for semantic segmentation
This PR creates a finetune guide for semantic segmentation in the docs. Unlike previous finetune guides, this one will include: * metrics for evaluation * a section for how to use the model for inference * an embedded Gradio demo 🚧 To do: - [x] create section for inference - [ ] create Gradio demo
08-16-2022 01:16:17
08-16-2022 01:16:17
_The documentation is not available anymore as the PR was closed or merged._
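Where the guide above mentions "metrics for evaluation", the usual metric for semantic segmentation is mean IoU. A minimal sketch, assuming the `evaluate` library's `mean_iou` metric; the label maps below are toy arrays rather than real model output.

```python
import numpy as np
import evaluate

metric = evaluate.load("mean_iou")

# Toy 4x4 predicted and ground-truth segmentation maps with two classes.
pred = np.zeros((4, 4), dtype=np.int64)
ref = np.zeros((4, 4), dtype=np.int64)
ref[2:, 2:] = 1

results = metric.compute(
    predictions=[pred],
    references=[ref],
    num_labels=2,
    ignore_index=255,  # value commonly reserved for "void" pixels
)
print(results["mean_iou"], results["overall_accuracy"])
```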
transformers
18,639
closed
CLIP output doesn't match the official weight
### System Info I am using transformer 4.20.1. I found that the official CLIP model outputs don't match the hugging face one. ### Who can help? @NielsRogge @stevhliu ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction To install clip, run `pip install git+https://github.com/openai/CLIP.git` ```python import torch import clip from PIL import Image import requests device = "cpu" model, preprocess = clip.load("ViT-B/32", device=device) url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image = preprocess(image).unsqueeze(0).to(device) text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device) with torch.no_grad(): clip_image_features = model.encode_image(image) clip_text_features = model.encode_text(text) logits_per_image, logits_per_text = model(image, text) probs = logits_per_image.softmax(dim=-1).cpu().numpy() print("CLIP Label probs:", probs) # prints: [[0.9927937 0.00421068 0.00299572]] from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image probs = logits_per_image.softmax(dim=1) print("HF Label probs:", probs) # prints: [[0.9927937 0.00421068 0.00299572]] print(clip_text_features.shape) print(outputs.text_model_output.pooler_output.shape) print((clip_text_features - outputs.text_model_output.pooler_output).abs().max()) assert torch.allclose(clip_text_features, outputs.text_model_output.pooler_output, atol=1e-5) ``` ### Expected behavior The feature and probability should be the same.
08-15-2022 23:24:24
08-15-2022 23:24:24
Hi @xvjiarui, actually the output between original and hf is the same, but you compare with the wrong tensor. Firstly, we can compare the code between the original and hf (here we focus on the text features part). ``` # Original CLIP def encode_image(self, image): return self.visual(image.type(self.dtype)) def encode_text(self, text): x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model] x = x + self.positional_embedding.type(self.dtype) x = x.permute(1, 0, 2) # NLD -> LND x = self.transformer(x) x = x.permute(1, 0, 2) # LND -> NLD x = self.ln_final(x).type(self.dtype) # x.shape = [batch_size, n_ctx, transformer.width] # take features from the eot embedding (eot_token is the highest number in each sequence) x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection return x def forward(self, image, text): image_features = self.encode_image(image) text_features = self.encode_text(text) # normalized features image_features = image_features / image_features.norm(dim=1, keepdim=True) text_features = text_features / text_features.norm(dim=1, keepdim=True) # cosine similarity as logits logit_scale = self.logit_scale.exp() logits_per_image = logit_scale * image_features @ text_features.t() logits_per_text = logits_per_image.t() # shape = [global_batch_size, global_batch_size] return logits_per_image, logits_per_text ``` ``` # HF CLIP def forward( self, input_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, return_loss: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, CLIPOutput]: # Use CLIP model's config for some fields (if specified) instead of those of vision & text components. 
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict vision_outputs = self.vision_model( pixel_values=pixel_values, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) text_outputs = self.text_model( input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) image_embeds = vision_outputs[1] image_embeds = self.visual_projection(image_embeds) text_embeds = text_outputs[1] text_embeds = self.text_projection(text_embeds) # normalized features image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True) text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True) # cosine similarity as logits logit_scale = self.logit_scale.exp() logits_per_text = torch.matmul(text_embeds, image_embeds.t()) * logit_scale logits_per_image = logits_per_text.T loss = None if return_loss: loss = clip_loss(logits_per_text) if not return_dict: output = (logits_per_image, logits_per_text, text_embeds, image_embeds, text_outputs, vision_outputs) return ((loss,) + output) if loss is not None else output return CLIPOutput( loss=loss, logits_per_image=logits_per_image, logits_per_text=logits_per_text, text_embeds=text_embeds, image_embeds=image_embeds, text_model_output=text_outputs, vision_model_output=vision_outputs, ) ``` you may found out that the output of original clip `encode_text` method should equal to `text_embeds = self.text_projection(text_embeds)` in the hf repo. You can modify your script like this one to check it. ``` import torch import clip from PIL import Image import requests import numpy as np device = "cpu" model, preprocess = clip.load("ViT-B/32", device=device) url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image = preprocess(image).unsqueeze(0).to(device) text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device) with torch.no_grad(): clip_image_features = model.encode_image(image) clip_text_features = model.encode_text(text) logits_per_image, logits_per_text = model(image, text) org_probs = logits_per_image.softmax(dim=-1).cpu().numpy() print("CLIP Label probs:", org_probs) # prints: [[0.9927937 0.00421068 0.00299572]] from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") image = Image.open(requests.get(url, stream=True).raw) inputs = processor( text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True, ) outputs = model(**inputs) logits_per_image = outputs.logits_per_image hf_probs = logits_per_image.softmax(dim=1).detach().cpu().numpy() print("HF Label probs:", hf_probs) # prints: [[0.9927937 0.00421068 0.00299572]] print(np.allclose(org_probs, hf_probs)) print( torch.allclose( clip_text_features, model.text_projection(outputs.text_model_output.pooler_output).detach(), atol=1e-5, ) ) ```<|||||>Hi @aRyBernAlTEglOTRO thanks for your reply. I see. That makes sense.
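A shorter way to make the same comparison: `CLIPModel` exposes `get_text_features` and `get_image_features`, which already apply the text/visual projection heads and are therefore directly comparable to the original `encode_text` / `encode_image` outputs. A sketch for the text side:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding=True
)
with torch.no_grad():
    hf_text_features = model.get_text_features(**inputs)

# hf_text_features corresponds to the original model's encode_text output, e.g.
# torch.allclose(clip_text_features, hf_text_features, atol=1e-5) should hold.
```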
transformers
18,638
closed
Update run clm no trainer.py and run mlm no trainer.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes issue in selecting `no_decay` parameters, we need to exclude "layer_norm.weight" not "LayerNorm.weight" Fixes issue that `resume_step` will not be constructed properly when the user continues to train from a checkpoint with `gradient_accumulation_steps != 1` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-15-2022 21:49:16
08-15-2022 21:49:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @muellerzr, would you like to take a look at this while Sylvain is away?
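The root cause of the first fix above is that the `--no_trainer` scripts pick the no-weight-decay group via a hard-coded parameter-name string, which differs across models. A sketch of a naming-agnostic alternative, similar to what `Trainer` does, matching norm layers by module type (for models whose norm layers subclass `nn.LayerNorm`):

```python
import torch
from torch import nn
from transformers.trainer_pt_utils import get_parameter_names

def build_param_groups(model, weight_decay):
    # Parameters that should receive weight decay: everything except
    # LayerNorm weights (found by module type) and biases.
    decay_parameters = get_parameter_names(model, [nn.LayerNorm])
    decay_parameters = [name for name in decay_parameters if "bias" not in name]
    return [
        {
            "params": [p for n, p in model.named_parameters() if n in decay_parameters],
            "weight_decay": weight_decay,
        },
        {
            "params": [p for n, p in model.named_parameters() if n not in decay_parameters],
            "weight_decay": 0.0,
        },
    ]

# optimizer = torch.optim.AdamW(build_param_groups(model, weight_decay=0.01), lr=5e-5)
```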
transformers
18,637
closed
Update run_translation_no_trainer.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes issue in selecting `no_decay` parameters, we need to exclude "layer_norm.weight" not "LayerNorm.weight" Fixes issue that `resume_step` will not construct properly when the user continues to train from a checkpoint with `gradient_accumulation_steps != 1` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-15-2022 21:23:27
08-15-2022 21:23:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @muellerzr, would you like to take a look at this while Sylvain is away?<|||||>@zhoutang776 I believe these two are the same, no? https://github.com/huggingface/transformers/pull/18638<|||||>> @zhoutang776 I believe these two are the same, no? #18638 Yes, I guess other example files have the same problem, but I haven't checked any code besides these three files. <|||||>Okay! Will just merge this one then 😄 Thanks for the bugfix and nice find!
transformers
18,636
open
Support output_scores in XLA TF generate
### Feature request Support output_scores in XLA TF generate. ### Motivation Scores are critical and widely used with generation models in downstream applications, e.g. as confidence values for thresholding or ranking. But they are not yet supported with XLA TF generate. ### Your contribution N/A
08-15-2022 19:43:56
08-15-2022 19:43:56
cc @gante @sgugger <|||||>Hey @rossbucky 👋 The support for returning scores with XLA is in our plans. Like other XLA-related changes, it requires rewriting that part of the code to work with static shapes, as opposed to lists of tensors that grow as generate runs. The next round of `generate` changes will focus on correctness (e.g. adding informative error messages). This feature will come right after that -- I'm keeping this issue open to track it publicly. <|||||>Hi, just curious if there is any update on this issue? Or, are there any known alternatives for easily getting confidence scores w/ XLA enabled, particularly for greedy decoding? It seems beam_search outputs a `sequences_score` when `return_dict_in_generate` is `True`, but don't see anything similar for greedy decoding.<|||||>No updates :) I have yet to build the appropriate piping to update and return those variables
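Until scores are returned on the XLA path, one possible workaround is to keep the XLA-compiled `generate` call and score the produced tokens with a single extra forward pass. This is only a sketch (model and tokenizer settings follow the issue's script), and it re-scores with the raw model distribution, so it is not identical to per-step scores after processors such as `repetition_penalty` are applied.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
model.config.pad_token_id = model.config.eos_token_id

inputs = tokenizer("repetition_penalty error", return_tensors="tf")
xla_generate = tf.function(model.generate, jit_compile=True)
sequences = xla_generate(**inputs, max_new_tokens=16)

# Logits at position i predict token i + 1, so shift by one when gathering.
logits = model(sequences).logits
probs = tf.nn.softmax(logits[:, :-1, :], axis=-1)
token_probs = tf.gather(probs, sequences[:, 1:], batch_dims=2)

prompt_len = inputs["input_ids"].shape[1]
confidences = token_probs[:, prompt_len - 1 :]  # probabilities of the generated tokens
```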
transformers
18,635
closed
Passing optimizer to Trainer constructor does not work
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu102 (False) - Tensorflow version (GPU?): 2.8.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: TPU (used on the platform, nothing specific in the script) - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the following script, which passes an optimizer object to `Trainer`. ``` import numpy as np import site import torch from datasets import load_dataset, load_metric from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments, set_seed from transformers.trainer_pt_utils import get_parameter_names PASS_OPTIMIZER_TO_TRAINER = True MODEL_NAME = 'albert-large-v2' TASK = 'rte' MAX_SEQ_LENGTH = 128 EPOCHS = 1 LEARNING_RATE = 2e-5 SEED = 10000 OPTIMIZER = 'adamw_torch' OUTPUT_DIR = 'output' train_args = TrainingArguments(num_train_epochs=EPOCHS, learning_rate=LEARNING_RATE, seed=SEED, optim=OPTIMIZER, output_dir=OUTPUT_DIR, overwrite_output_dir=True, evaluation_strategy='epoch', do_eval=True, full_determinism=True) set_seed(SEED) model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) raw_datasets = load_dataset("glue", TASK) metric = load_metric("glue", TASK) def compute_metrics(p): preds = p.predictions preds = np.argmax(p.predictions, axis=1) return metric.compute(predictions=preds, references=p.label_ids) def preprocess_function(examples): # Tokenize the texts args = ( (examples['sentence1'], examples['sentence2']) ) return tokenizer(*args, padding="max_length", max_length=MAX_SEQ_LENGTH, truncation=True) raw_datasets = raw_datasets.map( preprocess_function, batched=True) train_dataset = raw_datasets["train"] eval_dataset = raw_datasets["validation"] if not PASS_OPTIMIZER_TO_TRAINER: trainer = Trainer( model=model, args=train_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics, tokenizer=tokenizer) # Create adamw_torch optimizer manually decay_parameters = get_parameter_names(model, [torch.nn.LayerNorm]) decay_parameters = [name for name in decay_parameters if "bias" not in name] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if n in decay_parameters], "weight_decay": train_args.weight_decay, }, { "params": [p for n, p in model.named_parameters() if n not in decay_parameters], "weight_decay": 0.0, }, ] optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=train_args.learning_rate, betas=(train_args.adam_beta1, train_args.adam_beta2), eps=train_args.adam_epsilon) if PASS_OPTIMIZER_TO_TRAINER: trainer = Trainer( model=model, args=train_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics, tokenizer=tokenizer, optimizers=(optimizer, None)) else: #trainer.optimizer = optimizer pass trainer.train() ``` The model fails to train. Also the training pass runs at about 2x the normal speed. 
If the variable `PASS_OPTIMIZER_TO_TRAINER` is now set to `False`, the `Trainer` creates its optimizer based on `train_args`, which should be identical to the manually created one. However, now the training is successful. I'm guessing that after passing `model` into the `Trainer` constructor it gets modified and the `optimizer` parameters are no longer valid. This is because in the script (with `PASS_OPTIMIZER_TO_TRAINER = False`) if line -4 is uncommented it has no effect, indicating that `optimizer` is now the same as `trainer.optimizer`. ### Expected behavior The script should work properly as is. It should have identical results whether `PASS_OPTIMIZER_TO_TRAINER` is `True` or `False`. If my guess is correct, then I don't see how the optimizer argument of `Trainer` can accept an optimizer object. But then that creates issues for anyone wanting to use `Trainer` with a custom optimizer.
08-15-2022 19:18:10
08-15-2022 19:18:10
cc @muellerzr, would you like to take a look at this while Sylvain is on leave?<|||||>@quantitative-technologies your issue here is how you have your if/else setup. I believe by reinstantiating an optimizer always even if we don't use it in the trainer, the model gets linked to that optimizer instead (or too?), and as a result you aren't training well. Here's my refactor of your code, and below shows that when doing `PASS_OPTIMIZER_TO_TRAINER` = `True` or `False`, I get the exact same results (bar the timing ever so slightly) ```python import numpy as np import site import torch from datasets import load_dataset, load_metric from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments, set_seed from transformers.trainer_pt_utils import get_parameter_names PASS_OPTIMIZER_TO_TRAINER = True MODEL_NAME = 'albert-large-v2' TASK = 'rte' MAX_SEQ_LENGTH = 128 EPOCHS = 1 LEARNING_RATE = 2e-5 SEED = 10000 OPTIMIZER = 'adamw_torch' OUTPUT_DIR = 'output' train_args = TrainingArguments( num_train_epochs=EPOCHS, learning_rate=LEARNING_RATE, seed=SEED, optim=OPTIMIZER, output_dir=OUTPUT_DIR, overwrite_output_dir=True, evaluation_strategy='epoch', do_eval=True, full_determinism=True ) set_seed(SEED) model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) raw_datasets = load_dataset("glue", TASK) metric = load_metric("glue", TASK) def compute_metrics(p): preds = p.predictions preds = np.argmax(p.predictions, axis=1) return metric.compute(predictions=preds, references=p.label_ids) def preprocess_function(examples): # Tokenize the texts args = ( (examples['sentence1'], examples['sentence2']) ) return tokenizer(*args, padding="max_length", max_length=MAX_SEQ_LENGTH, truncation=True) raw_datasets = raw_datasets.map( preprocess_function, batched=True ) train_dataset = raw_datasets["train"] eval_dataset = raw_datasets["validation"] if not PASS_OPTIMIZER_TO_TRAINER: trainer = Trainer( model=model, args=train_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics, tokenizer=tokenizer ) else: # Create adamw_torch optimizer manually decay_parameters = get_parameter_names(model, [torch.nn.LayerNorm]) decay_parameters = [name for name in decay_parameters if "bias" not in name] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if n in decay_parameters], "weight_decay": train_args.weight_decay, }, { "params": [p for n, p in model.named_parameters() if n not in decay_parameters], "weight_decay": 0.0, }, ] optimizer = torch.optim.AdamW( optimizer_grouped_parameters, lr=train_args.learning_rate, betas=(train_args.adam_beta1, train_args.adam_beta2), eps=train_args.adam_epsilon ) trainer = Trainer( model=model, args=train_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics, tokenizer=tokenizer, optimizers=(optimizer, None) ) trainer.train() ``` With passing: ```python {'epoch': 1.0, 'train_loss': 0.5830265436417017, 'train_runtime': 206.1904, 'train_samples_per_second': 12.076, 'train_steps_per_second': 1.513} ``` Without passing: ```python {'epoch': 1.0, 'train_loss': 0.5830265436417017, 'train_runtime': 205.4122, 'train_samples_per_second': 12.122, 'train_steps_per_second': 1.519} ```<|||||>@muellerzr I don't believe that was the issue. My if/else structure was convoluted because I was trying to make an additional point besides the bug. 
Now when I run the exact code you sent, I am still seeing the original problem: ``` {'train_runtime': 50.4719, 'train_samples_per_second': 49.334, 'train_steps_per_second': 6.182, 'train_loss': 0.7272015596047426, 'epoch': 1.0} ``` So it is not learning. Also mine is running ~4x faster than yours, though we may be using different hardware. I am using a Colab instance with a TPU (using a single-core, i.e. not distributed for the purpose of the bug report). Could the TPU be an issue here? Can you test it out on a TPU?<|||||>@quantitative-technologies I could recreate your issue once I had torch-xla installed. Looking into it<|||||>@quantitative-technologies the issue is described in this xla issue: https://github.com/pytorch/xla/issues/3675#issuecomment-1171702988 Essentially you need to place the model on the device yourself first, then create the optimizer. After doing this I got the exact same results. *Super* subtle bug here. To do so, add the following lines to your code: ```python import torch_xla.core.xla_model as xm # In the if/else of to create the optimizer yourself # Create adamw_torch optimizer manually model = model.to(xm.xla_device()) # <- We need to move the model to the device *before* creating the optimizer decay_parameters = get_parameter_names(model, [torch.nn.LayerNorm]) decay_parameters = [name for name in decay_parameters if "bias" not in name] ... ``` <|||||>> @quantitative-technologies I could recreate your issue once I had torch-xla installed. Looking into it Right, I should have mentioned that `torch-xla` was installed in the report.<|||||>> @quantitative-technologies the issue is described in this xla issue: [pytorch/xla#3675 (comment)](https://github.com/pytorch/xla/issues/3675#issuecomment-1171702988) > > Essentially you need to place the model on the device yourself first, then create the optimizer. After doing this I got the exact same results. OK, I see. I was impatient and made my own solution, by subclassing `Trainer` with an `optimizers_init` function argument that creates the optimizer from the `Trainer`'s model and the `args`. This resolved the issue, since `Trainer` places the model for you. I'm not sure if this is perhaps an improved design. Actually, having an `lr_scheduler_init` definitely makes more sense -- I didn't implement it yet though. This is because the `Trainer` does the calculation for the number of training steps which is needed to build the `lr_regularizer` anyhow. If there is interest updating the `Trainer` I can submit a push request. Here is my code for my subclass: ``` class TrainerOptimizerInit(Trainer): """ Args: optimizers_init (`Tuple[Callable[[Union[PreTrainedModel, nn.Module], TrainingArguments], torch.optim.Optimizer], torch.optim.lr_scheduler.LambdaLR]`, *optional*): A tuple containing (1) a function that is used to create an optimizer from the `model` and `args`, and (2) the scheduler to use. Will default to an instance of [`AdamW`] on your model and a scheduler given by [`get_linear_schedule_with_warmup`] controlled by `args`. 
""" def __init__( self, model: Union[PreTrainedModel, nn.Module] = None, args: TrainingArguments = None, data_collator: Optional[DataCollator] = None, train_dataset: Optional[Dataset] = None, eval_dataset: Optional[Dataset] = None, tokenizer: Optional[PreTrainedTokenizerBase] = None, model_init: Callable[[], PreTrainedModel] = None, compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None, callbacks: Optional[List[TrainerCallback]] = None, optimizers_init: Tuple[Callable[[Union[PreTrainedModel, nn.Module], TrainingArguments], torch.optim.Optimizer], torch.optim.lr_scheduler.LambdaLR] = (None, None), preprocess_logits_for_metrics: Callable[[torch.Tensor, torch.Tensor], torch.Tensor] = None, ): super().__init__(model=model, args=args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset, tokenizer=tokenizer, model_init=model_init, compute_metrics=compute_metrics, callbacks=callbacks, preprocess_logits_for_metrics=preprocess_logits_for_metrics) self.optimizer_init, self.lr_scheduler = optimizers_init def create_optimizer(self): """ Setup the optimizer. We provide a reasonable default that works well. If you want to use something else, you can subclass and override this method in a subclass. """ opt_model = self.model_wrapped if is_sagemaker_mp_enabled() else self.model if self.optimizer is None: if self.optimizer_init is None: # fall back to original behaviour return super().create_optimizer() self.optimizer = self.optimizer_init(opt_model, self.args) if is_sagemaker_mp_enabled(): self.optimizer = smp.DistributedOptimizer(self.optimizer) return self.optimizer ``` <|||||>That'd be up to @sgugger, who is OOF until the 30th on holiday 😄 I'll make sure he sees this though when he's back!<|||||>We don't really want to add other init functions. For any specific behavior, users should subclass the `create_optimizer` function directly.<|||||>I am confused. Should the optimizer be passed to TrainingArguments or to the Trainer?
transformers
18,634
closed
Longt5 fix link in docs
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-15-2022 18:01:38
08-15-2022 18:01:38
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,633
closed
Fix typo in add_new_model_like.py
(feel free to ignore; trying to see if this triggers CI successfully as it's not working for me on other HF repos)
08-15-2022 16:51:01
08-15-2022 16:51:01
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @mathemakitten, let's try to solve the CircleCI issue! Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>Hi @LysandreJik, thanks for the link — I refreshed the user permissions in the CircleCI web interface and tests seem to trigger as expected now for my PRs!<|||||>Glad to hear it! Would you mind pushing a new commit (empty or not) to this branch so that it runs the check here and we can merge?
transformers
18,632
closed
Examples: add Bloom support for token classification
Hi, this PR adds support for fine-tuning Bloom models for token classification tasks. After trying out the current example, I got an error message from the tokenizer saying that `add_prefix_space=True` needs to be set. So with this PR, Bloom can be used in the token classification example for PyTorch (there's no support for TensorFlow/Flax yet).
08-15-2022 16:35:45
08-15-2022 16:35:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @younesbelkada <|||||>Hi @younesbelkada , sure no problem, here are the steps (fresh environment): ```bash git clone https://github.com/huggingface/transformers.git cd transformers/ pip3 install -e . pip3 install seqeval evaluate cd examples/pytorch/token-classification python3 run_ner.py --model_name_or_path bigscience/bloom-560m --dataset_name conll2003 --output_dir ./output-bloom-560m --do_train --do_eval --do_predict ``` Then the following error message is thrown: ```bash Traceback (most recent call last): File "run_ner.py", line 630, in <module> main() File "run_ner.py", line 464, in main train_dataset = train_dataset.map( File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2387, in map return self._map_single( File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 524, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py", line 480, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2775, in _map_single batch = apply_function_on_filtered_inputs( File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2347, in decorated result = f(decorated_item, *args, **kwargs) File "run_ner.py", line 422, in tokenize_and_align_labels tokenized_inputs = tokenizer( File "/workspace/transformers/src/transformers/tokenization_utils_base.py", line 2475, in __call__ encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs) File "/workspace/transformers/src/transformers/tokenization_utils_base.py", line 2561, in _call_one return self.batch_encode_plus( File "/workspace/transformers/src/transformers/tokenization_utils_base.py", line 2752, in batch_encode_plus return self._batch_encode_plus( File "/workspace/transformers/src/transformers/models/bloom/tokenization_bloom_fast.py", line 140, in _batch_encode_plus raise Exception( Exception: You need to instantiate BloomTokenizerFast with add_prefix_space=True to use it with pretokenized inputs. ```
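For reference, a sketch of what the fix boils down to, assuming the example script forwards the extra keyword argument to `AutoTokenizer`: Bloom's fast tokenizer needs `add_prefix_space=True` whenever it receives pre-tokenized (word-split) input, as token classification datasets do.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m", add_prefix_space=True)

# CoNLL-2003-style pre-tokenized input, as used by run_ner.py.
words = ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."]
encoding = tokenizer(words, is_split_into_words=True, truncation=True)
print(encoding.word_ids())  # maps each sub-token back to its word for label alignment
```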
transformers
18,631
closed
[bnb] Minor modifications
# What does this PR do? Addresses the request from https://github.com/huggingface/transformers/pull/17901 to refactor the documentation a little bit and fixes small details in the bnb PR. - Fixed the documentation - Added a useful troubleshooting document for more efficient debugging - Updated the `Dockerfile` with the correct `bitsandbytes` version cc @stas00 @ydshieh
08-15-2022 13:20:55
08-15-2022 13:20:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>You could also read the rendered doc (not GitHub page) below https://moon-ci-docs.huggingface.co/docs/transformers/pr_18631/en/perf_train_gpu_one#efficient-training-on-a-single-gpu (toward the end)<|||||>@ydshieh do you think we should keep the version `0.31.8` on the Dockerfile since we know it's the version that works 🤔 <|||||>> @ydshieh do you think we should keep the version `0.31.8` on the Dockerfile since we know it's the version that works 🤔 So far let's not add this. If the newer versions constantly break things, we can pin a particular version.<|||||>Great, works for me!<|||||>Great, thanks! Do you think we can merge it for now? I am especially interested in seeing if the changes in the `Dockerfile` would change anything (theoretically no, but we are using `bitsandbytes==0.31.5` in the Dockerfile and the latest version is `0.31.8`). For the rest, since it's related to documentation, I think we can always iterate. Perhaps I can open a separate PR just to change the Dockerfile?<|||||>If you feel it's complete and want to merge it, go for it. We can always improve it later.<|||||>Thanks a lot! Just went through a final pass, looks good to me!<|||||>@younesbelkada, I have just realized that this PR added the new section to the wrong perf doc. This new feature is inference only and thus ideally should go into the inference doc and not training. Probably the many-gpu one. https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_many.mdx Thanks.<|||||>No problem at all and sorry for the inconvenience, I will open a PR for that!<|||||>I have opened a PR at: https://github.com/huggingface/transformers/pull/18671 !
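For readers landing here from the docs discussion, a minimal sketch of the int8 inference path this documentation covers, assuming a CUDA GPU and the `bitsandbytes` and `accelerate` packages are installed; the checkpoint name is just an example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"  # example checkpoint; any supported causal LM works
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto", load_in_8bit=True)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```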
transformers
18,630
closed
XLA generation error with repetition_penalty
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.10.1+cu113 (True) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @gante @Rocketknight1 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction To reproduce error (adapted code from https://huggingface.co/blog/tf-xla-generate): ```python import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForCausalLM generation_kwargs = { "max_new_tokens": 64, 'eos_token_id': 198, 'do_sample': True, 'temperature': 0.72, 'top_k': 0, 'top_p': 0.725, 'repetition_penalty': 1.13, } tokenizer = AutoTokenizer.from_pretrained( "gpt2", padding_side="left", pad_token="</s>" ) model = TFAutoModelForCausalLM.from_pretrained("gpt2") model.config.pad_token_id = model.config.eos_token_id input_text = "repetition_penalty error" xla_generate = tf.function(model.generate, jit_compile=True) tokenized_input = tokenizer(input_text, return_tensors="tf") print("model.generate") model.generate(**tokenized_input, **generation_kwargs) print("xla_generate") xla_generate(**tokenized_input, **generation_kwargs) # error here ``` Error: ``` File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 604, in generate * seed=model_kwargs.pop("seed", None), File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 1651, in _generate * input_ids, File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 2475, in sample_body_fn * next_tokens_scores = logits_processor(generated, next_token_logits, cur_len) File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 94, in __call__ * scores = processor(input_ids, scores, cur_len) File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 278, in __call__ * score_penalties = self._create_score_penalties(input_ids[:, :cur_len], scores) File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 265, in _create_score_penalties * indexable_prev_input_ids = tf.concat( ValueError: None values not supported. ``` By setting `repetition_penalty` to 1.0 or by removing this parameter everything works fine. ### Expected behavior The expected result is the work of text generation using `repetition_penalty` without any errors, taking into account the use of XLA.
08-15-2022 12:58:22
08-15-2022 12:58:22
Hey @AlekseyKorshuk 👋 Thank you for the reproducible script! I have never seen this exception, so I'll have to dig into it. Expect further information soon :)<|||||>@AlekseyKorshuk The PR linked above fixes the issue :) After it is merged, you'll have to install `transformers` from `main` to get its benefits! (which should be no issue for you, since you're using the `dev0` version)<|||||>Thank you, @gante 🤗
transformers
18,629
closed
OWL-ViT memory usage grows linearly with each prediction
### System Info - `transformers` version: 4.21.1 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.29 - Python version: 3.8.11 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger @NielsRogge ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import torch from torchvision.datasets import FakeData from torchvision.transforms.functional import pil_to_tensor from transformers import OwlViTProcessor, OwlViTForObjectDetection text_prompts = ["a photo of a cat", "a photo of a dog"] dataset = FakeData(size=50, image_size=(3, 28, 28), transform=pil_to_tensor) processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") target_sizes = torch.Tensor([[28, 28]]) for image, _ in dataset: inputs = processor(text=[text_prompts], images=image, return_tensors="pt") outputs = model(**inputs) _ = processor.post_process(outputs=outputs, target_sizes=target_sizes)[0] ``` ### Expected behavior I expect to be able to generate predictions from the OwlViTForObjectDetection model in a loop without memory usage increasing by ~1GB on each call to the model (line 15). Below, I've included a plot of memory usage over time. I profiled the code using `memory_profiler` to determine that it is the call to the model (not the processing or post processing) that seems to be the culprit. ![memory_usage](https://user-images.githubusercontent.com/3272567/184638200-834094c0-3dad-4b51-b076-b9229ca33534.png)
08-15-2022 12:53:02
08-15-2022 12:53:02
cc @alaradirik as well :)<|||||>Thank you @zduey, I'm looking into this! <|||||>@alaradirik - You are likely further along isolating the problem than I am, so please ignore if this is a distraction. The issue seems specific to repeated calls to the `OwlViTForObjectDetection` forward pass. The following results in the same memory usage pattern as the initial snippet. It differs from the original mainly in that the processor is not called as part of the loop. ```python import requests from PIL import Image from transformers import OwlViTForObjectDetection, OwlViTProcessor from tqdm import trange text_prompts = ["a photo of a cat", "a photo of a dog"] url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") inputs = processor(text=[text_prompts], images=image, return_tensors="pt") for _ in trange(15): _ = model(**inputs) ``` ![owl_vit_for_object_detection](https://user-images.githubusercontent.com/3272567/185410275-50480ab1-4d22-41fd-8db9-785a5cbfc4f1.png) When I've stepped through the code more granularly, the jump in memory usage happens [here](https://github.com/huggingface/transformers/blob/v4.21.1/src/transformers/models/owlvit/modeling_owlvit.py#L1259), which is just a call to `OwlVitModel`. However, calls to `OwlVitModel` directly seem to have constant memory usage: ```python from PIL import Image import requests from transformers import OwlViTProcessor, OwlViTModel model = OwlViTModel.from_pretrained("google/owlvit-base-patch32") processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor( text=[["a photo of a cat", "a photo of a dog"]], images=image, return_tensors="pt" ) for _ in range(50): _ = model(**inputs) ``` ![owl_vit_model](https://user-images.githubusercontent.com/3272567/185410349-9c7bcad0-c2fc-4cfe-b02b-e9cad8c09e74.png) Since I've run that separately without the same memory increase, the issue seems to be with how it gets called from `OWLViTForObjectDetection`, but I haven't been able to track it down. My initial guess (based on [this issue](https://discuss.pytorch.org/t/why-my-memory-keeps-increasing-at-every-iteration/118137)) was to look for a place where a tensor (one that is still attached to the computation graph) is being placed into a standard python list. <|||||>Hi @zduey, thank you for the detailed analysis! I'm more or less at the same point as you are, I confirmed that `OwlViTModel` is not the cause of the memory leak. I tracked the leak down to the calls to `image_text_embedder` method of `OwlViTForObjectDetection` and I'm working on the fix. I'm aiming to open a PR to fix this by tomorrow.<|||||>Just an update on the issue - this should be fixed when this [PR](https://github.com/huggingface/transformers/pull/18734) is merged. The memory leak was due to `OwlViTForObjectDetection` model making calls to `OwlViTModel`s non-forward methods and hence keeping track of all computational graphs. 
Here is the code I used to confirm the fix: ``` import requests from PIL import Image import torch from transformers import OwlViTModel, OwlViTForObjectDetection, OwlViTProcessor model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") text_prompts = ["a photo of a cat", "a photo of a dog"] url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=[text_prompts], images=image, return_tensors="pt") for i in range(50): with torch.no_grad(): _ = model(**inputs) ``` ![Figure_1](https://user-images.githubusercontent.com/8944735/186166457-f09822cf-0853-44bf-9ef0-421823032d00.png) I will close this issue once the PR is merged. <|||||>We all learn the hard way, @alaradirik. Just a few weeks ago, @NielsRogge and I had the same issue 😆 Whenever there is a PyTorch memory issue -> check `with torch.no_grad()` first. <|||||>@ydshieh so true! Took me a deep dive into PyTorch docs and a while to debug it<|||||>Thanks so much @alaradirik for getting together a fix for this so quickly!<|||||>Closing this issue as the fix PR is merged<|||||>I don't think this bug is fixed. I was trying to use the model without torch.no_grad and the memory still grows linearly with every prediction I make, so something is still leaking<|||||>Hi @AlonZolfi You should use `with` `torch.no_grad`, not `without`<|||||>I understand this is a workaround. However, I need the gradients during inference for another calculation so I cannot use it with torch.no_grad<|||||>In this case, you will have to manage the gradients by yourself: when to save them, when to delete some of them, etc. It's out of the scope of the library's GitHub issue pages however.
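A sketch of the kind of manual bookkeeping meant by the last comment, assuming gradients really are needed at inference time (here w.r.t. the pixel inputs): keep only detached copies of what you need and drop each step's graph before the next iteration, so memory stays flat. `model` and `batches` are stand-ins for the objects from the thread above.

```python
import torch

saliencies = []
for batch in batches:  # hypothetical iterable of preprocessed OWL-ViT inputs
    pixel_values = batch["pixel_values"].clone().requires_grad_(True)
    outputs = model(input_ids=batch["input_ids"], pixel_values=pixel_values)

    score = outputs.logits.max()  # whatever scalar the downstream calculation needs
    (grad,) = torch.autograd.grad(score, pixel_values)

    saliencies.append(grad.detach().cpu())  # store only detached tensors
    del outputs, score, grad, pixel_values  # release this step's graph before the next batch
```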
transformers
18,628
closed
Adds GroupViT to models exportable with ONNX
Adds GroupViT to models exportable with ONNX
08-15-2022 12:45:10
08-15-2022 12:45:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>@regisss hopefully you don't have to make any changes this time to make the tests pass!<|||||>Pinging @sgugger for final approval<|||||>Feel free to merge if you approve @lewtun<|||||>@lewtun Can you take a quick look at this PR and merge it when you approve? :slightly_smiling_face:
transformers
18,627
closed
Can't run the wav2vec2-base-960h example
### System Info I use Google Colab and the newest version of transformers. I wanted to run this example but got the error `list indices must be integers or slices, not str` on the line `result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])`: ``` from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") def map_to_pred(batch): input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` ### Who can help? @patrickvonplaten @anton-l @sanchi ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Just run the example. ### Expected behavior Calculate the WER
08-15-2022 10:17:23
08-15-2022 10:17:23
Hi, @mehrdad78, I made some modifications to your script. But it should work the same, below is the modified script: ``` from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset( "hf-internal-testing/librispeech_asr_demo", "clean", split="validation" ) model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") def map_to_pred(batch): arrays = [item["array"] for item in batch["audio"]] inputs = processor(arrays, return_tensors="pt", padding="longest") logits = model(**inputs).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map( map_to_pred, batched=True, batch_size=4, remove_columns=["audio"] ) print("WER:", wer(result["text"], result["transcription"])) ``` The output of this script should be: ``` WER: 0.059130434782608696 ``` I think you need to change the dataset from ``` librispeech_eval = load_dataset( "hf-internal-testing/librispeech_asr_demo", "clean", split="validation" ) ``` to ``` librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") ``` which aligns with your original script. <|||||>> Hi, @mehrdad78, I made some modifications to your script. But it should work the same, below is the modified script: > > ``` > from datasets import load_dataset > from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor > import torch > from jiwer import wer > > librispeech_eval = load_dataset( > "hf-internal-testing/librispeech_asr_demo", "clean", split="validation" > ) > > model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") > processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") > > > def map_to_pred(batch): > arrays = [item["array"] for item in batch["audio"]] > inputs = processor(arrays, return_tensors="pt", padding="longest") > logits = model(**inputs).logits > predicted_ids = torch.argmax(logits, dim=-1) > transcription = processor.batch_decode(predicted_ids) > batch["transcription"] = transcription > return batch > > > result = librispeech_eval.map( > map_to_pred, batched=True, batch_size=4, remove_columns=["audio"] > ) > > print("WER:", wer(result["text"], result["transcription"])) > ``` > > The output of this script should be: > > ``` > WER: 0.059130434782608696 > ``` > > I think you need to change the dataset from > > ``` > librispeech_eval = load_dataset( > "hf-internal-testing/librispeech_asr_demo", "clean", split="validation" > ) > ``` > > to > > ``` > librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") > ``` > > which aligns with your original script. Thank you
transformers
18,626
closed
Parallelization for OPT model
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.10.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import OPTForCausalLM model = OPTForCausalLM.from_pretrained("facebook/opt-6.7b").cuda() model.parallelize() ### Expected behavior Hi, I am loading the OPT model. Due to the large number of parameters, I cannot put the whole model on one CUDA device. Therefore, I tried to parallelize OPT like T5, using "model.parallelize()". However, I found that there is currently no parallelization mechanism for OPT models. Can you please implement a parallelization mechanism for OPT models? Or what can I do to load and run a large OPT model?
08-15-2022 10:16:53
08-15-2022 10:16:53
I have the same question, since directly tuning models of this size (OPT) on a single device is infeasible. It would be great to have a parallelization mechanism implemented for OPT models. Many thanks! <|||||>See this conversation which might help out: https://huggingface.co/facebook/opt-66b/discussions/6<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
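A sketch of one way to fit a large OPT checkpoint without a `parallelize()` method: let `accelerate`'s big-model loading shard the weights across the available GPUs (and CPU RAM if needed) at load time. This assumes `accelerate` is installed and a recent `transformers` release.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",
    device_map="auto",          # spread layers over all visible GPUs / CPU RAM
    torch_dtype=torch.float16,  # halve the memory footprint for inference
)

# Generation and forward passes then work as usual; inputs go to the first
# device in the map (typically cuda:0).
```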
transformers
18,625
closed
Create pipeline_tutorial.mdx german docs
This PR is another step of progress toward https://github.com/huggingface/transformers/issues/18564.
08-15-2022 09:49:31
08-15-2022 09:49:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten, can you take a look at this?
transformers
18,624
closed
Community Integration: Colossal-AI for Large AI Models
### Feature request Dear Hugging Face Team, My name is Yongbin Li. I am part of the [Colossal-AI](https://github.com/hpcaitech/ColossalAI) team. Thanks for your previous [invitation](https://github.com/hpcaitech/ColossalAI/issues/396) for the Colossal-AI org to join Hugging Face. We are happy to share our founder's [blog](https://twitter.com/HPCAITech/status/1547041583337394176) about Hugging Face. We are thinking about further collaboration, e.g. integrating Colossal-AI into Hugging Face to help your community members use large AI models in an efficient and easier manner. For example, we could democratize access to it for all your users in the same way you did with DeepSpeed. https://huggingface.co/docs/transformers/v4.21.0/en/main_classes/deepspeed ### Motivation We believe the democratization of large AI models is also very helpful for Hugging Face members. We would greatly appreciate it if we could build the integration with you to benefit both of our users. We are already working on similar integrations with Meta OPT ([done](https://github.com/facebookresearch/metaseq#using-opt-with-colossal-ai)), PyTorch Lightning ([in progress](https://github.com/hpcaitech/ColossalAI/issues/1330)), etc. ### Your contribution We can provide any help you need in this cooperation free of charge. We have already discussed a preliminary idea with your team members omar, lysandre, and julien via email ([email protected]) and look forward to your further reply. Feel free to reach out to me on the Hugging Face Discord; my username is billy2022. We can discuss more details with other colleagues in a private group. Thank you very much. Best regards, Yongbin Li, Chief Marketing Officer, HPC-AI Tech
08-15-2022 09:26:42
08-15-2022 09:26:42
If you have any difficulties or concerns, please let me know. We can discuss them further, thanks. :-)<|||||>@stas00 seems much better than https://github.com/huggingface/transformers/issues/17392<|||||>I haven't had a chance to read up on Colossal-AI yet; why do you believe it's much better based on your research, @flozi00? I did notice that it suggests the integration of PatrickStar's functionality. CAI appears to be its own eco-system - not sure how easy it'd be to integrate with our eco-system.<|||||>https://github.com/hpcaitech/ColossalAI-Examples/blob/757514d2b1501d3530777cdf567f0a18063acf2d/image/resnet/train.py#L82-L111 In terms of code, it looks very similar to a normal pytorch training loop. I did not take a deep look into the CAI code itself; I focused on how compatible the integration would be with existing code. To me it looks like you don't have to deal with the integration of PatrickStar, since everything is handled by CAI, and the dependencies are also manageable. I already noticed some time ago that it was trending on paperswithcode for a while. The benchmarks look pretty nice at first glance, but they are a little bit confusing too. https://github.com/hpcaitech/ColossalAI#gpt-2 For the RAM, model size, and throughput comparisons, different techniques are used (pytorch, deepspeed, megatron); I did not check whether this is only cherry-picking or whether it really does not matter which one is used. In any case, I think it's not bad to test alternatives to deepspeed. At first glance, the integration into existing pytorch code looks feasible without major problems. Also, with the expertise of both organizations, the integration could be done without much trouble for either one, with CAI offering to help with the integration: "We would greatly appreciate it if we could build the integration with you to benefit both of our users".<|||||>Thank you for sharing your insights, @flozi00! I read their paper and I'm not quite sure what type of integration is proposed here. Unlike Deepspeed, which is meant to be integrated with the user code, CAI seems to be a standalone solution. One of the biggest issues with any parallelism proposal (other than DDP) is that they all require rewriting the model's code, which with 100+ models and growing in our arsenal would be prohibitively expensive. Therefore we always welcome automated solutions like Deepspeed, which require no changes whatsoever to most models and sometimes a small tweak for some peculiar models. It's definitely worth exploring all the different versions of TP (2/2.5/3D) mentioned in the paper, but we need this automated and not manually rewritten. The paper briefly mentions PP, but as we all know this one definitely requires a complete rewrite of the model for most frameworks. So again let's ask a very concrete question - other than being part of the HF ecosystem, what is the vision for the proposed integration? We already have 2 trainer loop systems (HF Trainer and Accelerate) and we won't want to maintain a 3rd one. Do you need to inject something into `modeling_utils.py` to better support CAI? Do you propose to rewrite the models to support it? Perhaps let's take one HF Transformers model of your choice and tell us what you would like to do with it to have it run on CAI? This would be more practical. And specifically to your interest, @flozi00 - yes, I hear you like the advanced memory utilization proposed in PatrickStar, and CAI appears to have integrated that functionality.
I hope my commentary was constructive; we are definitely open to good improvements to our tools. It's just that I'm wary of adding yet another tool unless a clear advantage and ease of integration can be shown. <|||||>Also, let's ping @hyunwoongko - Kevin, I know you have studied many frameworks while building https://github.com/tunib-ai/oslo - have you by chance researched [Colossal-AI](https://github.com/hpcaitech/ColossalAI) on your journey? If you did, would you kindly share a few insights if you have any? I know you were cherry-picking the best parts from many systems in addition to your own innovations.<|||||>I'm sorry to admit that I didn't think of the backwards compatibility; I totally forgot about that point, sorry. I focused mainly on the integration in the trainer and did not consider the now very many architectures and weights. Maybe CAI has an idea to automate that? What about the integration with Lightning, have they discussed that point too? I have some ideas in mind, but those would be more a part of CAI itself or third-party tools - finding JIT methods to convert the required model parts - rather than the HF integration.<|||||>> I'm sorry to admit that I didn't think of the backwards compatibility; I totally forgot about that point, sorry. > > I focused mainly on the integration in the trainer and did not consider the now very many architectures and weights. No harm done. This is totally understandable - the HF transformers eco-system has been becoming more and more complex, so often it's far from trivial to add yet another component to it. We warmly welcome solutions that can automate performance enhancements (like torchdynamo - see below). > Maybe CAI has an idea to automate that? What about the integration with Lightning, have they discussed that point too? PL is a training framework/loop; last I looked they didn't have the model library and were using transformers, so they don't need to deal with modeling. > I have some ideas in mind, but those would be more a part of CAI itself or third-party tools - finding JIT methods to convert the required model parts - rather than the HF integration. There is already work being done on that with torchdynamo/nvfuser - it's not fully stable yet, but it shows some impressive speed-ups (and lower memory usage) for converting normal pytorch code to fused kernels - but this is a different dimension from parallelism and advanced memory management systems. It's definitely not a replacement for parallelism, as it can save 2x memory or provide a 2x speed-up, but that is far from enough for 100B+ models. Please see the HF integration details here: https://huggingface.co/docs/transformers/main/en/perf_train_gpu_one#inference-with-torchdynamo <|||||>Hi, we drafted a [pull request](https://github.com/Lightning-AI/lightning/pull/14224) which integrates ColossalAI into Lightning. And here are examples and benchmarks: https://github.com/hpcaitech/ColossalAI-Pytorch-lightning. We have implemented ZeRO-DP with chunk-based memory management and heterogeneous memory management. I think this is not hard to integrate into HF. Besides, we are working on auto parallelism. I believe we will be able to use TP/PP without modifying the model in the future.<|||||>OK, so at the moment you're proposing to integrate CAI for: 1. its ZeRO-DP with chunk-based memory management and heterogeneous memory management. This is something that Deepspeed is lacking at the moment (and if I understand correctly the technology comes from PatrickStar) 2.
down the road, its auto-parallelism. @sgugger, should this perhaps go straight into `accelerate`? (Sylvain is on vacation, so let's wait a bit for him to be back and advise on how best to proceed.) <|||||>We'll probably need to duplicate the integration in the Trainer and Accelerate for now, since the Trainer does not depend on Accelerate.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,623
closed
`local_files_only=True` not work
### System Info torch 1.12.1 transformers 4.21.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have downloaded the pretrained weights and the model worked well. I am confused by the long loading time (~25s on a SSD) when using the `from_pretrained` API, and I set `local_files_only=True` to disable connections. But I still found the function called a http request and it spent many seconds on my desktop without Internet. Here is my log file: > File "/disk1/fewshot/anaconda3/envs/pet/lib/python3.9/site-packages/transformers/utils/hub.py", line 284, in cached_path > output_path = get_from_cache( > File "/disk1/fewshot/anaconda3/envs/pet/lib/python3.9/site-packages/transformers/utils/hub.py", line 501, in get_from_cache > r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout) ### Expected behavior When set `local_files_only=True`, it should disable connections to the website.
08-15-2022 09:24:19
08-15-2022 09:24:19
how did you solve it?
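For anyone else hitting this: once the weights are in the local cache, there are two knobs that keep `from_pretrained` from making any HTTP calls. A minimal sketch, assuming the checkpoint was downloaded once beforehand (the model name is just an example):

```python
# Keep transformers fully offline once the checkpoint is cached locally.
import os

# Option 1: environment variable; must be set before importing transformers.
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModel, AutoTokenizer

# Option 2: disable remote lookups for a single call.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", local_files_only=True)
model = AutoModel.from_pretrained("bert-base-uncased", local_files_only=True)
```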
transformers
18,622
closed
variable name:_name = "label" if "label" in features[0].keys() else "labels" when training custom NER
I am trying to run custom NER on my data using offset values. I tried to replicate using this link << https://huggingface.co/course/chapter7/2 >> I keep getting the error **_name = "label" if "label" in features[0].keys() else "labels" AttributeError: 'tokenizers.Encoding' object has no attribute 'keys'** **DATA BEFORE tokenize_and_align_labels FUNCTIONS** ``` {'texts': ['WASHINGTON USA WA DRIVER LICENSE BESSETTE Lamma 4d DL 73235766 9 Class AM to Iss 22/03/2021 Ab Exp 07130/2021 DOB 2/28/21 1 BESSETTE 2 GERALD 8 6930 NE Grandview Blvd, keyport, WA 86494 073076 12 Restrictions A 9a End P 16 Hgt 5\'-04" 15 Sex F 18 Eyes BLU 5 DD 73235766900000000000 Gerald Bessette', ] } tag_names': [ [ {'start': 281, 'end': 296, 'tag': 'PERSON_NAME', 'text': 'Gerald Bessette'}, {'start': 135, 'end': 141, 'tag': 'FIRST_NAME', 'text': 'GERALD'}, {'start': 124, 'end': 122, 'tag': 'LAST_NAME', 'text': 'BESSETTE'}, {'start': 81, 'end': 81, 'tag': 'ISSUE_DATE', 'text': '22/03/2021'}, {'start': 99, 'end': 109, 'tag': 'EXPIRY_DATE', 'text': '07130/2021'}, {'start': 114, 'end': 121, 'tag': 'DATE_OF_BIRTH', 'text': '2/28/21'}, {'start': 51, 'end': 59, 'tag': 'DRIVER_LICENSE_NUMBER', 'text': '73235766'}, {'start': 144, 'end': 185, 'tag': 'ADDRESS', 'text': '6930 NE Grandview Blvd, keyport, WA 86494'} ], ``` **DATA AFTER tokenize_and_align_labels FUNCTIONS** ``` {'input_ids': [[0, 305, 8684, 2805, 9342, 10994, 26994, 42560, 39951, 163, 12147, 3935, 6433, 6887, 1916, 204, 417, 13925, 6521, 1922, 4390, 4280, 361, 4210, 3326, 7, 19285, 820, 73, 3933, 73, 844, 2146, 2060, 12806, 321, 5339, 541, 73, 844, 2146, 14010, 387, 132, 73, 2517, 73, 2146, 112, 163, 12147, 3935, 6433, 132, 272, 39243, 495, 290, 5913, 541, 12462, 2374, 5877, 12543, 6, 762, 3427, 6, 9342, 290, 4027, 6405, 13470, 541, 5067, 316, 40950, 2485, 83, 361, 102, 4680, 221, 545, 289, 19377, 195, 32269, 3387, 113, 379, 15516, 274, 504, 26945, 12413, 791, 195, 27932, 6521, 1922, 4390, 36400, 45947, 151, 14651, 163, 3361, 3398, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'offset_mapping': [[(0, 0), (0, 1), (1, 10), (11, 14), (15, 17), (18, 20), (20, 24), (25, 28), (28, 32), (33, 34), (34, 37), (37, 39), (39, 41), (42, 45), (45, 47), (48, 49), (49, 50), (51, 53), (54, 56), (56, 58), (58, 60), (60, 62), (63, 64), (65, 70), (71, 73), (74, 76), (77, 80), (81, 83), (83, 84), (84, 86), (86, 87), 
(87, 89), (89, 91), (92, 94), (95, 98), (99, 100), (100, 102), (102, 104), (104, 105), (105, 107), (107, 109), (110, 112), (112, 113), (114, 115), (115, 116), (116, 118), (118, 119), (119, 121), (122, 123), (124, 125), (125, 128), (128, 130), (130, 132), (133, 134), (135, 136), (136, 140), (140, 141), (142, 143), (144, 146), (146, 148), (149, 151), (152, 157), (157, 161), (162, 166), (166, 167), (168, 171), (171, 175), (175, 176), (177, 179), (180, 181), (181, 183), (183, 185), (186, 188), (188, 190), (190, 192), (193, 195), (196, 204), (204, 208), (209, 210), (211, 212), (212, 213), (214, 217), (218, 219), (220, 222), (223, 224), (224, 226), (227, 228), (228, 230), (230, 232), (232, 233), (234, 236), (237, 240), (241, 242), (243, 245), (246, 250), (251, 253), (253, 254), (255, 256), (257, 259), (260, 262), (262, 264), (264, 266), (266, 269), (269, 277), (277, 280), (281, 287), (288, 289), (289, 292), (292, 296), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0)] 'labels': [[24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 2, 10, 10, 18, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 3, 11, 11, 11, 11, 19, 24, 24, 1, 9, 9, 9, 17, 24, 24, 24, 24, 24, 24, 4, 12, 20, 24, 0, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 16, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 7, 15, 15, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24], ``` My Code: ``` import transformers from transformers import AutoTokenizer from transformers import AutoTokenizer,BertModel,BertTokenizer from transformers import RobertaModel,RobertaConfig,RobertaForTokenClassification from transformers import TrainingArguments, Trainer # from transformers.trainer import get_tpu_sampler from transformers.trainer_pt_utils import get_tpu_sampler from transformers.data.data_collator import 
DataCollator, InputDataClass from transformers import DataCollatorForTokenClassification from transformers import AutoModelForTokenClassification import torch from torch.nn import CrossEntropyLoss, MSELoss import torch.nn as nn import torch.nn.functional as F from torch.utils.data.dataloader import DataLoader from torch.utils.data.distributed import DistributedSampler from torch.utils.data.sampler import RandomSampler from torchcrf import CRF import dataclasses import logging import warnings import tqdm import os import numpy as np from typing import List, Union, Dict os.environ["WANDB_DISABLED"] = "true" print(transformers.__version__) import evaluate metric = evaluate.load("seqeval") model_checkpoint = "bert-base-cased" tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) #add_prefix_space=True def isin(a, b): return a[1] > b[0] and a[0] < b[1] def tokenize_and_align_labels(examples, label2id, max_length=256): tokenized_inputs = tokenizer(examples["texts"], truncation=True, padding='max_length', max_length=max_length,return_offsets_mapping=True) print("tokenization done") labels = [] for i, label_idx_for_single_input in enumerate(tqdm.tqdm(examples["tag_names"])): # print(i,label_idx_for_single_input) labels_for_single_input = ['O' for _ in range(max_length)] # print(labels_for_single_input) text_offsets = tokenized_inputs['offset_mapping'][i] # print("text_offsets",text_offsets) for entity in label_idx_for_single_input: # print("entity",entity) tag = entity['tag'] # print("tag",tag) tag_offset = [entity['start'], entity['end']] # print("tag_offset",tag_offset) # text_offsets [(0, 0), (0, 1), (1, 10), (11, 14), (15, 17), (18, 20), (20, 24), (25, 28), (28, 32), (33, 34), (34, 37), (37, 39), (39, 41), (42, 45), (45, 47), (48, 49), (49, 50), (51, 53), (54, 56), (56, 58), (58, 60), (60, 62), (63, 64), (65, 70), (71, 73), (74, 76), (77, 80), (81, 83), (83, 84), (84, 86), (86, 87), (87, 89), (89, 91), (92, 94), (95, 98), (99, 100), (100, 102), (102, 104), (104, 105), (105, 107), (107, 109), (110, 112), (112, 113), (114, 115), (115, 116), (116, 118), (118, 119), (119, 121), (122, 123), (124, 125), (125, 128), (128, 130), (130, 132), (133, 134), (135, 136), (136, 140), (140, 141), (142, 143), (144, 146), (146, 148), (149, 151), (152, 157), (157, 161), (162, 166), (166, 167), (168, 171), (171, 175), (175, 176), (177, 179), (180, 181), (181, 183), (183, 185), (186, 188), (188, 190), (190, 192), (193, 195), (196, 204), (204, 208), (209, 210), (211, 212), (212, 213), (214, 217), (218, 219), (220, 222), (223, 224), (224, 226), (227, 228), (228, 230), (230, 232), (232, 233), (234, 236), (237, 240), (241, 242), (243, 245), (246, 250), (251, 253), (253, 254), (255, 256), (257, 259), (260, 262), (262, 264), (264, 266), (266, 269), (269, 277), (277, 280), (281, 287), (288, 289), (289, 292), (292, 296), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0)] # entity {'start': 281, 'end': 296, 'tag': 'PERSON_NAME', 'text': 'Gerald Bessette'} # tag PERSON_NAME # tag_offset [281, 296] affected_token_ids = [j for j in range(max_length) if isin(tag_offset, text_offsets[j])] # print("affected_token_ids",affected_token_ids) if len(affected_token_ids) < 1: # print('affected_token_ids)<1') continue if any(labels_for_single_input[j] != 'O' for j in affected_token_ids): # print('entity orverlap! 
skipping') continue for j in affected_token_ids: labels_for_single_input[j] = 'I_' + tag labels_for_single_input[affected_token_ids[-1]] = 'L_' + tag labels_for_single_input[affected_token_ids[0]] = 'B_' + tag label_ids = [label2id[x] for x in labels_for_single_input] labels.append(label_ids) tokenized_inputs["labels"] = labels # print(tokenized_inputs.keys()) return tokenized_inputs import json data = [] with open('data.json', 'r') as f: for line in f: data.append(json.loads(line)) l = [] for k, v in data[0].items(): l.append({'text': k, 'spans': v}) train_set = [ [ x['text'], [{'start': y["start"], 'end': y["end"], 'tag': y["label"], 'text': y["ngram"]} for y in x['spans']] ] for x in l ] ## count labels in dataset from collections import Counter e = [] for x in train_set: for y in x[1]: e.append(y['tag']) Counter(e).most_common() ## get label list ori_label_list = [] for line in train_set: ori_label_list += [entity['tag'] for entity in line[1]] ori_label_list = sorted(list(set(ori_label_list))) label_list = [] for prefix in 'BIL': label_list += [prefix + '_' + x for x in ori_label_list] label_list += ['O'] label_list = sorted(list(set(label_list))) print(label_list) print(len(label_list)) label2id = {n:i for i,n in enumerate(label_list)} id2label= {str(i):n for i,n in enumerate(label_list)} # id2label = {str(i): label for i, label in enumerate(label_names)} # label2id = {v: k for k, v in id2label.items()} train_examples ={'texts':[x[0] for x in train_set],'tag_names':[x[1] for x in train_set]} train_examples = tokenize_and_align_labels(train_examples,label2id) # train_examples = train_examples.map(tokenize_and_align_labels(label2id),batched=True) print("here") print(train_examples.keys()) print(len(train_examples['labels'])) # dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'offset_mapping', 'labels']) # 775 data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer) # collator=data_collator(train_examples) # def compute_metrics(eval_preds): # logits, labels = eval_preds # predictions = np.argmax(logits, axis=-1) # # # Remove ignored index (special tokens) and convert to labels # true_labels = [[label_list[l] for l in label if l != -100] for label in labels] # true_predictions = [ # [label_list[p] for (p, l) in zip(prediction, label) if l != -100] # for prediction, label in zip(predictions, labels) # ] # all_metrics = metric.compute(predictions=true_predictions, references=true_labels) # return { # "precision": all_metrics["overall_precision"], # "recall": all_metrics["overall_recall"], # "f1": all_metrics["overall_f1"], # "accuracy": all_metrics["overall_accuracy"], # } model = AutoModelForTokenClassification.from_pretrained(model_checkpoint,id2label=id2label,label2id=label2id,) print(model.config.num_labels) args = TrainingArguments( "bert-finetuned-ner", # evaluation_strategy="epoch", save_strategy="epoch", learning_rate=2e-5, num_train_epochs=2, weight_decay=0.01, # push_to_hub=True, ) trainer = Trainer( model=model, args=args, train_dataset=train_examples, # eval_dataset=train_examples, data_collator=data_collator, # compute_metrics=compute_metrics, tokenizer=tokenizer) trainer.train() ```
08-15-2022 08:09:14
08-15-2022 08:09:14
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
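For reference, a hypothetical sketch of one way around the `AttributeError` in this thread: the script passes the `BatchEncoding` returned by the tokenizer straight to `Trainer` as `train_dataset`, and indexing a `BatchEncoding` by position yields a `tokenizers.Encoding` object, which has no `.keys()`. Converting it into a list of plain per-example dicts lets the data collator work. The variable and column names mirror the script above and are assumptions about its state at that point.

```python
# Turn a BatchEncoding into a list of per-example dicts that the Trainer and
# DataCollatorForTokenClassification can consume.
def to_list_of_dicts(tokenized_inputs, keys=("input_ids", "attention_mask", "labels")):
    """One dict per example, dropping columns (like offset_mapping) the model doesn't take."""
    n = len(tokenized_inputs["input_ids"])
    return [
        {key: tokenized_inputs[key][i] for key in keys if key in tokenized_inputs}
        for i in range(n)
    ]

# train_dataset = to_list_of_dicts(train_examples)
# trainer = Trainer(..., train_dataset=train_dataset, data_collator=data_collator, ...)
```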
transformers
18,621
closed
How to load multiple TXT training files when pre-train RoBERTa from scratch
https://github.com/huggingface/transformers/blob/d6eeb871706db0d64ab9ffd79f9545d95286b536/examples/pytorch/language-modeling/run_mlm.py#L308 Hi, I want to pre-train RoBERTa from scratch on my dataset. But in example run_mlm.py: ``` data_files = {} if data_args.train_file is not None: data_files["train"] = data_args.train_file extension = data_args.train_file.split(".")[-1] if data_args.validation_file is not None: data_files["validation"] = data_args.validation_file extension = data_args.validation_file.split(".")[-1] if extension == "txt": extension = "text" raw_datasets = load_dataset( extension, data_files=data_files, cache_dir=model_args.cache_dir, use_auth_token=True if model_args.use_auth_token else None, ) ``` The train file argument `train_file` seems to support only one file: ``` train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."}) data_files["train"] = data_args.train_file extension = data_args.train_file.split(".")[-1] ``` Because we have too much training data, it is inconvenient to store it in a file. Can train_file support multiple text files?
08-15-2022 07:37:14
08-15-2022 07:37:14
I would recommend using `datasets` in order to store all of your data; you can then pass it either as a local dataset or as a dataset stored on the Hub to that same script.<|||||>> I would recommend using `datasets` in order to store all of your data; you can then pass it either as a local dataset or as a dataset stored on the Hub to that same script. Thanks for your reply! How to build a dataset with our data? Is there a tutorial?<|||||>This part of the datasets documentation should likely help out: https://huggingface.co/docs/datasets/loading#local-and-remote-files<|||||>> This part of the datasets documentation should likely help out: https://huggingface.co/docs/datasets/loading#local-and-remote-files Thanks! It works.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
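As a concrete follow-up to the recommendation above, here is a minimal sketch of loading several plain-text files (or a glob pattern) as one training split with 🤗 Datasets; the file names are placeholders.

```python
# Load multiple local .txt files into a single dataset split.
from datasets import load_dataset

raw_datasets = load_dataset(
    "text",
    data_files={
        "train": ["corpus_part1.txt", "corpus_part2.txt"],  # a list of files, or e.g. "data/*.txt"
        "validation": "valid.txt",
    },
)
print(raw_datasets)
```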
transformers
18,620
closed
Big Bird cannot be converted to ONNX
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.17.5-x86_64-with-glibc2.33 - Python version: 3.9.6 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ydshie ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```bash python -m transformers.onnx --model=google/bigbird-roberta-base bigbird ``` Returns: ```bash ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 6.103515625e-05 ``` ### Expected behavior I expect the export to work and return: `All good, model saved at: bigbird/model.onnx`.
08-15-2022 00:09:40
08-15-2022 00:09:40
cc @vumichien, have you encountered this when contributing the BigBird ONNX exporter?<|||||>@LysandreJik I didn't encounter this problem when contributing the BigBird ONNX exporter. I did check again by running `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "bigbird"`, and all the tests still pass. <|||||>Maybe we could use a higher tolerance in that case. WDYT @lewtun?<|||||>Hey @cigrainger, I'm not able to reproduce this behaviour using either CPU or GPU. Could you please try running this again with the latest state of the `main` branch in `transformers` and report back if the issue persists? If yes, we can certainly increase the tolerance as Lysandre suggested :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
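If the small numerical gap is acceptable, a sketch of a workaround is to run the export programmatically and validate with a tolerance slightly above the reported 6.1e-5 difference (the CLI should expose the same knob via `--atol`). Function names follow the programmatic ONNX export API of this era; the output path and the 2e-4 value are illustrative.

```python
# Export BigBird and validate the ONNX graph with a looser absolute tolerance.
from pathlib import Path

from transformers import AutoModel, AutoTokenizer
from transformers.onnx import export, validate_model_outputs
from transformers.onnx.features import FeaturesManager

model_ckpt = "google/bigbird-roberta-base"
model = AutoModel.from_pretrained(model_ckpt)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

# Look up the ONNX config registered for BigBird's default feature.
_, onnx_config_ctor = FeaturesManager.check_supported_model_or_raise(model, feature="default")
onnx_config = onnx_config_ctor(model.config)

onnx_path = Path("bigbird/model.onnx")
onnx_path.parent.mkdir(parents=True, exist_ok=True)
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, onnx_path
)

# Validate against the reference model with atol=2e-4 instead of the config default.
validate_model_outputs(onnx_config, tokenizer, model, onnx_path, onnx_outputs, 2e-4)
```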
transformers
18,619
closed
Bug in DonutFeatureExtractor
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): 2.6.4 (True) - Flax version (CPU?/GPU?/TPU?): 0.5.2 (gpu) - Jax version: 0.3.14 - JaxLib version: 0.3.14 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @NielsRogge ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` tokenizer = AutoTokenizer.from_pretrained(token) feature_extractor = DonutFeatureExtractor.from_pretrained(encoder) processor = DonutProcessor(feature_extractor,tokenizer) class OCRDataset(Dataset): def __init__(self, root_dir, df, processor, max_target_length=256): self.root_dir = root_dir self.df = df self.processor = processor self.max_target_length = max_target_length def __len__(self): return len(self.df) def __getitem__(self, idx): # get file name + text #file_name = self.df['file_name'][idx] text = self.df['text'][idx] # prepare image (i.e. resize + normalize) image = self.df['image'][idx].convert("RGB") print(type(image)) w, h = image.size #image = Image.open(self.root_dir + file_name).convert("RGB") pixel_values = self.processor([image], return_tensors="pt").pixel_values # add labels (input_ids) by encoding the text labels = self.processor.tokenizer(text, padding="max_length", max_length=self.max_target_length).input_ids # important: make sure that PAD tokens are ignored by the loss function labels = [label if label != self.processor.tokenizer.pad_token_id else -100 for label in labels] labels = torch.tensor(labels) encoding = {"pixel_values": pixel_values.squeeze(), "labels": labels} return encoding train_dataset = OCRDataset(root_dir='', df=dataset_train, processor=processor) encoding = train_dataset[1] ``` Error: ``` /opt/conda/lib/python3.7/site-packages/transformers/models/trocr/processing_trocr.py in __call__(self, *args, **kwargs) 65 66 if images is not None: ---> 67 inputs = self.feature_extractor(images, *args, **kwargs) 68 if text is not None: 69 encodings = self.tokenizer(text, **kwargs) /opt/conda/lib/python3.7/site-packages/transformers/models/donut/feature_extraction_donut.py in __call__(self, images, return_tensors, random_padding, **kwargs) 193 images = [ 194 self.resize(image=image, size=min(self.size), resample=self.resample, default_to_square=False) --> 195 for image in images 196 ] 197 if self.do_thumbnail and self.size is not None: /opt/conda/lib/python3.7/site-packages/transformers/models/donut/feature_extraction_donut.py in <listcomp>(.0) 193 images = [ 194 self.resize(image=image, size=min(self.size), resample=self.resample, default_to_square=False) --> 195 for image in images 196 ] 197 if self.do_thumbnail and self.size is not None: TypeError: 'int' object is not iterable ``` ### Expected behavior It should work
08-14-2022 19:25:23
08-14-2022 19:25:23
Hi, The [docstring](https://huggingface.co/docs/transformers/main/en/model_doc/donut#transformers.DonutFeatureExtractor.size) says that the size argument should be a tuple of (width, height).
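Following the docstring note above, a minimal sketch of the two usual ways to get a correctly configured feature extractor; the checkpoint name and the (1920, 2560) value are illustrative assumptions, not part of the original report.

```python
# DonutFeatureExtractor expects `size` as a (width, height) pair, not a single int.
from transformers import DonutFeatureExtractor, DonutProcessor

# Either reuse the processor that ships with an existing Donut checkpoint...
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")

# ...or build a feature extractor by hand with an explicit (width, height) size.
feature_extractor = DonutFeatureExtractor(size=(1920, 2560))
```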
transformers
18,618
closed
Add depth estimation pipeline
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. * I tried debugging Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #18446 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge @Narsil <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Error While using the pipeline ``` pipe = pipeline("depth-estimation") No model was supplied, defaulted to Intel/dpt-large and revision e93beec (https://huggingface.co/Intel/dpt-large). Using a pipeline without specifying a model name and revision in production is not recommended. 
Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/nandwalritik/nandwalritik/transformers/src/transformers/pipelines/__init__.py", line 670, in pipeline framework, model = infer_framework_load_model( File "/home/nandwalritik/nandwalritik/transformers/src/transformers/pipelines/base.py", line 257, in infer_framework_load_model model = model_class.from_pretrained(model, **kwargs) File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 445, in from_pretrained model_class = _get_model_class(config, cls._model_mapping) File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 359, in _get_model_class supported_models = model_mapping[type(config)] File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 565, in __getitem__ return self._load_attr_from_module(model_type, model_name) File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 579, in _load_attr_from_module return getattribute_from_module(self._modules[module_name], attr) File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module return getattribute_from_module(transformers_module, attr) File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module return getattribute_from_module(transformers_module, attr) File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module return getattribute_from_module(transformers_module, attr) [Previous line repeated 982 more times] File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 538, in getattribute_from_module transformers_module = importlib.import_module("transformers") File "/home/nandwalritik/anaconda3/envs/hftfSwinDev/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1004, in _find_and_load File "<frozen importlib._bootstrap>", line 157, in __enter__ File "<frozen importlib._bootstrap>", line 183, in _get_module_lock File "<frozen importlib._bootstrap>", line 59, in __init__ RecursionError: maximum recursion depth exceeded while calling a Python object ```
08-14-2022 18:40:32
08-14-2022 18:40:32
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @NielsRogge @Narsil I tried debugging to resolve above issue, I found that in `src/transformers/models/auto/auto_factory.py` in line 462 `model_class = _get_model_class(config, cls._model_mapping)` when I logged `cls._model_mapping` I recieved ``` OrderedDict([(<class 'transformers...PTConfig'>, <?>), (<class 'transformers...PNConfig'>, <?>)]) <error>: Traceback (most recent call last): ``` Can you guide me to resolve this error.<|||||>> Great PR, you also need to add > > ```python > + def _sanitize_parameters(self, **kwargs): > + return {}, {}, {} > + > ``` > > To the pipeline, it's not used but the base class expects this method to exist. (Here there's not parameters declared so it's quite easy. > > This weird method exist to allow parameters to be defined both at definition time or call tiime > > ``` > pipe = pipeline(model=model, myargs=1) > data = pipe(image) > # or > pipe = pipeline(model=model) > data = pipe(image, myargs=1) > ``` > > Cheers ! Otherwise LGTM. > > Why do you output 3 different images ? That sounds like a lot. > > The image I can understand, the predicted depth is defined in what unit ? Is it noisy hence the interpolation ? IMO that seems like something to be left to the user to decide what to do. > > In general I think a pure image would be nice (to be a bit more general) but I can understand that the loss of precision might be harmful, do you mind sharing how you use those numbers ? Maybe we could output an other time of image that doesn't loose information (keeping f32 pixel) > > Wdyt ? I just saw `DPT`'s depth estimation example and added these three outputs. I have removed the interpolation one and In the output I have kept only the `predicted_depth` (which is a `tensor`) and `depth` (which is the PIL `Image` object). Let me know if I should remove the `predicted_depth` also. <|||||>> ### Review required > At least 1 approving review is required by reviewers with write access. [Learn more.](https://docs.github.com/articles/about-pull-request-reviews/) > ** 1 pending reviewer ** By generic test did you mean `run_pipeline_test` ? Let me know if it's other than this, I have added `run_pipeline_test` for now. Also In CI `test_pipelines_depth_estimation` is failing can you help me with that ? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @nandwalritik, could you revive this PR by rebasing with the main branch? <|||||>> Hi @nandwalritik, could you revive this PR by rebasing with the main branch? Done.<|||||>@nandwalritik Do you want to help to get the PR green ?<|||||>> @nandwalritik Do you want to help to get the PR green ? @Narsil Yeah please , I tried but I was not able to make the test cases pass.<|||||>> Added some comments on how I fixed the CI for you. Thanks I will look at them.<|||||>@sgugger for final review.<|||||>@sgugger the test failure seem unrelated to the PR, should we go ahead and merge ?<|||||>Yes, those are flaky tests.
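For reference, a usage sketch of the pipeline added in this PR, based on the output keys discussed above (`predicted_depth` tensor and `depth` PIL image). The model is the default named at the top of the thread and the image URL is a placeholder.

```python
# Run the depth-estimation pipeline on a single image.
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
result = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")

print(result["predicted_depth"].shape)  # raw depth prediction as a torch tensor
result["depth"].save("depth.png")       # depth map rescaled into a PIL image
```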
transformers
18,617
closed
Post-processing for HumanEval code generations not working properly
### System Info Not system-dependent ### Who can help? ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The post-processing function for HumanEval code generation in CodeParrot doesn't work as expected, precisely this function https://github.com/huggingface/transformers/blob/d6eeb871706db0d64ab9ffd79f9545d95286b536/examples/research_projects/codeparrot/scripts/human_eval.py#L67 It returns an empty string if no EOF_STRING is present: ```python EOF_STRINGS = ["\nprint"] def remove_last_block(string): """Remove the last block of the code containing EOF_STRINGS""" string_list = re.split("(%s)" % "|".join(EOF_STRINGS), string) # last string should be "" return "".join(string_list[:-2]) example = "def somme(x,y)\n return x+y" print(f"example :\n{remove_last_block(example)}") ``` ``` example : ``` Going to an old version of the repo, I found we had this function instead which works properly so I'm wondering why we changed it, it would also be more practical in case no stopping criteria at EOF_STRINGs was used during the generation as it only keeps the first block ```python def first_block(string): """Split off first block of code by scanning for class, def etc. on newlines.""" return re.split("|".join(EOF_STRINGS), string)[0].rstrip() print(f"example :\n{first_block(example)}") ``` ``` example : def somme(x,y) return x+y ```
08-14-2022 18:19:19
08-14-2022 18:19:19
Update: the case I mentioned would never occur with the StoppingCriteria used in codeparrot, since every generation should include an EOF string, so there is no bug: https://github.com/huggingface/transformers/blob/d6eeb871706db0d64ab9ffd79f9545d95286b536/examples/research_projects/codeparrot/scripts/human_eval.py#L59
transformers
18,616
closed
BartForConditionalGeneration is erroneous either at .forward or at .generate
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-58-generic-x86_64-with-debian-buster-sid - Python version: 3.7.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.10.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help? @patil-suraj @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch text = """ Phillip, Could you please do me a favor?\nI would like to read your current title policy to see what \ it says about easements.\nYou should have received a copy during your closing.\nI don't know how many \ pages it will be but let me know how you want to handle getting a copy made.\nI'll be happy to make the copy,\ or whatever makes it easy for you.\nThanks,\n """ checkpoint = "Aktsvigun/bart-base_aeslc_42" model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).cuda() tokenizer = AutoTokenizer.from_pretrained(checkpoint) input_ids = tokenizer(text, truncation=True, return_tensors="pt")["input_ids"].to(model.device) generate_output = model.generate( input_ids, num_return_sequences=4, length_penalty=1., return_dict_in_generate=True, output_scores=True, early_stopping=True ) # Most probable labels according to the generate output. Taking from first since do not need initial generation token. labels = generate_output.sequences[0][generate_output.sequences[0] != 1][None, 1:] out = model(input_ids, labels=labels) probas = torch.nn.functional.softmax(out.logits, dim=-1) sequence_score = probas[0].log().gather(index=labels[0][:, None], dim=-1).sum() / len(labels[0]) assert torch.allclose(-sequence_score, out.loss) assert torch.allclose(sequence_score, generate_output.sequences_scores[0]) ``` ### Expected behavior The last assert must be passed, yet the results differ (-0.8670 for reconstructed score and -0.8581 from generated output). What happens in the code: I first generate the sequence with BART, and then I try to reproduce the score by calling `.forward` (reproducing the score as the average of log-probas of labels ids taken from each decoder iteration). Why is it important: this is a "sub-bug" which I found, verifying another bug: I wrote a function to restore the sequences and sequences scores from `transformers.generation_utils.BeamSearchEncoderDecoderOutput.scores` and got slightly different results with the ones outputted by `transformers.generation_utils.BeamSearchEncoderDecoderOutput`. Namely, I restore some sequences with the scores, higher than `transformers.generation_utils.BeamSearchEncoderDecoderOutput.sequences_scores`. I need to check, which version (default / mine) is correct, hence I need to pass the sequence with forward and calculate its "intrinsic" score. However, as this example shows, either `.forward` or `.generate` return slightly erroneous results.
08-13-2022 20:38:52
08-13-2022 20:38:52
Hi @Aktsvigun 👋 You are absolutely right, our documentation is not clear at the moment if you want to get the scores with beam search. In essence, the scores for a given index of `generate_output.sequences_scores` do not match the sequence with that index, because the sequence's internal index during beam search gets shuffled around (due to the beam search algorithm structure) :) We do have a method to reverse this shuffling, but it is not yet documented: 👉 [`compute_transition_beam_scores`](https://github.com/huggingface/transformers/blob/e0b825a8d03f50ed9dbf9fbbbb3b4fcf0b4e4b22/src/transformers/generation_utils.py#L876) 👉 [Beam search output docs](https://huggingface.co/docs/transformers/v4.21.1/en/internal/generation_utils#transformers.generation_utils.BeamSearchDecoderOnlyOutput), to understand the inputs to this function Give it a go, and let us know if it worked as you expected. We will be updating the docs soon, suggestions are appreciated! (keeping this issue open to track the documentation updates for the sequence scores)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
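A sketch of using the helper named above to undo the beam-search shuffling; the argument names follow the linked, still-undocumented method and may change, and it assumes the installed version returns `beam_indices` in the generate output when `output_scores=True`.

```python
# Recover per-step scores that line up with the sequences returned by generate().
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "Aktsvigun/bart-base_aeslc_42"
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

input_ids = tokenizer("Phillip, could you please do me a favor?", return_tensors="pt").input_ids
out = model.generate(
    input_ids,
    num_beams=4,
    num_return_sequences=4,
    return_dict_in_generate=True,
    output_scores=True,
)

# transition_scores[i, t] is the log-probability of step t of sequences[i],
# i.e. the scores re-ordered so they match the returned sequences.
transition_scores = model.compute_transition_beam_scores(
    out.sequences, out.scores, out.beam_indices
)

# Summing over steps gives the unnormalised sequence log-probability; with
# length_penalty=1.0, dividing by the number of generated tokens should match
# out.sequences_scores for each returned sequence.
print(transition_scores.sum(dim=-1))
print(out.sequences_scores)
```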
transformers
18,615
closed
Determine framework automatically before ONNX export
# What does this PR do? Determines whether to use `torch` or `tf2onnx` as the ONNX exporter automatically with the following priority: 1. User input via `framework` / `--framework`. 2. If local checkpoint is provided, use the same framework as the checkpoint. 3. Available framework in environment, with priority given to PyTorch. Fixes issue https://github.com/huggingface/transformers/issues/18495 where PyTorch was still attempted for a local TF checkpoint even though it did not exist in the environment. This also avoids requiring users to use `--framework=tf` when using the ONNX export driver script. Misc: * Adds `tf` to pip install for `run_tests_onnxruntime` and `run_tests_onnxruntime_all` in CI. ## Tests * `python -m transformers.onnx` driver with and without `--framework` on local checkpoints and hub. Tested in containerized environments that had only PyTorch, only TensorFlow, or both. * Successful. * Unit tests: ran `RUN_SLOW=true pytest tests/onnx` ~~* Overall, tests **passed w.r.t `main`** since they share the same failing tests:~~ <strike> ``` FAILED tests/onnx/test_onnx.py::OnnxExportTestCase::test_quantize_pytorch - TypeError: 'module' object is not callable FAILED tests/onnx/test_onnx.py::OnnxExportTestCase::test_quantize_tf - TypeError: 'module' object is not callable FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_048_data2vec_vision_image_segmentation - ValueError: Unrecognized configuration class <class 'transformers.models.data2vec.configuration_data2vec_vision.Data2VecVisionConfig'> for this kind of AutoModel: AutoModelForImageSegmentation. FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_048_data2vec_vision_image_segmentation - ValueError: Unrecognized configuration class <class 'transformers.models.data2vec.configuration_data2vec_vision.Data2VecVisionConfig'> for this kind of AutoModel: AutoModelForImageSegmentation. ``` </strike> ~~* Wrote up https://github.com/huggingface/transformers/issues/18614 for the `TypeError: 'module' object is not callable` errors.~~ **Fixed by https://github.com/huggingface/transformers/pull/18336** ~~* As for the `AutoModel` error, https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py#L363 says not to add new models, so is this failure acceptable?~~ **Fixed by https://github.com/huggingface/transformers/pull/18587** ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik and others who may be interested :) <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-13-2022 16:10:52
08-13-2022 16:10:52
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thank you so much for greatly improving the framework selection in the ONNX exporter @rachthree (also, welcome as a first time contributor 🥳)! > > Overall, the logic looks great to me and I'd really like to see a unit test of the `determine_framework` function. This would give us some confidence that any future changes on the framework selection side won't accidentally break the desired behaviour. > > Regarding the failing unit tests, these will be fixed by: > > * #18587 > * #18336 > > so we can rebase your branch on `main` once they're approved / merged (should be soon) Thank you for the review and welcoming me! I'm excited to contribute, especially since this is my first PR in the open source community :) Glad to see the 2 PRs will fix those unit tests.
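A sketch of the kind of unit test requested above for the framework-selection priority (user choice > local checkpoint format > installed framework). The location of `determine_framework` on `FeaturesManager` and the exact return values are assumptions based on where this PR puts the logic.

```python
# Two illustrative pytest cases for the framework-selection logic.
from transformers import BertConfig, BertModel
from transformers.onnx.features import FeaturesManager


def test_user_choice_takes_priority():
    # An explicit framework request should be honoured regardless of what is installed.
    assert FeaturesManager.determine_framework("bert-base-cased", framework="pt") == "pt"


def test_local_pytorch_checkpoint_selects_pt(tmp_path):
    # A local directory containing only a PyTorch checkpoint should resolve to "pt".
    BertModel(BertConfig()).save_pretrained(tmp_path)
    assert FeaturesManager.determine_framework(str(tmp_path)) == "pt"
```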
transformers
18,614
closed
`transformers.convert_graph_to_onnx.quantize` fails in unit tests
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.10.60.1-microsoft-standard-WSL2-x86_64-with-glibc2.29 * Used `tensorflow/tensorflow:latest` Docker image for this environment, then used `pip install -e '.[dev,onnx]'` - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No Other: - `onnxruntime` version: 1.12.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run `RUN_SLOW=true pytest tests/onnx/test_onnx.py` Get failures: ``` FAILED tests/onnx/test_onnx.py::OnnxExportTestCase::test_quantize_pytorch - TypeError: 'module' object is not callable FAILED tests/onnx/test_onnx.py::OnnxExportTestCase::test_quantize_tf - TypeError: 'module' object is not callable ``` ### Expected behavior The unit tests should pass. I believe this failure is due to `onnxruntime.quantization.quantize` which is a module that contains functions `quantize_static` and `quantize_dynamic`. The API may have changed since the unit test was written. I'm not sure which is the one to use for the unit tests. Even after fixing, not sure how `transformers` should handle the different versions of `onnxruntime` or should the required version change in `setup.py`. See https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/quantization/quantize.py
08-13-2022 16:07:40
08-13-2022 16:07:40
cc @lewtun <|||||>I have fixed this in https://github.com/huggingface/transformers/pull/18336, but still waiting for a review.<|||||>Looks like this has been fixed! Closing this.
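For readers who hit the same error in their own scripts: with the current onnxruntime API, `onnxruntime.quantization.quantize` is a module, and dynamic quantization goes through `quantize_dynamic`. A minimal sketch (file names are placeholders):

```python
# Dynamically quantize an exported ONNX model to int8 weights.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="model.onnx",             # ONNX graph produced by the export
    model_output="model-quantized.onnx",  # destination for the quantized model
    weight_type=QuantType.QInt8,
)
```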
transformers
18,613
closed
feed `input_embeds` into `FlaxT5ForConditionalGeneration`
This is a PR requested by @sanchit-gandhi in [https://github.com/huggingface/transformers/issues/18036#issuecomment-1214131955](https://github.com/huggingface/transformers/issues/18036#issuecomment-1214131955) To summarize the issue - Flax encoder-decoder models are currently missing `input_embeds` argument, unlike the non-flax models. In this PR, I have added this argument for `FlaxT5ForConditionalGeneration` model and showed how this may be used for feeding features from other modalities such as vision into a language model such as `T5`. Please run [examples/flax/vision-language/t5_for_vl.py](examples/flax/vision-language/t5_for_vl.py) for testing this feature. Here's the output you should see: ``` -------------------------------------------------------------------------------- Model Input -> summarize: The US has "passed the peak" on new coronavirus cases, President Donald Trump said and predicted that some states would reopen this month. The US has over 637,000 confirmed Covid-19 cases and over 30,826 deaths, the highest for any country in the world. At the daily White House coronavirus briefing on Wednesday, Trump said new guidelines to reopen the country would be announced on Thursday after he speaks to governors. -------------------------------------------------------------------------------- Summary from input_ids -> the country has over 637,000 confirmed cases and more than 30,826 deaths . the latest cases could be announced monday after speaking to governors . -------------------------------------------------------------------------------- Summary from input_embeds -> the country has over 637,000 confirmed cases and more than 30,826 deaths . the latest cases could be announced monday after speaking to governors . -------------------------------------------------------------------------------- Summary after concatenating random visual embeddings -> the country has over 637,000 confirmed cases and more than 30,826 deaths . the latest cases could be announced monday after a tuesday vote . ```
08-13-2022 13:36:13
08-13-2022 13:36:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18613). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @BigRedT! Awesome that you've managed to get it working with so little code! And great to see that the model outputs are the same for `input_ids` and `input_embeds`. Before we can merge this we'll need to add some tests to make sure the functionality is as expected (which is should hopefully be given the toy example passes!). At the very least, we should add one test that mirrors the PyTorch test: https://github.com/huggingface/transformers/blob/1ccd2515ed6d7da4ec46fe94aedbd8a86a2cde8e/tests/test_modeling_common.py#L2094 And one that verifies that the output logits for `input_ids` and `input_embeds` match. Do you want to have a go at this?<|||||>If you have any questions/issues, feel free to reach out to @patrickvonplaten or @patil-suraj. They will be more than happy to lend you a hand and provide a review on this PR! Otherwise I can take a look in a little over a weeks time! Thanks @BigRedT!<|||||>@sanchit-gandhi thanks for helping with this! Will take a look in the coming week. <|||||>Added `test_input_embeds()` to `FlaxT5ModelTest` in `test_modeling_flax_t5.py` as requested by @sanchit-gandhi This test currently checks to see if the generated sequences from `input_ids` match those from `input_embeds` (obtained by feeding `input_ids` through the `shared` embedding layer in the model) @sanchit-gandhi wanted to see if the logits match too. @patrickvonplaten @patil-suraj what's the easiest way to compute logits from `generate()`. Also, could one of you review this PR. Thanks! <|||||>@sanchit-gandhi any updates?<|||||>Hey @BigRedT, let me know if you want any further clarification for the comments. Happy to answer any questions! This PR looks pretty close to completion :)<|||||>Hey @BigRedT - do you want to see this PR to completion? We're pretty close now! 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,612
closed
RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC
### System Info Transformers version: 4.4.0 Platform: Google Colab Python version: 3.7 ### Who can help? @patrickvonplaten, @anton-l, @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Pre-trained model: "facebook/wav2vec2-large-xlsr-53" I have fine-tuned the above pre-trained model on the Timit dataset. When I loaded my own dataset (named Torgo) and tried to evaluate the fine-tuned model's performance on the Torgo dataset, I got the following error: `RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC: size mismatch for lm_head.weight: copying a param with shape torch.Size([64, 1024]) from checkpoint, the shape in current model is torch.Size([51, 1024]). size mismatch for lm_head.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([51]).` ### Expected behavior My expected behavior is that I can directly evaluate the fine-tuned model on the Torgo dataset. In other words, how do I further train a fine-tuned model on a different dataset? After reading some materials on HuggingFace and GitHub, I know it is because the `config.vocab_size` of the Torgo dataset does not match that of the Timit dataset. My questions are as follows: 1. When I align the vocab size of the Torgo dataset to the one of the fine-tuned model, do I need to guarantee that the vocab extracted from the Torgo dataset is the same as the one extracted from the Timit dataset? Or do I just need to care about the size? 2. If I cannot align the vocab size to the one of the fine-tuned model, is there any other method by which I can achieve the expected behavior? 3. Could I just load the encoder or decoder components from the fine-tuned model? Because I am a beginner with the wav2vec2 model, if something that I mentioned above is wrong, feel free to correct me. Thank you in advance and looking forward to hearing from you.
08-13-2022 10:46:06
08-13-2022 10:46:06
Hey @YingLi001! Great question, and awesome that you found help using materials on the HuggingFace Hub and GitHub! I'll first provide some context regarding Wav2Vec2 models, their tokenisers and how they affect the model weights. This information should help in answering your questions! The pre-trained Wav2Vec2 model maps a sequence of audio inputs to a sequence of hidden-state representations. In order to decode text from this, we need to map the hidden-state representations to a vector over our vocabulary. To do this, we add a linear layer on top of the pre-trained Wav2Vec2 model. This linear layer performs a linear transformation of our hidden-states. It maps them from a dimensionality of 1024 to a dimensionality equal to our vocabulary size. In the case of TIMIT, where we have a vocabulary size of 51, we map the hidden-state representations from 1024-d down to 51-d. To decode a single character from this, we'd simply take the argmax of the 51-d vector, and look up the corresponding token from our tokenizer! So if we had a model that predicted the following 51-d vector:
```
[ 0.01 ]
[ 0.02 ]
[ 0.90 ]
...
[ 0.01 ]
```
The argmax would be the third token. The character that we'd predict would be the token at position 3 in the tokenizer. If we want to decode a string of characters, we have to do something a bit more fancy than just taking the argmax (i.e. connectionist temporal classification (CTC)), but the linear transformation remains the same! The linear layer that I've alluded to is called the "language model head", or `lm_head` for short. What's special about the LM head is that it has a dimensionality specific to the vocabulary that we train the model on. If we have a vocabulary of 51 tokens, we'll have an LM-head weight matrix of size [51, 1024] (map the 1024-d hidden-states to the 51-d output vector). If we have 64 tokens, such as in the Torgo dataset, we'll have an LM-head weight matrix of size [64, 1024]. Whenever you give a model a different vocabulary size, the LM-head is going to have to be reset to a new size. Because of this, the LM-head weights are going to be randomly initialised, and thus require fine-tuning if we want our model to generate sensible predictions. Since each dataset typically has a different vocabulary, we usually build a new tokeniser for each dataset, and fine-tune the Wav2Vec2 model accordingly. 1. If you match the vocabulary sizes one-to-one, it is possible to load the LM-head weights. However, this does not guarantee that your model will predict characters accurately. Suppose in TIMIT you built the following tokeniser:
```
"a": 1
"b": 2
"c": 3
...
"z": 26
```
And for Torgo you built a tokenizer of the same dimensionality, but with a re-ordered vocabulary:
```
"z": 1
"y": 2
"x": 3
...
"a": 26
```
If we now use our LM-head weights to make predictions, we might get the following vector:
```
[ 0.01 ]
[ 0.02 ]
[ 0.90 ]
...
[ 0.01 ]
```
Taking the argmax, we get the third token in our vocabulary. So for TIMIT, we'd output a "c". For Torgo, we'd output an "x". Very different! Because the vocabulary is shuffled, we've effectively re-initialised our LM head. If we want to load the LM-head and evaluate the model **without any further fine-tuning**, we would need to match the tokenisers **exactly** in vocabulary size and positions. This means that for the Torgo dataset, you would load the tokeniser that you built when you fine-tuned on TIMIT. However, if we permit fine-tuning on the Torgo dataset, we can do something a bit different. See point 3. 2. 
As mentioned, you need to align both the vocabulary size and the tokeniser exactly. 3. If the Torgo dataset is similar to TIMIT, it's more than valid to load the encoder in isolation from the TIMIT checkpoint and then train the model on Torgo. You could then build a tokenizer specifically for the Torgo dataset, but leverage the majority of the weights from the TIMIT checkpoint. You can do this with the `from_pretrained()` method, specifying the checkpoint location, setting the `config.vocab_size` to the correct value, and `ignore_mismatched_sizes` to `True` (ignores the `RuntimeError` you got previously):
```python
config.vocab_size = VOCAB_SIZE
model = Wav2Vec2ForCTC.from_pretrained(CKPT_LOCATION, config=config, ignore_mismatched_sizes=True)
```
This will randomly initialise the LM-head weights. We can then go ahead and train the model on the Torgo dataset with our new purpose-built tokeniser to learn suitable weights. Note that in total, we have approximately $51 \times 1024 + 51 \approx 52 \times 10^{3}$ weights for the linear layer. Overall, the model has nearly 400M params. That means that we're only randomly initialising the last 0.01% of the model weights! The remaining 99.99% are loaded from the pre-trained checkpoint. This means that we need relatively little data to fine-tune a model when we randomly initialise the LM-head but load the rest of the weights from pre-trained. What checkpoint you use to load your model before training on Torgo is at your discretion. If the datasets are similar, you could use the TIMIT checkpoint. Otherwise, you can fine-tune from scratch using the official pre-trained `facebook/wav2vec2-large-xlsr-53` checkpoint. In both cases, you'll likely have to build a new tokeniser to match the vocabulary of Torgo and randomly initialise the linear LM-head accordingly. Hope that helps and best of luck with your task!<|||||>Hi @sanchit-gandhi Thank you so much for your quick reply and for providing detailed and constructive feedback! I followed your suggestions (point 3) and successfully loaded the fine-tuned checkpoint. However, I still encountered some problems. Could you provide some help? Thank you in advance. The code to load the fine-tuned checkpoint is as follows:
```python
config = Wav2Vec2Config.from_json_file(
    "/content/drive/MyDrive/Thesispackage/wav2vec2-base-timit-demo-phones/checkpoint-11500/config.json")
config.vocab_size = VOCAB_SIZE
print("vocab_size", config.vocab_size)
model = Wav2Vec2ForCTC.from_pretrained(
    "/content/drive/MyDrive/Thesispackage/wav2vec2-base-timit-demo-phones/checkpoint-11500",
    config=config,
    ignore_mismatched_sizes=True,
    # ctc_loss_reduction="mean",
    # pad_token_id=processor.tokenizer.pad_token_id,
    # vocab_size=len(processor.tokenizer),
)
model.gradient_checkpointing_enable()
model.pad_token_id = processor.tokenizer.pad_token_id
model.ctc_loss_reduction = "mean"
model.vocab_size = len(processor.tokenizer)
```
Q1: When I load the fine-tuned checkpoint from a local path using `Wav2Vec2ForCTC.from_pretrained()`, I cannot add the `ctc_loss_reduction`, `pad_token_id`, `vocab_size` attributes inside this method. It gives me the error **TypeError: __init__() got an unexpected keyword argument 'vocab_size'**. Therefore, I add them after loading the model. Am I right? Q2: Because the `vocab_size` between Torgo and Timit is still different, **[** Actually, the **Torgo** dataset has **51** phonemes. The **Timit** dataset has **64** phonemes. The 51 phonemes of the Torgo dataset are **included** in the 64 phonemes of the Timit dataset. 
**]** when I further fine-tuned the Timit-fine-tuned checkpoint on the Torgo dataset, I got the following error:
```
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in ctc_loss(log_probs, targets, input_lengths, target_lengths, blank, reduction, zero_infinity)
   2615     )
   2616     return torch.ctc_loss(
-> 2617         log_probs, targets, input_lengths, target_lengths, blank, _Reduction.get_enum(reduction), zero_infinity
   2618     )
   2619
RuntimeError: blank must be in label range
```
After looking for some materials on HuggingFace, I found this [link](https://discuss.huggingface.co/t/runtimeerror-blank-must-be-in-label-range/4976/5). But I did not find a corresponding answer. Do you have any suggestions for solving this problem? Thanks again and looking forward to hearing from you. <|||||>Hey @YingLi001, sorry for the late reply! A1: The way you've set the vocab size with the config is entirely correct! A2: Interesting! I would then re-use the tokenizer for the TIMIT dataset when fine-tuning on Torgo and load the Wav2Vec2 checkpoint in its entirety. There seems to be no need to build a new tokenizer if the vocabularies overlap entirely. You'll then also retain the knowledge of the last linear layer (LM head) when you load the checkpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @YingLi001! I hope the above comments answered your questions. Feel free to reopen this issue if you're still encountering problems loading the `state_dict`, or a new issue if there's something else you're having issues with! More than happy to help 🤗
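To make point 1 above concrete, here is a small sketch of evaluating the TIMIT-fine-tuned checkpoint without further training by re-using the processor saved alongside it (the checkpoint path is hypothetical):
```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical path to the checkpoint fine-tuned on TIMIT (processor saved in the same folder).
checkpoint = "./wav2vec2-large-xlsr-53-timit"

processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)
model.eval()

def transcribe(waveform, sampling_rate=16_000):
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # Tokeniser and LM head stay aligned because both come from the same checkpoint.
    return processor.batch_decode(predicted_ids)
```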
transformers
18,611
closed
OSError in linux server
### System Info Python 3.8.10 version of transformers == 4.0.1 linux server ### Who can help? @patil-suraj Hi. I'm fine-tuning TrOCR in Farsi. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction tokenizer xlm-roberta-large model: `model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained("google/vit-base-patch16-224-in21k", 'facebook/mbart-large-50', from_tf=True)` ### Expected behavior I expect it to run and train the model as it does in colab, but it gives me this error: > OSError: Unable to load weights from pytorch checkpoint file for '/root/.cache/huggingface/transformers/d01bfc4a52063e6f2cc1bc7063192e012043a7c6d8e75981bb6afbb9dc911001.e4710baf72bd00d091aab2ae692d487c057734cf044ba421696823447b95521e' at '/root/.cache/huggingface/transformers/d01bfc4a52063e6f2cc1bc7063192e012043a7c6d8e75981bb6afbb9dc911001.e4710baf72bd00d091aab2ae692d487c057734cf044ba421696823447b95521e'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
08-13-2022 05:32:42
08-13-2022 05:32:42
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,610
closed
Added Docstrings for Deberta and DebertaV2 [PyTorch]
Adds Doctest for DeBerta and DeBertaV2 [Pytorch version] Issue: #16292 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @ydshieh @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-12-2022 21:24:17
08-12-2022 21:24:17
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh I made the changes you mentioned in this PR: https://github.com/huggingface/transformers/pull/17997 <|||||>@patrickvonplaten I would like to have your feedback on this<|||||>Requesting @LysandreJik for a final review in order to merge<|||||>Let's merge as it is. The usage of tiny models for the docs is not ideal, but we decided to use them so the doctests could run. Several (downstream) models have already been done this way. We/I can definitely find some time to train (at least a few) models. <|||||>Sounds good to me!
transformers
18,609
closed
Optuna hyperparameter does not sync trial/hyperparameters when using torchrun single-node, multi-process
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.12 - Huggingface_hub version: 0.9.1 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Note this requires running a script with `torchrun` on a single node but multiple processes. I ran this on a computer with 4 GPUs, so I used 1 node, 4 processes per node. 1. Install dependencies: `datasets`, `evaluate` for example script, `optuna` itself. 2. Unzip scripts: [scripts_to_reproduce.zip](https://github.com/huggingface/transformers/files/9329512/scripts_to_reproduce.zip). 3. Run `bug.sh`. 4. Observe the output mentions 4 trials, rather than 1 as the arguments specify. Note each reported learning rate is different. Example of relevant output: ``` [INFO|trainer.py:1612] 2022-08-12 15:12:03,173 >> ***** Running training ***** [INFO|trainer.py:1613] 2022-08-12 15:12:03,173 >> Num examples = 1024 [INFO|trainer.py:1614] 2022-08-12 15:12:03,173 >> Num Epochs = 3 [INFO|trainer.py:1615] 2022-08-12 15:12:03,173 >> Instantaneous batch size per device = 64 [INFO|trainer.py:1616] 2022-08-12 15:12:03,173 >> Total train batch size (w. parallel, distributed & accumulation) = 256 [INFO|trainer.py:1617] 2022-08-12 15:12:03,173 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1618] 2022-08-12 15:12:03,174 >> Total optimization steps = 12 0%| | 0/12 [00:00<?, ?it/s][W reducer.cpp:1251] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1251] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) [W reducer.cpp:1251] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. 
(function operator()) [W reducer.cpp:1251] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) 100%|██████████| 12/12 [00:07<00:00, 1.79it/s][INFO|trainer.py:1857] 2022-08-12 15:12:11,105 >> Training completed. Do not forget to share your model on huggingface.co/models =) {'train_runtime': 7.9321, 'train_samples_per_second': 387.289, 'train_steps_per_second': 1.513, 'train_loss': 4.8337141672770185, 'epoch': 3.0} 100%|██████████| 12/12 [00:07<00:00, 1.51it/s] [INFO|trainer.py:729] 2022-08-12 15:12:11,143 >> The following columns in the evaluation set don't have a corresponding argument in `BertForQuestionAnswering.forward` and have been ignored: offset_mapping, example_id. If offset_mapping, example_id are not expected by `BertForQuestionAnswering.forward`, you can safely ignore this message. [INFO|trainer.py:2902] 2022-08-12 15:12:11,148 >> ***** Running Evaluation ***** [INFO|trainer.py:2904] 2022-08-12 15:12:11,148 >> Num examples = 1024 [INFO|trainer.py:2907] 2022-08-12 15:12:11,148 >> Batch size = 64 100%|██████████| 4/4 [00:00<00:00, 7.13it/s]08/12/2022 15:12:12 - INFO - utils_qa - Post-processing 1024 example predictions split into 1024 features. 100%|██████████| 1024/1024 [00:02<00:00, 392.92it/s] 08/12/2022 15:12:15 - INFO - utils_qa - Saving predictions to /tmp/debug_hpsearch_TrfOptunaBug_16693069.out/eval_predictions.json. 08/12/2022 15:12:15 - INFO - utils_qa - Saving nbest_preds to /tmp/debug_hpsearch_TrfOptunaBug_16693069.out/eval_nbest_predictions.json. 100%|██████████| 1024/1024 [00:02<00:00, 394.31it/s] 100%|██████████| 1024/1024 [00:02<00:00, 393.04it/s] 100%|██████████| 1024/1024 [00:03<00:00, 337.82it/s] [I 2022-08-12 15:12:16,218] Trial 1 finished with value: 3.0207817191957127 and parameters: {'learning_rate': 5.82055234642441e-06}. Best is trial 1 with value: 3.0207817191957127. [I 2022-08-12 15:12:16,471] Trial 2 finished with value: 3.0207817191957127 and parameters: {'learning_rate': 1.0083131394917086e-06}. Best is trial 1 with value: 3.0207817191957127. 100%|██████████| 4/4 [00:05<00:00, 1.30s/it] 08/12/2022 15:12:16 - INFO - __main__ - Rank 0: Metrics: {'eval_exact_match': 0.09765625, 'eval_f1': 3.0207817191957127, 'epoch': 3.0} [I 2022-08-12 15:12:16,788] Trial 3 finished with value: 3.0207817191957127 and parameters: {'learning_rate': 5.780902974449181e-06}. Best is trial 1 with value: 3.0207817191957127. 08/12/2022 15:12:16 - INFO - __main__ - Best HP search run results: {'id': '1', 'value': 3.0207817191957127, 'all_metrics': None, 'hyperparameters': {'learning_rate': 5.82055234642441e-06}, 'value_name': 'brier_score', 'train_samples': 1024} [I 2022-08-12 15:12:17,173] Trial 0 finished with value: 3.0207817191957127 and parameters: {'learning_rate': 1.1949546634616653e-06}. Best is trial 1 with value: 3.0207817191957127. 
[INFO|modelcard.py:443] 2022-08-12 15:12:17,212 >> Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Question Answering', 'type': 'question-answering'}, 'dataset': {'name': 'squad', 'type': 'squad', 'config': 'plain_text', 'split': 'train', 'args': 'plain_text'}} ``` ### Expected behavior I expected this setup to produce and report 1 trial of results, with each GPU-process using the same hyperparameters, in this case the same learning rate. I expected `trainer.hyperparameter_search()` would be consistent in this way with how `trainer.train()` and `trainer.evaluate()` work. Instead the script reports 4 results and each GPU-process apparently uses a different learning rate.
08-12-2022 19:57:15
08-12-2022 19:57:15
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Can confirm this still happens as of commit c126a239bcea9c68453cf86045a5177afbe2be6c.<|||||>Hi, I have enabled the HPO DDP for optuna, and it works for CPU, you could try it in the latest master.<|||||>Hi, @sywangyi. Are you referring to #19096 (merged as 6227078d0a95aed688578d37b319e969a1dcd30f)? It's not clear to me that this should fix the problem -- because each process calls optimize() on its own I'd expect each process to generate a different Optuna trial object and for that to cause problems when passing the trial object to `train()`. However, I did try rerunning the OP (reproduce) script on the latest commit (83dc6377d0107b462e5d804ffa72d069625bc36b). It crashed with `RuntimeError: DDP expects same model across all ranks, but Rank 0 has 199 params, while rank 1 has inconsistent 0 params.`. Unsure if that's related.<|||||>Hi @spigo900 ,I am referring to https://github.com/huggingface/transformers/pull/19002, only rank0 will generate the trial and pass the argument to other ranks<|||||>@sywangyi I see, that looks like it should solve the problem. I will try that today. ETA: Thank you.<|||||>@sywangyi Yes, #19168 solved the problem. Thanks again.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
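For illustration, a rough sketch of the mechanism the linked fix relies on (rank 0 samples the trial, the other ranks receive the sampled values); this is not the actual implementation in the PRs above:
```python
import torch.distributed as dist

def sample_hparams_rank0(trial=None):
    # Only rank 0 talks to Optuna; other ranks get the sampled values via broadcast.
    if dist.get_rank() == 0:
        hparams = {"learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True)}
    else:
        hparams = None
    payload = [hparams]
    dist.broadcast_object_list(payload, src=0)
    return payload[0]
```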
transformers
18,608
closed
IterableDatasets result in nan loss in eval with dataloader_num_workers>=1 and multi-gpu
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.4.0-105-generic-x86_64-with-glibc2.31 - Python version: 3.9.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: YES ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run this modified/minimized [run_clm.py](https://gist.github.com/dlwh/074e2571fab15f94103603674dd184a3) under DeepSpeed (or presumably any other multiprocessing, but I didn't check) The script works fine if you don't use multiprocessing, or if you change it to not use an IterableDataset, or if you set dataloader_num_workers to 0 (which is the default) Relevant bit of logs: ``` Traceback (most recent call last): File "run_clm.py", line 125, in <module> main() File "run_clm.py", line 116, in main assert np.isfinite(metrics["eval_loss"]) AssertionError ``` ### Expected behavior assertion shouldn't fail, or at least trainer should require that dataloader_num_workers is 0 if using multi-gpu and IterableDataset... The underlying issue is that Trainer creates `IterableDatasetShard`s when using multi-gpu and IterableDataset, and [evaluation_loop](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3024-L3027) looks at the "num_examples" property of the IterableDatasetShard, but this value isn't actually incremented in the main training process if you're using `dataloader_num_workers>0`, because it's set in the worker processes... I will note that `evaluation_loop` goes to some trouble [to track the actual number of examples](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2935-L2944) so unless I'm missing something I think one could just always use that.
08-12-2022 18:24:29
08-12-2022 18:24:29
Thanks for flagging. The PR above should fix the issue, could you give it a quick try?
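A minimal illustration (not taken from the report) of why an attribute such as `num_examples` never changes in the parent process when `dataloader_num_workers > 0` — each worker mutates its own copy of the dataset:
```python
import torch
from torch.utils.data import DataLoader, IterableDataset

class CountingDataset(IterableDataset):
    def __init__(self):
        self.num_examples = 0

    def __iter__(self):
        for i in range(8):
            self.num_examples += 1  # runs inside the worker process
            yield torch.tensor([i])

if __name__ == "__main__":
    ds = CountingDataset()
    list(DataLoader(ds, num_workers=2))
    # Still 0 in the parent: the workers only updated their own pickled copies.
    print(ds.num_examples)
```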
transformers
18,607
closed
Changed the class on which `register_for_auto_class` method is defined from `TFSequenceSummary` to `TFPreTrainedModel`
# What does this PR do? Changed the class on which `register_for_auto_class` method is defined from `TFSequenceSummary` to `TFPreTrainedModel`. It does not make sense that `register_for_auto_class` is defined on `TFSequenceSummary`. I believe this is a bug in PR #15379. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-12-2022 17:53:12
08-12-2022 17:53:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @LysandreJik, could you review please? <|||||>Let me ping @sgugger for review, he's more acquainted with this code and will be back from leave shortly :)
transformers
18,606
closed
Fix Yolos ONNX export test
# What does this PR do? YOLOS has an issue with ONNX export on CUDA. Let's skip it, just like [this](https://github.com/ultralytics/yolov5/pull/8378). **Question**: we can enable this if there is a way to run the export non-dynamically for this model. Current job failure:
```
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_143_yolos_default (line 318)
AssertionError: yolos, default -> Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu! (when checking argument for argument index in method wrapper__index_select)

tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_144_yolos_object_detection (line 318)
AssertionError: yolos, object-detection -> Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu! (when checking argument for argument index in method wrapper__index_select)
```
08-12-2022 16:52:56
08-12-2022 16:52:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,605
open
[WIP] Introduce NestLayer
# What does this PR do? Outline for a new layer class to replace `tf.keras.layers.Layer` in our models. It extends `tf.keras.layers.Layer` to include the two methods `get_layer` and `layers` from `tf.keras.Model`. ## Motivation All of our TF models' layers are subclasses of `tf.keras.layers.Layer`. Unfortunately, when there are nested layers, we are not able to access the layers below the first level using the typical keras `layers` API. The main reason for introducing this is to be able to use our models as backbones. In DETR, we replace the ResNet backbone's [batchnorm layers with frozen batchnorm layers](https://github.com/huggingface/transformers/blob/2ab790e82d0759b667cd848a4d49e6ad65e15d59/src/transformers/models/detr/modeling_detr.py#L306). We need to be able to perform the same or similar operations on our TF models. This requires being able to access all of the layers, which is currently not possible. For example - our `TFResNetModel` will only show `TFResNetMainLayer` when we call `model.summary(expand_nested=True)`, and `TFResNetMainLayer` has no property `layers`.
```
In [1]: from transformers import TFResNetModel

In [2]: model_checkpoint = "microsoft/resnet-50"

In [3]: model = TFResNetModel.from_pretrained(model_checkpoint)

In [4]: model.summary(expand_nested=True)
Model: "tf_res_net_model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 resnet (TFResNetMainLayer)  multiple                  23561152
=================================================================
Total params: 23,561,152
Trainable params: 23,508,032
Non-trainable params: 53,120
_________________________________________________________________

In [5]: model.layers
Out[5]: [<transformers.models.resnet.modeling_tf_resnet.TFResNetMainLayer at 0x17fb9daf0>]

In [6]: hasattr(model.layers[0], 'layers')
Out[6]: False
```
This is also necessary if we ever want to be able to access the intermediate activations of our TF models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
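A rough sketch of what such a class could look like — the name and implementation here are illustrative only and may differ from what this PR actually does:
```python
import tensorflow as tf

class NestLayer(tf.keras.layers.Layer):
    """Sketch of a Layer that exposes `layers`/`get_layer` the way tf.keras.Model does."""

    @property
    def layers(self):
        children = []
        for value in vars(self).values():
            if isinstance(value, tf.keras.layers.Layer):
                children.append(value)
            elif isinstance(value, (list, tuple)):
                children.extend(v for v in value if isinstance(v, tf.keras.layers.Layer))
        return children

    def get_layer(self, name):
        for layer in self.layers:
            if layer.name == name:
                return layer
        raise ValueError(f"No layer named {name!r} in {self.name!r}")
```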
08-12-2022 16:49:49
08-12-2022 16:49:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18605). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,604
closed
[Donut] Fix URLs
# What does this PR do? This PR fixes the URLs of my Donut notebooks.
08-12-2022 16:20:04
08-12-2022 16:20:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,603
closed
NaN values appear when including a new padding token in my tokenizer
I'm trying to fine-tune a DialoGPT model on a new dataset. I already processed my data correctly and adding a new padding token in the tokenizer didn't seem to make any issue : ```python #my dataset : print(dataset) print(dataset[0]['text']) ``` > ### output ### > > Dataset({ > features: ['text'], > num_rows: 48423 > }) > > [speaker 1]: Great that you wish to hear the voices of the guitarists. Here are your booking details of the tickets. You wish to purchase 4 tickets for the event The Original Wailers that is going to take place on March 8th in Berkeley, right? > [speaker 2]: Yup, you're right. Please May I know where is the event conducted and I need the complete address? > [speaker 1]: Please note down the complete address of the event happening. It's at Cornerstone Craft Beer & Live Music, 2367 Shattuck Avenue. Your reservation is successful and have a great time there! > [speaker 2]: Thanks much for the information you've given. Please can you help me to find some intermediate priced restaurant that provides Ethiopian kind of food. > [speaker 1]: Yup! There is an Ethiopian Restaurant named Addis Restaurant providing excellent and authentic traditional Ethiopian cuisine located in Berkeley. Do you wish to reserve a table here? > [speaker 2]: ```python #tokenizing and adding labels tokenizer.add_special_tokens({'pad_token': '[PAD]'}) def tokenize_function(examples): return tokenizer(examples["text"], padding='max_length', add_special_tokens =True, max_length=246) #truncation=True, max_length=13) tokenized_datasets = ds.map( tokenize_function, batched=True, num_proc=4, remove_columns=["text"] ) tokenized_datasets = tokenized_datasets.add_column("labels", tokenized_datasets[:]['input_ids']) train_set = model.prepare_tf_dataset( tokenized_datasets, shuffle=True, batch_size=1, ) sample = train_set.as_numpy_iterator() sample = sample.next() print(tokenized_datasets) print(train_set) print(sample) ``` > ### output ### > > Dataset({ > features: ['input_ids', 'attention_mask', 'labels'], > num_rows: 48423 > }) > > <PrefetchDataset element_spec=({'input_ids': TensorSpec(shape=(1, 246), dtype=tf.int64, name=None), 'attention_mask': TensorSpec(shape=(1, 246), dtype=tf.int64, name=None)}, TensorSpec(shape=(1, 246), dtype=tf.int64, name=None))> > > ({'attention_mask': array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 0, 0, 0]]), > 'input_ids': array([[ 58, 4125, 3110, 352, 5974, 314, 765, 284, 711, > 440, 9190, 440, 14918, 440, 3825, 319, 616, 3359, > 13, 198, 58, 4125, 3110, 362, 5974, 921, 765, > 284, 3350, 262, 3496, 440, 9190, 440, 14918, 440, > 3825, 4291, 262, 3195, 11, 826, 30, 198, 58, > 4125, 3110, 352, 5974, 1320, 318, 826, 13, 1867, > 2099, 286, 3496, 318, 340, 30, 198, 58, 4125, > 3110, 362, 5974, 632, 318, 5610, 739, 262, 12136, > 6536, 290, 534, 3496, 468, 2067, 13, 198, 58, > 4125, 3110, 
352, 5974, 20558, 617, 1637, 329, 502, > 13, 198, 58, 4125, 3110, 362, 5974, 220, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257]])}, > array([[ 58, 4125, 3110, 352, 5974, 314, 765, 284, 711, > 440, 9190, 440, 14918, 440, 3825, 319, 616, 3359, > 13, 198, 58, 4125, 3110, 362, 5974, 921, 765, > 284, 3350, 262, 3496, 440, 9190, 440, 14918, 440, > 3825, 4291, 262, 3195, 11, 826, 30, 198, 58, > 4125, 3110, 352, 5974, 1320, 318, 826, 13, 1867, > 2099, 286, 3496, 318, 340, 30, 198, 58, 4125, > 3110, 362, 5974, 632, 318, 5610, 739, 262, 12136, > 6536, 290, 534, 3496, 468, 2067, 13, 198, 58, > 4125, 3110, 352, 5974, 20558, 617, 1637, 329, 502, > 13, 198, 58, 4125, 3110, 362, 5974, 220, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, > 50257, 50257, 50257]])) The ouputs so far seem pretty clean for me. 
But when I try to make a prediction with my model or train it I have nan values as output : ```python #Instatiation of model from transformers import TFAutoModelForCausalLM model = TFAutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium") optimizer = AdamWeightDecay(learning_rate=1e-9, weight_decay_rate=0.01) model.compile(optimizer=optimizer, jit_compile=True) ``` ```python #model inference loss = model(sample[0], labels=sample[1]) print(loss) ``` > ### output ### > > TFCausalLMOutputWithCrossAttentions([('loss', > <tf.Tensor: shape=(1,), dtype=float32, numpy=array([nan], dtype=float32)>), > ('logits', > <tf.Tensor: shape=(1, 246, 50258), dtype=float32, numpy= > array([[[nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan], > ..., > [nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan]]], dtype=float32)>), > ('past_key_values', > (<tf.Tensor: shape=(2, 1, 16, 246, 64), dtype=float32, numpy= > array([[[[[nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan], > ..., > [nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan]], > > [[nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan], > ..., > [nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan], > [nan, nan, nan, ..., nan, nan, nan]], > ............. ```python #model training model.fit(train_set, epochs=1) ``` > ### output ### > > 56/48423 [..............................] - ETA: 2:27:49 - loss: nan This NAN value is certainly caused by the new token '[PAD]' added but I don't know how to deal with it. Can someone help me please ?
08-12-2022 13:48:58
08-12-2022 13:48:58
@ydshieh, would you like to take a look at this issue?<|||||>Hi @tessanix, thank you for reporting. Could you provide a self-contained code snippet that can be run to reproduce the issue? So far, `dataset` is not defined, and neither is `ds`. Also, `model` is used (`model.prepare_tf_dataset`) before it is created. It would be really helpful to have a self-contained code snippet for debugging 🙏 . Thank you. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
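One frequent cause of NaNs right after adding a token — not confirmed as the culprit in this particular thread — is that the embedding matrix was never resized, so the new id indexes past the end of the table. A hedged sketch of the usual remedy:
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = TFAutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

num_added = tokenizer.add_special_tokens({"pad_token": "[PAD]"})
if num_added > 0:
    # Grow the (tied) embedding matrix so the new id has a row to look up.
    model.resize_token_embeddings(len(tokenizer))
```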
transformers
18,602
closed
Remove pos arg from Perceiver's Pre/Postprocessors
Fix #15971 @NielsRogge
08-12-2022 13:21:30
08-12-2022 13:21:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,601
closed
Update run_mlm_no_trainer.py
Fixes #18436 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @muellerzr Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-12-2022 12:24:33
08-12-2022 12:24:33
@muellerzr Please let me know if I have to make some changes or whether I have done it correctly<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@vedant-z Why did you close the request yourself? Was there a mistake?
transformers
18,600
closed
Add `TFAutoModelForSemanticSegmentation` to the main `__init__.py`
# What does this PR do? Currently, `from transformers import TFAutoModelForSemanticSegmentation` fails.
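With the export in place, a plain import and `from_pretrained` call should work; a small sketch using an example checkpoint (the checkpoint choice and the `from_pt` conversion are assumptions, not part of this PR):
```python
from transformers import TFAutoModelForSemanticSegmentation

# Example checkpoint; from_pt=True converts PyTorch weights if no TF weights are hosted.
model = TFAutoModelForSemanticSegmentation.from_pretrained(
    "nvidia/segformer-b0-finetuned-ade-512-512", from_pt=True
)
```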
08-12-2022 11:43:28
08-12-2022 11:43:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for fixing!<|||||>test failure is irrelevant. Merge now.
transformers
18,599
closed
how to customize the encoder_output when using the generate function in BART?
For instance, I would like to concatenate the last encoder hidden states from two different texts. How can I achieve this using the existing generate function?
08-12-2022 11:27:21
08-12-2022 11:27:21
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
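For reference, a hedged sketch of one possible approach to the question above — it relies on `generate()` skipping the encoder pass when `encoder_outputs` is already supplied; the texts and lengths are placeholders:
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer
from transformers.modeling_outputs import BaseModelOutput

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

batch_a = tokenizer("first document ...", return_tensors="pt")
batch_b = tokenizer("second document ...", return_tensors="pt")

with torch.no_grad():
    hidden_a = model.get_encoder()(**batch_a).last_hidden_state
    hidden_b = model.get_encoder()(**batch_b).last_hidden_state

# Concatenate the two encoder outputs along the sequence dimension.
encoder_outputs = BaseModelOutput(last_hidden_state=torch.cat([hidden_a, hidden_b], dim=1))
attention_mask = torch.cat([batch_a.attention_mask, batch_b.attention_mask], dim=1)

generated = model.generate(encoder_outputs=encoder_outputs, attention_mask=attention_mask, max_length=60)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```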
transformers
18,598
closed
mac m1 `mps` integration
# What does this PR do? 1. Enables users to leverage Apple M1 GPUs via mps device type in PyTorch for faster training and inference than CPU. Fixes #17971 2. User has to just pass `--use_mps_device` argument. For example, you can run the offical Glue text classififcation task (from the root folder) using Apple Silicon M1 GPU with below command: ```bash export TASK_NAME=mrpc python examples/pytorch/text-classification/run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ \ --use_mps_device \ --overwrite_output_dir ``` Below are the output logs: ```bash python examples/pytorch/text-classification/run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ \ --use_mps_device \ --overwrite_output_dir NOTE: Redirects are currently not supported in Windows or MacOs. 08/12/2022 15:30:13 - WARNING - __main__ - Process rank: -1, device: mps, n_gpu: -1distributed training: False, 16-bits training: False 08/12/2022 15:30:13 - INFO - __main__ - Training/evaluation parameters TrainingArguments( _n_gpu=-1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_pin_memory=True, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, debug=[], deepspeed=None, disable_tqdm=False, do_eval=True, do_predict=False, do_train=True, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=False, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=<HUB_TOKEN>, ignore_data_skip=False, include_inputs_for_metrics=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=2e-05, length_column_name=length, load_best_model_at_end=False, local_rank=-1, log_level=-1, log_level_replica=-1, log_on_each_node=True, logging_dir=/tmp/mrpc/runs/Aug12_15-30-12_Sourabs-MacBook-Pro.local, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=500, logging_strategy=steps, lr_scheduler_type=linear, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, no_cuda=False, num_train_epochs=3.0, optim=adamw_hf, output_dir=/tmp/mrpc/, overwrite_output_dir=True, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=32, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=<PUSH_TO_HUB_TOKEN>, ray_scope=last, remove_unused_columns=True, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/tmp/mrpc/, save_on_each_node=False, save_steps=500, save_strategy=steps, save_total_limit=None, seed=42, sharded_ddp=[], skip_memory_metrics=True, tf32=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=True, warmup_ratio=0.0, warmup_steps=0, 
weight_decay=0.0, xpu_backend=None, ) 08/12/2022 15:30:14 - INFO - datasets.info - Loading Dataset Infos from ... [INFO|configuration_utils.py:643] 2022-08-12 15:30:17,041 >> loading configuration file config.json from cache at /Users/sourabmangrulkar/.cache/huggingface/hub/models--bert-base-cased/snapshots/a8d257ba9925ef39f3036bfc338acf5283c512d9/config.json [INFO|configuration_utils.py:695] 2022-08-12 15:30:17,042 >> Model config BertConfig { "_name_or_path": "bert-base-cased", "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.22.0.dev0", "type_vocab_size": 2, "use_cache": true, "vocab_size": 28996 } ... 08/12/2022 15:30:19 - INFO - __main__ - Sample 2619 of the training set: {'sentence1': 'The proceedings were taken up with prosecutors outlining their case against Amrozi , reading 33 pages of documents outlining allegations against him .', 'sentence2': 'Proceedings were taken up with prosecutors outlining their case against Amrozi , reading a 33-page accusation letter to the court .', 'label': 1, 'idx': 2916, 'input_ids': [101, 1109, 10830, 1127, 1678, 1146, 1114, 24987, 1149, 13260, 1147, 1692, 1222, 7277, 2180, 5303, 117, 3455, 3081, 5097, 1104, 4961, 1149, 13260, 9966, 1222, 1140, 119, 102, 20661, 1127, 1678, 1146, 1114, 24987, 1149, 13260, 1147, 1692, 1222, 7277, 2180, 5303, 117, 3455, 170, 3081, 118, 3674, 21100, 2998, 1106, 1103, 2175, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}. ... [INFO|trainer.py:1612] 2022-08-12 15:30:22,027 >> ***** Running training ***** [INFO|trainer.py:1613] 2022-08-12 15:30:22,027 >> Num examples = 3668 [INFO|trainer.py:1614] 2022-08-12 15:30:22,027 >> Num Epochs = 3 [INFO|trainer.py:1615] 2022-08-12 15:30:22,027 >> Instantaneous batch size per device = 32 [INFO|trainer.py:1616] 2022-08-12 15:30:22,027 >> Total train batch size (w. 
parallel, distributed & accumulation) = 32 [INFO|trainer.py:1617] 2022-08-12 15:30:22,027 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1618] 2022-08-12 15:30:22,027 >> Total optimization steps = 345 100%|█████████████████████████████████████████████████████████████| 345/345 [09:38<00:00, 2.04s/it][INFO|trainer.py:1857] 2022-08-12 15:40:00,410 >> Training completed. Do not forget to share your model on huggingface.co/models =) {'train_runtime': 578.4189, 'train_samples_per_second': 19.024, 'train_steps_per_second': 0.596, 'train_loss': 0.4251004426375679, 'epoch': 3.0} 100%|█████████████████████████████████████████████████████████████| 345/345 [09:38<00:00, 1.68s/it] [INFO|trainer.py:2647] 2022-08-12 15:40:00,481 >> Saving model checkpoint to /tmp/mrpc/ [INFO|configuration_utils.py:440] 2022-08-12 15:40:00,487 >> Configuration saved in /tmp/mrpc/config.json [INFO|modeling_utils.py:1569] 2022-08-12 15:40:01,553 >> Model weights saved in /tmp/mrpc/pytorch_model.bin [INFO|tokenization_utils_base.py:2114] 2022-08-12 15:40:01,561 >> tokenizer config file saved in /tmp/mrpc/tokenizer_config.json [INFO|tokenization_utils_base.py:2121] 2022-08-12 15:40:01,561 >> Special tokens file saved in /tmp/mrpc/special_tokens_map.json ***** train metrics ***** epoch = 3.0 train_loss = 0.4251 train_runtime = 0:09:38.41 train_samples = 3668 train_samples_per_second = 19.024 train_steps_per_second = 0.596 08/12/2022 15:40:01 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:729] 2022-08-12 15:40:01,619 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1, idx. If sentence2, sentence1, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. [INFO|trainer.py:2898] 2022-08-12 15:40:01,637 >> ***** Running Evaluation ***** [INFO|trainer.py:2900] 2022-08-12 15:40:01,637 >> Num examples = 408 [INFO|trainer.py:2903] 2022-08-12 15:40:01,637 >> Batch size = 8 100%|███████████████████████████████████████████████████████████████| 51/51 [00:04<00:00, 11.68it/s] ***** eval metrics ***** epoch = 3.0 eval_accuracy = 0.8407 eval_combined_score = 0.8644 eval_f1 = 0.8881 eval_loss = 0.3957 eval_runtime = 0:00:04.80 eval_samples = 408 eval_samples_per_second = 84.915 eval_steps_per_second = 10.614 ``` Attaching plots showing GPU usage on M1 pro with 10 CPU and 14 GPU cores: <img width="393" alt="Screenshot 2022-08-12 at 3 37 37 PM" src="https://user-images.githubusercontent.com/13534540/184333060-85df7fc3-28ab-4a7f-9a61-787df6c19c90.png"> Note: Pre-requisites: Installing torch with `mps` support ```python # installing torch with m1 support on mac # install python 3.10.5 # check the platform import platform platform.platform() 'macOS-12.5-arm64-arm-64bit' # (This is compatible as the macOS version is above 12.3 and it is the ARM64 version) # install torch 1.12 via the below command # pip3 install torch torchvision torchaudio # test the `mps` device support >>> import torch >>> torch.has_mps True >>> a = torch.Tensor([10,11]) >>> a.to("mps") /Users/mac/ml/lib/python3.10/site-packages/torch/_tensor_str.py:103: UserWarning: The operator 'aten::bitwise_and.Tensor_out' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.) 
nonzero_finite_vals = torch.masked_select(tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0)) tensor([10.0000, 11.0000], device='mps:0') ```
08-12-2022 10:15:29
08-12-2022 10:15:29
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,597
closed
[CvT] Tensorflow implementation
# What does this PR do? This PR adds the Cvt model implementation in Tensorflow. This includes the base model and the model with an image classification head on top. <!-- Remove if not applicable --> ## TODO - [x] Write the fundamental components (Convolutional Token Embeddings & Convolutional Transformer Block) - [x] Write base model & image classification model - [x] Modify related utilities - [x] Write relevant tests (in test suite) - [x] Preview Tensorflow documentation for Cvt ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ### Questions - In the configuration file of the model CVT, ```layer_norm_eps``` is initialized at ```1e-12```. However, it seems that in the original implementation, the authors use ```epsilon=1e-5```. Moreover, the Cvt model in pytorch (HuggingFace), does not seem to use the configuration ```layer_norm_eps=1e-12``` for layer normalization throughout the model, instead using the default ```epsilon=1e-5```. What is the use of layer_norm_eps in the configuration file (of the Cvt model) ? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-12-2022 09:55:13
08-12-2022 09:55:13
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR @mathieujouffroy! Let me ping @amyeroberts for review :)<|||||>You're welcome. Cool thanks, should I create an Issue ? <|||||>Thanks a lot for both of your reviews 🙏 ! I've corrected the issues :) Although, I kept using `shape_list` instead of `tf.shape` throughout the implementation of the model as `tf.shape` was breaking things while running the tests (see comment above). Should I follow the instructions in this [PR comment](https://github.com/huggingface/transformers/pull/18678#issuecomment-1222244001) to upload to weights ?<|||||>@mathieujouffroy awesome, seems like we are ready to move on to the next stage. I'm adding @sgugger as the last reviewer. Meanwhile, you can open the PR to the TF model weights on the hub as follows: 1. Make sure you have the latest version of the hub installed (`pip install huggingface_hub -U`) and that you are logged in to HF with a write token (`huggingface-cli login`) 2. Run `transformers-cli pt-to-tf --model-name foo/bar` from this branch :D 3. In the Hub PR, tag `@joaogante, @nielsr, @sgugger`<|||||>> @mathieujouffroy awesome, seems like we are ready to move on to the next stage. I'm adding @sgugger as the last reviewer. > > Meanwhile, you can open the PR to the TF model weights on the hub as follows: > > 1. Make sure you have the latest version of the hub installed (`pip install huggingface_hub -U`) and that you are logged in to HF with a write token (`huggingface-cli login`) > 2. Run `transformers-cli pt-to-tf --model-name foo/bar` from this branch :D > 3. In the Hub PR, tag `@joaogante, @nielsr, @sgugger` I am getting an error when using `transformers-cli pt-to-tf --model-name microsoft/cvt-13` : ``` File "/Users/MathieuJouffroy/transformers/src/transformers/commands/pt_to_tf.py", line 307, in run + "\n".join([f"{k}: {v:.3e}" for k, v in hidden_differences.items() if v > self._max_error]) ValueError: The cross-loaded TensorFlow model has different outputs, something went wrong! List of maximum output differences above the threshold (5e-05): logits: 1.190e-04 List of maximum hidden layer differences above the threshold (5e-05): hidden_states[2]: 1.227e-02 ``` It seems that both the `max_crossload_output_diff `and the `max_crossload_hidden_diff` are bigger than the `self._max_error` **(5e-5)**. Respectively I have `max_crossload_output_diff` = 0.00011897087 **(1.190e-04)** and `max_crossload_hidden_diff` = 0.012268066 **(1.227e-02)**. I am trying to figure out how to correct this error (WIP).<|||||>@mathieujouffroy ~1e-2 is quite large -- does this happen exclusively on `microsoft/cvt-13`, or across all CvT models?<|||||>> @mathieujouffroy ~1e-2 is quite large -- does this happen exclusively on `microsoft/cvt-13`, or across all CvT models? Yess, unfortunately it happens across all CvT models. When inspecting the difference between the hidden states of the pytorch model and the hidden states of the tensorflow model, I can see that the difference increases throughout the model (with the number of layers). The CvT model is composed of 3 stages of encoder block, with their respective number of layers being 1, 2 and 10. In the last stage, the difference between the torch model's hidden state and the tensorflow model's hidden state increases from ~e-5 at layer[0] to ~1e-2 at layer[9]. 
For `microsoft/cvt-13` : ``` CvtLayer output vs TFCvtLayer output diff pt-tf stage[0]/layer[0]: 1.77919864654541e-05 diff pt-tf stage[1]/layer[0]: 2.4199485778808594e-05 diff pt-tf stage[1]/layer[1]: 3.9249658584594727e-05 diff pt-tf stage[2]/layer[0]: 3.0934810638427734e-05 diff pt-tf stage[2]/layer[1]: 0.000102996826171875 diff pt-tf stage[2]/layer[2]: 0.0004825592041015625 diff pt-tf stage[2]/layer[3]: 0.0009307861328125 diff pt-tf stage[2]/layer[4]: 0.001621246337890625 diff pt-tf stage[2]/layer[5]: 0.0032196044921875 diff pt-tf stage[2]/layer[6]: 0.0064239501953125 diff pt-tf stage[2]/layer[7]: 0.0091705322265625 diff pt-tf stage[2]/layer[8]: 0.012481689453125 diff pt-tf stage[2]/layer[9]: 0.01226806640625 Hidden Differences: hidden_states[0]:1.77919864654541e-05 hidden_states[1]:3.9249658584594727e-05 hidden_states[2]:0.01226806640625 Output Differences: logits:0.00011897087097167969 ``` I can't seem to correct this issue. I was wondering if this was due to floating points operations. Do you have any advice ? 🙏 <|||||>@mattchurgin in these cases, a deep dive has to be done -- place a pair of `breakpoint()` in the layer where the problems start, one in each framework, and see which operation causes the divergence. Then, confirm that the TF operation/layer is parametrized correctly and, if it is, one has to dig even deeper :D <|||||>Hello @gante, sorry for the late response. I've done a deep dive into both frameworks. It seems that the Batch Normalization is responsible for the divergence. The 2 residual connections further increase the divergence throughout the model. However, I have parameterized `tf.keras.layers.BatchNormalization` accordingly to the default parameters of pytorch (`epsilon=1e-5` and `momentum=0.1`). I have also set both models in inference mode when testing. Is this divergence due to the **momentum** definition of Batch Normalization being different in tensorflow than in pytorch ? When removing the Batch Normalization layers from both frameworks, the difference in the output tensors and the hidden states is greatly reduced. I get a `max_crossload_output_diff` of ~e-6 and a `max_crossload_hidden_diff` of ~e-4 for all Cvt models. However, the `max_crossload_hidden_diff` is still higher than 5e-5 (I have ~e-4). The 2 residual connections are responsible for this difference. I'm a bit confused. Therefore I've inspected the ViT model (`google/vit-base-patch16-224`) which also has 2 residual connections. There is also a divergence in the hidden states between the tensorflow implementation and the pytorch implemention. This difference also increases throughout the layers (with the residual connections), until it reaches a `max_crossload_hidden_diff` of ~2e-2 at layer 12. Is this behaviour normal/acceptable ?<|||||>@mathieujouffroy That's a great in-depth exploration! Previously we didn't have these checks in place, so it is possible that issues like the one you're seeing slipped through the cracks. It's not positive at all to have such large mismatches (it implies that TF users will have a poorer experience). I've had in my plans to go back and double-check the most popular models with the recently introduced checks, and you've just raised the priority of the task with your message :) I think @amyeroberts has seen similar TF/PT mismatches due to the Batch Normalization layer. 
@amyeroberts do you mind pitching in?<|||||>@mathieujouffroy Thanks for all the work digging into this 🕵️ As `momentum` is set for both the pytorch and TF models, I believe their behaviour (outputs and moving stats updates) _should_ be the same during both inference and training, given the same weights and params. @gante @mathieujouffroy Yes, I had similar issues with the TF ResNet port ([a weights PR for reference](https://huggingface.co/microsoft/resnet-152/discussions/1)). Like this model, the batch norm layer introduced differences which then got amplified through the forward pass. @ydshieh did some excellent detective work, and found that matching all of the parameters and inputs to produce an equivalent TF and PyTorch layer would still produce outputs with a difference on the order of `1e-7` (enough to start causing problems 😭) Ultimately, we decided to add the weights as the difference between the logits was small ~1e-5. I think the ~1e-4 absolute differences in this case are acceptable for adding the weights. @sgugger Is this OK? <|||||>Yes, as long as it stays in the range of 1e-4, we can accept the difference between frameworks.<|||||>Thank you for pitching in @amyeroberts :D @mathieujouffroy feel free to use `--max-error 1e-4` (or slightly higher) in the `pt-to-tf` CLI to ignore those errors and push the weights!<|||||>@gante @amyeroberts you're welcome and thanks a lot for your feedbacks 😊 ! @amyeroberts It seems that for the batch normalization, the update rule for the running statistics is slightly different in Tensorflow compared to Pytorch: PT -> `running_mean = (1 - momentum) * running_mean + momentum * new_value` TF -> `running_mean = momentum * running_mean + (1 - momentum) * new_value` Therefore, I think I made a mistake in setting the momentum to 0.1 in TF Batch Norm. Considering the update rules, shouldn't the momentum be set to 0.9 (default) in TF when it is set to 0.1 (default) in PT ? However, even though I change the momentum, I still have the same difference in my outputs 😕. @gante Ok thanks. Although, just as a reminder, when I keep the Batch Normalization layers, I have a `max_crossload_output_diff` of ~1e4 and a `max_crossload_hidden_diff` of ~2e-2 for all CvT models except the `cvt-21-384-22k`. The `cvt-21-384-22k` has a `max_crossload_output_diff` of ~4e4 and a `max_crossload_hidden_diff` of ~1e-1. Therefore, should I use `--max-error 2e-2` for all CvT models and `--max-error 1e-1` for `cvt-21-384-22k` ? I'll also be more than happy to help if you need any assistance regarding the mismatches between PT and TF (I'm a bit intrigued) 😊<|||||>Hello @gante @amyeroberts, as pointed out in this [PR](https://github.com/huggingface/transformers/pull/19341), the dense layer weights for PT should be initialized with `nn.init.trunc_normal_` instead of `normal_` as in the original implementation of the Cvt model (which uses `trunc_normal_` from `timm` library). In TF `get_initializer` already returns `tf.keras.initializers.TruncatedNormal`. Also, following the original implementation, in both frameworks the `cls_token` should be initialized with `trunc_normal` as with the `config.initializer_range` (here 0.02). Should I add the modifications regarding the `momentum` (setting it to 0.9 in TF) and the use of `trunc_normal` ? <|||||>@mathieujouffroy regarding initialization yeah, update it if possible :) In any case, it has come to our attention that TF initialization is very wrong in some places, and we will be adding tests and updates in the coming weeks! 
Regarding momentum, I will defer the decision to @amyeroberts, who has been working more closely with that layer.<|||||>@mathieujouffroy Thanks for the update. Regarding the initialisation, could you update the PyTorch model in a separate PR and do similar thing as [suggested here](https://github.com/huggingface/transformers/pull/19341#pullrequestreview-1131874527) - naming the PR with a 🚨🚨🚨 prefix so we can easily spot and flag in the release notes. For momentum in the batch norm layers, yes please use the (1 - pytorch momentum) value :) <|||||>@gante @amyeroberts Okay thanks, I will update the changes concerning the TF model (cls_token initialization & momentum) in this PR and will create a new PR for the Pytorch model 😊. What should I do regarding the use of `pt-to-tf` (and `--max-error`) to add the weights ? @gante should I wait for the future tests and updates you were mentioning regarding the TF initialization ? Or should I add the weights with the `--max-error` I mentioned in the comment above ? Thanks !<|||||>@mathieujouffroy the initialization shouldn't change the mismatch between the two frameworks for pre-trained models -- I'd recommend going forward with `--max-error` 💪 <|||||>Hi @amyeroberts, thanks for your mention !! I've added the [PR](https://github.com/huggingface/transformers/pull/19486) regarding the pytorch model. @gante following your recommendation I've added the weights on the hub 😊 As @amyeroberts had pointed out, I'll need to remove `from_pt` in the testing file once the weights are added. <|||||>@mathieujouffroy weights merged 🙌 <|||||>> @mathieujouffroy weights merged 🙌 Cool thanks @gante 😊 ! I'll update the testing file & run the slow tests locally . <|||||>@mathieujouffroy off-topic: are you working with `transformers` as part of École 42? I've been at the school once (like 5+ years ago) and I had a friend who participated -- I really liked the concept!<|||||>> @mathieujouffroy off-topic: are you working with `transformers` as part of École 42? I've been at the school once (like 5+ years ago) and I had a friend who participated -- I really liked the concept! @gante Yess I was working with `transformers` 🤗 on my last project (computer vision) at École 42. The project was in partnership with Hectar, an agricultural campus. I was pretty excited to try out the vision transformers 😊. I've also used `transformers` at 42 for my NLP projects and in my internship (in NLP). I think 42 is a very good training (I've just finished) 🚀 : project-based & peer to peer pedagogy ! <|||||>All seems ready, merging as soon as CI turns green. @mathieujouffroy on behalf of TF users, thank you for making the ecosystem richer 🧡 <|||||>@gante @amyeroberts thanks a lot for your help and feedbacks !! 💛 It was really interesting and cool to do this PR (1st in an open source project) and to get it merge 😊
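As a reference for the momentum discussion above, the two running-mean update rules can be checked with a tiny sketch (illustrative numbers only, not the CvT code): with PyTorch's default `momentum=0.1`, the equivalent Keras setting is `momentum=0.9`.
```python
# Minimal sketch (assumed values, not the CvT implementation): the running-mean
# update rules differ, so PyTorch momentum m corresponds to Keras momentum 1 - m.
running_mean, new_batch_mean, pt_momentum = 0.0, 1.0, 0.1

# PyTorch: running_mean = (1 - momentum) * running_mean + momentum * new_value
pt_update = (1 - pt_momentum) * running_mean + pt_momentum * new_batch_mean

# Keras/TF: running_mean = momentum * running_mean + (1 - momentum) * new_value
tf_momentum = 1 - pt_momentum  # i.e. 0.9
tf_update = tf_momentum * running_mean + (1 - tf_momentum) * new_batch_mean

assert abs(pt_update - tf_update) < 1e-12  # both give 0.1
```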
transformers
18,596
closed
FSDP bug fix for `load_state_dict`
# What does this PR do? Workaround for https://github.com/pytorch/pytorch/issues/82963
08-12-2022 05:34:28
08-12-2022 05:34:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,595
closed
oob performance improvement for cpu DDP
Signed-off-by: Wang, Yi A <[email protected]> # What does this PR do? Out-of-the-box performance improvement for CPU DDP: currently, if neither OMP_NUM_THREADS nor MKL_NUM_THREADS is set, `num_cpu_threads_per_process` defaults to 1, which gives very slow performance (see the sketch below). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - trainer: @sgugger
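For context, a sensible default can be derived from the host CPU count instead of falling back to 1; a sketch of the idea (hypothetical helper, not the actual patch):
```python
# Illustrative sketch only (not the actual change in this PR): pick a sensible
# per-process OpenMP thread count for CPU DDP instead of defaulting to 1.
import os


def default_cpu_threads_per_process(num_processes: int) -> int:
    # If the user already set OMP_NUM_THREADS, respect it.
    if "OMP_NUM_THREADS" in os.environ:
        return int(os.environ["OMP_NUM_THREADS"])
    # Otherwise split the available cores evenly across the DDP processes.
    cpu_count = os.cpu_count() or 1
    return max(1, cpu_count // max(1, num_processes))


# Example: 2 DDP workers on this machine.
threads = default_cpu_threads_per_process(num_processes=2)
os.environ.setdefault("OMP_NUM_THREADS", str(threads))
print(f"OMP_NUM_THREADS={os.environ['OMP_NUM_THREADS']}")
```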
08-12-2022 03:45:38
08-12-2022 03:45:38
@yao-matrix @sgugger @liangan1 please help review it<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @sywangyi, thanks for your PR! Sylvain is currently off for a few weeks, we'll merge this PR once he's back. Thanks for your contribution!
transformers
18,594
closed
typos
a few small typo fixes. @sgugger
08-12-2022 03:00:05
08-12-2022 03:00:05
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,593
closed
[bloom] convert script tweaks
@younesbelkada, could you please have a look: 1. when creating a small model for testing, the assert for unexpected keys breaks the conversion - I think we should either not assert or perhaps warn instead? (A possible shape is sketched below.) 2. also, don't try to set the dtype if it's `None`. Actually, for the latter, if `torch_dtype` is not defined, shouldn't we try to derive the dtype from the Meg-DS checkpoint dtype? Currently it'll just create fp32 weights, ignoring the actual dtype of the weights. Thank you
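A possible shape for the first point (illustrative sketch only; the variable names mirror the conversion script, the helper itself is made up):
```python
# Sketch of a friendlier check than a bare assert: report the offending keys
# and only raise in strict mode (illustrative, not the actual patch).
import logging

logger = logging.getLogger(__name__)


def check_loaded_keys(unexpected_keys, missing_keys, strict=False):
    if unexpected_keys:
        msg = f"Unexpected keys when loading the checkpoint: {sorted(unexpected_keys)}"
        if strict:
            raise ValueError(msg)
        logger.warning(msg)
    if missing_keys:
        msg = f"Missing keys when loading the checkpoint: {sorted(missing_keys)}"
        if strict:
            raise ValueError(msg)
        logger.warning(msg)
```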
08-12-2022 02:53:19
08-12-2022 02:53:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>sorry, forgot to follow up here. It was just a small model that I created on the fly. that assert is not user-friendly as it just fails w/o telling which keys are unexpected. if it were to tell which keys are unexpected I would be able to answer your question<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@younesbelkada, this is still an issue - we are training a few variations of bloom for m4 https://github.com/huggingface/m4/blob/text_pretraining/experiments/pretraining/text_pretraining/narrow_gpt.slurm and the conversion fails in 2 asserts: ``` File "src/transformers/models/bloom/convert_bloom_original_checkpoint_to_pytorch.py", line 194, in convert_bloom_checkpoint_to_pytorch assert not other_keys.unexpected_keys AssertionError ``` and if the above removed, then next in: ``` File "src/transformers/models/bloom/convert_bloom_original_checkpoint_to_pytorch.py", line 200, in convert_bloom_checkpoint_to_pytorch assert not missing_keys AssertionError ``` it's fine then at converting. so besides my PR a 2nd assert is an issue as well. Thank you! the failing command line is: ``` python src/transformers/models/bloom/convert_bloom_original_checkpoint_to_pytorch.py \ --bloom_checkpoint_path $ajs_ALL_CCFRSCRATCH/m4_text_pretraining/narrow_gpt/checkpoints/main/global_step66000 \ --pytorch_dump_folder_path $ajs_ALL_CCFRSCRATCH/m4_text_pretraining/narrow_gpt/checkpoints/hf/narrow-gpt-66000 \ --pretraining_tp 1 \ --bloom_config_file /gpfsdswork/projects/rech/cnw/commun/experiments/stas/m4/experiments/pretraining/text_pretraining/narrow_config.json ``` cc for awareness: @TevenLeScao <|||||>grr, my apologies, @younesbelkada - it proved to be a misinformation - the pretrained model was actually meg-ds gpt2 model and not bloom :( sorry about that. but yes, your suggestion of printing out the unexpected keys rather than how it was before sounds great to me Let's do that for both asserts? <|||||>for posterity since we don't have an official script converting from Megatron-Deepspeed's gpt2 code I ended up using this [Megatron-Deepspeed conversion script](https://github.com/bigscience-workshop/bigscience/tree/aa872e754106f6678e8a9dac8c6962404ba39a6d/train/tr1-13B-base#checkpoint-conversion-and-upload) we wrote when developing [pre-bloom tr1-13b-en model](https://github.com/bigscience-workshop/bigscience/tree/aa872e754106f6678e8a9dac8c6962404ba39a6d/train/tr1-13B-base) and then using HF's GPT2 modeling code to generate with it. It's not perfect as GPT2 != gpt2 in Meg-DS - 3 differences are https://github.com/bigscience-workshop/Megatron-DeepSpeed/issues/138 but it more or less works. conversion: ``` cd Megatron-DeepSpeed PYTHONPATH=. 
$six_ALL_CCFRWORK/code/Megatron-DeepSpeed/tools/convert_checkpoint/deepspeed_to_transformers.py \ --input_folder checkpoints/main/global_step112000 \ --output_folder checkpoints/hf/shallow-gpt-112000 ``` validate the conversion produced a usable model: ``` python -c '\ import sys; \ mname = sys.argv[1]; \ from transformers import AutoTokenizer, AutoModelForCausalLM; \ tokenizer = AutoTokenizer.from_pretrained(mname); \ tokenizer.add_special_tokens({"pad_token": tokenizer.eos_token}); \ model = AutoModelForCausalLM.from_pretrained(mname); \ inputs = ["Hello, my dog is cute"]; \ input_tokens = tokenizer.batch_encode_plus(inputs, return_tensors="pt", padding=True) outputs = model.generate(**input_tokens, do_sample=False); \ outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True); \ print(outputs); \ ' $ajs_ALL_CCFRSCRATCH/m4_text_pretraining/shallow_gpt/checkpoints/hf/shallow-gpt-112000 ['Hello, my dog is cute." "I\'m sorry, I\'m not allowed to say that."'] ``` so we know it works.<|||||>Ahh I see thanks a lot for the clarification! Yes I think we should update the asserts ;) ! Also I think it might be useful to have a script to convert meg-ds gpt2 models to HF format, where do you think we should put this file? Or maybe just adding an arg `convert-gpt2` on the current file would work too, your call! <|||||>I updated the PR to improve the 2nd assert, so I think it's good to merge now. > Also I think it might be useful to have a script to convert meg-ds gpt2 models to HF format, where do you think we should put this file? I'm not sure since it depends on `Megatron-Deepsped`'s internal files: https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/tools/convert_checkpoint and it's specific to this fork of Meg-DS (or rather nobody maintains the parent fork that is under MSFT) Perhaps we add CONVERSION.md under https://github.com/huggingface/transformers/tree/main/src/transformers/models/gpt2 and show how to convert meg-ds models?<|||||>There is also https://github.com/huggingface/transformers/tree/main/src/transformers/models/megatron_gpt2 but that one can only handle Megatron-LM generated checkpoints (not Megatron-Deepspeed ones). <|||||>> There is also https://github.com/huggingface/transformers/tree/main/src/transformers/models/megatron_gpt2 but that one can only handle Megatron-LM generated checkpoints (not Megatron-Deepspeed ones). That one folder would fit best I think, happy to open a PR there, let me know! <|||||>I pushed the new doc here. Do you think we want a separate PR for that? can move it there if preferable. in a way it summarizes all the discussions of this PR. whatever works.<|||||>this is perfectly fine for me thanks a lot @stas00 ! Gently pinging @sgugger for a final review / approval Thank you!
transformers
18,592
closed
[fsmt] deal with -100 indices in decoder ids
Fixes: https://github.com/huggingface/transformers/issues/17945 Decoder input ids get the default index -100, which breaks the model; like t5 and many other models, this adds a hardcoded fix to replace -100 with the correct pad index. For some reason this use case hasn't been exercised with this model until recently - so it seems this issue has been there since the beginning. Any suggestions on how to add a simple test here? Or perhaps we have something similar already? The user's script is quite massive. I think it's the Trainer's collator that leads to that padding, since it uses -100. @sgugger
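For illustration, the hardcoded fix referred to above typically looks like the sketch below (it mirrors what T5/BART-style models do, not the exact FSMT diff):
```python
# Sketch of the usual -100 -> pad_token_id replacement for decoder input ids
# (illustrative; not the exact FSMT patch).
import torch


def shift_tokens_right(labels: torch.Tensor, pad_token_id: int, decoder_start_token_id: int) -> torch.Tensor:
    shifted = labels.new_zeros(labels.shape)
    shifted[:, 1:] = labels[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    # The data collator pads labels with -100 so they are ignored by the loss;
    # those positions must become real pad tokens before being fed to the decoder.
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted


labels = torch.tensor([[5, 6, 7, -100, -100]])
print(shift_tokens_right(labels, pad_token_id=1, decoder_start_token_id=2))
# tensor([[2, 5, 6, 7, 1]])
```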
08-12-2022 02:43:38
08-12-2022 02:43:38
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,591
closed
[doc] fix anchors
the manual anchors end up being duplicated with automatically added anchors and no longer work. Examples: https://huggingface.co/docs/transformers/v4.21.1/en/glossary#input-ids https://huggingface.co/docs/transformers/v4.21.1/en/glossary#position-ids I confirmed that this fix works via the auto-generated docs link. @sgugger
08-12-2022 02:16:35
08-12-2022 02:16:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,590
closed
Update docs landing page
This PR updates the docs landing page to better describe what `transformers` is, what it offers, and briefly introduce users to its design. I think this gives a clearer picture of `transformers` and is more impactful than listing all the different tasks supported. Let me know what you think! There's also a minor issue with the image for custom support. Nils is no longer with us, so we may want to update this image with another member of the team. No big deal though :)
08-12-2022 01:42:32
08-12-2022 01:42:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,589
closed
Cannot get WER during Wav2Vec2 fine-tuning
### System Info - `transformers` version: 4.21.1 - Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @patrickvonplaten, @anton-l ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I followed the [blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) and revisited one of my old project, but I couldn't get WER during the fine-tuning. Surprisingly it worked perfectly early this year. I did get the following evaluation metrics during the first round: ``` {'eval_loss': -0.26198065280914307, 'eval_runtime': 41.4222, 'eval_samples_per_second': 25.904, 'eval_steps_per_second': 6.494, 'epoch': 0.27} ``` As you can see, there was no `eval_wer` in this entry. Tried the following, still not seeing `eval_wer` ```python def compute_metrics(pred): """ batchfy and compute the WER metrics :param pred: _description_ :type pred: _type_ :return: _description_ :rtype: _type_ """ wer_metric = load_metric("wer") pred_logits = pred.predictions # change to pred.logits did not help pred_ids = np.argmax(pred_logits, axis=-1) pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id pred_str = processor.batch_decode(pred_ids) # we do not want to group tokens when computing the metrics label_str = processor.batch_decode(pred.label_ids, group_tokens=False) wer = wer_metric.compute(predictions=pred_str, references=label_str) return {"wer": wer} for batch in trainer.get_eval_dataloader(): print(batch.keys()) # return dict_keys(['input_values', 'labels']) batch = {k: v.to("cuda") for k, v in batch.items()} print(trainer.evaluate()) # return {'eval_loss': 0.29788103699684143, 'eval_runtime': 44.4312, 'eval_samples_per_second': 24.15, 'eval_steps_per_second': 6.054} break ``` Any suggestions? Thanks! ### Expected behavior Should return `eval_wer` in the evaluation step.
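One thing worth double-checking (an assumption here, since the `Trainer` construction isn't shown) is that `compute_metrics` is actually passed to the `Trainer`; if it isn't, only `eval_loss` and the timing metrics are reported. A minimal sanity check, reusing the `trainer` object from the snippet above:
```python
# Hedged sketch: eval_wer only shows up if compute_metrics is wired into the Trainer.
assert trainer.compute_metrics is not None, (
    "compute_metrics was not passed to Trainer; only eval_loss/timing will be reported"
)
```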
08-11-2022 23:46:43
08-11-2022 23:46:43
transformers
18,588
closed
Adds OWLViT to models exportable with ONNX
Output for tests on my local machine: ```bash (transformers) ➜ transformers git:(owlvit_onnx) ✗ RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -v -k "owlvit" --full-trace ================================================================== test session starts =================================================================== platform darwin -- Python 3.8.12, pytest-7.1.2, pluggy-1.0.0 -- /Users/dhruv/Documents/code/transformers/.venv/bin/python cachedir: .pytest_cache rootdir: /Users/dhruv/Documents/code/transformers, configfile: setup.cfg collected 410 items / 408 deselected / 2 selected tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default PASSED [ 50%] tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default PASSED [100%] ==================================================================== warnings summary ==================================================================== tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default /Users/dhruv/Documents/code/transformers/src/transformers/image_utils.py:223: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. def resize(self, image, size, resample=PIL.Image.BILINEAR, default_to_square=True, max_size=None): tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default /Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/feature_extraction_owlvit.py:80: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. resample=Image.BICUBIC, tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default /Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/modeling_owlvit.py:272: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default /Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/modeling_owlvit.py:312: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default /Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/modeling_owlvit.py:709: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. 
mask.fill_(torch.tensor(float("-inf"))) tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default /Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/modeling_owlvit.py:280: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len): tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default /Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/modeling_owlvit.py:289: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default /Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:4592: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results. warnings.warn( -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ==================================================== 2 passed, 408 deselected, 14 warnings in 44.45s ===================================================== ``` Note: Haven't tested this on GPU yet, don't have a GPU machine with me currently. Also, this is for the `default` task of OWLViT. The `object-detection` task isn't supported by AutoModel yet, because of which if I add that to onnx it's failing currently. Should I make the change for AutoModel as well? cc: @ChainYo
08-11-2022 19:02:10
08-11-2022 19:02:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>@unography That's strange that it does not work for object detection. It should actually work, DETR and YOLOS are exportable to ONNX for instance (see [here](https://github.com/huggingface/transformers/blob/1ccd2515ed6d7da4ec46fe94aedbd8a86a2cde8e/src/transformers/onnx/features.py#L262)). What is the error you get when trying to export the model for object detection?<|||||>@regisss I think it just needs to be defined in the config for AutoModel, for Object detection [here](https://github.com/huggingface/transformers/blob/1ccd2515ed6d7da4ec46fe94aedbd8a86a2cde8e/src/transformers/models/auto/modeling_auto.py#L443) This is the stacktrace - ```bash _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ cls = <class 'transformers.models.auto.modeling_auto.AutoModelForObjectDetection'> config = OwlViTConfig { "_commit_hash": "7cc55348dae46396474cd94bf00a542167a10f8d", "_name_or_path": "google/owlvit-base-pa...nsformers_version": "4.22.0.dev0", "typical_p": 1.0, "use_bfloat16": false }, "vision_config_dict": null } kwargs = {}, trust_remote_code = False @classmethod def from_config(cls, config, **kwargs): trust_remote_code = kwargs.pop("trust_remote_code", False) if hasattr(config, "auto_map") and cls.__name__ in config.auto_map: if not trust_remote_code: raise ValueError( "Loading this model requires you to execute the modeling file in that repo " "on your local machine. Make sure you have read the code there to avoid malicious use, then set " "the option `trust_remote_code=True` to remove this error." ) if kwargs.get("revision", None) is None: logger.warning( "Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure " "no malicious code has been contributed in a newer revision." ) class_ref = config.auto_map[cls.__name__] module_file, class_name = class_ref.split(".") model_class = get_class_from_dynamic_module(config.name_or_path, module_file + ".py", class_name, **kwargs) return model_class._from_config(config, **kwargs) elif type(config) in cls._model_mapping.keys(): model_class = _get_model_class(config, cls._model_mapping) return model_class._from_config(config, **kwargs) > raise ValueError( f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}." ) E ValueError: Unrecognized configuration class <class 'transformers.models.owlvit.configuration_owlvit.OwlViTConfig'> for this kind of AutoModel: AutoModelForObjectDetection. E Model type should be one of DetrConfig, YolosConfig. src/transformers/models/auto/auto_factory.py:412: ValueError ```<|||||>Hi @unography and @regisss! OWL-ViT is not a part of the object detection pipeline because it requires both image and search queries as input. We are planning to add a zero-shot-object-detection pipeline for OWL-ViT (see this [issue](https://github.com/huggingface/transformers/issues/18445)). cc @sgugger @NielsRogge <|||||>Thanks for the information @alaradirik :) @unography Let's keep only the default pipeline as you did then. I had to change one `.T` for `.t()` in `modeling_owlvit.py` to make the test pass, as in the PR of CLIP :laughing: Could you please change this?<|||||>Pinging @sgugger for final approval<|||||>@regisss ya sorry i missed the `.T` issue, i was testing on the nightly pytorch. 
should be fixed now<|||||>Hey @lewtun, would you like to have a look at this and merge if it looks good to you?<|||||>@lewtun Can you take a quick look at this PR and merge it when you approve? :slightly_smiling_face:
transformers
18,587
closed
Fix Data2VecVision ONNX test
# What does this PR do? Fix an issue from #18427. In short, `Data2VecVision` is for semantic segmentation. Current CI test failure ```bash tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_048_data2vec_vision_image_segmentation (line 412) ValueError: Unrecognized configuration class <class 'transformers.models.data2vec.configuration_data2vec_vision.Data2VecVisionConfig'> for this kind of AutoModel: AutoModelForImageSegmentation. tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_048_data2vec_vision_image_segmentation (line 412) ValueError: Unrecognized configuration class <class 'transformers.models.data2vec.configuration_data2vec_vision.Data2VecVisionConfig'> for this kind of AutoModel: AutoModelForImageSegmentation. ```
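For reference, the mismatch can be reproduced in isolation with a small sketch (a default config is used here just to show the auto-class mapping; no checkpoint is involved):
```python
# Sketch: Data2VecVisionConfig is not registered for AutoModelForImageSegmentation,
# but it is for AutoModelForSemanticSegmentation (random weights, default config).
from transformers import (
    AutoModelForImageSegmentation,
    AutoModelForSemanticSegmentation,
    Data2VecVisionConfig,
)

config = Data2VecVisionConfig()

try:
    AutoModelForImageSegmentation.from_config(config)
except ValueError as error:
    print("image-segmentation mapping fails:", error)

model = AutoModelForSemanticSegmentation.from_config(config)
print(type(model).__name__)  # Data2VecVisionForSemanticSegmentation
```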
08-11-2022 16:03:14
08-11-2022 16:03:14
_The documentation is not available anymore as the PR was closed or merged._<|||||>With `nn.AdaptiveAvgPool2d` with `output_size` > 1, we get error ```bash Current thread 0x00007f5bb924e740 (most recent call first): File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_io/saferepr.py", line 71 in repr_instance File "/usr/lib/python3.9/reprlib.py", line 62 in repr1 File "/usr/lib/python3.9/reprlib.py", line 71 in <listcomp> File "/usr/lib/python3.9/reprlib.py", line 71 in _repr_iterable File "/usr/lib/python3.9/reprlib.py", line 78 in repr_tuple File "/usr/lib/python3.9/reprlib.py", line 60 in repr1 File "/usr/lib/python3.9/reprlib.py", line 52 in repr File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_io/saferepr.py", line 60 in repr File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_io/saferepr.py", line 107 in saferepr File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_code/code.py", line 727 in repr_args File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_code/code.py", line 817 in repr_traceback_entry File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_code/code.py", line 867 in repr_traceback File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_code/code.py", line 926 in repr_excinfo File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_code/code.py", line 666 in getrepr File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/nodes.py", line 475 in _repr_failure_py File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/python.py", line 1795 in repr_failure File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/reports.py", line 345 in from_item_and_call File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/runner.py", line 365 in pytest_runtest_makereport File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_callers.py", line 39 in _multicall File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_manager.py", line 80 in _hookexec File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_hooks.py", line 265 in __call__ File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/runner.py", line 221 in call_and_report File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/runner.py", line 130 in runtestprotocol File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/runner.py", line 111 in pytest_runtest_protocol File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_callers.py", line 39 in _multicall File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_manager.py", line 80 in _hookexec File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_hooks.py", line 265 in __call__ File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/main.py", line 347 in pytest_runtestloop File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_callers.py", line 39 in _multicall File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_manager.py", line 80 in _hookexec File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_hooks.py", line 265 in __call__ File 
"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/main.py", line 322 in _main File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/main.py", line 268 in wrap_session File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/main.py", line 315 in pytest_cmdline_main File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_callers.py", line 39 in _multicall File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_manager.py", line 80 in _hookexec File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_hooks.py", line 265 in __call__ File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/config/__init__.py", line 164 in main File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/config/__init__.py", line 187 in console_main File "/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pytest/__main__.py", line 5 in <module> File "/usr/lib/python3.9/runpy.py", line 87 in _run_code File "/usr/lib/python3.9/runpy.py", line 197 in _run_module_as_main Segmentation fault ```<|||||>Thanks @lewtun for the review. Totally fine for me to remove `semantic-segmentation` as a supported feature. I will clean this PR a bit and pin you for final review then.<|||||>The failed tests are irrelevant. @lewtun it's ready for you to take a final look 🚀 ! ```bash FAILED tests/models/bigbird_pegasus/test_modeling_bigbird_pegasus.py::BigBirdPegasusStandaloneDecoderModelTest::test_sample_generate FAILED tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py::XLMRobertaXLModelTest::test_sample_generate_dict_output ```<|||||>Now we have a different generation test failing: ``` FAILED tests/models/blenderbot/test_modeling_blenderbot.py::BlenderbotStandaloneDecoderModelTest::test_sample_generate FAILED tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py::XLMRobertaXLModelTest::test_sample_generate ``` Since this is unrelated to the current PR, is it OK to merge?<|||||>Hi @lewtun . Thank you for running. It is good to merge. But let me rebase and re-run it, as @gante fixed the issue in #18696 merged into `main`. I will take care of the merge when everything is fine. Thanks again for the review.
transformers
18,586
closed
fix owlvit tests, update docstring examples
# What does this PR do? - Fixes the `OwlViTModelIntegrationTest` failures due to recently merged [PR](https://github.com/huggingface/transformers/pull/18573) that fixed a resizing bug in `OwlViTFeatureExtractor` - Updates the outputs shown in OwlViT docstring examples ## Before submitting - [ X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
08-11-2022 15:58:59
08-11-2022 15:58:59
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,585
closed
Fix failure on DeBERTa(base/v2/sew_d) fp16 training with ONNX Runtime
## Context It was reported in optimum https://github.com/huggingface/optimum/issues/305 that mixed-precision training on DeBERTa with optimum.onnxruntime.ORTTrainer is broken. After investigation, the break comes from mismatched input dtypes for some Matmul nodes. In #18272, some sqrt results are cast to fp32; they need to be re-cast to fp16 before the Matmul ops, and this PR corrects the dtype. Besides, this PR also fixes the tracing of DeBERTa, which hadn't been fixed in #18272. Fixes https://github.com/huggingface/optimum/issues/305 Fixes #18199 Who can review? @michaelbenayoun @LysandreJik @sgugger
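To illustrate the dtype issue with a simplified sketch (hypothetical helper, not the actual DeBERTa attention code): a scale computed in fp32 has to be cast back to the activations' dtype, otherwise the downstream MatMul in the exported graph sees mixed fp16/fp32 inputs.
```python
# Simplified sketch (not the DeBERTa implementation): the sqrt-based scale is
# computed in fp32 for numerical stability and then cast back to the query dtype
# so that the following MatMul sees matching input types in the ONNX graph.
import torch


def build_scale(query: torch.Tensor) -> torch.Tensor:
    scale = torch.sqrt(torch.tensor(query.size(-1), dtype=torch.float32))  # fp32 on purpose
    return scale.to(query.dtype)  # the re-cast this PR is about


q = torch.randn(2, 8, 64, dtype=torch.float16)
print(build_scale(q).dtype)  # torch.float16
```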
08-11-2022 15:35:31
08-11-2022 15:35:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,584
closed
[bnb] Fix non passing trainer tests
# What does this PR do? It fixes a small slow test that was not passing due to a very small typo made when designing the tests in https://github.com/huggingface/transformers/pull/15622 cc @ydshieh
08-11-2022 15:33:32
08-11-2022 15:33:32
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you all!
transformers
18,583
closed
Add checks for some workflow jobs
# What does this PR do? We have encountered errors ```bash stderr: nvidia-container-cli: initialization error: nvml error: driver/library version mismatch: unknown ``` (due to auto-update) several times. In such cases, the reports failed to be sent to the Slack channels, and we were not aware of this issue on push CI for a few days. This PR checks the setup job and also adds a check on the CI runners. If such jobs fail, the workflow can still send a report containing some information. **We should also disable the auto-update (for some packages).** **I will add the same check to scheduled CI and past CI (if the changes are approved).**
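For illustration, the kind of runner check this refers to could look like the following sketch (hypothetical script, not the actual workflow change):
```python
# Illustrative sketch: a tiny health check a CI job could run so that a
# driver/library mismatch is surfaced in the report instead of silently
# breaking every downstream job.
import subprocess


def gpu_runner_is_healthy() -> bool:
    try:
        subprocess.run(["nvidia-smi"], check=True, capture_output=True)
        return True
    except (OSError, subprocess.CalledProcessError) as error:
        print(f"Runner check failed: {error}")
        return False


if __name__ == "__main__":
    raise SystemExit(0 if gpu_runner_is_healthy() else 1)
```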
08-11-2022 15:09:18
08-11-2022 15:09:18
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,582
closed
Segformer, can't save checkpoint in saved_model format
Thanks for this repo ! ### Issue description: 1) initialise your segformer model 2) try to save it. ### Error: ``` File "/home/maxime/anaconda3/envs/tf/lib/python3.9/site-packages/transformers/models/segformer/modeling_tf_segformer.py", line 547, in serving * output = self.call(inputs) File "/home/maxime/anaconda3/envs/tf/lib/python3.9/site-packages/transformers/models/segformer/modeling_tf_segformer.py", line 753, in call * batch_size = shape_list(encoder_hidden_states[-1])[0] KeyError: -1 ``` ### Minimum reproducing code: ``` import os import tensorflow as tf from transformers import TFSegformerForSemanticSegmentation model = TFSegformerForSemanticSegmentation.from_pretrained( "nvidia/mit-b0", num_labels=2, id2label={1:"1", 2:"2"}, ignore_mismatched_sizes=True, # Will ensure the segmentation specific components are reinitialized. ) model.summary(line_length=250) tf.keras.models.save_model( model, os.path.join("/tmp", model.name), include_optimizer=False ) ``` ### Expected behavior The checkpoint should save correctly ### System Info Ubuntu 20.04, Python 3.9, TF 2.9.1, Nvidia Titan ### Who can help? @sayakpaul @NielsRogge
08-11-2022 14:37:15
08-11-2022 14:37:15
Have you tried using `model.save_pretrained()`? Cc: @amyeroberts <|||||>`model.save_pretrained()` indeed works, thanks :) Shall we close this issue, or is this model also supposed to be compatible with the keras saving method? (their callback is widely used, https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint)<|||||>@joihn Glad to hear you were able to save with `save_pretrained` and thanks for responding so quickly @sayakpaul! I'll defer the issue to our TF gurus @Rocketknight1 @gante regarding compatibility with keras saving. <|||||>I see two possible options for the time being: * Implement a custom callback to use `save_pretrained()`. Shouldn't differ too much from the `ModelCheckpoint` callback. * You can refer to [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb) that makes use of `PushToHubCallback()` and achieves a similar result as `ModelCheckpoint` barring some differences. <|||||>I did some digging, this issue seems to appear when the keras [save_model](https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model) function's parameter `save_traces` is `True`. It's worth noting that Hugging Face's `save_pretrained(filepath, saved_model=True)` also crashes with the same error. Also related to https://github.com/huggingface/transformers/issues/13742 <|||||>Hi @joihn - there are some general difficulties when saving Hugging Face models as SavedModel. This is a general issue with any model where the model and layers are implemented by subclassing in Keras - SavedModel doesn't really have a good way to completely save and load those models (although you can save one or more model traces through SavedModel, this isn't usually what people want unless they're trying to export to TFLite or something!) Instead, we recommend that users save weights only, and if they want to save the entire model, to use the `save_pretrained` method, which will save the weights along with a config that will make it loadable with the `from_pretrained` method. Concretely, this means doing the following things: 1) When using `ModelCheckpoint`, set `save_weights_only` to `True`. 2) Replace `model.save` with either `model.save_weights` or `model.save_pretrained`<|||||>Perfect, thanks for the info :)
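Following up on the custom-callback option mentioned above, a minimal sketch (the callback name and paths are made up for illustration):
```python
# Minimal sketch of the custom-callback option: checkpoint with save_pretrained()
# at the end of each epoch instead of going through Keras' SavedModel path.
import os
import tensorflow as tf


class SavePretrainedCallback(tf.keras.callbacks.Callback):
    def __init__(self, output_dir: str):
        super().__init__()
        self.output_dir = output_dir

    def on_epoch_end(self, epoch, logs=None):
        # self.model is the Hugging Face TF model being trained.
        self.model.save_pretrained(os.path.join(self.output_dir, f"epoch-{epoch}"))


# Usage (assuming `model` and the datasets are defined as in the snippet above):
# model.fit(train_ds, validation_data=val_ds,
#           callbacks=[SavePretrainedCallback("/tmp/segformer-ckpts")])
```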
transformers
18,581
closed
Fix docstrings with last version of hf-doc-builder styler
# What does this PR do? Everything is said in the title :-) Merging so everyone can safely use the new version of `hf-doc-builder`.
08-11-2022 14:29:35
08-11-2022 14:29:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,580
closed
[FX] _generate_dummy_input supports audio-classification models for labels
For FX: - Adds support for audio-classification models for label generation in `_generate_dummy_input` (see the sketch below) - Adds a flag, `FX_DEBUG_MODE`, to control what's being printed during tracing; this saves the user from seeing a lot of benign warnings, while still providing the possibility to have those while developing.
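For context, the dummy labels for audio-classification are just class indices; a small sketch (shapes and names are assumptions, not the exact `_generate_dummy_input` code):
```python
# Sketch of what dummy inputs for an audio-classification head look like
# (illustrative; the shapes and keys are assumptions).
import torch

batch_size = 2

dummy_inputs = {
    # raw waveform of 16000 samples per example
    "input_values": torch.zeros(batch_size, 16000, dtype=torch.float32),
    # one class index per example, as expected by audio classification heads
    "labels": torch.zeros(batch_size, dtype=torch.long),
}
print({k: (v.shape, v.dtype) for k, v in dummy_inputs.items()})
```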
08-11-2022 13:33:13
08-11-2022 13:33:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,579
closed
Supporting seq2seq models for `bitsandbytes` integration
# What does this PR do? The previous logic for determining which keys not to convert to int8 was not sufficient. It appears that T5 models could not be converted correctly. This PR addresses the issue by adding an extra check, inside the `get_key_to_not_convert` function, for whether the model has tied weights. cc @philschmid @sgugger
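The intuition behind the extra check can be sketched as follows (simplified illustration, not the exact `get_key_to_not_convert` implementation; the helper name below is made up):
```python
# Simplified sketch: if the output embeddings share their weight with the input
# embeddings (e.g. T5's lm_head when embeddings are tied), the corresponding
# module should be kept out of the int8 conversion.
def module_to_keep_in_full_precision(model) -> str:
    output_embeddings = model.get_output_embeddings()
    input_embeddings = model.get_input_embeddings()
    tied = (
        output_embeddings is not None
        and output_embeddings.weight.data_ptr() == input_embeddings.weight.data_ptr()
    )
    if not tied:
        return ""
    # Return the name of the tied output module so it can be skipped during conversion.
    for name, module in model.named_modules():
        if module is output_embeddings:
            return name
    return ""


# usage sketch:
# name = module_to_keep_in_full_precision(AutoModelForSeq2SeqLM.from_pretrained("t5-small"))
```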
08-11-2022 12:57:48
08-11-2022 12:57:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,578
closed
Return the permuted hidden states if return_dict=True
# What does this PR do? Fixes an issue where the shape of the returned hidden states is different if `return_dict` is True or False for ConvNext. The outputs of ConvNext are permuted in the final layer `TFConvNextMainLayer` to put them in `(batch_size, num_channels, height, width)` format, to match the pytorch model. However, if `return_dict=True` then the non-permuted hidden states are returned. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
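Concretely, the fix boils down to applying the same NHWC → NCHW transpose on both return paths; a schematic sketch (illustrative shapes, not the exact diff):
```python
# Schematic sketch: permute every hidden state from (batch, height, width, channels)
# to (batch, channels, height, width) and return the *permuted* tuple in both the
# return_dict and plain-tuple code paths.
import tensorflow as tf

hidden_states = (
    tf.zeros((1, 56, 56, 96)),   # TF-native NHWC activations (illustrative shapes)
    tf.zeros((1, 28, 28, 192)),
)

# Match the PyTorch (NCHW) layout expected by downstream code.
permuted = tuple(tf.transpose(h, perm=(0, 3, 1, 2)) for h in hidden_states)
print([tuple(p.shape) for p in permuted])  # [(1, 96, 56, 56), (1, 192, 28, 28)]
```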
08-11-2022 12:55:34
08-11-2022 12:55:34
_The documentation is not available anymore as the PR was closed or merged._
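For illustration, the permutation being discussed is a transpose from TF's channels-last layout to the channels-first layout used by the PyTorch model (the shapes below are made up):

```python
import tensorflow as tf

hidden_state = tf.random.uniform((2, 7, 7, 768))          # (batch, height, width, channels)
permuted = tf.transpose(hidden_state, perm=(0, 3, 1, 2))  # (batch, channels, height, width)
print(permuted.shape)  # (2, 768, 7, 7)
```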
transformers
18,577
closed
Add type hints for ViLT models
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adding type hints for ` ViLT` model (PyTorch). Issue #16059. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? _Task requested [here](https://github.com/huggingface/transformers/issues/16059#issuecomment-1210179575)._ - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? _Ran `make fixup` before last commit._ ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-11-2022 10:54:38
08-11-2022 10:54:38
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @Rocketknight1, this PR is ready to review. Can you help me here, please?
transformers
18,576
closed
update doc for perf_train_cpu_many, add intel mpi introduction
Signed-off-by: Wang, Yi A <[email protected]> # What does this PR do? update doc for perf_train_cpu_many, add intel mpi introduction Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Documentation: @sgugger
08-11-2022 09:55:00
08-11-2022 09:55:00
@sgugger please help review, thanks very much<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Failure is unrelated to this PR (would disappear with a rebase) so merging. Thanks again!
transformers
18,574
closed
add opt "flush_denormal" in training_args.
To solve the low performance issue caused by denormal numbers, users can enable this option. Signed-off-by: Wang, Yi A <[email protected]> # What does this PR do? [Denormal numbers](https://en.wikipedia.org/wiki/Denormal_number) are used to store extremely small numbers which are close to 0. Computations with denormal numbers are remarkably slower than with normalized numbers. To solve the low performance issue caused by denormal numbers, users can enable the "flush_denormal" option. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger, please help review it
08-11-2022 09:17:36
08-11-2022 09:17:36
@yao-matrix @sgugger please help review<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18574). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for your PR but I don't completely understand why this should live in the Trainer? Users can just add `torch.set_flush_denormal(True)` to their script.<|||||>Hi, I think not all data scientists are familiar with the option; if we add it to the training args, at least they could see it from --help and get the point. @sgugger > Thanks for your PR but I don't completely understand why this should live in the Trainer? Users can just add `torch.set_flush_denormal(True)` to their script. <|||||>Maybe it could be documented somewhere in this case, but I doubt adding yet another training argument is going to make it any more visible. There are currently 90 of them.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
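For reference, the one-liner being discussed — which users can already add to their own scripts without any Trainer integration — is sketched below:

```python
import torch

# Returns True if the hardware supports flushing denormals to zero (mainly x86 CPUs).
supported = torch.set_flush_denormal(True)
print("flush denormal enabled:", supported)
```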
transformers
18,573
closed
Fix resizing bug in OWL-ViT
# What does this PR do? Fixes a resizing issue in `OwlViTFeatureExtractor` that led to the image(s) getting resized along only one dimension and getting cropped along the other dimension later on in the preprocessing pipeline. The issue was due to defining the size as a single value instead of a tuple (768 instead of (768, 768)). The configuration files are updated and the `OwlViTProcessor` can correctly resize the input images now. This PR changes the default target `size` and sets the default `do_center_crop` argument to False. Fixes #18553 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? @NielsRogge could you take a look?
08-11-2022 09:10:00
08-11-2022 09:10:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for fixing! However, all "owlvit" checkpoints on the hub have `"do_center_crop": true` in their preprocessor_config.json. > > Should we update them after this PR is merged? Good point, yes, we should update them after the merge
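A minimal sketch of what the fix amounts to on the preprocessing side (the argument values mirror the description above and are illustrative, not necessarily the exact new defaults):

```python
from transformers import OwlViTFeatureExtractor

# Passing the target size as a tuple resizes both dimensions, and disabling the
# center crop avoids the image being cropped afterwards.
feature_extractor = OwlViTFeatureExtractor(size=(768, 768), do_center_crop=False)
```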
transformers
18,572
closed
Segformer TF: fix output size in documentation
Fixes https://github.com/huggingface/transformers/issues/18557 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @sayakpaul
08-11-2022 08:16:44
08-11-2022 08:16:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, thanks for your PR. Could you also fix this in `modeling_segformer.py` for the PyTorch implementation? Thanks!
transformers
18,571
closed
Add type hints for ViLT models
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adding type hints for ` ViLT` model (PyTorch). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? _Ran `make fixup` before last commit._ ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-11-2022 05:51:16
08-11-2022 05:51:16
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,570
closed
How to load a fine-tuned model and run inference after running run_clip.py?
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? Hi, @ydshieh after I run run_clip.py, how do I load the fine-tuned model and do inference? My inference code is as follows: ``` import requests from PIL import Image from transformers import AutoModel, AutoProcessor model = AutoModel.from_pretrained("clip-roberta-finetuned") processor = AutoProcessor.from_pretrained("clip-roberta-finetuned") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) print("auto model probs:", probs) ``` The following error occurred: ``` D:\software\anaconda\envs\transformers\python.exe D:/NLU/transformers/examples/pytorch/contrastive-image-text/predict.py Traceback (most recent call last): File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\feature_extraction_utils.py", line 402, in get_feature_extractor_dict resolved_feature_extractor_file = cached_path( File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\utils\hub.py", line 300, in cached_path raise EnvironmentError(f"file {url_or_filename} not found") OSError: file clip-roberta-finetuned\preprocessor_config.json not found During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\NLU\transformers\examples\pytorch\contrastive-image-text\predict.py", line 6, in <module> processor = AutoProcessor.from_pretrained("clip-roberta-finetuned") File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\models\auto\processing_auto.py", line 249, in from_pretrained return PROCESSOR_MAPPING[type(config)].from_pretrained(pretrained_model_name_or_path, **kwargs) File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\processing_utils.py", line 182, in from_pretrained args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs) File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\processing_utils.py", line 226, in _get_arguments_from_pretrained args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs)) File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\models\auto\feature_extraction_auto.py", line 289, in from_pretrained config_dict, _ = FeatureExtractionMixin.get_feature_extractor_dict(pretrained_model_name_or_path, **kwargs) File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\feature_extraction_utils.py", line 443, in get_feature_extractor_dict raise EnvironmentError( OSError: Can't load feature extractor for 'clip-roberta-finetuned'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. 
Otherwise, make sure 'clip-roberta-finetuned' is the correct path to a directory containing a preprocessor_config.json file Process finished with exit code 1 ``` ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction OSError: file clip-roberta-finetuned\preprocessor_config.json not found ### Expected behavior load and inference success
08-11-2022 03:15:09
08-11-2022 03:15:09
If I understand correctly, you have a (previously created) `clip-roberta` which is used to launch training. The processor is not saved after the model is finetuned, just the model is saved. You can copy the processor file(s) from `clip-roberta` to `clip-roberta-finetuned`. Otherwise, you can simply change ``` processor = AutoProcessor.from_pretrained("clip-roberta-finetuned") ``` to ``` processor = AutoProcessor.from_pretrained("clip-roberta") ```<|||||>Hi, @ydshieh thank you for your reply. 1, When I copy the processor file(s) from clip-roberta to clip-roberta-finetuned, and the inference code remains the same. The following error occurs: ``` D:\software\anaconda\envs\transformers\python.exe D:/NLU/transformers/examples/pytorch/contrastive-image-text/predict.py Traceback (most recent call last): File "D:\NLU\transformers\examples\pytorch\contrastive-image-text\predict.py", line 6, in <module> processor = AutoProcessor.from_pretrained("clip-roberta-finetuned") File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\models\auto\processing_auto.py", line 243, in from_pretrained return processor_class.from_pretrained( File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\processing_utils.py", line 182, in from_pretrained args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs) File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\processing_utils.py", line 226, in _get_arguments_from_pretrained args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs)) File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 607, in from_pretrained tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\models\auto\auto_factory.py", line 573, in __getitem__ raise KeyError(key) KeyError: <class 'transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig'> ``` 2,When I did not copy any files from `clip-roberta` to `clip-roberta-finetuned`, and changed the `processor` from `processor = AutoProcessor.from_pretrained("clip-roberta-finetuned")` to `processor = AutoProcessor.from_pretrained("clip-roberta")` as you asked. It works fine.<|||||>There might be something wrong, I will take a look. But great to know it works in some way. <|||||>@gongshaojie12 Could you check if you have copied all these files from `clip-roberta` to `clip-roberta-finetuned`: ``` config.json merges.txt preprocessor_config.json special_tokens_map.json tokenizer.json tokenizer_config.json vocab.json ``` I don't have any issue when running `AutoProcessor.from_pretrained("clip-roberta-finetuned")` when I copied all fiiles (of course, ignore the non-finetuned model file)<|||||>Hi, @ydshieh thanks. It works fine.
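Putting the working combination from this thread together (the paths are the local folders mentioned above): load the fine-tuned weights from the output directory and the processor from the original `clip-roberta` folder.

```python
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("clip-roberta-finetuned")  # fine-tuned weights
processor = AutoProcessor.from_pretrained("clip-roberta")    # original processor files
```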
transformers
18,569
closed
The backticks in the example of transformers.BigBirdPegasusConfig documentation were not in the right spot…
…, causing the documentation to be displayed in a weird way. I moved the backticks a few lines down in the documentation to ensure the documentation is formatted correctly. # What does this PR do? Fixes the python documentation example in transformers.BigBirdPegasusConfig <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Documentation: @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-11-2022 02:19:00
08-11-2022 02:19:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18569). All of your documentation changes will be reflected on that endpoint.<|||||>I can make that change, but when I run `make style` it changes a couple hundred files. Is that supposed to happen?<|||||>No, it probably means you do not have the right black version (22.3.0).<|||||>Oh yeah, I had a slightly newer version apparently. I had issues on my mac with the virtual environment. I'll get that fixed.
transformers
18,568
closed
Fix broken pipeline string
# What does this PR do? #18494 introduced a bug in a cuda device string: `RuntimeError: Invalid device string: 'cuda:{device}'` This PR adds the missing `f` before the string <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @julien-c @LysandreJik
08-10-2022 21:36:11
08-10-2022 21:36:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for fixing!
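For illustration, the one-character difference the PR fixes (a sketch, not the pipeline code itself):

```python
import torch

device = 0
# Without the f-prefix the placeholder is kept literally and torch rejects it:
#   torch.device("cuda:{device}")  ->  RuntimeError: Invalid device string: 'cuda:{device}'
# With an f-string the device index is interpolated as intended:
torch_device = torch.device(f"cuda:{device}")
```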
transformers
18,567
closed
Trainer Bug
### System Info I might be wrong, but it looks like this line here: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_seq2seq.py#L176 introduces a bug: `_gen_kwargs` may not be an attribute of the instance. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction If we have a trainer that directly calls the `prediction_step` function, we'll get this error. ### Expected behavior No error is thrown when we call the `prediction_step` function.
08-10-2022 20:58:57
08-10-2022 20:58:57
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
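A possible user-side workaround, sketched under the assumption that subclassing the trainer is acceptable in your setup (this is not an official fix): make sure `_gen_kwargs` exists before `prediction_step` is called directly.

```python
from transformers import Seq2SeqTrainer


class SafeSeq2SeqTrainer(Seq2SeqTrainer):
    def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys=None):
        # `evaluate()`/`predict()` normally set `_gen_kwargs`; guard against direct calls.
        if not hasattr(self, "_gen_kwargs"):
            self._gen_kwargs = {}
        return super().prediction_step(
            model, inputs, prediction_loss_only, ignore_keys=ignore_keys
        )
```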
transformers
18,566
closed
Bump nbconvert from 6.0.1 to 6.3.0 in /examples/research_projects/visual_bert
[//]: # (dependabot-start) ⚠️ **Dependabot is rebasing this PR** ⚠️ Rebasing might not happen immediately, so don't worry if this takes some time. Note: if you make any changes to this PR yourself, they will take precedence over the rebase. --- [//]: # (dependabot-end) Bumps [nbconvert](https://github.com/jupyter/nbconvert) from 6.0.1 to 6.3.0. <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/jupyter/nbconvert/commit/cefe0bfe303e5e9e194c393cb9280c64a77b8219"><code>cefe0bf</code></a> Release 6.3.0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/a534fb901ff83e0b0c0c082ff47f3de01dc651b1"><code>a534fb9</code></a> Release 6.3.0b0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/87920c5a47c8ae99600be6c9b9b909ba440adce9"><code>87920c5</code></a> Add changelog for 6.3.0 (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1669">#1669</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/dd6d9c7d36d0a09db647a8fc993f7330388a1e48"><code>dd6d9c7</code></a> add slide numbering (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1654">#1654</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/5d2c5e2b79534c11678b73e707feb74d7827a557"><code>5d2c5e2</code></a> Update state filter (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1664">#1664</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/11ea5931f71fdaaaad8958f634132f45476bf006"><code>11ea593</code></a> fix: avoid closing the script tag early by escaping a forward slash (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1665">#1665</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/968c5fbabaf99f83d64720a1a6e90969052e978c"><code>968c5fb</code></a> Fix HTML templates mentioned in help docs (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1653">#1653</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/35c4d07eb7060b505412c0ad83886176fe8409fe"><code>35c4d07</code></a> Add a new output filter that excludes widgets if there is no state (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1643">#1643</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/c663c75339709c0e1c051d684dba0cf10fa9083e"><code>c663c75</code></a> 6.2.0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/fd1dd15b63bfd898c21c90b78165c4c00c448896"><code>fd1dd15</code></a> 6.2.0rc2</li> <li>Additional commits viewable in <a href="https://github.com/jupyter/nbconvert/compare/6.0.1...6.3.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=nbconvert&package-manager=pip&previous-version=6.0.1&new-version=6.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
08-10-2022 19:50:02
08-10-2022 19:50:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,565
closed
Bump nbconvert from 6.0.1 to 6.3.0 in /examples/research_projects/lxmert
Bumps [nbconvert](https://github.com/jupyter/nbconvert) from 6.0.1 to 6.3.0. <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/jupyter/nbconvert/commit/cefe0bfe303e5e9e194c393cb9280c64a77b8219"><code>cefe0bf</code></a> Release 6.3.0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/a534fb901ff83e0b0c0c082ff47f3de01dc651b1"><code>a534fb9</code></a> Release 6.3.0b0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/87920c5a47c8ae99600be6c9b9b909ba440adce9"><code>87920c5</code></a> Add changelog for 6.3.0 (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1669">#1669</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/dd6d9c7d36d0a09db647a8fc993f7330388a1e48"><code>dd6d9c7</code></a> add slide numbering (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1654">#1654</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/5d2c5e2b79534c11678b73e707feb74d7827a557"><code>5d2c5e2</code></a> Update state filter (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1664">#1664</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/11ea5931f71fdaaaad8958f634132f45476bf006"><code>11ea593</code></a> fix: avoid closing the script tag early by escaping a forward slash (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1665">#1665</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/968c5fbabaf99f83d64720a1a6e90969052e978c"><code>968c5fb</code></a> Fix HTML templates mentioned in help docs (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1653">#1653</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/35c4d07eb7060b505412c0ad83886176fe8409fe"><code>35c4d07</code></a> Add a new output filter that excludes widgets if there is no state (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1643">#1643</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/c663c75339709c0e1c051d684dba0cf10fa9083e"><code>c663c75</code></a> 6.2.0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/fd1dd15b63bfd898c21c90b78165c4c00c448896"><code>fd1dd15</code></a> 6.2.0rc2</li> <li>Additional commits viewable in <a href="https://github.com/jupyter/nbconvert/compare/6.0.1...6.3.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=nbconvert&package-manager=pip&previous-version=6.0.1&new-version=6.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
08-10-2022 19:39:27
08-10-2022 19:39:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,564
closed
Transformers Documentation translation to German (de)
Hi! Let's bring the documentation to all the German-speaking community :) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know here if you'd like to translate any, and we'll add your name to the list. Some notes: - Please translate using a formal tone; "wir" and "sie." If possible, please reformulate the sentences to use the first person plural (wir) unless the sentence describes an action the user has to take. - Please translate in a gender-neutral way. - Add your translations to the `de` folder inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). - Register your translation in [de/_toctree.yml](https://github.com/huggingface/transformers/blob/main/docs/source/de/_toctree.yml); please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). - Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. - 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) @flozi00 - [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx). @flozi00 - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
08-10-2022 19:20:24
08-10-2022 19:20:24
installation is WIP and will be done tomorrow in the open PR other sections will follow in future if its still to do then<|||||>Great! Thank you @flozi00 🚀.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Will try to find time for more progress after sleeping a bit @ github-actions<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,563
closed
Properly move cache when it is not in default path
# What does this PR do? This PR respects the user's env variable for the cache move when it is set to something different from the default path, as the move is not working as expected right now (reported internally by @stas00)
08-10-2022 17:50:40
08-10-2022 17:50:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,562
closed
Adds timeout argument to training_args to avoid socket timeouts in DDP
# What does this PR do? This PR follows the work done in #18081 and adds a `timeout` argument to `TrainingArgs` to avoid Socket Timeouts when using PyTorch's `torch.distributed.init_process_group`: https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group _`timeout` argument exists since 1.0.0: https://pytorch.org/docs/1.0.0/distributed.html. This prevents any regression._ Fixes #18054 #17106 and finishes the open PR #18081. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-10-2022 17:41:26
08-10-2022 17:41:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @gugarosa, thanks for your PR! I'm asking Sylvain to review it as he's the maintainer of the `Trainer`, but he's on the leave for the next few weeks. He'll review your PR when he's back! Thanks for your patience :pray: <|||||>No worries @LysandreJik! Thanks so much for the attention!<|||||>You just need to run `make style` and we should be good!<|||||>> You just need to run `make style` and we should be good! My bad! I always forget to run it. Just squashed the previous commits and added the `make style`. Hopefully, it will pass all tests in a couple minutes! Thanks for all the attention on this PR!
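For context, the underlying PyTorch call that the new training argument feeds into looks like the sketch below (backend and duration are illustrative); a longer timeout avoids socket timeouts when one rank is busy for a long time, e.g. preprocessing a large dataset, before the first collective.

```python
from datetime import timedelta

import torch.distributed as dist

# The default timeout is 30 minutes; passing a larger timedelta raises the limit.
dist.init_process_group(backend="nccl", timeout=timedelta(minutes=60))
```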
transformers
18,561
closed
Fixed the error so the default matches the comment
Made it so that `num_output_group` matches the default setting.
08-10-2022 16:22:13
08-10-2022 16:22:13
cc @xvjiarui @NielsRogge <|||||>LGTM<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18561). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,560
closed
raise atol for MT5OnnxConfig
# What does this PR do? MT5 is newly added to the ONNX tests, but they currently fail with ``` AssertionError: mt5, seq2seq-lm -> Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.000148773193359375 AssertionError: mt5, seq2seq-lm-with-past -> Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.00020599365234375 ``` [failed job run](https://github.com/huggingface/transformers/runs/7718274393?check_suite_focus=true)
08-10-2022 16:08:35
08-10-2022 16:08:35
_The documentation is not available anymore as the PR was closed or merged._<|||||>I need to check the `# Copied` issue.<|||||>Hi, @ydshieh. Does this PR fix the two failing tests? If not, I can work on it too.<|||||>The test is still running. I think it should be fine :-) but will let you know once the test run is finished. Thanks.
transformers
18,559
open
[WIP] Add TF BEiT Implementation
Porting BEiT model from PyTorch to TensorFlow backend # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #18085 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @gante @LysandreJik @NielsRogge <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-10-2022 15:55:27
08-10-2022 15:55:27
@gante @amyeroberts Here's the WIP draft of BEiT! Please tell me if I have done anything wrong, I'll make the changes right away! Thanks!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18559). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @MadElf1337 - thanks for opening a PR and for adding this model! Outline looks good. As a quick overview, I see two main things that you'll want to add (alongside docs and tests): * `# Copied from` in the TF data2vec model definition * `TFBeitForXxx` classes Looking forward to seeing the full PR and having this model available for our TF users :) <|||||>@amyeroberts Sure! I'll make the changes!<|||||>@amyeroberts @gante So I think I'm done with the model, can you just look over it once while I'll finish writing the tests?<|||||>@MadElf1337 From a quick glance, the model code looks fine 👍 As always, the devil is in the details, so you likely come across issues in the tests. Lets us know if you get stuck in a particular test (tip: `breakpoint()` + comparing to PT are your friends) Will do an in-depth review when the tests are added.<|||||>@MadElf1337 As discussed on the issue #18085 [here](https://github.com/huggingface/transformers/issues/18085#issuecomment-1210544100) for this model, we want to copy the relevant code in data2vec to `modeling_tf_beit.py`, then add the necessary `#Copied from` statements in `modeling_tf_data2vec.py` i.e. `modeling_tf_beit.py` and modeling_tf_data2vec.py` should have the same structure and equivalent `#Copied from` statements as in `modeling_beit.py` and `modeling_data2vec.py`. Let me know if any of this isn't clear or you need any help. <|||||>Yeah yeah it was clear, just wanted to see if the broad architecture was written correctly or not, once I complete the tests(I’m a bit stuck on the attention output test for tf), I’ll do the formatting, add the comments and then ask for a complete review<|||||>If you follow the same structure as the pytorch data2vec vision and beit, including the copied from statements, then almost all of the architecture considerations will be taken care of for you, and it will be easier for us as reviewers. If you need any help with the tests, let us know and we can try and lend a hand. <|||||>Yeah so as I said, I just am stuck on the seq_len part in the attention output for TF, since that is one thing which is present in data2vec but not in BEIT, So just need to figure out that test<|||||>Hey @MadElf1337 -- we've just released a guide for TF conversions, might come handy to you :D https://huggingface.co/docs/transformers/main/en/add_tensorflow_model<|||||>Yep thanks! Mostly done with the tests as well, just a little hiccup that will be solved soon, else I’ll make sure to ask for help!<|||||>@gante @amyeroberts Terribly sorry for the delay, had to deal with some personal stuff that could not be avoided. I think I'm done writing the tests and the model, can I get a review to see if I've missed anything/done anything wrong? Thanks! (Also I'll add the comments of #Copied from TFData2vec in the final commit)<|||||>@amyeroberts @gante Can I get a review please?<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18559). All of your documentation changes will be reflected on that endpoint.<|||||>@amyeroberts Thanks for the review! 
1) As suggested I've added the comments of #Copied from...(Sorry that you had to ask twice, I thought they were just comments and did not know that it was a part of the review process) 2) I've also added the missing code and the torch references have been changed!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18559). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @MadElf1337 - thanks for the updates and iterating so quickly after review. There's still a few files that need to be added for the model to be importable and fully integrated into the library. The guidelines in the document @gante details these. Here's a [recent model PR for reference](https://github.com/huggingface/transformers/pull/17826). As the overall architecture looks good, this is the next step for this PR. <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18559). All of your documentation changes will be reflected on that endpoint.<|||||>@amyeroberts @gante So I've done everything as specified in the docs(I think), can I get a review to see if I've missed anything?<|||||>Hey @amyeroberts @gante Can I get a review please?<|||||>@MadElf1337 Thanks for the update! The next stage for this PR is getting all of the tests running - the fun part! The tests aren't running at the moment as the models can't be imported: ``` E ImportError: cannot import name 'TFBeitForImageClassification' from 'transformers' (/home/circleci/transformers/src/transformers/__init__.py) ``` One thing I can see that needs to be added is included the beit models in `import_structure` in the `__init__.py` e.g. [here](https://github.com/huggingface/transformers/blob/7319850902ba9b2a44c36ccddd044f98abd9b597/src/transformers/__init__.py#L205). Some of the tests that are failing e.g. `check_code_quality` you can fix and/or find the issues by running `make fixup` locally. Finally, the ` # Copied from` statements should be added to the data2vec vision model in `modeling_tf_data2vec_vision.py` and the ones in `modeling_tf_beit.py` removed. `# Copied from transformers.models.beit.modeling_beit.TFBeitModelOutputWithPooling with Beit->Data2VecVision` <|||||>@amyeroberts Thanks for the review! I can see that the original repo does not have the import structures in __init__.py, however I have added those to the init file in my dev branch, which is why it is showing a conflict for the same file<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey, can I know what to do next to solve the merge conflict?<|||||>Hey @MadElf1337 -- You will have to rebase your PR with main :) 1. Get the latest main ``` git checkout main git pull ``` 2. Rebase ``` git checkout your_branch git rebase origin/main ``` 3. Handle conflicts manually (i.e. keep the desired changes and remove the unwanted ones in the conflicting files, and follow the instructions that git gives you) 4. 
Force-push your changes (force to avoid GitHub showing a diff of 666 files) ``` git push -u origin your_branch -f ```<|||||>There, I think I've solved the conflict but the test errors are occurring due to errors in data2vecvision<|||||>@MadElf1337 [Some of the failures](https://app.circleci.com/pipelines/github/huggingface/transformers/55390/workflows/2a766a55-113a-4c4e-8a4f-604926bcf9c4/jobs/668146) are because the `# Copied from` statements point to a path that doesn't exist e.g. `# Copied from transformers.models.data2vec.modeling_data2vec_vision.TFData2VecVisionEmbeddings with Data2VecVision->Beit` is copying the object `TFData2VecVisionEmbeddings` but is referring to the pytorch modeling file `transformers.models.data2vec.modeling_data2vec_vision`. Note: The copied from statement should be in the `modeling_tf_data2vec_vision.py` file and should copy from the beit model e.g. `# Copied from transformers.models.beit.modeling_tf_beit.TFBeitEmbeddings with Beit->Data2VecVision`. There shouldn't be any `# Copied from` comments in the BEiT modeling file `modeling_tf_beit.py`. If you run `make fixup` locally in the repo, you'll be able to reproduce the `check_copies` and it will make the `check_code_quality` checks pass. <|||||>Ah I see, I will fix that right away<|||||>So the tests run locally, but whenever I run the `make fix-copies` command it changes the docstring in data2vec from data2vec to beit, thus throwing the style change errors. How do I go about fixing this?<|||||>To do that, you'll need to add in the `# Copied from` all the patterns you want to change in the copied code e.g.: `# Copied from xxx with Beit->Data2VecVision,beit->data2vec_vision` What might help as well: * You can add `# Copied from` statements at the method / function level, rather at the top of the class * For some high-level classes, if it's essentially a wrapper around layers, it might be OK to not include the `# Copied from` For example, for `TFData2VecVisionForSemanticSegmentation`, I would add copied from headers on top of class methods `__init__` and `compute_loss` and not add it to `call`. <|||||>So now the only errors are in the actual tests for Imagenet and Masked Image Modeling, I think I'll need to add breakpoints and check the tensors manually?<|||||>Fixed the deprecated feature_extractor and fixed the comments, however tests still fail<|||||>@MadElf1337 - Thanks for updating the feature extractor tests. Regarding the currently failing tests, are you able to click on the `Details` link from the CI runs and see the CircleCI output? For tests like `check_code_quality` - you need to run `make fixup` in the top level of the repo. This is detailed in the doc @gante linked to previously: https://huggingface.co/docs/transformers/main/en/add_tensorflow_model#debugging-mismatches-across-ml-frameworks. Make sure to look through this and go through all the steps to get the model ready. Do you have an estimation on when you will be able to address these failing tests and have the PR merge ready? BEiT is a very impactful model to add to our TensorFlow library, we'd be excited to include it and we're a lot closer with the work you've done :) This PR has been open for over 6 months now, so would be good to push to get it over the finish line. <|||||>Hey Amy, thanks for the review! 
Sorry for the delay, I was out of town for a bit and could not work on the model, but I will try my best to complete the PR within a week's time, since now only the tests part needs to be fixed <|||||>Also regarding the tests, whenever I run them in the CLI, I get the Failed message for the following tests: 1) test_attention_outputs 2) test_determinism 3) test_for_masked_lm 4) test_for_semantic_segmentation 5) test_hidden_states_output 6) test_keyword_and_dict_args 7) test_model 8) test_model_common_attributes 9) test_model_from_pretrained 10) test_model_outputs_equivalence 11) test_numpy_arrays_inputs 12) test_prepare_serving_output 13) test_pt_tf_model_equivalence 14) test_save_load 15) test_save_load_config 16) test_saved_model_creation 17) test_inference_image_classification_head_imagenet_1k and the Imagenet22k test is killed<|||||>> Also regarding the tests, whenever I run them in the CLI, I get the Failed message for the following tests: Are these running tests for just this model? i.e. `pytest tests/models/beit/test_modeling_tf_beit.py`? <|||||>Yes, I used `NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \ py.test -vv tests/models/beit/test_modeling_tf_beit.py`<|||||>OK, that's expected - even once the initial architecture's done there's always failures on the first go :) You'll need to look at the errors being thrown on the tests and debug from there - let us know if you need any help!<|||||>@amyeroberts So the main error seems to be in the `call` function for every failing test, i.e. `AttributeError: Exception encountered when calling layer 'tf_beit_model' (type TFBeitModel). 'TFBeitModel' object has no attribute 'beit'` When I tried running the tests for TFData2vec_vision, I came across the same errors<|||||>The error causing snippet of code is this - `outputs = self.beit( pixel_values=pixel_values, bool_masked_pos=bool_masked_pos, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, )`<|||||>@MadElf1337 This is because the `# Copied from ` statements in modeling_tf_data2vec.py need to be updated. Please refer to my comments in modeling file e.g. [this one](https://github.com/huggingface/transformers/pull/18559/files#r1081377920).<|||||>But then it should not throw that error for beit right? Because it is not copying from anything<|||||>And even after putting the `#copied` statements as suggested, it still throws the same error<|||||>Also `make fixup` now throws `Error 127` `make: /bin/sh: Argument list too long` `make: *** [Makefile:10: modified_only_fixup] Error 127`<|||||>> But then it should not throw that error for beit right? Yes - the updates should fix the issues for data2vec once `make fix-copies` has been run. The error is telling you that `self.beit` isn't defined for `TFBeitModel`. I can see, looking at the code `data2vec_vision` is defined instead in `modeling_tf_beit.py`. Any data2vec references will need to be updated in the beit modeling file. > Also make fixup now throws Error 127 I'm not sure what the issue is here. If I fetch your fork I'm able to run `make fixup` with no issues on this branch. Are you able to run if locally, with no problems on `main`? This will help diagnose whether there's an issue running `make` or the issue is with code on this branch. 
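To make the renaming point concrete, here is a toy, self-contained sketch of why that `AttributeError` appears. It is not the real modeling code (the real classes subclass `TFBeitPreTrainedModel`, take a `config`, and are far larger); it only mirrors the attribute/`call` mismatch described above:

```
import tensorflow as tf


class TFBeitMainLayer(tf.keras.layers.Layer):
    # Stand-in for the real main layer: just a single projection
    def __init__(self, hidden_size=8, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(hidden_size)

    def call(self, pixel_values):
        return self.dense(pixel_values)


class TFBeitModel(tf.keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # The attribute name must match what `call` uses. Copying the data2vec
        # file without renaming leaves `self.data2vec_vision` defined while
        # `call` still refers to `self.beit`, which gives
        # "'TFBeitModel' object has no attribute 'beit'".
        self.beit = TFBeitMainLayer(name="beit")

    def call(self, pixel_values):
        return self.beit(pixel_values)


model = TFBeitModel()
print(model(tf.random.normal((1, 4))).shape)  # (1, 8)
```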
<|||||>Okay so now it is down to just 2 failed tests, one is `tests/models/beit/test_modeling_tf_beit.py:160: in create_and_check_for_masked_lm self.parent.assertEqual(result.logits.shape, (self.batch_size, num_patches, self.vocab_size))` `AssertionError: TensorShape([13, 225, 2]) != (13, 225, 100)` and second one is: `mask_tokens = tf.broadcast_to(self.mask_token, (batch_size, seq_len, projection_dim))` `ValueError: Exception encountered when calling layer 'embeddings' (type TFBeitEmbeddings).` `Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.` `Call arguments received by layer 'embeddings' (type TFBeitEmbeddings): • pixel_values=tf.Tensor(shape=(13, 3, 30, 30), dtype=float32) • bool_masked_pos=tf.Tensor(shape=(13, 225), dtype=int32)` `src/transformers/models/beit/modeling_tf_beit.py:173: ValueError` <|||||>I think for the first one I might've messed something up in the masked lm function but can't figure out what<|||||>@MadElf1337 You will also need to run the tests for the data2vec vision model to ensure it still works as expected. I can see at the moment some of these tests are currently failing. For the first test:`tests/models/beit/test_modeling_tf_beit.py::TFBeitModelTest::test_for_masked_lm` the output dimensions of the logits are different. You can compare the dimensions of the tensors in each step of `TFBeitForMaskedImageModeling.call` to `BeitForMaskedImageModeling.forward` to find where the difference is arising and from there figure out if it's coming from e.g. a specific layer or logic such as slicing. I would first check the creation of the layers to make sure they have the same settings as in `modeling_beit.py` For the second test: `mask_tokens = tf.broadcast_to(self.mask_token, (batch_size, seq_len, projection_dim))`, the error is telling you `self.mask_token` is `None`. I can see this line happens if `bool_masked_pos is not None`. You'll need to see what is being passed in at test time and if this is expected e.g. is self.config.use_mask_token set correctly? <|||||>Yeah I'll debug those 2 errors by today, and will also look at the data2vec model and tests afterwards<|||||>Okay I fixed the first one, I had mistakenly put `num_labels` instead of `vocab_size` in the Classifier head<|||||>@amyeroberts Okay so all tests are running locally now<|||||>@amyeroberts So if I run `pytest tests/models/beit/test_modeling_tf_beit.py`, it shows that all the tests are passing, however on running `NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \ py.test -vv tests/models/beit/test_modeling_tf_beit.py` the imagenet1k test fails and the imagenet22k test is killed.<|||||>@MadElf1337 OK, thanks for the update! For the slow tests that are failing do you need any help in resolving these? Is imagenet22k killed because of OOM? As mentioned before, you'll also need to run these tests for data2vec vision. I can see on CI these are currently failing. <|||||>Yes I think Imagenet22k is being killed due to OOM, but I don't know why the imagenet1k test is failing, as I can't see the error message in the slow tests. And yes, once I finish with BEIT tests I'll take care of the data2vec vision tests as well<|||||>> as I can't see the error message in the slow tests. What do you mean exactly- is there no output at all? 
If you run: ``` NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 py.test -vv tests/models/beit/test_modeling_tf_beit.py::TFBeitModelIntegrationTest::test_inference_image_classification_head_imagenet_1k ``` pytest should show you the exception or assert that was raised e.g. ``` FAILED tests/models/beit/test_modeling_tf_beit.py::TFBeitModelIntegrationTest::test_inference_image_classification_head_imagenet_1k - {Reason the test failed} ``` and above the line in the code where the failure occurs. Is this an issue only for the slow tests? <|||||>Ah I did not run the test standalone, I ran the entire suite every time, which is why it just showed me the `FAILED` message.<|||||>Imagenet gives the following error - `AssertionError: TensorShape([1, 2]) != <tf.Tensor: shape=(2,), dtype=int32, numpy=array([ 1, 1000], dtype=int32)>` Masked Image Modeling head gives - `tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__Sub_device_/job:localhost/replica:0/task:0/device:CPU:0}} Incompatible shapes: [1,3,8192] vs. [3,3] [Op:Sub]` I think the assertion errors are due to some error in the model architecture? <|||||>Any pointers on how I can fix the imagenet1k fail?, I tried going through the entire model but still can't understand why that test is failing<|||||>@MadElf1337 I'm not sure why it's failing and I don't see it as a failed test on the CI runs. If it's the same imagenet error [posted here](https://github.com/huggingface/transformers/pull/18559#issuecomment-1422953538): `AssertionError: TensorShape([1, 2]) != <tf.Tensor: shape=(2,), dtype=int32, numpy=array([ 1, 1000], dtype=int32)>` Then it seems it's a case of output shape. The shapes aren't wildly different, as one is a flattened version of the other. I would compare the shape of `pooled_outputs` before they're fed to the classifier between the pytorch beit and tensorflow beit models and work from there. If they're the same, then the issue is in the classifier. If they're different, keep going up the layers until you hit where the difference is arising. <|||||>I got the reason for the error, it is a problem in the classifier itself, the shape returned by the beit classifier head is (3,2) where it should be (3,1000), but I do not understand why it throws that error since the architecture is exactly the same as of data2vec and the test passes for data2vec<|||||>Everything else is done, just the imagenet test is failing due to some problems in the classifier, don't know why it is erroring out since it is the same as of data2vec. @amyeroberts <|||||>@MadElf1337 - I can see there are other tests failing for beit, outside of the imagenet test e.g. `tests/models/beit/test_modeling_tf_beit.py::TFBeitModelTest::test_attention_outputs`. Would you be open to me fetching this branch and pushing some commits to see if I can resolve the failing tests? <|||||>Oh that test is failing due to the breakpoint probably, wait I'll remove those completely. And sure, please feel free to push commits :)<|||||>@amyeroberts the attention tests are passing for me on local, not sure why they are failing here<|||||>The only tests failing for me are the pt_tf_model_equivalence and imagenet1k, imagenet 22k goes OOM<|||||>Excuse my lack of knowledge on this point, but when this feature is ready, can I use DiT with TFBEiT, since it is said that DiT uses the same Beit architecture?<|||||>@FelipeErmeson Yes! 
Just like the pytorch model, once this model is added you will be able to load in DiT weights into TFBeit like so: ``` from transformers import TFAutoModelForImageClassification model = TFAutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip") ```<|||||>Can I get some help on this? I can't seem to understand why the classifier throws an error for Imagenet, went through the entire architecture multiple times<|||||>Hi @MadElf1337 - yes, I'm looking into this now :) <|||||>@MadElf1337 - I've managed to track down a few of the issues. As this is from a fork on your repo - I couldn't push the diffs, but I'll comment on the relevant bits of code that need to be updated. <|||||>@amyeroberts Hey! Still getting the same errors for imagenet and pt_tf_model_equivalence, those are the only 2 tests failing<|||||>@MadElf1337 For the pt_tf_equivalence failures, you'll need to go through the layer outputs and compare with the pytorch model to figure out where the differences are coming from. I recommend starting with `TFBeitModel` first, and progressively checking the inputs and outputs of larger blocks before digging into the lower layers. The way I typically tests is passing the same input to the PyTorch and TensorFlow models and saving out activations at different steps to numpy arrays. Common causes of differences are layer parameters like `eps` for layer or batch norms. <|||||>@amyeroberts So all tests except imagenet are passing now :-( I think the problem is in the num_channels in the classifier, it is not being initiated properly or something as the output is squashed<|||||>@MadElf1337 - OK. I'll dig into the imagenet issues a bit later. Whilst that is happening, the next step for this PR would be rebasing on main, and re-running the tests to make sure everything is still compatible. There's been some recent changes to the formatting libraries used, so it's important to make sure these are up-to-date by running `pip install -e .[quality]` and then linting the code for the update with `make fixup`<|||||>@amyeroberts So I ran `make fixup` and I am getting the following error: `Exception: There were 1 failures: TFBeitForMaskedImageModeling is defined in transformers.models.beit.modeling_tf_beit but is not present in any of the auto mapping. If that is intended behavior, add its name to 'IGNORE_NON_AUTO_CONFIGURED' in the file 'utils/check_repo.py'.` I think this is because we removed MaskedImageModeling from the auto class?<|||||>@MadElf1337 - `BeitForMaskedImageModeling` isn't compatible with `AutoModelForMaskingImageModeling` and so isn't part of the auto mapping. As the error indicates, `TFBeitForMaskedImageModeling` should be added to `IGNORE_NON_AUTO_CONFIGURED` e.g. like here for the [pytorch model](https://github.com/huggingface/transformers/blob/89087597ba8c565e7b854ae719b0230de8435a10/utils/check_repo.py#L252). <|||||>I've ran `make fixup` according to the new formatting conditions, what to do next?<|||||>@MadElf1337 Have you rebased on main? I can't see a force push in the recent history cf [prev instructions here](https://github.com/huggingface/transformers/pull/18559#issuecomment-1383991956). If not, then after the rebase, you'll need to update the formatting settings with `pip install -e .["quality"]` and run `make fixup` again. The next step after that is to get all of the tests passing. 
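To make the PT/TF equivalence debugging suggestion above concrete, here is a minimal sketch of that comparison. It assumes the branch with the TF port is installed so `TFBeitModel` is importable, and it loads a public BEiT checkpoint with `from_pt=True`; the checkpoint name is illustrative:

```
import numpy as np
import tensorflow as tf
import torch
from transformers import BeitModel, TFBeitModel

ckpt = "microsoft/beit-base-patch16-224"
pt_model = BeitModel.from_pretrained(ckpt)
tf_model = TFBeitModel.from_pretrained(ckpt, from_pt=True)

# Same random image for both frameworks (channels-first, as BEiT expects)
pixel_values = np.random.rand(1, 3, 224, 224).astype(np.float32)

with torch.no_grad():
    pt_out = pt_model(torch.from_numpy(pixel_values), output_hidden_states=True)
tf_out = tf_model(tf.constant(pixel_values), output_hidden_states=True)

# Compare every hidden state to localise the layer where the outputs diverge
for i, (pt_h, tf_h) in enumerate(zip(pt_out.hidden_states, tf_out.hidden_states)):
    print(f"hidden state {i}: max abs diff = {np.abs(pt_h.numpy() - tf_h.numpy()).max():.2e}")
```

If the very first hidden state already differs, the problem is in the embeddings; if only the final outputs differ, that points at the pooler or classifier head, as suggested earlier in the thread.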
<|||||>Completed the rebase, it shows a conflict here but says `Current branch MadElf1337-dev is up to date.` in my terminal<|||||>@MadElf1337 Are you fetching and rebasing on the most recent version of main? i.e. ``` git fetch upstream main git rebase upstream/main ```<|||||>There, Fixed it<|||||>@MadElf1337 - great :) The next steps are resolving the failing tests. For the quality checks, you'll need to make sure that you have the most recent formatting configuration locally by running `pip install -e .["quality"]` in the repo. Running `make fix-copies` and `make style` should resolve these. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @amyeroberts Sorry for the delay, I was extremely sick and hence could not contribute, but I think the bulk of the suggestions are done, just the comments and the test part is remaining, but the PR was closed<|||||>@amyeroberts I think I've covered everything that was suggested, still Imagenet throws errors<|||||>@MadElf1337 For the repo consistency tests, could you run `make fix-copies` and push those changes For the quality checks, could you run `make style` at the top level of the repo and push any changes? These should be run after running `make fix-copies` <|||||>@amyeroberts Done!<|||||>@MadElf1337 The repo consistency checks are still failing. Could you try running `make fix-copies` again and push those changes?<|||||>@amyeroberts I think the consistency check is fixed now<|||||>@MadElf1337 Consistency checks have been fixed, but there are still a few tests (outside of doctests and imagenet) which are failing, including the quality/style checks I mentioned fixing with `make style` above. These should be resolved before a final review. For the doc tests, you should add `from_pt=True` for the moment, to make sure that they run, and then we remove that before we merge in and once the checkpoints have been uploaded<|||||>@amyeroberts I've already added the `from_pt=True` flags to `test_modeling_tf_beit` file, still the doctest throws an error saying that the tf weights cannot be loaded<|||||>@MadElf1337 Could you please look at the tracebacks on the failing CI runs first? `from_pt=True` hasn't been added to the doc examples for `modeling_tf_beit.py`, this is [clear from the error message](https://app.circleci.com/pipelines/github/huggingface/transformers/67868/workflows/52e9162b-86bf-4d97-853e-ae75d410976a/jobs/850220). We're happy to help clarifying bugs if they're not clear, or help bug something esoteric. However, in this case, what I would tell you to do is exactly what the error message indicates. It's simply not scalable for us to help on every individual failing test.<|||||>Yes Yes I understand, however the tests for pr documentation were passing on earlier runs, which is why I was a bit confused on what suddenly happened so as they were failing now, turns out that `make fix-copies` might've overwrote over the stuff I wrote myself<|||||>@amyeroberts Do I have to manually write the examples in the docstrings of `TFBeitForImageClassification` & `TFBeitForMaskedImageModeling` as done with `TFBeitForSemanticSegmentation`? 
Because the example docstrings are nowhere to be found in my local repo<|||||>@MadElf1337 You don't need to have examples added to the docstring if the method has an `@add_code_sample_docstrings` decorator. It might be the case that the standard docstring example isn't compatible with the model or doesn't exist for that task. In this case, a custom example should be added to the docstring, and no decorator used (a sketch of such an example is below). <|||||>Ah okay! I'll add the custom docstrings<|||||>@amyeroberts why does the code-quality check throw an error for `Dict` only in data2vec? The same name works in the BEiT modeling file but not in the data2vec one
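For reference, a minimal sketch of the kind of hand-written docstring example discussed above, using the public BEiT checkpoint and the temporary `from_pt=True` workaround mentioned earlier; the exact sample that ends up in the file may differ:

```
>>> import tensorflow as tf
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, TFBeitForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
>>> # from_pt=True only until the TF weights are uploaded, as discussed above
>>> model = TFBeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224", from_pt=True)

>>> inputs = image_processor(images=image, return_tensors="tf")
>>> logits = model(**inputs).logits
>>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
>>> print(model.config.id2label[predicted_class])
```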
transformers
18,558
closed
Module 'seqeval' doesn't exist on the Hugging Face Hub either
### System Info

- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.4.0-77-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.9.1+cu111 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?

@LysandreJik Please help with this issue. Thank you very much.

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```
cd /lustre/scratch/client/vinai/users/datnq9/transformers/examples/pytorch/token-classification
python3 run_ner.py \
  --task_name $TASK_NAME \
  --model_name_or_path $BERT_MODEL \
  --output_dir $OUTPUT_DIR \
  --seed $SEED \
  --per_device_train_batch_size $BATCH_SIZE \
  --tokenizer_name $BERT_MODEL \
  --num_train_epochs $NUM_EPOCHS \
  --learning_rate $PEAK_LR \
  --warmup_steps $WARMUP \
  --train_file $TRAIN_FILE \
  --validation_file $DEV_FILE \
  --test_file $TEST_FILE \
  --do_train \
  --do_eval \
  --do_predict \
  --text_column_name words \
  --label_column_name tags \
  --evaluation_strategy epoch \
  --save_strategy epoch \
  --save_total_limit 3 \
  --metric_for_best_model $METRIC \
  --load_best_model_at_end
```

### Expected behavior

```
Traceback (most recent call last):
  File "run_ner.py", line 630, in <module>
    main()
  File "run_ner.py", line 508, in main
    metric = evaluate.load("seqeval")
  File "/lustre/scratch/client/vinai/users/datnq9/miniconda3/lib/python3.7/site-packages/evaluate/loading.py", line 703, in load
    path, module_type=module_type, revision=revision, download_config=download_config, download_mode=download_mode
  File "/lustre/scratch/client/vinai/users/datnq9/miniconda3/lib/python3.7/site-packages/evaluate/loading.py", line 655, in evaluation_module_factory
    ) from None
FileNotFoundError: Couldn't find a module script at /lustre/scratch/client/vinai/users/datnq9/transformers/examples/pytorch/token-classification/seqeval/seqeval.py. Module 'seqeval' doesn't exist on the Hugging Face Hub either.
```

I already installed "seqeval" as well as "evaluate" packages. Thus I am not sure why this issue happened.
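A minimal standalone reproduction of the failing call, outside `run_ner.py` (it needs network access to the Hugging Face Hub; the toy tag sequences are made up):

```
import evaluate

# The same call that fails on line 508 of run_ner.py in the traceback above
metric = evaluate.load("seqeval")

# Tiny smoke test of the loaded metric
print(metric.compute(
    predictions=[["O", "B-PER", "I-PER"]],
    references=[["O", "B-PER", "O"]],
))
```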
08-10-2022 15:14:53
08-10-2022 15:14:53
Now I can get `run_ner.py` running by changing line 508 from `metric = evaluate.load("seqeval")` to `metric = evaluate.load("/absolute/path/to/seqeval.py")`. But I think this is still a potential bug, as the fine-tuning script had worked well before, without any changes to `run_ner.py`. <|||||>I met this problem too (but for the squad dataset). It seems that it has not been fixed yet.<|||||>I ran into this problem, too. I think it may be caused by the version of the `transformers` package. Here are my two solutions to work around it:

- Install the latest version of `transformers` from source, which is version `4.32.0.dev0`, then install the corresponding dependencies through `pip install -r requirements.txt`
- If you have problems installing the latest version of `transformers`, you can replace the evaluation part with the following code:

**BEFORE**
```
metric = evaluate.load("seqeval")

def compute_metrics(p):
    predictions, labels = p
    predictions = np.argmax(predictions, axis=2)

    # Remove ignored index (special tokens)
    true_predictions = [
        [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]

    results = metric.compute(predictions=true_predictions, references=true_labels)
    if data_args.return_entity_level_metrics:
        # Unpack nested dictionaries
        final_results = {}
        for key, value in results.items():
            if isinstance(value, dict):
                for n, v in value.items():
                    final_results[f"{key}_{n}"] = v
            else:
                final_results[key] = value
        return final_results
    else:
        return {
            "precision": results["overall_precision"],
            "recall": results["overall_recall"],
            "f1": results["overall_f1"],
            "accuracy": results["overall_accuracy"],
        }
```

**AFTER**
```
from seqeval.metrics import accuracy_score
from seqeval.metrics import classification_report
from seqeval.metrics import f1_score

def compute_metrics(p):
    predictions, labels = p
    predictions = np.argmax(predictions, axis=2)

    # Remove ignored index (special tokens)
    true_predictions = [
        [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]

    results = {
        'accuracy': accuracy_score(true_labels, true_predictions),
        'f1': f1_score(true_labels, true_predictions),
        'classification_report': classification_report(true_labels, true_predictions)
    }
    return results
```
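A quick way to sanity-check the seqeval-based replacement above outside of the training script (the toy tag sequences here are made up):

```
from seqeval.metrics import accuracy_score, classification_report, f1_score

true_labels = [["O", "B-PER", "I-PER", "O"], ["B-LOC", "O"]]
true_predictions = [["O", "B-PER", "O", "O"], ["B-LOC", "O"]]

# Same calls as in the replacement compute_metrics above
print(accuracy_score(true_labels, true_predictions))
print(f1_score(true_labels, true_predictions))
print(classification_report(true_labels, true_predictions))
```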