repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
18,357
closed
generate with tf.function (xla) not working for tf model export
### System Info Using the latest version, wrapping `generate` in `tf.function` (XLA) still doesn't work for TensorFlow model export, which would be useful for serving. ### Who can help? @gante @patrickvonplaten ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import tensorflow as tf from transformers import TFAutoModelForSeq2SeqLM class MyOwnModel(tf.Module): def __init__(self, model_path="t5-small"): super(MyOwnModel, self).__init__() self.model = TFAutoModelForSeq2SeqLM.from_pretrained(model_path) @tf.function(input_signature=(tf.TensorSpec((None, 32), tf.int32, name="input_ids"), tf.TensorSpec((None, 32), tf.int32, name="attention_mask")), jit_compile=True) def serving(self, input_ids, attention_mask): return self.model.generate(input_ids=input_ids, attention_mask=attention_mask, max_new_tokens=32) model = MyOwnModel() export_dir = "./" tf.saved_model.save( model, export_dir, signatures={ "serving_default": model.serving }) ``` error: ``` File "../python3.8/site-packages/transformers/generation_tf_utils.py", line 1561, in _generate * input_ids = self._prepare_decoder_input_ids_for_generation( File "../python3.8/site-packages/transformers/generation_tf_utils.py", line 1758, in _prepare_decoder_input_ids_for_generation * return tf.ones((batch_size, 1), dtype=tf.int32) * decoder_start_token_id TypeError: Expected int32, but got None of type 'NoneType'. ``` ### Expected behavior `generate` can be exported in a TF model for serving.
07-29-2022 09:53:40
07-29-2022 09:53:40
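For issue 18,357 above, a minimal sketch of one possible workaround: the traceback suggests `decoder_start_token_id` resolves to `None` at trace time, so pinning it on the config before building the serving model may avoid the error. This is an assumption about the failure mode, not a fix confirmed in the issue.

```python
import tensorflow as tf
from transformers import AutoConfig, TFAutoModelForSeq2SeqLM

# Assumption: the trace fails because decoder_start_token_id is None inside
# tf.function, so we pin it explicitly before constructing the serving model.
config = AutoConfig.from_pretrained("t5-small")
if config.decoder_start_token_id is None:
    config.decoder_start_token_id = config.pad_token_id

model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small", config=config)
```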
transformers
18,356
closed
[FX] Symbolic trace for Bloom
# What does this PR do? This PR adds `torch.fx` symbolic tracing support for Bloom. It also enables this feature for XLNet.
07-29-2022 09:48:41
07-29-2022 09:48:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>Alright, I will make sure I review the PR once it's ready.
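For PR 18,356 above, symbolic tracing of a supported model is exposed through `transformers.utils.fx.symbolic_trace`; a minimal usage sketch (the checkpoint name is illustrative):

```python
from transformers import AutoModelForCausalLM
from transformers.utils.fx import symbolic_trace

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
# Returns a torch.fx.GraphModule whose graph can be inspected or transformed.
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask"])
print(traced.graph)
```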
transformers
18,355
closed
Unable to set up developer environment on Mac M1
### System Info As MacBooks with the M1 chip need `tensorflow-macos` installed instead of `tensorflow>=2.3` (as listed in the setup.py file), trying to set up a developer environment on an M1 MacBook produces the following error: ``` ERROR: Could not find a version that satisfies the requirement tensorflow>=2.3; extra == "dev" (from transformers[dev]) (from versions: none) ERROR: No matching distribution found for tensorflow>=2.3; extra == "dev" ``` Is there any way around this? I tried replacing with `tensorflow-macos` but that creates a myriad of other issues when trying to set up the developer environment. transformers 4.21.0 python 3.9.12 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `pip install -e ".[dev]"` ### Expected behavior The developer environment should install without errors.
07-29-2022 08:06:18
07-29-2022 08:06:18
Hey! Thanks for the issue. I managed to get it work doing the following : 1. Replace `tensorflow` with `tensorflow-macos` in the setup.py file. 2. Install particular dependencies manually ``` brew install llvm conda install -c apple tensorflow-deps python -m pip install tensorflow-macos python -m pip install tensorflow-metal pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu pip install fugashi==1.1.2a6 pip install numba brew install cmake brew install rust conda install -c conda-forge onnxruntime ``` 3. Just run `pip install -e ".[dev]"` and it should work :). <|||||>Thanks so much for the prompt response @ArthurZucker. This is great although when I run the final `pip install -e ".[dev]"` command I get the following error: ``` ERROR: Could not find a version that satisfies the requirement tensorflow-text; extra == "dev" (from transformers[dev]) (from versions: none) ERROR: No matching distribution found for tensorflow-text; extra == "dev" ``` when I remove tensor flow-text from setup.py the `pip install -e ".[dev]"` runs fine. Did you experience a similar issue?<|||||>At the time (~2 month ago), I did not. But it seems like it is a pretty known issue mentioned [here](https://developer.apple.com/forums/thread/700906). You should apparently build `tensorflow-text` from source or use the python wheel [made available](https://github.com/sun1638650145/Libraries-and-Extensions-for-TensorFlow-for-Apple-Silicon/releases) 👍🏻 (installing with `pip install tensorflow-text` does not work either)<|||||>Amazing that seems to work for now. Thanks for the help!<|||||>sadly this guide no longer works as using conda 3.9.11 conda install -c apple tensorflow-deps python -m pip install tensorflow-macos python -m pip install tensorflow-metal also fails <|||||>The issue is you want to use miniforge (community version of conda) and not the conda... <|||||>Following up here with what I've tried so far with my issue, in case it's useful for anyone in the future: - `tensorflow-text` doesn't have a Python 3.11 prebuilt wheel, so I used 3.10 for everything. - Install [miniforge](https://github.com/conda-forge/miniforge) as mentioned [here](https://github.com/huggingface/transformers/issues/18355#issuecomment-1356443992) instead of conda, because it has `tensorflow-deps` and conda doesn't. - follow instructions in [here](https://github.com/huggingface/transformers/issues/18355#issuecomment-1200940810), in an active conda environment. - Remove `decord` manually from `setup.py` because it's not actively maintained anymore according to [this](https://github.com/huggingface/transformers/issues/22617#issuecomment-1499915010), and my use case shouldn't need `decord`. - While conda environment is active, create virtual environment and try installing: `python3.10 -m venv venv && source venv/bin/activate && pip install --upgrade pip && pip uninstall transformers && pip install -e ".[dev]"` - I then got a resolution error: ``` ERROR: Cannot install transformers and transformers[dev]==4.30.0.dev0 because these package versions have conflicting dependencies. ERROR: Cannot install transformers and transformers[dev]==4.30.0.dev0 because these package versions have conflicting dependencies. The conflict is caused by: transformers[dev] 4.30.0.dev0 depends on jax!=0.3.2, <=0.3.6 and >=0.2.8; extra == "dev" flax 0.6.9 depends on jax>=0.4.2 transformers[dev] 4.30.0.dev0 depends on jax!=0.3.2, <=0.3.6 and >=0.2.8; extra == "dev" flax 0.6.8 depends on jax>=0.4.2 # ... 
many similar lines of text ``` - Installing with `pip install -e ".[quality]"` instead of `dev` worked, which is fine for my use case because I'm not modifying anything with Jax but not a complete solution unfortunately.
transformers
18,354
closed
Add hallucination filter in generate()
### Feature request Adding a filter of some sort in the generate function to limit the number of words from outside the input that can appear in a generated text. This could work in a number of ways. It could be a filter on the number of out-of-source words appearing in the generated text (e.g. 2 would mean that a maximum of 2 words could be present in the generated text but not in the source) or it could be some sort of damping variable (0 to 1) that's applied to the probabilities of each generated word, thereby reducing the likelihood that out-of-source words would appear in the generated text. If the probability was set to 0 then the generation task would be purely extractive and have no risk of hallucinations. ### Motivation To control the risk of hallucinations in the generated text. ### Your contribution Happy to work on a PR for this if I just get a bit of guidance on the best place to start.
07-29-2022 08:04:16
07-29-2022 08:04:16
WDYT of this request @gante ?<|||||>Hi @KMFODA 👋 I'm not sure if I got the proposal right, let me try to explain in my own words to double-check :) You would like some sort of filter that prevents a large number of different tokens (that are not present in the input prompt) from being generated. Taking your example, considering words=tokens -- if the input is `this is` and the maximum number of new tokens is `2`, `this is a cat` would be a desirable output and `this is a brown dog` would be undesirable (because it uses 3 new words). However, `This is what it is` would be okay, because `is` is present in the original prompt. Correct? Before going into the implementation, let's take a step back. I have two questions: 1. One of the current issues with `generate` is that it has many options, so we need to be mindful when adding new functionality, in order to contain its complexity. Don't take this as "we don't want your suggestion", but rather as "let's see if we can make it happen without adding code" :) For instance, the [repetition penalty](https://huggingface.co/docs/transformers/v4.21.0/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.repetition_penalty) seems very close to what you want -- it adds a penalty to existing output tokens. A full list of logits processors and constraints can be seen [here](https://huggingface.co/docs/transformers/v4.21.0/en/internal/generation_utils). 2. Assuming there is no combination of logits processors and constraints that can achieve what you had on your mind: what is the use case? Can you give me an example? <|||||>Hi @gante. Apologies I don't think I've properly clarified the use case I think this could solve. I unfortunately can't share my model here or my dataset (but happy to do so privately) as they're both private and sensitive but I can try and give as much context as possible. This problem arrises for me when using a PEGASUS model for summarisation on a private dataset of meeting segments and summaries. The input for scenarios where this occurs looks like this: ``` Person A: text Person B: text Person A: text Person B :text ``` and the output ends up looking like this: `Person C and Person B met today to discuss..` In this example Person C was never in the input text. This is the type of hallucination I was hoping to fix in the generate function as it can be really off-putting to someone using the model. It isn't necessarily restricted to a person's name also. It could be an address / company name / product / number etc. Reading parts of the linked repetition penalty paper, I believe this parameter is designed to penalise previously generated tokens which is not exactly what is needed for this use case. This use case could be stated as reducing the probability that a new word such as Person C appears in the output. I've looked at other LogitsProcessors and I don't think any would do this out of the box. I don't know wether it would be helpful to have a processor that handles this use case. If it is I was thinking it could be done in one of 3 ways: 1. Penalising the generation of new tokens 2. Boosting the probabilities of input tokens 3. Having a hard limit on the number of new tokens that can appear in the output<|||||>Thank you for the clarification @KMFODA, it makes total sense for a summarization task! 
There is a chance you can solve it with existing code :D Can you have a look at the [constrained beam search documentation](https://huggingface.co/docs/transformers/v4.21.1/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.constrained_beam_search), which has an example, and attempt to constrain the generation to include the name of the individuals in the conversation? See the examples in this PR header as well -- https://github.com/huggingface/transformers/pull/15761 Let us know if this strategy helps :)<|||||>Thanks for the suggestion @gante. The constrained beam search option is really cool. Playing around with it though shows that it doesn't necessarily prevent hallucinations. If for example you put the constraint "Person A and Person B" (which in itself is not very generalisable as you don't always want a summary with every participant in the meeting) you still get Person C appear in the text. What would effectively solve this is a constraint to not include the tokens "Person C". Or potentially to boost all the vocabulary in the original text thereby boosting Person A and Person B over Person C. If this seems like a very specific case happy to just work on it privately. Just thought I'd raise it in case it would benefit the wider community.<|||||>Thank you for trying it @KMFODA 🙏 I wasn't sure whether it would help here. And apologies for adding so many speed bumps along the way -- our generate function has many many options, and we are trying to be more conservative before adding more flags. We need to rule out potential duplicates :) I'd like to get the input of @patrickvonplaten (who's currently off) on this topic. Maybe there are specific solutions for this problem that he knows of, or maybe we can create novel research work from this problem! Meanwhile, if you'd like to experiment, here are some pointers: 1. Adapting the logits can be done with subclasses of [`LogitsProcessor`](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_logits_process.py#L51). There are many examples below in this file, and they are relatively simple; 2. If a processor gets appended to the list of processors ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py#L700)), then your `generate` will feel the effects of the transformation; 3. For experimenting, I'd recommend to hardcode the inclusion of the new processor to the list of processors (the line linked in 2.) -- we can worry about the whole `generate` API later, if the results are positive; 4. For your particular problem: a. The input tokens are inside `encoder_input_ids`, which are an input to the function linked in 2. You can store then inside the processor's `__init__` for later use b. A modified version of [`RepetitionPenaltyLogitsProcessor`](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_logits_process.py#L144) will probably do the trick -- in its `__call__`, instead of gathering and scattering over `inputs_ids` (the generated tokens), you want to do it over everything that's NOT in `encoder_input_ids` (the input tokens). Or perhaps it will be more efficient to add the penalty everywhere (multiplication by a constant) and reverse the penalty addition to the tokens in the original input. c. Note: do not remove `input_ids ` and `scores` from the signature in `__call__` even if they are not used -- it will raise exceptions. Let us know if you get any interesting results. 
Depending on @patrickvonplaten's suggestion, we may build the tool ourselves!<|||||>Thanks @gante that's very helpful. I worked on a draft PR (just in case this is deemed useful to anyone else) and initial results look promising. If I feed a hallucination penalty of 2 (instead of the default 1) to the greedy search function the text goes from: `Person C will send Person B an email` where Person C was not in the input text and is therefore classed as a hallucination. to: `Person A will send Person B an email` I'm aware this is just one datapoint so I want to test this even further but I only have 3 data points with hallucinations in my sample so far. I'll try and find an open sourced dataset focused on hallucinations that I can use to test this out even further and report back.<|||||>Super interesting discussion here! Thanks for writing this all down @gante and @KMFODA :-) The PR looks nice to me in general - thanks a lot for opening it @KMFODA! Just FYI we also have a similar processor to not repeat ngrams: https://github.com/huggingface/transformers/blob/06d1ba1a55a12b3fb3ca081bdd4f812fda800c37/src/transformers/generation_logits_process.py#L364<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Commenting to confirm this is not stale. A request to review my latest changes to the PR is out.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Commenting again to confirm this is not stale. PR has incorporate all comments and passed all tests. I believe it's just waiting on a second pair of 👀<|||||>PR merged. Closing this now. Thanks for all the help @gante and @patrickvonplaten.
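To make the direction sketched in the pointers for issue 18,354 concrete, here is a minimal, hypothetical `LogitsProcessor` that penalizes tokens absent from the encoder input, mirroring the repetition-penalty arithmetic. The class name and penalty semantics are illustrative only and not the implementation that was eventually merged.

```python
import torch
from transformers import LogitsProcessor


class SourcePenaltyLogitsProcessor(LogitsProcessor):
    """Hypothetical: penalize tokens that never appear in the encoder input."""

    def __init__(self, encoder_input_ids: torch.LongTensor, penalty: float):
        self.penalty = penalty
        self.source_ids = torch.unique(encoder_input_ids)

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        # True for every vocabulary id that is absent from the source text
        out_of_source = torch.ones_like(scores, dtype=torch.bool)
        out_of_source[:, self.source_ids.to(scores.device)] = False
        # Mirror the repetition-penalty math: shrink positive logits, grow negative ones.
        penalized = torch.where(scores > 0, scores / self.penalty, scores * self.penalty)
        return torch.where(out_of_source, penalized, scores)
```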
transformers
18,353
closed
Adding fine-tuning models to LUKE
# What does this PR do? This pull request adds the following four fine-tuning models to LUKE: * `LukeForMultipleChoice` * `LukeForQuestionAnswering` * `LukeForSequenceClassification` * `LukeForTokenClassification` LUKE was initially developed to solve entity-related NLP tasks, however, the model can also be used as a BERT-like pretrained model, thus has been frequently used to solve common NLP tasks such as text classification and question answering. This pull request aims to enable users to easily solve such NLP tasks using LUKE. Following BERT and RoBERTa, this pull request also adds the `classifier_dropout` property to `LukeConfig`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @NielsRogge
07-29-2022 06:29:16
07-29-2022 06:29:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Thank you so much for your detailed comments! > you force return_dict=True for the base model, but we have this flag because some optimization modules do not support dict outputs, the flag should be passed along and then the code should be adapted to deal with base model outputs that are either tuples or ModelOutput. I understand that we should not modify the `return_dict` value in the task-specific classes. However, the existing task-specific classes of LUKE (i.e., `LukeForMaskedLM`, `LukeForEntityClassification`, `LukeForEntityPairClassification`, and `LukeForEntitySpanClassification`) are also implemented by specifying `return_dict=True` to the base class. If we fix the code of these existing classes in this pull request, there is one issue: the `entity_last_hidden_state`, which is used in the existing task-specific classes, is positioned after the `hidden_states` and `attentions` in the `BaseLukeModelOutput` and `BaseLukeModelOutputWithPooling`. Therefore, to identify the numerical index of `entity_last_hidden_state`, we need to write some code that checks whether `output_attentions` and `output_hidden_states` are activated. To deal with this, I think it might be a good idea to change the position of the `entity_last_hidden_state` before the optional hidden-states and attention fields, but it breaks backward compatibility. I would appreciate any advices or tips to address this. The new task-specific classes proposed in this pull request use only `last_hidden_state` and `pooled_output`, so the problem above happens only if we fix the existing task-specific classes. > This model uses a lot of new inputs. Are we sur the standard example which will be included in the docstring will actually work (e.g. are all those entity inputs completely optional?) The entity input of LUKE is completely optional. The model can run with only word-based inputs without issues.<|||||>Ah, I hadn't caught the base model was doing it, and I see why it's more practical with the entity hidden states. Let's leave it as is for now then, and we will fix that in the future if a user opens an issue and actually needs the specific parts of PyTorch `return_dict=False`. Can just filter out the not-None values as asked and we should be good to merge then?<|||||>@sgugger Thanks! I've pushed the commit that filters out None entries from the return values when `return_dict` is set to `False`. Please let me know if there are any issues.<|||||>Thanks again!
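A short usage sketch for one of the new heads added in PR 18,353; the checkpoint and label count are assumptions for illustration:

```python
import torch
from transformers import LukeTokenizer, LukeForSequenceClassification

tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeForSequenceClassification.from_pretrained("studio-ousia/luke-base", num_labels=2)

# LUKE used as a plain BERT-like encoder: entity inputs are completely optional.
inputs = tokenizer("LUKE can also solve common text classification tasks.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(-1))
```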
transformers
18,352
closed
Refactor `TFSwinLayer` to increase serving compatibility
# What does this PR do? This PR refactors two parts of `TFSwinLayer` as described below to increase serving compatibility on multiple accelerators. ### Fix incompatible type cast on several serving architectures In the `window_reverse` function, there is an incompatible computation on the graph when restoring the batch size: ```python x = shape_list(windows)[0] y = tf.cast(height * width / (window_size * window_size), tf.int32) batch_size = int(x / y) # <- Here ``` This operation can cause tracing failures depending on the accelerator or compiler SDKs. I have confirmed that the AWS Neuron SDK (w/ Inferentia) cannot trace this part properly. The `batch_size` variable is already available on the graph inside `call()`, so I modified this function to use that value by passing it as a function argument. ### Fix mixed use of member & local variables Several parts use the window size, but some use `window_size` and some use `self.window_size`. This can cause tracing failures because of the unclear separation between compile-time constants and runtime values. This PR fixes this by not using `self.window_size` anywhere, and instead determining the actual window size in `call()`. ## Review This PR is related to Swin Transformer and TensorFlow. R: @amyeroberts
07-29-2022 04:46:54
07-29-2022 04:46:54
_The documentation is not available anymore as the PR was closed or merged._
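A rough sketch of the `window_reverse` change described in PR 18,352 — taking `batch_size` as an argument instead of recomputing it from traced shapes; the exact signature is an assumption, not the merged code:

```python
import tensorflow as tf


def window_reverse(windows: tf.Tensor, window_size: int, height: int, width: int, batch_size: int) -> tf.Tensor:
    # batch_size is the value already known inside call(), so no int(x / y)
    # cast on traced tensors is needed when restoring the batch dimension.
    x = tf.reshape(
        windows,
        (batch_size, height // window_size, width // window_size, window_size, window_size, -1),
    )
    x = tf.transpose(x, (0, 1, 3, 2, 4, 5))
    return tf.reshape(x, (batch_size, height, width, -1))
```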
transformers
18,351
closed
Allow user-managed Pool in Wav2Vec2ProcessorWithLM.batch_decode
# What does this PR do? There are two issues being attacked: 1. Always creating a fresh pool within `Wav2Vec2ProcessorWithLM.batch_decode` generates a big overhead if it's called multiple times (this PR fixes #17879) 2. `pyctcdecode` can't use `spawn` Pools (this PR supersedes #17070) Changes: - adds a `pool` argument to `Wav2Vec2ProcessorWithLM.batch_decode`. This allows a user-managed `multiprocessing.Pool` to be (re)used across multiple calls to `batch_decode`. - updates `pyctcdecode` version requirement. The new version contains code to handle invalid pools. - adds some tests for TF's, torch's and flax's `Wav2Vec2ProcessorWithLM`. - adds usage example in `Wav2Vec2ProcessorWithLM.batch_decode`'s docs. An important implementation reference is [multiprocessing's Contexts and start methods](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods). Basically, `batch_decode`'s multiprocessing capabilities are useful only in Unix, which uses `fork` contexts. This PR introduces some checks in this regard. They can be removed once https://github.com/kensho-technologies/pyctcdecode/issues/65 is resolved. ## Breaking change The new `pool` argument can break currently valid codes like: `processor.batch_decode(logits, 5)`. Previously, the second argument meant `num_processes`. If that's an issue, some considerations are: - `pool` and `num_processes` are mutually exclusive, but a unique arg like `num_processes_or_pool` seemed weird - we could add `pool` as last argument - we could force kwargs-only ## Checklist I couldn't install all deps, so I didn't execute some tests and I didn't build the documentation changes. Let's see what CI shows about: - [x] ~Test `test_decoder_batch_1` from `test_processor_wav2vec2_with_lm.py` (it uses `fork`, but it fails in my Mac, probably because of the OS platform behavior)~ Fixed in 17efdddbeeac75eeacddff3bcc51ced70ec19217 - [ ] All other tests - [ ] `Wav2Vec2ProcessorWithLM.batch_decode`'s new Tips - [ ] `Wav2Vec2ProcessorWithLM.batch_decode`'s new usage example Also, the new tests are a copy and paste of previous tests. I couldn't figure out how to factor out the duplicated code (I'm more used to pytest's fixtures). Ideas to cut down the duplicated code are welcomed. ## After merge Once merged, it could be nice to adapt and re-run some scripts such as evaluation of [patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram](https://huggingface.co/patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram). In this case, for example, there are two improvements: - Current evaluation creates a fresh Pool for each inferred audio, yielding a huge overhead when compared to a sequential decoding or to a user-managed pool as introduced in this PR. - Current evaluation uses `Wav2Vec2ProcessorWithLM.batch_decode` without `batched=True`, which probably means that there's no advantage in using parallel batch decoding. The usage example from this PR allows proper parallel batch decoding. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? See: - #17879 - #17070. - [x] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @patrickvonplaten
07-29-2022 04:02:36
07-29-2022 04:02:36
_The documentation is not available anymore as the PR was closed or merged._<|||||>@anton-l are you looking into this? I'm not super familiar with the `pyctcdecode` backend, but can get up to speed and take a look for @falcaopetri if need be :)<|||||>Hi @falcaopetri! Looks like `test_decoder_batch_1` indeed fails on Linux too, maybe you could take a look and find a workaround?<|||||>Hi everyone. Test setup was wrong, but new commit's diff should be self-explanatory. Let me know if there's something still missing. You can also check this [colab](https://colab.research.google.com/drive/1j4UNdqcafKH8WQUYIr871xc8h2A97B_z?usp=sharing). 34s -> 5s speed-up by just removing the overhead, even without using >1 processes, which should have a somewhat linear improvement when decoding multiple batches.<|||||>Hi @anton-l, @sanchit-gandhi. I've merged `main` trying to fix the _Add model like runner_ CI step, but with no success. Could you take a look on that, and the overall PR please?<|||||>This PR looks very nice to me! Let's try to get it merged<|||||>@falcaopetri let's see what the tests say and then IMO we can merge if they are all green<|||||>Thanks for re-opening it @patrickvonplaten! I've just fixed a code quality issue that made `check_code_quality` fail. But now CircleCI seems to be having some (internal?) problem. And should I rebase everything to get a nice and clean history or it's not necessary?<|||||>@patrickvonplaten, I've rebased everything so we (i) get a cleaner merge and (ii) re-trigger CircleCI. Unfortunately, CircleCI setup is still failing.<|||||>It seems there is an issue with your CircleCI permissions, the tests won't run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)<|||||>Oh I see, I didn't realize CircleCI was talking about my own user's credential. Thanks for the the heads up. I've refreshed my credentials and force pushed things again to trigger CI steps. For future reference, I did make a mistake in the process: after refreshing credentials, CircleCI prompted me to set up a project within my own organization, which then made the `run_test*` steps fail with "Resource class docker for xlarge is not available for your project, or is not a valid resource class. **This message will often appear if the pricing plan for this project does not support docker use.**" Deleting my own project and force pushing made the `huggingface` org run the steps.<|||||>I just realized that my [Wav2Vec2ProcessorWithLM.batch_decode's Example](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18351/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.batch_decode.example) triggers: `WARNING:datasets.fingerprint:Parameter 'fn_kwargs'={'pool': <multiprocessing.pool.Pool object at 0x7f64ecc54990>} of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. ` @patrickvonplaten, is there a way to indicate that a map arg shouldn't be taken into account when hashing the transformation? Or maybe `pool` could be a global variable?<|||||>> Uff yeah good question. 
Gently pinging @lhoestq and/or @mariosasko here as this seems to be `datasets` related<|||||>On the other hand, I don't think it's a must that results are cached with datasets here - think we can merge this PR without. <|||||>> @patrickvonplaten, is there a way to indicate that a map arg shouldn't be taken into account when hashing the transformation? Or maybe pool could be a global variable? You can pass `new_fingerprint=` in `map` with a unique fingerprint (string) that depends on the parameters of your transform. Alternatively you can just disable caching in `datasets` to remove the warning<|||||>Thanks again for all your work on this!<|||||>Hi @falcaopetri Thank you for adding this! After merging to the main branch, we have 3 test failures (when running on GPU) ```bash tests/models/wav2vec2/test_modeling_tf_wav2vec2.py::TFWav2Vec2ModelIntegrationTest::test_wav2vec2_with_lm_invalid_pool (line 243) RuntimeError: context has already been set tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_wav2vec2_with_lm_invalid_pool (line 243) RuntimeError: context has already been set tests/models/wav2vec2/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_wav2vec2_with_lm_pool (line 1639) TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. ``` More detailed information could be found [here](https://github.com/huggingface/transforsmers/actions/runs/3278475639/jobs/5396961021) and its raw logs. Would you like to take a look here 🙏? Thank you 🤗 <|||||>Hi @ydshieh. I'm really sorry about that. Both errors are my fault while setting up the tests. `RuntimeError: context has already been set` emerged only during GPU tests probably because tests are executed differently: a PR's pytest is invoked as with `-n 8`, while your tests [here](https://github.com/huggingface/transformers/actions/runs/3278475639/jobs/5396961021) aren't. I.e., `RuntimeError` was indeed an issue, but it was hidden during the PR because tests were being executed in different processes. `TypeError: can't convert cuda:0 device type tensor to numpy` was caused by a strange change I did at the time after copy and pasting other tests. This should fix both issues: https://github.com/huggingface/transformers/compare/main...falcaopetri:transformers:fix-w2v2-lm-pool. What is your workflow in this case? Should I open a new PR?<|||||>Thank you so much @falcaopetri. Yes, it would be nice if you can open a PR. Otherwise, we can do it ourselves by looking your branch above ❤️ . [Context of our CI] Our CircleCI tests run on CPU, so we use `-n 8`. After merging into `main`, we run (more) tests and on GPU machines, so we set `-n 1` to avoid multiple processes to access GPU at the same time.
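After PR 18,351, a user-managed pool can be reused across `batch_decode` calls roughly as below (Unix/fork only; the checkpoint name and the `batches_of_logits` iterable are illustrative assumptions):

```python
from multiprocessing import get_context
from transformers import Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")

# A single fork-based pool shared by every batch_decode call avoids the
# per-call pool creation overhead measured in the PR benchmarks.
with get_context("fork").Pool(processes=4) as pool:
    for logits in batches_of_logits:  # hypothetical iterable of numpy logit arrays
        transcriptions = processor.batch_decode(logits, pool=pool).text
```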
transformers
18,350
closed
Global/local import with replicated name in the Trainer leading to UnboundLocalError
### System Info - `transformers` version: 4.21.0 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) ### Who can help? @pacman100 @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running `run_glue.py` ([optimum version](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/training/text-classification/run_glue.py)) with the distributed launcher ``` python -m torch.distributed.run --nproc_per_node=2 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge --task_name MRPC --do_train --output_dir /tmp/deberta_res --fp16 --sharded_ddp simple --num_train_epochs 1 ``` Error message: ``` Traceback (most recent call last): File "run_glue.py", line 610, in <module> main() File "run_glue.py", line 503, in main trainer = ORTTrainer( File "/workspace/optimum/onnxruntime/trainer.py", line 144, in __init__ super().__init__( File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 569, in __init__ self.scaler = ShardedGradScaler() UnboundLocalError: local variable 'ShardedGradScaler' referenced before assignment ``` ### Expected behavior `ShardedGradScaler` is first imported as a global variable https://github.com/huggingface/transformers/blob/da503ea02f7623542bd588b509d0fc31aff92735/src/transformers/trainer.py#L190 Then it is imported as a local variable for fsdp with the same name https://github.com/huggingface/transformers/blob/da503ea02f7623542bd588b509d0fc31aff92735/src/transformers/trainer.py#L568 And it won't fall back to the global `ShardedGradScaler` when the local one is not imported, leading to an UnboundLocalError. P.S. However, I don't have this problem running `run_glue.py` in transformers; the problem seems to occur when using classes inherited from `Trainer`. Possible solution: use a different name / import both locally *REF:* *https://docs.python.org/3/faq/programming.html#why-am-i-getting-an-unboundlocalerror-when-the-variable-has-a-value* *https://stackoverflow.com/questions/58750517/why-unboundlocalerror-occurs-when-importing-inside-function*
07-28-2022 20:01:06
07-28-2022 20:01:06
Hello @JingyaHuang, thank you for bringing this to the notice with detailed steps and possible solutions 🤗. Can you try the above draft PR and see if that fixes the issue?
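Independent of the Trainer specifics in issue 18,350, the Python scoping rule behind this bug can be reproduced in a few lines (a minimal, hypothetical example):

```python
from math import sqrt  # global import


def compute(use_local_import: bool) -> float:
    if use_local_import:
        from math import sqrt  # local import: `sqrt` becomes a local name for the whole function
    # When use_local_import is False, Python does NOT fall back to the global `sqrt`:
    # the name is local (because of the import above) but was never bound.
    return sqrt(4)


compute(True)   # works
compute(False)  # raises UnboundLocalError: local variable 'sqrt' referenced before assignment
```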
transformers
18,349
closed
Add balanced strategies for device_map in from_pretrained
# What does this PR do? This PR brings to Transformers the functionality introduced in https://github.com/huggingface/accelerate/pull/534. Basically `device_map` can now take several options: - `"sequential"` which corresponds to the current `"auto"` behavior: fill each GPU sequentially (and if the user has lots of GPU memory, some GPUs are not used at all) - `"balanced"` which will split the model evenly across GPUs - `"balanced_low_0"` which will split the model evenly across GPUs while leaving the most available memory on GPU 0, since that GPU might have more tensors on it when the outputs are used for some form of post-processing (generate and use_cache for instance) - `"auto"` which now defaults to `"balanced"`. When the user does not have enough GPU memory to accommodate the model, all the options are equivalent.
07-28-2022 19:33:27
07-28-2022 19:33:27
_The documentation is not available anymore as the PR was closed or merged._
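Usage after PR 18,349 looks roughly as follows (requires `accelerate`; the checkpoint name is illustrative):

```python
from transformers import AutoModelForCausalLM

# Split the checkpoint evenly across all visible GPUs.
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b", device_map="balanced")

# Same split, but keep as much memory as possible free on GPU 0, which is useful
# when generate() with use_cache puts extra tensors on that device.
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b", device_map="balanced_low_0")
```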
transformers
18,348
closed
Migrate metrics used in flax examples to Evaluate
Currently, the Flax examples use the `load_metric` function from the Datasets library; this commit migrates that call to the `load` function from the Evaluate library. Fix for #18306 # What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-28-2022 18:43:29
07-28-2022 18:43:29
@sgugger , run_examples_flax successful ! Should be good to merge. <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger , I was wondering if you could point me at a good first issue I can try contributing to next. I looked at the issues and most of them seem to have folks working on them <|||||>I think there are a couples of models left where help is needed on [this issue](https://github.com/huggingface/transformers/issues/16059) if you are interested.
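The migration in PR 18,348 amounts to the following change in each example script (a sketch of the before/after, not the exact diff):

```python
# before: metric loading through the Datasets library
from datasets import load_metric
metric = load_metric("glue", "mrpc")

# after: the same metric loaded through the Evaluate library
import evaluate
metric = evaluate.load("glue", "mrpc")
```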
transformers
18,347
closed
Fix OwlViT torchscript tests
# What does this PR do? Fix `OwlViTForObjectDetection` torchscript tests. The main problem comes from the fact that we provide ```python traced_model = torch.jit.trace(model, (input_ids, pixel_values)) ``` but `OwlViTForObjectDetection` has a different argument order. I could simply change the order in the test method, but it is probably better for `OwlViTForObjectDetection` and `OwlViTModel` to have the same argument order. [current failed job run](https://github.com/huggingface/transformers/runs/7552486052?check_suite_focus=true)
07-28-2022 17:40:40
07-28-2022 17:40:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,346
closed
[Docs] Fix Speech Encoder Decoder doc sample
# What does this PR do? Fixes #18343 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-28-2022 17:25:46
07-28-2022 17:25:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,345
closed
Include tensorflow-aarch64 as a candidate
# Include tensorflow-aarch64 as a candidate Fixes #18323 @LysandreJik
07-28-2022 16:03:28
07-28-2022 16:03:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>(I'm waiting for the tests to pass before merging)<|||||>Thank you! What is the next step -- do I close it or merge it somehow? Apologies for my unfamiliarity with the process.<|||||>I'll merge it as soon as everything is green :)<|||||>Okay great! Thank you for your responsiveness.<|||||>Done! Thanks for your contribution @ankrgyl!
transformers
18,344
closed
[BLOOM] Clean modeling code
# What does this PR do? **THERE'S A BREAKING CHANGE**: `past_key_values` changes in terms of format, is that acceptable @sgugger @patrickvonplaten ? ## Changes Make a pass at cleaning up some code: - convert `causal_mask` to bool tensor and use it via mask. - simplify `causal_mask` creation function. - revert back to `baddbmm` instead of `bmm`, it was unclear why this was changed, and training codebase uses `baddbmm`. - switch back `attention_scores` upcasting to fp32 as it mimics the training procedure the closest. - remove `self.layer_number` normalization. It was introduced in Meg-DS to prevent overflow when computing attention scores in float16, but it doesn't seem to impact the model at all here. - remove a `reshape` in `.split_head()` which reduces memory footprint and increases throughput - remove multiple reshapes when computing `QKV` with past values. Though this introduces a **BREAKING CHANGE** where `past_key` instead of being stored in `[batch_size, num_heads, seq_length, head_dim]` it's stored in `[batch_size * num_heads, head_dim, seq_length]`. - explicit all dimensions when computing `.view`, `.reshape` - standardize namings for `batch_size`, `seq_length` etc ... - type hint improvements ## Speed-up in generation Using the following script: ```python from transformers import AutoTokenizer, AutoModelForCausalLM from timeit import timeit def main(): model_name = "bigscience/bloom-350m" max_length = 50 batch_size = 16 model = AutoModelForCausalLM.from_pretrained(model_name).cuda() tokenizer = AutoTokenizer.from_pretrained(model_name) texts = ["Hello my name is"] * batch_size input_ids = tokenizer.batch_encode_plus(texts, return_tensors="pt").to("cuda") print(timeit(lambda: model.generate(**input_ids, max_length=max_length), number=100)) if __name__ == "__main__": main() ``` Results on A100: ``` This branch: 67.85658120410517 main: 78.72558180289343 ```
07-28-2022 15:27:07
07-28-2022 15:27:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>Ran quick test on the generation speed: ```python from transformers import AutoTokenizer, AutoModelForCausalLM from timeit import timeit def main(): model_name = "bigscience/bloom-350m" max_length = 50 model = AutoModelForCausalLM.from_pretrained(model_name).cuda() tokenizer = AutoTokenizer.from_pretrained(model_name) text = "Hello my name is" input_ids = tokenizer.encode(text, return_tensors="pt").cuda() print(timeit(lambda: model.generate(input_ids, max_length=max_length), number=100)) if __name__ == "__main__": main() ``` The results are the following on A100: ``` This branch: 62.44300797022879 Main: 70.0128909018822 ```<|||||>Now hitting: `62.44300797022879` on the test benchmark (Note that it's a small model, so it the improvement won't have the same percentage, but still I'd say that's quite nice). Added `batch_size=16` and we see better performance improvement: ``` This branch: 67.85658120410517 main: 78.72558180289343 ```<|||||>@thomasw21 We could remove https://github.com/huggingface/transformers/blob/323c07316727939518d0966e027c98ecc86ca316/src/transformers/models/bloom/modeling_bloom.py#L100 <|||||>Btw if that breaking change pattern is approved, I can probably implement that change on other models: `gpt2`, `opt` etc ... I think the smaller the model, the more influence the copies have on the throughput.<|||||>Agree with @sgugger regarding the shape of the past key values: fine for me for the shape to be changed. We did change the format for this output two years ago in https://github.com/huggingface/transformers/issues/9391 and https://github.com/huggingface/transformers/pull/9596, which could also have been considered breaking. I'd like for this change to be contained to BLOOM (i.e., not ported to GPT-2) until @patrickvonplaten can comment as he was very involved in that reshape.<|||||>Thanks @LysandreJik ! Yeah this PR is only changing BLOOM modeling.<|||||>Okay tested 176b on lambada: ``` ## This branch { "results": { "lambada": { "ppl": 3.9275610127374367, "ppl_stderr": 0.08456328894237608, "acc": 0.6716475839316903, "acc_stderr": 0.006542638265686496 } }, "versions": { "lambada": 0 }, } ## 4.21.0 { "results": { "lambada": { "ppl": 3.933199250271968, "ppl_stderr": 0.08471163223917309, "acc": 0.6708713370851931, "acc_stderr": 0.006546580975553108 } }, "versions": { "lambada": 0 }, } ``` I think this PR is ready to be checked.
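For readers who depend on the BLOOM cache format changed in PR 18,344, the breaking change amounts to the following reshape (a hypothetical helper for illustration, not part of the PR):

```python
import torch


def to_new_bloom_cache_layout(past_key: torch.Tensor) -> torch.Tensor:
    # old layout: [batch_size, num_heads, seq_length, head_dim]
    batch_size, num_heads, seq_length, head_dim = past_key.shape
    # new layout: [batch_size * num_heads, head_dim, seq_length]
    return past_key.permute(0, 1, 3, 2).reshape(batch_size * num_heads, head_dim, seq_length)
```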
transformers
18,343
closed
The doc sample for training on Speech Encoder Decoder does not work
I'm working on updating the doc to remove the use of `as_target_xxx` context managers and noticed the code sample [here](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/speech-encoder-decoder.mdx#training) does not work. First, the call to `from_encoder_decoder_pretrained` does not load the pretrained BERT (there is a huge warning that all weights are discarded). Then the call to the model fails with the error ``` Make sure to set the decoder_start_token_id attribute of the model's configuration. ``` even though the lines before do set that. cc @patrickvonplaten @sanchit-gandhi
07-28-2022 14:55:17
07-28-2022 14:55:17
Hey @sgugger, thanks for flagging this! > First the call to from_encoder_decoder_pretrained does not load the pretrained BERT (there is a huge warning all weights are discarded) I've double checked this, and the `from_encoder_decoder_pretrained` does load all the **pre-trained** BERT weights! The weights that are randomly initialised are the cross attention weights (expected, as we're loading the Seq2Seq decoder from an encoder only model): <details> <summary> Randomly initialised weights (cross attention layers) </summary> ``` Some weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.encoder.layer.3.crossattention.self.query.weight', 'bert.encoder.layer.6.crossattention.output.dense.bias', 'bert.encoder.layer.2.crossattention.output.dense.bias', 'bert.encoder.layer.2.crossattention.self.query.weight', 'bert.encoder.layer.4.crossattention.output.dense.weight', 'bert.encoder.layer.4.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.4.crossattention.self.query.weight', 'bert.encoder.layer.11.crossattention.self.value.bias', 'bert.encoder.layer.11.crossattention.self.key.bias', 'bert.encoder.layer.7.crossattention.output.dense.bias', 'bert.encoder.layer.6.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.11.crossattention.self.key.weight', 'bert.encoder.layer.5.crossattention.self.key.weight', 'bert.encoder.layer.5.crossattention.output.dense.weight', 'bert.encoder.layer.5.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.10.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.7.crossattention.output.dense.weight', 'bert.encoder.layer.10.crossattention.output.dense.weight', 'bert.encoder.layer.8.crossattention.self.key.bias', 'bert.encoder.layer.7.crossattention.self.query.bias', 'bert.encoder.layer.0.crossattention.self.query.bias', 'bert.encoder.layer.9.crossattention.self.query.bias', 'bert.encoder.layer.2.crossattention.output.dense.weight', 'bert.encoder.layer.5.crossattention.self.key.bias', 'bert.encoder.layer.7.crossattention.self.query.weight', 'bert.encoder.layer.11.crossattention.self.query.weight', 'bert.encoder.layer.8.crossattention.self.value.weight', 'bert.encoder.layer.4.crossattention.self.key.weight', 'bert.encoder.layer.8.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.8.crossattention.self.value.bias', 'bert.encoder.layer.5.crossattention.output.dense.bias', 'bert.encoder.layer.8.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.8.crossattention.output.dense.bias', 'bert.encoder.layer.11.crossattention.output.dense.weight', 'bert.encoder.layer.10.crossattention.output.dense.bias', 'bert.encoder.layer.10.crossattention.self.value.weight', 'bert.encoder.layer.11.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.6.crossattention.self.value.weight', 'bert.encoder.layer.0.crossattention.output.dense.bias', 'bert.encoder.layer.0.crossattention.self.query.weight', 'bert.encoder.layer.11.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.5.crossattention.self.value.weight', 'bert.encoder.layer.3.crossattention.self.value.bias', 'bert.encoder.layer.3.crossattention.output.dense.bias', 'bert.encoder.layer.0.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.1.crossattention.output.dense.bias', 'bert.encoder.layer.3.crossattention.self.value.weight', 'bert.encoder.layer.0.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.10.crossattention.self.query.weight', 
'bert.encoder.layer.3.crossattention.self.key.weight', 'bert.encoder.layer.8.crossattention.self.query.weight', 'bert.encoder.layer.9.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.6.crossattention.self.value.bias', 'bert.encoder.layer.7.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.1.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.1.crossattention.self.query.bias', 'bert.encoder.layer.6.crossattention.self.query.bias', 'bert.encoder.layer.0.crossattention.self.key.weight', 'bert.encoder.layer.9.crossattention.self.value.bias', 'bert.encoder.layer.11.crossattention.self.query.bias', 'bert.encoder.layer.7.crossattention.self.value.weight', 'bert.encoder.layer.4.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.5.crossattention.self.query.bias', 'bert.encoder.layer.1.crossattention.self.value.bias', 'bert.encoder.layer.1.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.4.crossattention.output.dense.bias', 'bert.encoder.layer.3.crossattention.self.key.bias', 'bert.encoder.layer.11.crossattention.self.value.weight', 'bert.encoder.layer.7.crossattention.self.value.bias', 'bert.encoder.layer.10.crossattention.self.value.bias', 'bert.encoder.layer.3.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.6.crossattention.self.key.bias', 'bert.encoder.layer.7.crossattention.self.key.bias', 'bert.encoder.layer.10.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.1.crossattention.self.value.weight', 'bert.encoder.layer.7.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.9.crossattention.self.key.weight', 'bert.encoder.layer.2.crossattention.self.key.weight', 'bert.encoder.layer.8.crossattention.self.key.weight', 'bert.encoder.layer.1.crossattention.self.key.weight', 'bert.encoder.layer.10.crossattention.self.key.weight', 'bert.encoder.layer.9.crossattention.output.dense.weight', 'bert.encoder.layer.6.crossattention.self.key.weight', 'bert.encoder.layer.10.crossattention.self.query.bias', 'bert.encoder.layer.4.crossattention.self.key.bias', 'bert.encoder.layer.8.crossattention.output.dense.weight', 'bert.encoder.layer.11.crossattention.output.dense.bias', 'bert.encoder.layer.0.crossattention.output.dense.weight', 'bert.encoder.layer.3.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.0.crossattention.self.value.bias', 'bert.encoder.layer.2.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.9.crossattention.self.value.weight', 'bert.encoder.layer.7.crossattention.self.key.weight', 'bert.encoder.layer.4.crossattention.self.value.bias', 'bert.encoder.layer.8.crossattention.self.query.bias', 'bert.encoder.layer.2.crossattention.self.value.bias', 'bert.encoder.layer.2.crossattention.self.key.bias', 'bert.encoder.layer.1.crossattention.output.dense.weight', 'bert.encoder.layer.5.crossattention.self.value.bias', 'bert.encoder.layer.6.crossattention.output.dense.weight', 'bert.encoder.layer.10.crossattention.self.key.bias', 'bert.encoder.layer.0.crossattention.self.value.weight', 'bert.encoder.layer.2.crossattention.self.query.bias', 'bert.encoder.layer.2.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.4.crossattention.self.query.bias', 'bert.encoder.layer.6.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.5.crossattention.self.query.weight', 'bert.encoder.layer.9.crossattention.output.dense.bias', 'bert.encoder.layer.1.crossattention.self.key.bias', 'bert.encoder.layer.2.crossattention.self.value.weight', 'bert.encoder.layer.3.crossattention.self.query.bias', 
'bert.encoder.layer.9.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.3.crossattention.output.dense.weight', 'bert.encoder.layer.9.crossattention.self.key.bias', 'bert.encoder.layer.6.crossattention.self.query.weight', 'bert.encoder.layer.1.crossattention.self.query.weight', 'bert.encoder.layer.5.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.0.crossattention.self.key.bias', 'bert.encoder.layer.4.crossattention.self.value.weight', 'bert.encoder.layer.9.crossattention.self.query.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` </details> > The the call to the model fails with the error Addressed in #18346!
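For issue 18,343, a hedged sketch of the configuration step the error message refers to — the token choices below are assumptions for illustration, not the final doc fix from #18346:

```python
from transformers import AutoTokenizer, SpeechEncoderDecoderModel

model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/wav2vec2-base-960h", "bert-base-uncased"
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Training with labels (and generate()) needs these set on the top-level config,
# not only on the decoder config.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```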
transformers
18,342
closed
[BLOOM] Deprecate `position_ids`
# What does this PR do? Deprecate `position_ids` as they never made sense for BLOOM.
07-28-2022 14:50:45
07-28-2022 14:50:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>With @NouamaneTazi we discovered that this actually breaks scripting for torch 1.11 (and seems to work fine in torch 1.12) . In particular, the `**` operator breaks it, should we revert back to set `position_ids=None` in the signature of the function? cc @sgugger > your solution works against everything :-) I guess it didn't ...<|||||>Does this mean `torchscript` can't work with `**kwargs`?<|||||>No it didn't work with kwargs, that's why it doesn't work for some specific models like XLNet. Since this seems fixed in PyTorch 1.12 and torchscripting is not the mainstream use, I would leave it as is (so users need to use PyTorch >= 1.12 for scripting).
transformers
18,341
open
[Flax] Add scan_with_axes
# What does this PR do? Adds `scan_with_axes` to Flax Bert and its derived models. TODO: - [ ] Fix cookie cutter template - [ ] Run `make fix-copies` (after review) Fixes [#17399](https://github.com/huggingface/transformers/issues/17399) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-28-2022 14:44:59
07-28-2022 14:44:59
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18341). All of your documentation changes will be reflected on that endpoint.<|||||>### Update Current API: 1. With automatic init: ```python model.scan_enable() # to enable scan model.scan_disable() # to disable scan ``` 2. Without automatic init: ```python model.scan_enable() # to enable scan in the nn.Module params = model.convert_unroll_to_scan(params) # to convert the unrolled params to scan model.scan_disable() # to disable scan in the nn.Module (i.e. unrolled) params = model.convert_scan_to_unroll(params) # to convert the scan params to unrolled ``` With automatic init, the params are converted from unrolled to scan under the hood ([`.convert_unroll_to_scan()`](https://github.com/huggingface/transformers/blob/98c42b01c2f1128b22bbda2cdc3d25e03b94af53/src/transformers/modeling_flax_utils.py#L435)). The params are converted to/from scan on the fly, with no memory overhead for conversion (_c.f._ [`.convert_unroll_to_scan()`](https://github.com/huggingface/transformers/blob/98c42b01c2f1128b22bbda2cdc3d25e03b94af53/src/transformers/modeling_flax_utils.py#L435)): ```python for i in range(self.config.num_hidden_layers): # Stack the params for the N layers into one super block # and remove the unrolled layer params on the fly # -> no memory overhead for conversion! unrolled_layer = params.pop(key.replace("0", str(i))) stacked_params.append(unrolled_layer) ``` ### Design question! Should we include the unroll -> scan weight conversion in the `.from_pretrained()` method if `use_scan=True` is passed? Currently, setting `use_scan=True` will enable scan in the nn.Module, but will leave the params untouched. This is fine if loading scanned params from pre-trained. If loading unrolled params from pre-trained, the shape of the weights will not match those expected by the nn.Module (unrolled params, scanned nn.Module), meaning the weights will not be loaded! Possible cases (loaded params, flag for `use_scan`): | Params | `use_scan` | Mismatch | Action | |----------|------------|------------------------------------|-------------------------------------------| | Unrolled | False | None | Fine! | | Unrolled | True | params unrolled, nn.Module scanned | Requires conversion of params to scan | | Scan | False | params scan, nn.Module unrolled | Requires conversion of params to unrolled | | Scan | True | None | Fine! |
transformers
18,340
closed
Efficient Attention
### Feature request The authors of https://openaccess.thecvf.com/content/WACV2021/papers/Shen_Efficient_Attention_Attention_With_Linear_Complexities_WACV_2021_paper.pdf propose to change the attention module in transformers to achieve linear complexity in the number of processed tokens (instead of the feared quadratic complexity). I'd like to request the following feature: The possibility to choose between standard and efficient attention, for instance via a flag in the corresponding config file when building a huggingface model. ### Motivation Any model with a variable size of input tokens can benefit from this, and it is especially useful if one wants to process a lot of tokens. Say, number of words in a text or number of patches in an image. The authors of the paper show that this can significantly reduce inference time, training time and memory load. In my case, I have implemented this change for the ViTMAE model and noticed that I can easily process images of size 592x592 (37x37 = 1369 (!!!) tokens) now, whereas before my machine was capped with 384x384-sized images (24x24 = 576 tokens). Due to the linear complexity, I could even go for higher token count/image resolution, trading off with batch size. The model size is not affected (it's magic, really), and for the ViTMAE model I have confirmed that this type of attention is working quite well. ### Your contribution The change is really easy to implement, but might take a bit of work because it most likely has to be done for every model of interest. For the ViTMAEmodel, I have done the following quick and dirty tweak to the class ViTMAESelfAttention: ``` # Copied from transformers.models.vit.modeling_vit.ViTSelfAttention ViT->ViTMAE class ViTMAESelfAttention(nn.Module): def __init__(self, config: ViTMAEConfig) -> None: super().__init__() if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): raise ValueError( f"The hidden size {config.hidden_size,} is not a multiple of the number of attention " f"heads {config.num_attention_heads}." ) self.num_attention_heads = config.num_attention_heads self.attention_head_size = int(config.hidden_size / config.num_attention_heads) self.all_head_size = self.num_attention_heads * self.attention_head_size self.query = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.dropout = nn.Dropout(config.attention_probs_dropout_prob) def transpose_for_scores(self, x: torch.Tensor) -> torch.Tensor: new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) x = x.view(*new_x_shape) return x.permute(0, 2, 1, 3) def forward( self, hidden_states, head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False ) -> Union[Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor]]: mixed_query_layer = self.query(hidden_states) key_layer = self.transpose_for_scores(self.key(hidden_states)) value_layer = self.transpose_for_scores(self.value(hidden_states)) query_layer = self.transpose_for_scores(mixed_query_layer) # OLD ATTENTION # Take the dot product between "query" and "key" to get the raw attention scores. # attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) # # attention_scores = attention_scores / math.sqrt(self.attention_head_size) # # # Normalize the attention scores to probabilities. 
# attention_probs = nn.functional.softmax(attention_scores, dim=-1) # # # This is actually dropping out entire tokens to attend to, which might # # seem a bit unusual, but is taken from the original Transformer paper. # attention_probs = self.dropout(attention_probs) # # # Mask heads if we want to # if head_mask is not None: # attention_probs = attention_probs * head_mask # # context_layer = torch.matmul(attention_probs, value_layer) # NEW EFFICIENT ATTENTION: ####################### key_layer = nn.functional.softmax(key_layer, dim=3) query_layer = nn.functional.softmax(query_layer, dim=2) G = torch.matmul(key_layer.transpose(-1, -2), value_layer) context_layer = torch.matmul(query_layer, G) ####################### context_layer = context_layer.permute(0, 2, 1, 3).contiguous() new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) context_layer = context_layer.view(*new_context_layer_shape) # THIS LINE HERE DID NOT MAKE SENSE WITH THIS ATTENTION # outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) return (context_layer,) ``` As you can see, I commented out 15 lines of code and added 4 new ones, and this worked out of the box. Of course, to make it more accessible, one would need to put in a switch and a flag in the config etc etc... On a side note, many thanks to @NielsRogge for implementing the ViTMAE model, I am loving it!
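For illustration, a minimal standalone sketch of the switch described above, factored out as a plain function (the `use_efficient_attention` flag is hypothetical and would have to be wired into the model config; the efficient branch reproduces the softmax/matmul ordering from the snippet above):

```python
import math

import torch
import torch.nn as nn


def self_attention(query_layer, key_layer, value_layer, dropout, use_efficient_attention=False):
    # All inputs have shape (batch, num_heads, seq_len, head_size); dropout is an nn.Dropout module.
    if use_efficient_attention:
        # Efficient attention (Shen et al., WACV 2021): normalize keys and queries separately,
        # then reassociate the matmuls so the cost is linear in seq_len.
        key_layer = nn.functional.softmax(key_layer, dim=3)
        query_layer = nn.functional.softmax(query_layer, dim=2)
        global_context = torch.matmul(key_layer.transpose(-1, -2), value_layer)
        context_layer = torch.matmul(query_layer, global_context)
    else:
        # Standard scaled dot-product attention, quadratic in seq_len.
        attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
        attention_scores = attention_scores / math.sqrt(query_layer.size(-1))
        attention_probs = dropout(nn.functional.softmax(attention_scores, dim=-1))
        context_layer = torch.matmul(attention_probs, value_layer)
    return context_layer
```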
07-28-2022 13:44:11
07-28-2022 13:44:11
Hi, Loved reading that :D thanks for your interest in ViTMAE. It would definitely be nice to have one of these more efficient attention models in the library. However, we usually only add models that have dedicated pre-trained weights. As Transformers is not really a modular toolbox, we treat each model rather independently in the library. This means that, in case we would have a model with an efficient attention variant, it would have its own modeling files, doc page, etc.<|||||>Hey, thanks for the quick response. So I need to take this to facebook such that they do a retraining :D? I think I cannot spare 800 epochs to do it myself. Only providing models with pretrained weights is of course a dealbreaker here, looks like I will have to wait a bit longer for something like this.<|||||>An alternative option is to add the implementation to the [research projects](https://github.com/huggingface/transformers/tree/main/examples/research_projects) folder. That's also where [Performer](https://github.com/huggingface/transformers/tree/main/examples/research_projects/performer) lives for instance, which also lacks pre-trained weights.<|||||>Well, it clearly is experimental, so that would actually make sense. Let me know if I can help on this regarding the ViTMAE model.<|||||>Feel free to open a PR, you can add 2 files: - modeling_efficient_vitmae.py, which includes the efficient attention mechanism - a script or notebook to train the model<|||||>Will do, but might take some time. I want to find a public dataset where the unsupervised training shows some results after a reasonable amount of time (for the notebook).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,339
closed
fix(typo): Update automatic_speech_recognition.py
@sgugger
07-28-2022 13:30:53
07-28-2022 13:30:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,338
closed
Remove Flax OPT from `documentation_tests.txt`
# What does this PR do? Remove Flax OPT from `documentation_tests.txt` for now. The test uses the docker image `huggingface/transformers-all-latest-gpu` which has no JAX/Flax installed. [Failed job run](https://github.com/huggingface/transformers/runs/7551001967?check_suite_focus=true) Error: ```python src/transformers/models/opt/modeling_flax_opt.py:20: in <module> import flax.linen as nn E ModuleNotFoundError: No module named 'flax' ```
07-28-2022 13:02:22
07-28-2022 13:02:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ArthurZucker This file is used to test the docstrings in model files and some documentation files. It is something different from the usual model testing.
transformers
18,337
closed
fixed _toctree.yml
# What does this PR do? I update the _toctree.yml after the [last fixes and review on training.mdx](https://github.com/huggingface/transformers/pull/18333#event-7080871662) ![immagine](https://user-images.githubusercontent.com/11136646/181508592-a7336f29-52de-4f01-baf9-e8b0a8b70741.png) See issue: [#17459](https://github.com/huggingface/transformers/issues/17459) ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @omarespejel @sgugger
07-28-2022 12:50:17
07-28-2022 12:50:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,336
closed
Fix breaking change in `onnxruntime` for ONNX quantization
Using `onnxruntime.quantization.quantize` has been deprecated for a while: ``` WARNING:root:onnxruntime.quantization.quantize is deprecated. Please use quantize_static for static quantization, quantize_dynamic for dynamic quantization. ``` and it has been removed from `onnxruntime` in v1.12.0 (released 6 days ago). This breaks quantization in `transformers.convert_graph_to_onnx`. I have migrated the `onnxruntime.quantization.quantize` usage inside `transformers.convert_graph_to_onnx.quantize` to preserve compatibility. ## Who can review? @mfuntowicz
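For reference, the replacement API pointed to by that warning looks roughly like this in recent `onnxruntime` versions (the file paths below are placeholders):

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Dynamic (weight-only) quantization of an already-exported ONNX graph.
quantize_dynamic("model.onnx", "model-quantized.onnx", weight_type=QuantType.QInt8)
```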
07-28-2022 10:58:54
07-28-2022 10:58:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>@JingyaHuang @lewtun @michaelbenayoun Happy to implement changes if you have any suggestions 😊 <|||||>Thanks @lewtun :pray: All tests are passing on my machine: ![image](https://user-images.githubusercontent.com/16133277/185413954-4a41a3ee-c6b3-41bd-a887-107ef1a66be0.png)
transformers
18,335
closed
AutoTokenizer behavior changing when enable_full_determinism is set to zero
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.14.0-1045-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @SaulLu However, I am quite sure this is a problem with `enable_full_determinism` rather than with tokenizers, but I don't know who to tag on that @LysandreJik ### Reproduction MWE: ```python from datasets import load_dataset from transformers import AutoTokenizer from transformers import enable_full_determinism # Loading tokenizer tk = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny"); tk.model_max_length = 512 def tokenize_function(examples): return tk(examples["text"], truncation=True, max_length=512) # Loading dataset data = load_dataset('yelp_review_full')["train"].select(range(2000)) data = data.train_test_split(0.2) data_tk = data.map(tokenize_function) print(max([len(b) for b in data_tk["test"]["input_ids"]])) # 512 enable_full_determinism(0) data_tk = data.map(tokenize_function) print(max([len(b) for b in data_tk["test"]["input_ids"]])) # 965 !!!!!!! enable_full_determinism(1) data_tk = data.map(tokenize_function) print(max([len(b) for b in data_tk["test"]["input_ids"]])) # 512 enable_full_determinism(2) data_tk = data.map(tokenize_function) print(max([len(b) for b in data_tk["test"]["input_ids"]])) # 512 ``` ### Expected behavior The AutoTokenizer should systematically apply truncation, even when the seed of `enable_full_determinism` is set to 0. Seeds different than 0 do not seem to create problems. (I suspect the seed is cast to a boolean somewhere, since the boolean of an integer is always True, except when the integer is 0).
07-28-2022 10:44:00
07-28-2022 10:44:00
Hi @clefourrier, from your description, I agree with you that the output should always be 512. However, I can't reproduce your problem (I tried with the same versions of `transformers` and `torch`); you can see my attempt to reproduce your error in this [google colab notebook](https://colab.research.google.com/drive/1R_b8u6envps2yKFrPykVunfwF8NFrzkd?usp=sharing). I wonder if this could be caused by a cache problem on your side, could you try again with: ```python enable_full_determinism(0) data_tk = data.map(tokenize_function, load_from_cache_file=False) print(max([len(b) for b in data_tk["test"]["input_ids"]])) ``` <|||||>@SaulLu It's not happening with `load_from_cache_file=False`, nor after restarting my kernel. I agree with you, it must have been a cache problem (closing). Thank you for having taken a look :hugs:
transformers
18,334
closed
Fail to reproduce tf2 bert-large f1-score on SQuADv1.1
### System Info transformers=4.20.1, python=3.7, tensorflow-gpu=2.9.1. ### Who can help? @Rocketknight1 @sgugger @patil-suraj ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Download the bert-large tf2 pretrained model (.h5 file) from [huggingface](https://huggingface.co/bert-large-uncased/tree/main). 2. Run the [pytorch question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) for reference. My script is: ``` python run_qa.py \ --model_name_or_path $BERT_DIR \ --dataset_name $SQUAD_DIR \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 128 \ --doc_stride 48 \ --output_dir $OUTPUT \ --save_steps 10000 \ --overwrite_cache \ ``` I got an f1-score of **90.3953%**. Note that the pretrained model is loaded from the tf2 .h5 checkpoint. 3. Run the [tf2 question-answering example](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) with the same settings as the pytorch example. My script is: ``` python run_qa.py \ --model_name_or_path $BERT_DIR \ --dataset_name $SQUAD_DIR \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 128 \ --doc_stride 48 \ --output_dir $OUTPUT \ --save_steps 10000 \ --overwrite_cache \ ``` I only got an f1-score of **88.5672%**, which is much lower than expected and than the pytorch result (90.3953%). ### Expected behavior The TF2 question-answering example should achieve an F1-score similar to that of the corresponding PyTorch example. Or you guys could provide script examples that achieve the target F1-score. Thanks.
07-28-2022 10:26:07
07-28-2022 10:26:07
Hi @zhuango, it's possible there's a bug here, but there is also always some variation between runs. Can you run the training a couple of times for both PyTorch and TensorFlow and report the range of F1-scores you get for both? <|||||>@Rocketknight1 Sure, I will try more runs with exactly the same hyper-parameters. But the hyper-parameter setting I showed above is the one that can achieve the highest f1-score. I have run the tf2 bert-large fine-tuning on SQuADv1.1 with different hyper-parameter setting combinations, like total batchsize=12, 24/warmup_rate=0, 0.1/learning rate=3e-5, 1e-5/sequence length=384, 128/doc_stride=128, 48/Adam, AdamW optimizers. All the results are lower than 87% except the one I showed above. So I suspect that maybe I am missing something in the hyper-parameter settings if there is no bug in the example. And is it possible for you to share the settings that you guys used to reproduce the F1-score? Note that I modified the example code to make sure the model's inputs are complete. Please have a look at [#18223](https://github.com/huggingface/transformers/issues/18223) for details.<|||||>Hi @zhuango - I just saw your other issue, I don't know how I missed it the first time. You're completely right - that example is hardcoding the input columns to the model, which is totally wrong and will degrade performance for models with `token_type_ids` inputs. I'll work on a PR and let you know when it's ready.<|||||>@Rocketknight1 Thanks, I tried more runs with the same setting (and fixed #18223) and still got an F1-score around 88%. But I tried a linear decay learning rate scheduler plus 0.1 * total training steps warmup and the AdamW optimizer (from tensorflow_addons), and I finally got a **90.247%** F1-score. Here are my optimizer settings: ``` # Calculate the total training steps. total_steps = int(training_args.num_train_epochs * ( len(processed_datasets["train"]) / (training_args._n_gpu * training_args.per_device_train_batch_size) )) # Linear decay learning rate. linear_decay = tf.keras.optimizers.schedules.PolynomialDecay( training_args.learning_rate, total_steps, end_learning_rate=0.0, power=1.0, cycle=False, name=None ) # 0.1 * total_steps warmup. warmup_schedule = tfm.optimization.lr_schedule.LinearWarmup( warmup_learning_rate = 0, after_warmup_lr_sched = linear_decay, warmup_steps = total_steps * 0.1, ) # AdamW optimizer with no weight decay. optimizer = tfa.optimizers.AdamW( weight_decay=0.0, learning_rate=warmup_schedule, beta_1=training_args.adam_beta1, beta_2=training_args.adam_beta2, epsilon=1e-06, amsgrad=False, name='AdamW', clipnorm=training_args.max_grad_norm, ) ``` Maybe you would like to try the above setting with the tf example code.<|||||>@zhuango Thank you for that! It's reminding me that all of our examples need to be overhauled, honestly. I'm going to make a PR to modernize all of them - I'll ping you when it's ready if you want to review it. <|||||>@Rocketknight1 Sure, thank you.<|||||>Hi @zhuango, I've [made a PR](https://github.com/huggingface/transformers/pull/18451) to rewrite all of our examples. If you get a chance, please confirm that the SQuAD performance using the new examples matches the PyTorch performance!
transformers
18,333
closed
updated translation
Left the term fine-tuning since there is no correct translation into Italian and the English term is generally used. The same was done with some terms like "learning rate"
07-28-2022 10:12:03
07-28-2022 10:12:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @banda-larga @sgugger, we discussed a lot with @mfumanelli about the best Italian translation. This version is OK for me; you can merge the PR.
transformers
18,332
closed
fix run_clip README
# What does this PR do? Fix the `run_clip` example's README. The path should be the absolute path to the downloaded data directory (using a relative path will make `load_dataset` look for the data directory on the Hub). From the slack discussion: - _basically the base path used to resolve relative path corresponds to where the dataset is loaded from_ - _relative paths are transformed to the corresponding URL in the repo the script comes from_ Fix #18291
07-28-2022 09:38:57
07-28-2022 09:38:57
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,331
closed
fixed typo
# What does this PR do? Fixes a typo on the italian translation
07-28-2022 09:27:52
07-28-2022 09:27:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,330
closed
DeepSpeed Stage 2, Gradient computed twice for this partition.
### System Info transformers == 4.20.1 python == 3.8.13 OS == ubuntu 20.4 DeepSpeed == 0.6.7 ### Who can help? @stas00 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction **Error message** ``` File "../train_t5_encoder.py", line 218, in <module> main(args) File "../train_t5_encoder.py", line 186, in main train_result = trainer.train() File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/transformers/trainer.py", line 1409, in train return inner_training_loop( File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/transformers/trainer.py", line 1651, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/transformers/trainer.py", line 2361, in training_step loss = self.deepspeed.backward(loss) File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn return func(*args, **kwargs) File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1709, in backward self.optimizer.backward(loss) File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1946, in backward self.loss_scaler.backward(loss.float(), retain_graph=retain_graph) File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 51, in backward scaled_loss.backward(retain_graph=retain_graph) File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/torch/autograd/function.py", line 253, in apply return user_fn(self, *args) File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 146, in backward torch.autograd.backward(outputs_with_grad, args_with_grad) File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 798, in reduce_partition_and_remove_grads self.reduce_ready_partitions_and_remove_grads(param, i) File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1269, in reduce_ready_partitions_and_remove_grads self.reduce_independent_p_g_buckets_and_remove_grads(param, i) File "/home/kyungmin.lee/anaconda3/envs/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 833, in reduce_independent_p_g_buckets_and_remove_grads assert self.params_already_reduced[param_id] == False, \ AssertionError: The parameter 191 has already been reduced. Gradient computed twice for this partition. 
Multiple gradient reduction is currently not supported ``` **My model** ``` python3 class GPT2ForScoreModel(GPT2ForSequenceClassification): _keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.masked_bias"] def __init__(self, config): config.num_labels = 1 self.main_input_name = 'input_ids_pos' super().__init__(config) def forward(self, input_ids=None, attention_mask=None, input_ids_pos=None, attention_mask_pos=None, input_ids_neg=None, attention_mask_neg=None, past_key_values=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None): if input_ids is None: input_ids = input_ids_pos attention_mask = attention_mask_pos outputs_pos = super().forward( input_ids, attention_mask=attention_mask, past_key_values=past_key_values, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=True, ) loss = None if input_ids_neg is not None: outputs_neg = super().forward( input_ids_neg, attention_mask=attention_mask_neg, past_key_values=past_key_values, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=True, ) loss = - F.logsigmoid(outputs_pos.logits - outputs_neg.logits).mean() return SequenceClassifierOutputWithPast( loss=loss, logits=outputs_pos.logits, past_key_values=past_key_values, hidden_states=outputs_pos.hidden_states, attentions=outputs_pos.attentions, ) ``` I used GPT2ForSequenceClassification and T5EncoderModel. My training code works without Deepspeed, and with Deepspeed Stage 3 (but there is another problem in stage3). But this error occurs on stage 2. ### Expected behavior Both with and without Deepspeed work normally and the training is completed.
07-28-2022 04:36:42
07-28-2022 04:36:42
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>the same
transformers
18,329
closed
T5 Tokenizer misses a space
### System Info ``` tokenizer = AutoTokenizer.from_pretrained("t5-3b", use_fast=False)#metrics_util.tokenizer tokenizer.decode(tokenizer("Intel unveils Project Alloy 'merged reality' headset")["input_ids"], skip_special_tokens=True) Output: Intel unveils Project Alloy'merged reality' headset ``` Note that the space between Alloy and 'merged reality' is missing. System --------------------- transformers == 4.21.0 python == 3.7.11 Ubuntu 18.04.6 LTS ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("t5-3b", use_fast=False)#metrics_util.tokenizer print(tokenizer.decode(tokenizer("Intel unveils Project Alloy 'merged reality' headset")["input_ids"], skip_special_tokens=True)) ### Expected behavior Intel unveils Project Alloy 'merged reality' headset
07-28-2022 01:46:06
07-28-2022 01:46:06
One workaround is adding `clean_up_tokenization_spaces=False` in tokenizer.decode<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
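For example, a minimal version of the workaround mentioned above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-3b", use_fast=False)
ids = tokenizer("Intel unveils Project Alloy 'merged reality' headset")["input_ids"]

# Skipping the decode-time cleanup keeps the space before the opening quote.
print(tokenizer.decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
```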
transformers
18,328
closed
Split model list on modality
This PR splits the model API list on modality because it was becoming difficult for users to discover what vision or audio models are supported because they just disappear in the long list. I set these sections to not expand by default since users are more likely interested in going straight to a modality and then expanding the list of supported models instead of scrolling past all the text models, for example.
07-27-2022 22:34:11
07-27-2022 22:34:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>This requires more work than just splitting the navigation bar (as seen with the failing quality test). There are scripts that format this model ToC/check all models are documented which then need to be adapted. Likewise splitting the big tables of models in the main README will require to rewrite a lot of the scripts that copy it to other READMEs and the index.<|||||>I have made the necessary changes to the quality script that is failing but: - I don't have permission to push on this branch, so can't push them here - Your fork does not appear in the GitHub website for some reason, so I can't make a pull request either You can find the changes in in [this branch](https://github.com/huggingface/transformers/tree/split-model), just cherry-pick the commit. Giving authorization to the maintainers to push on your branch when opening a pull request would make the whole process way smoother. Or just use the main fork (huggingface/transformers) instead of your personal fork :-) Edit: Managed to open a PR by hacking some urls in GitHub (thank you for showing me @LysandreJik !). It's [here](https://github.com/stevhliu/transformers/pull/1). The changes in the toc are just rewrites from the Python yaml library.
transformers
18,327
closed
Migrate metric to Evaluate library for tensorflow examples
# What does this PR do? Currently, the TensorFlow examples use the `load_metric` function from the Datasets library; this commit migrates those calls to the `load` function from the Evaluate library. Fix for #18306 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
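For illustration, the change in each example script boils down to something like this (the metric name is just an example):

```python
# Before: metric loading through the Datasets library (deprecated path).
# from datasets import load_metric
# metric = load_metric("accuracy")

# After: metric loading through the Evaluate library, with the same downstream API.
import evaluate

metric = evaluate.load("accuracy")
metric.add_batch(predictions=[0, 1, 1], references=[0, 1, 0])
print(metric.compute())
```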
07-27-2022 21:08:39
07-27-2022 21:08:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger, I'm working on the other examples. In contrast to the PyTorch examples, some of the tasks in the TensorFlow examples (summarization, translation, token-classification) are missing a requirements.txt. Should I add a requirements.txt to these folders? I believe this will help in the future when we are building tests for these examples.<|||||>Yes please, that would be great!<|||||>@sgugger, all the examples have now been migrated. I'll mark the PR as open for review/merge.
transformers
18,326
closed
add t5 for text generation
# What does this PR do? Add support for `T5ForConditionalGeneration` to the text generation pipeline ## Who can review? @LysandreJik @patrickvonplaten
07-27-2022 20:26:30
07-27-2022 20:26:30
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18326). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for opening the PR @sam-h-bean, can you explain a bit more why you require this change? Can't you use the text-to-text pipeline in your application to leverage T5 instead? I'm **not** in favor of this PR as is because `T5ForConditionalGeneration` is clearly a different architecture (and input/output format) than `...CausalLM`, which is GPT-style<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,325
closed
Replace `as_target` context managers by direct calls
# What does this PR do? This PR deprecates the context managers `as_target_tokenizer` and `as_target_processor` in favor of passing more arguments to the `__call__` method (or the `pad` method for certain processors). Let's look at one example for a tokenizer in a seq2seq task. The current workflow is: ```python # Tokenize inputs model_inputs = tokenizer(inputs, max_length=128, truncation=True) # Tokenize labels inside the context manager with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=128, truncation=True) # Put everything together model_inputs["labels"] = labels["input_ids"] ``` After this PR, this simply becomes: ```python model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True) ``` which is more natural and way easier. It gets tricky if: 1. you want to tokenize the targets with different keyword arguments 2. you want to add more than the input IDs for the targets to your model inputs. In this case you still need to do two calls: ```python # Tokenize inputs model_inputs = tokenizer(inputs, max_length=128, truncation=True) # Tokenize labels labels = tokenizer(text_target=targets, max_length=64, truncation=True) # Put everything together model_inputs["labels"] = labels["input_ids"] model_inputs["labels_mask"] = labels["attention_mask"] ``` Like before, if you forget to indicate to the tokenizer that you are tokenizing labels (here by passing them as `text_target=...`, and before by tokenizing under the context manager), the labels will be tokenized like the inputs. For processors, the same changes are done, except you can directly use modality names: ```python input_values = processor(ds[0]["audio"]["array"], return_tensors="pt") with processor.as_target_processor(): labels = processor(ds[0]["text"], return_tensors="pt") input_values["labels"] = labels["input_ids"] ``` can now simply be: ```python input_values = processor(audio=ds[0]["audio"]["array"], text=ds[0]["text"], return_tensors="pt") ``` Like before, you can also do it in two individual calls (with `audio` and `text`) to get the objects if you need to use different values of keyword arguments, or want to do a more complex merge than just taking the label input IDs. Padding is also treated: the previous code required something like this: ```python batch = self.processor.pad(input_features, padding=padding, return_tensors="pt") with self.processor.as_target_processor(): labels_batch = self.processor.pad(label_features, padding=self.padding, return_tensors="pt") batch["labels"] = labels_batch["input_ids"] ``` This can now be done with: ```python batch = self.processor.pad(input_features, labels=label_features, padding=padding, return_tensors="pt") ``` or in two calls like before if something more involved (different keyword arguments for labels or accessing more than the labels' input IDs) is needed. This introduces no breaking change. The current version does not touch any of the documentation, examples or tests (to double-check there is no breaking change); those will need to be adapted. This can be done in this PR or in follow-ups if you prefer to read lighter diffs.
07-27-2022 19:47:10
07-27-2022 19:47:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,324
closed
Update feature extractor docs
As pointed out by @NielsRogge, a feature extractor is used to prepare inputs for a model with a single modality rather than multimodal models.
07-27-2022 18:38:36
07-27-2022 18:38:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,323
closed
tensorflow-aarch64 is not a viable candidate tensorflow version
### System Info On ARM64 machines, the official tensorflow package is named `tensorflow-aarch64`. On transformers==4.20.1, in `utils/import_utils.py`, there is a list of `candidates` for tensorflow packages which does not include this package. As a result, you cannot use TF+transformers on arm64 machines that have installed tensorflow via this package. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction On an ARM64 machine `pip install tensorflow-aarch64 transformers` and then try using a tensorflow transformer. ### Expected behavior If you've installed `tensorflow-aarch64` you should be able to use tensorflow with transformers.
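A sketch of the kind of change this would need in `utils/import_utils.py` (the exact contents of the tuple differ between versions; only the last entry is the proposed addition):

```python
# Package names probed when checking whether a TensorFlow distribution is installed.
candidates = (
    "tensorflow",
    "tensorflow-cpu",
    "tensorflow-gpu",
    "tf-nightly",
    "intel-tensorflow",
    "tensorflow-macos",
    "tensorflow-aarch64",  # ARM64 builds ship under this package name
)
```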
07-27-2022 17:46:35
07-27-2022 17:46:35
Hey @ankrgyl, would you like to open a PR to enable that candidate?<|||||>Happy to! I am a bit rusty on the PR process but here it is: https://github.com/huggingface/transformers/pull/18345<|||||>Closed in https://github.com/huggingface/transformers/pull/18345 Thanks for your contribution!
transformers
18,322
open
Transformers documentation translation to French
Hi! Let's bring the documentation to all the French-speaking community :) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any and we'll add your name to the list. Some notes: - Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). - Please translate in a gender-neutral way. - Add your translations to the folder called fr inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). - Register your translation in [fr/_toctree.yml](https://github.com/huggingface/transformers/blob/main/docs/source/it/_toctree.yml); please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). - Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @omarespejel and @sgugger for review. - 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx). - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
07-27-2022 15:39:28
07-27-2022 15:39:28
I'd love to help you with these tasks! A good idea for adopting a friendly, pedagogical tone would be to use the **first person plural** (WE) as often as possible. For example: "Start by creating a virtual environment in your project directory:" —› "**Commençons** par créer un environnement virtuel dans le dossier de **notre** projet:" rather than "_Commencez_ par créer un environnement virtuel dans le dossier de _votre_ projet:". By using the first person plural, the user will feel accompanied and will be more comfortable following along ;)<|||||>Yes, this is something we should do more for the English doc as well. The only exception in my opinion is when something is clearly left to the user like "You need to adapt this constant to your username."<|||||>> Yes, this is something we should do more for the English doc as well. The only exception in my opinion is when something is clearly left to the user like "You need to adapt this constant to your username." The Site du Zéro did this very well at the time, sparingly; it was very pleasant to follow ;) The tutorials are archived here: http://sdz.tdct.org/<|||||>Great idea for consistently using the 1st person plural! If possible let's keep the discussion on the issues in English as some of the reviewers might not be French speakers even if we help with making sure the structure/format looks good! :D <|||||>Looks good; is writing in the **first person plural** something we would like to add to the [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md) and the description of all the translation issues? @osanseviero @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi! Can I work on this translation?
transformers
18,321
closed
Fix sacremoses soft dependency for Transformers XL
# What does this PR do? `sacremoses` is not a hard dependency of Transformers, but it is used without protection in the tokenization module of Transformers XL. This PR fixes that.
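Not the actual patch, but the usual soft-dependency pattern looks something like this (guard the import and only fail, with a clear message, when the feature is actually used):

```python
try:
    import sacremoses as sm

    _sacremoses_available = True
except ImportError:
    _sacremoses_available = False


def moses_tokenize(text: str, lang: str = "en"):
    if not _sacremoses_available:
        raise ImportError("This tokenizer requires sacremoses: pip install sacremoses")
    return sm.MosesTokenizer(lang=lang).tokenize(text)
```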
07-27-2022 13:04:35
07-27-2022 13:04:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,320
closed
sentencepiece shouldn't be required for the fast LayoutXLM tokenizer
This PR duplicates the `LAYOUTXLM_ENCODE_KWARGS_DOCSTRING` variable which is currently imported from a file that is behind a `sentencepiece` requirement.
07-27-2022 12:59:28
07-27-2022 12:59:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,319
closed
FeatureExtractor in Multimodal & ViT based models
### Feature request Customisable factor for normalisation in FeatureExtractor, i.e., currently 1000/w & 1000/h is used so internally 1000 is being used. If this (1000) can be made customisable, apply_ocr=True can be used without worrying about model training being interrupted due to bbox[I] > 1000. ### Motivation Currently, the LayoutLM models need the normalised box values 0-1000 range, that places a constraint of using images wherein the normalisation may produce bbox[I]>1000. Currently, 1000/w & 1000/h are used for scaling, but if 1000 was converted into a customisable figure, it would make things a lot easier. In cases where the image dimensions are very small, the dataset could be normalised using a factor < 1000 as well. Currently, if I have to feed in any such images where unacceptable bounding boxes may come up, I have to double normalise or scale once & then normalise (tell me if I shouldn't be doing this & what instead I should be doing) to bring it <1000. When using the FeatureExtractor classes, this makes it difficult to use apply_ocr = True since that will make me lose the control of making sure that my bbox[I] < 1000 while passing for training or inference. ### Your contribution The AutoFeatureExtractor class can be updated to receive an __init__ argument, say normalize_factor, wherein this can be used to bring bbox[i] < 1000 after which normal normalisation may be carried out. Although still a bit fuzzy on whether double normalization should be done, first say with left_scaled = w_min * (box[0] / w) top_scaled = h_min * (box[0] / h) wherein w_min & h_min can be found via examination during training, and images with dims < w_min or h_min will have to be discarded. Further after this left_double_scaled = 1000 * (left_scaled / w) top_double_scaled = 1000 * (top_scaled / h) I have observed that with LayoutLM wherein this process had to be done manually, using w_min=h_min gives better results than using these differently. And these values might work better if they are multiples of 10 and/or 2. Honestly, tell me if double normalisation is wise or not. And if it is, is this worth doing/implementing in LayoutLMv2, v3 FeatureExtractor classes so that code breakage is avoided at least for a larger array of samples in prod. In this case, any image with dim < w_min or h_min will naturally suffer again & have to be rejected.
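For illustration, the requested knob could look like this as a standalone helper (the `scale` argument is the customisable factor proposed above; `scale=1000` keeps the current LayoutLM behaviour):

```python
def normalize_box(box, width, height, scale=1000):
    """Rescale an absolute (x0, y0, x1, y1) box to the [0, scale] range."""
    x0, y0, x1, y1 = box
    return [
        int(scale * x0 / width),
        int(scale * y0 / height),
        int(scale * x1 / width),
        int(scale * y1 / height),
    ]
```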
07-27-2022 12:33:49
07-27-2022 12:33:49
cc @NielsRogge @amyeroberts <|||||>Thanks for raising @pikaduck! I have a few questions to make sure I understand the issue and your proposal. * When you say that the normalization may produce `bbox[i] > 1000`, can you confirm at least one of the bbox coordinates for the image `(x0, y0)` or `(x1, y1)` is outside of the image? * Are there typical cases e.g. types of images you notice when this happens? It would be useful to know to dig into the code to see if there are any bugs. It could just be tesseract returning bbox coordinates outside of the image dimensions. I'm not sure the global rescaling factor before normalization is necessarily a good idea. As far as I understand, the assumption is that the bbox falls within the image and the model has been pretrained to learn where the bboxes and tokens align. At the moment, if x=0 the bbox corner is at the origin and x=1000 is at the opposite edge. Changing the rescaling factor means the bboxes are within the acceptable range, but their relative position changes. For example, a point halfway in the image would have a coordinate < 500. This breaks the correspondence between the bboxes and the text. If we do legitimately get coordinates outside the image, my proposal would be to clip the bbox coordinates (x, y) to be within [0, image_width] and [0, image_height] respectively before scaling by 1000. For example, in `feature_extraction_layoutlmv2.py` in `apply_tesseract` ``` # turn coordinates into (left, top, left+width, top+height) format image_width, image_height = image.size actual_boxes = [] for x, y, w, h in zip(left, top, width, height): x0 = min(max(x, 0), image_width) y0 = min(max(y, 0), image_height) x1 = min(max(x + w, 0), image_width) y1 = min(max(y + h, 0), image_height) actual_box = [x0, y0, x1, y1] actual_boxes.append(actual_box) # finally, normalize the bounding boxes normalized_boxes = [] for box in actual_boxes: normalized_boxes.append(normalize_box(box, image_width, image_height)) ``` What are your thoughts @NielsRogge ? Is my understanding correct? What would you recommend? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,318
closed
Remove all uses of six
# What does this PR do? This PR removes all uses of the `six` package since Python 2 is dead and Transformers never had support for it. Those slipped through the crack during review.
07-27-2022 12:22:54
07-27-2022 12:22:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,317
closed
Decouple inference from preprocessing and postprocessing steps from pipeline
### Feature request Each pipeline under the pipeline abstraction has a preprocessing, forward and postprocessing method. In order to use any pipeline it must be initialized with a model, so the preprocessing and postprocessing methods cannot easily be used by other services. It would be great to provide the model just for the forward method, so methods that don't need it can be used independently. For instance, if I want to serve a [zero shot classification pipeline](https://huggingface.co/docs/transformers/v4.21.0/en/main_classes/pipelines#transformers.ZeroShotClassificationPipeline) the model is a required parameter, so I can't just split out the preprocessing logic and send its output to an inference service hosting the model itself. ### Motivation When deploying models using a pipeline I would like to use an inference server and keep any preprocessing and postprocessing logic as a different service. The goal is to improve throughput by using different services that can scale independently while using specialized inference servers for predictions and model optimizations. ### Your contribution If my previous assumptions are correct and there is no workaround that I missed, I would be happy to discuss a possible solution and create a PR
07-27-2022 12:22:14
07-27-2022 12:22:14
cc @Narsil <|||||>@jspablo, this is a very sound request which does make sense. I think it's going to be tricky to implement in a streamlined fashion since a lot of postprocessing actually uses the model (to get the config). We could have the config be on its own, but that requires real care since then we have to be extremely consistent (to use the same location) and that would force users that use custom models (with `pipeline(model=MyAwesomeModel())`) to also pass a config along to make it fit the model. IMO, the easiest route you can take is actually to send a mocked model (like an empty class or something). Override the `forward` method of your model to do the remote calling, and then implement everything that might be required for later steps. Does this make sense? If you have a better proposal/ergonomics suggestion, I am all ears!<|||||>Thank you @Narsil Yes, it makes sense; I tested something similar for using ONNX models before they were included in the pipeline abstraction. I was just looking for a more generic way to reuse any pipeline.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
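A rough, untested sketch of the mocked-model idea from the comment above (the endpoint URL, payload format and `RemoteNLIModel` name are all made up for illustration, and whether the pipeline accepts a plain `nn.Module` may depend on the `transformers` version):

```python
import requests
import torch
from transformers import AutoConfig, AutoTokenizer, pipeline
from transformers.modeling_outputs import SequenceClassifierOutput

CHECKPOINT = "facebook/bart-large-mnli"         # any NLI checkpoint
REMOTE_URL = "http://inference-server/predict"  # placeholder endpoint


class RemoteNLIModel(torch.nn.Module):
    """Mocked model: keeps the config locally, delegates the forward pass to a remote server."""

    def __init__(self):
        super().__init__()
        self.config = AutoConfig.from_pretrained(CHECKPOINT)

    def forward(self, input_ids=None, attention_mask=None, **kwargs):
        payload = {"input_ids": input_ids.tolist(), "attention_mask": attention_mask.tolist()}
        logits = requests.post(REMOTE_URL, json=payload).json()["logits"]
        return SequenceClassifierOutput(logits=torch.tensor(logits))


classifier = pipeline(
    "zero-shot-classification",
    model=RemoteNLIModel(),
    tokenizer=AutoTokenizer.from_pretrained(CHECKPOINT),
)
```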
transformers
18,316
closed
"AttributeError: cls.seq_relationship.weight not found in PyTorch model" when converting pytorch to tensorflow
### System Info - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): 2.9.1 (True) ### Who can help? @Rocketknight1 @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am trying to convert [this model](https://huggingface.co/BramVanroy/bert-base-dutch-cased-hebban-reviews) to a tensorflow checkpoint on the command line. (I know that users can specify `from_pt` but I'd like the models to be present in both formats from the start). The model has been finetuned on a text classification task. When I run ```python from transformers.convert_pytorch_checkpoint_to_tf2 import convert_pt_checkpoint_to_tf convert_pt_checkpoint_to_tf("bert", "/path/to/pytorch_model.bin", "/path/to/config.json", "my_output_dir") ``` I get the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/bram/Python/projects/transformers-finetuner/transformers/src/transformers/convert_pytorch_checkpoint_to_tf2.py", line 326, in convert_pt_checkpoint_to_tf tf_model = load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path) File "/home/bram/Python/projects/transformers-finetuner/transformers/src/transformers/modeling_tf_pytorch_utils.py", line 126, in load_pytorch_checkpoint_in_tf2_model return load_pytorch_weights_in_tf2_model( File "/home/bram/Python/projects/transformers-finetuner/transformers/src/transformers/modeling_tf_pytorch_utils.py", line 210, in load_pytorch_weights_in_tf2_model raise AttributeError(f"{name} not found in PyTorch model") AttributeError: cls.seq_relationship.weight not found in PyTorch model ``` ### Expected behavior No error and a correct conversion of my model to TF.
07-27-2022 09:32:25
07-27-2022 09:32:25
Hi @BramVanroy, you shouldn't ever need to directly call `convert_pt_checkpoint_to_tf`. If you want to make a native TensorFlow checkpoint for that model, then the easiest way would be just: ``` model = TFAutoModelForSequenceClassification.from_pretrained("BramVanroy/bert-base-dutch-cased-hebban-reviews", from_pt=True) model.save_pretrained("bert-base-dutch-cased-hebban-reviews-tf") ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,315
closed
[bugfix] Loading sharded model with torch_dtype='auto' causes TypeError
# What does this PR do? Fixes #18314 ## Before submitting - [n] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [y] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [n] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [na] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [n] Did you write any new necessary tests? ## Who can review? Anyone can review as it is a very small bugfix, but I will tag @sgugger based on git blame.
07-27-2022 09:29:33
07-27-2022 09:29:33
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi there, thanks for the fix! This is a duplicate of #18061 so we will merge this one as it came first :-)<|||||>Hi. Sorry for dup and thanks for the fix! I am closing this PR (and the issue) as it is no longer necessary.
transformers
18,314
closed
Loading sharded model with torch_dtype='auto' causes TypeError
### System Info - `transformers` version: 4.20.1 - Platform: macOS-12.4-arm64-arm-64bit - Python version: 3.9.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Install latest version of `transformers` as above: ```bash pip install pytorch transformers datasets ``` Load a sharded model (the following example creates sharded model for easier reproducibility, but this reproduces on other sharded models as well (at least for BLOOM)). ```python import os, tempfile from transformers import AutoModel model = AutoModel.from_pretrained("distilbert-base-uncased") with tempfile.TemporaryDirectory() as tmp_dir: model.save_pretrained(tmp_dir, max_shard_size="100MB") print(sorted(os.listdir(tmp_dir))) # Now, actually reproduce the problem AutoModel.from_pretrained( tmp_dir, torch_dtype='auto') ``` ### Expected behavior We expect it to load the model without an error. Instead it gives out a TypeError: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) File ~/.pyenv/versions/3.9.13/envs/transformers-bugfix/lib/python3.9/site-packages/torch/serialization.py:308, in _check_seekable(f) 307 try: --> 308 f.seek(f.tell()) 309 return True AttributeError: 'list' object has no attribute 'seek' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) File ~/.pyenv/versions/3.9.13/envs/transformers-bugfix/lib/python3.9/site-packages/transformers/modeling_utils.py:461, in load_state_dict(checkpoint_file) 460 try: --> 461 return torch.load(checkpoint_file, map_location="cpu") 462 except Exception as e: File ~/.pyenv/versions/3.9.13/envs/transformers-bugfix/lib/python3.9/site-packages/torch/serialization.py:699, in load(f, map_location, pickle_module, **pickle_load_args) 697 pickle_load_args['encoding'] = 'utf-8' --> 699 with _open_file_like(f, 'rb') as opened_file: 700 if _is_zipfile(opened_file): 701 # The zipfile reader is going to advance the current file position. 702 # If we want to actually tail call to torch.jit.load, we need to 703 # reset back to the original position. 
File ~/.pyenv/versions/3.9.13/envs/transformers-bugfix/lib/python3.9/site-packages/torch/serialization.py:235, in _open_file_like(name_or_buffer, mode) 234 elif 'r' in mode: --> 235 return _open_buffer_reader(name_or_buffer) 236 else: File ~/.pyenv/versions/3.9.13/envs/transformers-bugfix/lib/python3.9/site-packages/torch/serialization.py:220, in _open_buffer_reader.__init__(self, buffer) 219 super(_open_buffer_reader, self).__init__(buffer) --> 220 _check_seekable(buffer) File ~/.pyenv/versions/3.9.13/envs/transformers-bugfix/lib/python3.9/site-packages/torch/serialization.py:311, in _check_seekable(f) 310 except (io.UnsupportedOperation, AttributeError) as e: --> 311 raise_err_msg(["seek", "tell"], e) 312 return False File ~/.pyenv/versions/3.9.13/envs/transformers-bugfix/lib/python3.9/site-packages/torch/serialization.py:304, in _check_seekable.<locals>.raise_err_msg(patterns, e) 301 msg = (str(e) + ". You can only torch.load from a file that is seekable." 302 + " Please pre-load the data into a buffer like io.BytesIO and" 303 + " try to load from it instead.") --> 304 raise type(e)(msg) 305 raise e AttributeError: 'list' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead. During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) Input In [10], in <cell line: 1>() 3 print(sorted(os.listdir(tmp_dir))) 5 # Now, actually reproduce the problem ----> 6 AutoModel.from_pretrained( 7 Path (tmp_dir), 8 torch_dtype='auto') File ~/.pyenv/versions/3.9.13/envs/transformers-bugfix/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py:446, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 444 elif type(config) in cls._model_mapping.keys(): 445 model_class = _get_model_class(config, cls._model_mapping) --> 446 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) 447 raise ValueError( 448 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" 449 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}." 450 ) File ~/.pyenv/versions/3.9.13/envs/transformers-bugfix/lib/python3.9/site-packages/transformers/modeling_utils.py:2148, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2146 torch_dtype = get_state_dict_dtype(state_dict) 2147 else: -> 2148 one_state_dict = load_state_dict(resolved_archive_file) 2149 torch_dtype = get_state_dict_dtype(one_state_dict) 2150 del one_state_dict # free CPU memory File ~/.pyenv/versions/3.9.13/envs/transformers-bugfix/lib/python3.9/site-packages/transformers/modeling_utils.py:464, in load_state_dict(checkpoint_file) 462 except Exception as e: 463 try: --> 464 with open(checkpoint_file) as f: 465 if f.read().startswith("version"): 466 raise OSError( 467 "You seem to have cloned a repository without having git-lfs installed. Please install " 468 "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder " 469 "you cloned." 470 ) TypeError: expected str, bytes or os.PathLike object, not list ```
07-27-2022 09:15:32
07-27-2022 09:15:32
I've created a PR to fix the problem, as it is an easy fix. The variable names clearly suggest the intended behavior, so I just followed it.<|||||>#18061 fixes this problem. Closing the issue.
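A hedged sketch of the idea behind the fix referenced above: for a sharded checkpoint, `resolved_archive_file` is a list of shard paths, so the dtype probe has to load a single shard rather than pass the whole list to `torch.load`. The helper name below is made up for illustration, and it assumes all shards share one dtype.

```python
import torch


def infer_checkpoint_dtype(resolved_archive_file):
    # For sharded checkpoints this argument is a list of shard paths; probing
    # the first shard is enough to read the parameter dtype.
    checkpoint_file = (
        resolved_archive_file[0] if isinstance(resolved_archive_file, (list, tuple)) else resolved_archive_file
    )
    state_dict = torch.load(checkpoint_file, map_location="cpu")
    return next(iter(state_dict.values())).dtype
```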
transformers
18,313
closed
Fixes torch jit tracing for LayoutLMv2 model (re-open)
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR re-opens #14462 with the current changes in master merged into the branch. Please see the original PR for more details. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @NielsRogge <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-27-2022 08:43:55
07-27-2022 08:43:55
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,312
closed
BLOOM fix module order
# What does this PR do This PR addresses a small issue where the operations of `BloomMLP` are not displayed in the correct order. This is a bit confusing for users, see the related issue: https://huggingface.co/bigscience/bloom/discussions/64 cc @sgugger
07-27-2022 08:13:31
07-27-2022 08:13:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,311
closed
Only use one gpu when generating the hidden states (text embeddings) by "Model output" api
### System Info Report an issue about model output efficiency. I want to get the certain hidden states from the fine-tuned roberta model, then I chose Model Output api([link](https://huggingface.co/docs/transformers/main_classes/output#transformers.utils.ModelOutput)) to get those dense embeddings. However, when executing below model output code, only ONE gpu is used (actually, I have one worker with 4 gpus), which severely hinders inference efficiency. Also, I cannot find the instructions or docs to help to use multiple gpus when using model output. <img width="1397" alt="Screen Shot 2022-07-27 at 15 51 37" src="https://user-images.githubusercontent.com/12815760/181194206-1483d4ce-39db-4f20-9355-a46844f6e39c.png"> <img width="681" alt="Screen Shot 2022-07-27 at 15 52 01" src="https://user-images.githubusercontent.com/12815760/181194264-893ceab2-c63e-4fa7-87c0-a9cf5b530782.png"> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction please refer to the above info form. ### Expected behavior Multiple gpus can be used when using transformers Model Output api.
07-27-2022 08:02:26
07-27-2022 08:02:26
Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? As an aside, we recommend looking into the [accelerate](https://github.com/huggingface/accelerate) package for multi-GPU usage. Thanks!<|||||>Cannot log in to Hugging Face, so just posting the issue here.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
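A minimal sketch of the multi-GPU inference route recommended above, using Accelerate to shard batches across the available GPUs and gather the hidden states back. The model name is a placeholder for the fine-tuned RoBERTa checkpoint, and the script is meant to be launched with `accelerate launch`.

```python
import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader
from transformers import AutoModel, AutoTokenizer

accelerator = Accelerator()
tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # placeholder checkpoint
model = AutoModel.from_pretrained("roberta-base")

texts = ["first sentence", "second sentence", "third sentence", "fourth sentence"]
encodings = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
dataset = list(zip(encodings["input_ids"], encodings["attention_mask"]))
loader = DataLoader(dataset, batch_size=2)

model, loader = accelerator.prepare(model, loader)
model.eval()

hidden_states = []
with torch.no_grad():
    for input_ids, attention_mask in loader:
        outputs = model(input_ids=input_ids, attention_mask=attention_mask)
        # Gather the per-process outputs so every rank ends up with the full batch.
        hidden_states.append(accelerator.gather(outputs.last_hidden_state).cpu())
```

Run it with, e.g., `accelerate launch --num_processes 4 embed.py` to use all four GPUs.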
transformers
18,310
closed
Remove duplicated line
# What does this PR do? Removes a duplicated instantiation of `device`. I removed the second instance of the line to maintain code alignment with the GPT-J implementation of forward. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-26-2022 21:52:05
07-26-2022 21:52:05
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,309
closed
`activation_dropout` in OPT is never used
### System Info main ### Who can help? @patil-suraj, @patrickvonplaten, @LysandreJik ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction https://github.com/huggingface/transformers/blob/ee67e7ad4fd7a766891b68f708cf03e30f609976/src/transformers/models/opt/modeling_opt.py#L279 `activation_dropout` in `modeling_opt.py` is never used. It would not behave as expected if one initializes a model randomly while setting it to non-zero. ### Expected behavior `activation_dropout` is used or removed.
07-26-2022 21:46:27
07-26-2022 21:46:27
cc @ArthurZucker <|||||>Will have a look asap 😀<|||||>Okay I think we should just remove it, we don't use it in either the flax or the tf version and it is probably a typo. 😄 I will open a PR soon to fix that !<|||||>I'm happy to contribute if removing is what we want 😊<|||||>Feel free to open a PR and tag me when it is ready! 😄 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry about the delay! I opened a PR https://github.com/huggingface/transformers/pull/18842 @ArthurZucker <|||||>Gonna merge it to main 🥳
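For illustration only, a standalone sketch of what "using" `activation_dropout` would look like in a decoder MLP, mirroring how sibling models such as BART apply it; this is not the actual `modeling_opt.py` code.

```python
import torch
import torch.nn as nn


class MlpWithActivationDropout(nn.Module):
    def __init__(self, hidden_size=16, ffn_dim=64, activation_dropout=0.1):
        super().__init__()
        self.fc1 = nn.Linear(hidden_size, ffn_dim)
        self.fc2 = nn.Linear(ffn_dim, hidden_size)
        self.activation_fn = nn.ReLU()
        self.activation_dropout = activation_dropout

    def forward(self, hidden_states):
        hidden_states = self.activation_fn(self.fc1(hidden_states))
        # This is the dropout call that is absent from the OPT implementation,
        # which is why dropping the unused config attribute is the simple fix.
        hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
        return self.fc2(hidden_states)


print(MlpWithActivationDropout()(torch.randn(2, 16)).shape)
```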
transformers
18,308
closed
Extracting embeddings through 'pipeline' and 'feature-extraction' not outputting correct length of values.
### System Info transformers==4.20.1 Python3.7 ### Who can help? @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hello, I am trying to extract embeddings from a fine-tuned multi-label model with added vocab. I have used the following example https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb and modified it with https://github.com/huggingface/transformers/issues/1413 to be able to add new tokens. Once I have fine-tuned the model on my own data I have saved it as such: ### Saving the files for inference output_full_model_file = './models/pytorch_distilbert_categorization_complaints_test.bin' output_embedding_model_file = './models/pytorch_distilbert_embedding_complaints_test.bin' output_tokenizer_file = './models/tokenizer_distilbert_complaints_test' torch.save(model, output_full_model_file) torch.save(model.l1, output_embedding_model_file) tokenizer.save_pretrained(output_tokenizer_file) Now I am trying to use 'pipeline' with 'feature-extraction' to find the embeddings for the new vocab I have added, here is the code I use to do so: from transformers import pipeline,AutoTokenizer,DistilBertTokenizer import torch output_full_model_file = './models/pytorch_distilbert_categorization_complaints_test.bin' output_embedding_model_file = './models/pytorch_distilbert_embedding_complaints_test.bin' output_tokenizer_file = './models/tokenizer_distilbert_complaints_test' model = torch.load(output_embedding_model_file) tokenizer1 = AutoTokenizer.from_pretrained(output_tokenizer_file) tokenizer2 = DistilBertTokenizer.from_pretrained(output_tokenizer_file) pipe = pipeline('feature-extraction', model=model, tokenizer=tokenizer1) data_zelle = pipe("zelle") len(data_zelle[0]) 3 ### Expected behavior I was expecting the output to be a single vector of length 768 corresponding to the word 'zelle'; instead it is an array of (3,768). I was expecting a single vector as it is a single word. Am I missing something here? Or is there another way to do what I am attempting? Thanks in advance for the help!!
07-26-2022 21:22:33
07-26-2022 21:22:33
Hey, this is perfectly normal. Your tokenizer most likely has 3 input_ids when tokenizing this string (most likely BOS, "zelle", EOS). Then the output of the model is actually (3, 768). 3 is the sequence_length and is perfectly normal. You're probably more interested in some locations than others, so you can filter them out, but by default the model does output all of this. `sentence-transformers` is another library which does full sentence embedding and will do a reduction for you to get a single `(768,)` representation from this `(seq_length, 768)`. (The model has to be fine-tuned with respect to a reduction, afaik; non-fine-tuned models are hard to reduce blindly.)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
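A small sketch of reducing the `(seq_len, 768)` pipeline output to a single vector by dropping the special tokens and mean-pooling; as noted above, how meaningful the result is depends on how the model was fine-tuned. The model name is a placeholder for the fine-tuned checkpoint.

```python
import numpy as np
from transformers import pipeline

pipe = pipeline("feature-extraction", model="distilbert-base-uncased")
features = np.array(pipe("zelle")[0])   # shape (seq_len, 768), includes [CLS] and [SEP]
token_vectors = features[1:-1]          # keep only the tokens of the word itself
embedding = token_vectors.mean(axis=0)  # shape (768,)
print(features.shape, embedding.shape)
```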
transformers
18,307
closed
layoutlmv3-base-chinese tokenizer could not be loaded.
### System Info ``` File ~/anaconda3/envs/paddle_env/lib/python3.8/site-packages/transformers/models/layoutlmv3/tokenization_layoutlmv3.py:325, in LayoutLMv3Tokenizer.__init__(self, vocab_file, merges_file, errors, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, add_prefix_space, cls_token_box, sep_token_box, pad_token_box, pad_token_label, only_label_first_subword, **kwargs) 305 mask_token = AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token 307 super().__init__( 308 errors=errors, 309 bos_token=bos_token, (...) 322 **kwargs, 323 ) --> 325 with open(vocab_file, encoding="utf-8") as vocab_handle: 326 self.encoder = json.load(vocab_handle) 327 self.decoder = {v: k for k, v in self.encoder.items()} TypeError: expected str, bytes or os.PathLike object, not NoneType ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction none ### Expected behavior ``` from transformers import AutoProcessor, AutoModel, XLMRobertaTokenizer, LayoutLMv3 chinese_processor = AutoProcessor.from_pretrained("./layoutlmv3_base_chinese", apply_ocr=False, local_files_only=True) ``` But we seem need vocab.json and merges.txt to load the LayoutLMv3Tokenizer . So could you provide a function to convert them or confirm whether there is a diff between these two tokenizers?
07-26-2022 17:30:15
07-26-2022 17:30:15
The same problem, can you please solve it?<|||||>**The problem still exists.** processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False) can run successfully, but it fails to load "microsoft/layoutlmv3-base-chinese", such as: processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base-chinese", apply_ocr=False) TypeError: expected str, bytes or os.PathLike object, not NoneType Version: 4.22.0.dev0 Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow Home-page: https://github.com/huggingface/transformers Author: The Hugging Face team (past and future) with the help of all our contributors <|||||>As far as I know, transformers doesn't support Chinese layoutlmv3, but unilm is OK. https://github.com/microsoft/unilm/tree/master/layoutlmv3<|||||>> As far as I know, transformers doesn't support Chinese layoutlmv3, but unilm is OK. https://github.com/microsoft/unilm/tree/master/layoutlmv3 But I see it also requires vocab.json and merges.txt. I cannot load the tokenizer either. https://github.com/microsoft/unilm/blob/master/layoutlmv3/layoutlmft/models/layoutlmv3/tokenization_layoutlmv3.py ![1663659780(1)](https://user-images.githubusercontent.com/29231853/191197738-3eee84fe-57c2-47b6-ab43-1a91ea2fb823.png) How did you solve it, please?
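An untested workaround sketch for the Chinese checkpoint, prompted by the `XLMRobertaTokenizer` import in the issue body: the assumption (not verified here) is that the checkpoint ships a sentencepiece tokenizer model instead of `vocab.json`/`merges.txt`, so the slow XLM-R tokenizer can read it while the image side is handled by a separately constructed feature extractor.

```python
from transformers import LayoutLMv3FeatureExtractor, XLMRobertaTokenizer

# Assumption: the repo (or a local clone of it) contains a sentencepiece tokenizer file.
tokenizer = XLMRobertaTokenizer.from_pretrained("microsoft/layoutlmv3-base-chinese")
feature_extractor = LayoutLMv3FeatureExtractor(apply_ocr=False)

encoding = tokenizer("这是一个测试", return_tensors="pt")
print(encoding["input_ids"].shape)
```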
transformers
18,306
closed
Migrate metrics used in all examples from Datasets to Evaluate
The metrics are slowly leaving [Datasets](https://github.com/huggingface/datasets) (they are being deprecated as we speak) to move to the [Evaluate](https://github.com/huggingface/evaluate) library. We are looking for contributors to help us with the move. Normally, the migration should be as easy as replacing the import of `load_metric` from Datasets to the `load` function in Evaluate. See a use in this [Accelerate example](https://github.com/huggingface/accelerate/blob/1486fa35b19abc788ddb609401118a601e68ff5d/examples/nlp_example.py#L104). To fix all tests, a dependency to evaluate will need to be added in the [requirements file](https://github.com/huggingface/transformers/blob/main/examples/pytorch/_tests_requirements.txt) (this is the link for PyTorch, there is another one for the Flax examples). If you're interested in contributing, please reply to this issue with the examples you plan to move.
07-26-2022 16:37:16
07-26-2022 16:37:16
Hi, I would like to try to move Pytorch examples.<|||||>Please go ahead and make a PR when you're ready then @atturaioe. You can check our [contributing guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) to get started.<|||||>@sgugger , I would like to work on the tensorflow examples. <|||||>Please go ahead @VijayKalmath :-)<|||||>@sgugger , I wanted to know if there are tests to run the examples in the tensorflow folder.<|||||>No tests for TensorFlow yet, we'll add those in the near future.<|||||>@sgugger , I don't think anyone is working on the flax examples. Will open a new PR for migrating the flax examples?<|||||>By all means, please go ahead! The only change will be that those are actually tested.<|||||>@sgugger, can I work on `research_projects` examples?<|||||>@atturaioe We don't update research projects to the new APIs as they use pinned versions of libraries. So this one should stay as is unless the author want to do a general update of them. Thanks for offering though!
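A minimal before/after sketch of the change described in this issue (plus adding `evaluate` to the relevant requirements file):

```python
# before: metric loading via datasets (now deprecated)
from datasets import load_metric
metric = load_metric("accuracy")

# after: the same metric via evaluate
import evaluate
metric = evaluate.load("accuracy")
```

In both cases the downstream calls (`metric.add_batch(...)`, `metric.compute()`) stay the same.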
transformers
18,305
closed
Add ONNX support for Pegasus
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> This PR adds ONNX support for the Pegasus model. Linked to https://github.com/huggingface/transformers/issues/16308 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @lewtun <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-26-2022 16:01:01
07-26-2022 16:01:01
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @pramodith, this PR looks excellent! Did you try to convert a `Pegasus` model with this config? I'm pinging @lewtun for reviewing.<|||||>@ChainYo besides the pytests, I also tried executing the Onnx export and validation using this script. The difference between the pytorch tensors and the Onnx tensors was in the order of 1e-5. However, I did notice that the onnx export would fail when I set use_past=True, not sure if I need to do anything about that.

```python
from transformers import AutoConfig, AutoTokenizer, PegasusForCausalLM, PegasusForConditionalGeneration, PegasusModel
from transformers.models.pegasus import PegasusOnnxConfig
from transformers.onnx import export, validate_model_outputs
from pathlib import Path


def check_onnx_model(task):
    if task == "default":
        config = AutoConfig.from_pretrained("google/pegasus-large")
        model = PegasusModel.from_pretrained("google/pegasus-large")
        tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
    elif task == "seq2seq-lm":
        config = AutoConfig.from_pretrained("google/pegasus-xsum")
        model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum", add_cross_attention=True)
        tokenizer = AutoTokenizer.from_pretrained("google/pegasus-xsum")
    else:
        config = AutoConfig.from_pretrained("google/pegasus-xsum")
        model = PegasusForCausalLM.from_pretrained("google/pegasus-xsum")
        tokenizer = AutoTokenizer.from_pretrained("google/pegasus-xsum")

    onnx_config = PegasusOnnxConfig(config, task=task, use_past=False)
    onnx_path = Path("model.onnx")
    onnx_inputs, onnx_outputs = export(tokenizer, model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
    print(onnx_inputs)
    print(onnx_outputs)
    print(validate_model_outputs(onnx_config, tokenizer, model, onnx_path, onnx_outputs, onnx_config.atol_for_validation))


check_onnx_model("causal-lm")
```
<|||||>> @ChainYo, besides the pytests, I also tried executing the Onnx export and validation using this script. The difference between the PyTorch tensors and the Onnx tensors was in the order of 1e-5. However, I did notice that the onnx export would fail when I set use_past=True, not sure if I need to do anything about that.

I'm not sure you can use past when you convert your model to ONNX. You have to rewrite the logic when using that feature with ONNX.

**Edit**: I never converted a model which uses `past`, but it could work if your config inherits from `OnnxSeq2SeqConfigWithPast`, which is the case.<|||||>> > @ChainYo, besides the pytests, I also tried executing the Onnx export and validation using this script. The difference between the PyTorch tensors and the Onnx tensors was in the order of 1e-5. However, I did notice that the onnx export would fail when I set use_past=True, not sure if I need to do anything about that.
>
> I'm not sure you can use past when you convert your model to ONNX. You have to rewrite the logic when using that feature with ONNX.
>
> **Edit**: I never converted a model which uses `past`, but it could work if your config inherits from `OnnxSeq2SeqConfigWithPast`, which is the case.

Gotcha! I'll wait for any comments by the reviewers in that case.<|||||>Hi, I just checked the failed test in the CI/CD pipeline, and it doesn't seem to come from your code. Could you please try to fetch/rebase your branch with the main upstream branch? We will see if it's solved.
[CI/CD error](https://app.circleci.com/pipelines/github/huggingface/transformers/44492/workflows/4da69f9f-f332-4ba8-97e7-3446ec6a559e/jobs/517244?invite=true#step-111-3911) `FAILED tests/models/bart/test_modeling_tf_bart.py::TFBartModelTest::test_decoder_model_past_large_inputs`<|||||>@ChainYo I've rebased and all the tests pass now!<|||||>> @ChainYo I've rebased, and all the tests pass now! Super let's just wait for a reviewer now!
transformers
18,304
closed
Ignore small batches of examples for CLM training
# What does this PR do? Fix #17875. In https://github.com/huggingface/transformers/blob/a5d504834d01dd1c7edf7c46d93c080cb5274eec/examples/pytorch/language-modeling/run_clm.py#L440 we drop the remainder only when `total_length >= block_size`. However, if a batch of examples is too short to reach `block_size`, that batch is kept as-is. If the whole dataset has only that batch, it is fine. Otherwise, we might get examples of different lengths and an error `expected sequence of length X at dim 1 (got Y)` during training. The same question is also asked on [StackOverflow](https://stackoverflow.com/questions/71166789/huggingface-valueerror-expected-sequence-of-length-165-at-dim-1-got-128). With this fix, an edge case is that the whole dataset is dropped (if it contains only small batches that are all dropped) - but this is very unlikely. **We can throw an error in this case however**. **Once the fix is approved, I can apply the same change to the TF/Flax example scripts too (if relevant there)**
07-26-2022 14:29:13
07-26-2022 14:29:13
_The documentation is not available anymore as the PR was closed or merged._<|||||>I was thinking of throwing an error with some information. But since this has been discussed several times and the decision is made, I am going to close this PR.<|||||>If you throw an error later on when the dataset is empty, that fix then works for me (but there is a simpler way, which is just to remove the test).
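For context, a standalone sketch of the behavior the PR proposed: drop a grouped batch entirely when its concatenated length is shorter than `block_size`, so that every emitted example has the same length. This mirrors the `group_texts` function in `run_clm.py` but is simplified here.

```python
block_size = 4


def group_texts(examples):
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    else:
        total_length = 0  # too short for a single block: drop the batch instead of emitting a ragged example
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }


print(group_texts({"input_ids": [[1, 2], [3]]}))        # {'input_ids': []} -- dropped
print(group_texts({"input_ids": [[1, 2, 3], [4, 5]]}))  # one block of 4, remainder dropped
```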
transformers
18,303
closed
Owlvit test fixes
# What does this PR do? - Fixes assertion error in slow OWL-ViT integration tests - Fixes incompatible cpu/gpu test errors caused by `OwlViTForObjectDetection.normalize_grid_corner_coordinates()` - Sets `attention_mask` input argument as optional
07-26-2022 13:12:22
07-26-2022 13:12:22
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,302
closed
Add PyTorch 1.11 to past CI
# What does this PR do? Add PyTorch 1.11 to past CI (as PyTorch 1.12 is the current latest version). I will re-run past CI once this PR is merged (for [this comment](https://github.com/huggingface/transformers/issues/18181#issuecomment-1189047657)).
07-26-2022 12:39:03
07-26-2022 12:39:03
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,301
closed
Added italian translation for parallelism.mdx
## What does this PR do? Italian translation of parallelism.mdx See issue: https://github.com/huggingface/transformers/issues/17459 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/17459 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @mfumanelli <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-26-2022 12:18:23
07-26-2022 12:18:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18301). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you, @Xpiri, for the translation! @sgugger it seems this doc no longer exists in the current documentation (it used to be in the `how-to-guides` section). What do you think we should do?<|||||>Yes, this document has been split into several pages, performance.mdx and each perf_xxx document. So we should translate those pages and not the document that doesn't exist anymore.<|||||>Well, I guess if the other documents use parts that are identical to this document, perhaps we could just split this file. @mfumanelli I would also suggest to review the main thread, just to make sure we are not working on an older version of the docs 😄 <|||||>Hey @Xpiri! Sorry for the confusion. I have updated issue #17459 with the docs in which `parallelism` was partitioned. Would it be a bad idea to update this PR with this new info? We would completely understand if this is the case.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,300
closed
Fix Sylvain's nits on the original KerasMetricCallback PR
null
07-26-2022 11:56:40
07-26-2022 11:56:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,299
closed
Update push_to_hub to leverage HTTP API
### Feature request Would be great to update the current `model.push_to_hub`, `tokenizer.push_to_hub`, `processor.push_to_hub`, etc. functionalities to leverage the new `upload_file` method of the `huggingface_hub` library, which only leverages HTTP calls instead of git, making the user experience much, much nicer. ### Motivation I'm porting models to the hub, and in case I need to update for instance a tiny file (like preprocessor_config.json), the current push_to_hub first pulls all files from the hub (including the weights of the PyTorch checkpoint), then cleans it, etc. This takes a long time (> 10 minutes). This wouldn't be the case anymore with the new [upload_file](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) method. ### Your contribution Docs is here: https://huggingface.co/docs/huggingface_hub/main/en/how-to-upstream
07-26-2022 11:11:33
07-26-2022 11:11:33
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing as this has been fixed per #18366
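For reference, a minimal sketch of the HTTP-based upload this request points to; the repo id and file name are placeholders.

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="preprocessor_config.json",  # local file to push
    path_in_repo="preprocessor_config.json",
    repo_id="username/my-model",
)
```

Only the single file is transferred, so no local clone or LFS pull of the model weights is needed.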
transformers
18,298
closed
Fix failing tests for XLA generation in TF
# What does this PR do? This PR fixes the failing tests for text generation in XLA mentioned in #17935. The issue was that the failing models had additional settings limiting the sequence length, which weren’t updated when creating the config-objects for the tests. Fixes # 17935 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante @LysandreJik
07-26-2022 10:26:23
07-26-2022 10:26:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @dsuess! Thank you so much for looking into this -- I can confirm that the tests are passing after these changes. This implies two things: 1. The models are indeed XLA-ready, so I will tick their boxes in the original issue #17935 2. The tests were failing due to poor model parametrization at test time -- I will open a follow-up PR to fix them<|||||>cc @sgugger (with these changes the XLA tests will run instead of being skipped, but the root cause still needs to be addressed to avoid this ad hoc logic, see my comment above)
transformers
18,297
closed
Add Flax BART pretraining script
# What does this PR do? Fixes #6743 #18030 #4151 #5096. Adds Flax script for BART pretraining. Inspired by @patil-suraj's [suggestion](https://github.com/huggingface/transformers/issues/6743#issuecomment-1046717336), I modified the @morganmcg1's [DataCollatorForDenoisingTasks](https://github.com/morganmcg1/rotobart/blob/main/data_collator.py#L223) to create a BART denoising pretraining script in Flax. Implementation details from the paper: - [x] Text infilling - [x] Sentence permutation - [ ] Large training batch sizes (will add gradient accumulation for this and other Flax language modeling scripts in the next PR) [Training statistics](https://tensorboard.dev/experiment/Maw62QlaSXWS0MOf2V2lbg/) when pre-train bart-base in Norwegian on a single TPUv3-8 pod: <img src="https://imgur.com/N3PnWtn.png" width="300" height="220" /> <img src="https://i.imgur.com/1SKROC2.png" width="300" height="220" /> <img src="https://i.imgur.com/KDACVb4.png" width="300" height="220" /> ## Who can review? cc potential reviewers: @patrickvonplaten, @patil-suraj, @sgugger, @LysandreJik
07-26-2022 10:09:18
07-26-2022 10:09:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>This looks very nice to me! Just a small question, why is the file called `run_bart_dlm_flax.py`, *i.e.* why the `dlm` and not just `lm` ?<|||||>@patil-suraj could you take a look here as well ?<|||||>> This looks very nice to me! Just a small question, why is the file called `run_bart_dlm_flax.py`, _i.e._ why the `dlm` and not just `lm` ? @patrickvonplaten It stands for `denoising language modeling`, which is consistent with `mlm` and `clm`. What do you think?<|||||>Your reviews are extremely helpful @sanchit-gandhi! All of them have been resolved. Thank you! <|||||>@patil-suraj Could you have a look at this PR? I'd love to hear your feedback.<|||||>Hey @duongna21! Great job on this PR, and thank you for addressing the comments! For the time being, let's hold off on gradient accumulation. If you require it for your personal experiments, it can be achieved quite easily using the Optax wrapper [MultiSteps](https://optax.readthedocs.io/en/latest/api.html#multi-step-update). However, in my experience, this wrapper is pretty memory inefficient and does not yield particularly good performance; it applies a dummy update of zeros for $K-1$ train steps (redundant), and then the accumulated gradient update step on the $K$-th train step. Instead, writing a custom gradient accumulation training loop is more efficient (c.f. [seq2seq-speech](https://github.com/sanchit-gandhi/seq2seq-speech/blob/95857f52b4d8dc5c1ab48835899d6f66e86ba9b1/run_flax_speech_recognition_seq2seq.py#L1226)), but this involves quite a lot of additional code and is significantly more involved, so I'm not particularly in favour of using it for these streamlined examples scripts! Otherwise, all the implementation TODOs are complete, the code review approved, and the training results on track, so happy to go ahead and merge!<|||||>@sanchit-gandhi Yeah, I agree that gradient accumulation shouldn't be added until there is a more elegant way to implement it, so feel free to merge this PR. Thank you for the helpful advice!<|||||>@duongna21 I wonder why the [`permute_sentences`](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_bart_dlm_flax.py#L318) only contains the pad token rather the full stop token?
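A minimal sketch of the `optax.MultiSteps` wrapper discussed in the comments above, accumulating gradients over `k` train steps before applying an update; the hyper-parameters are placeholders.

```python
import optax

k = 4  # number of accumulation steps
base_optimizer = optax.adamw(learning_rate=1e-4, weight_decay=0.01)
optimizer = optax.MultiSteps(base_optimizer, every_k_schedule=k)

# Used like any other optax optimizer:
#   opt_state = optimizer.init(params)
#   updates, opt_state = optimizer.update(grads, opt_state, params)
# Updates are all-zero for k - 1 steps and the accumulated update on the k-th step,
# which is the memory/performance caveat described in the review comment.
```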
transformers
18,296
closed
[WIP] Add Efficientformer
# What does this PR do? This PR adds Efficientformer, a model that has similar latency as MobileNets, but achieves better accuracy on ImageNet. Paper: https://arxiv.org/abs/2206.01191 Code and weights: https://github.com/snap-research/EfficientFormer To-do: - [ ] Improve documentation - [ ] Verify tests pass Fixes #18041 ## Who can review? @NielsRogge
07-26-2022 10:04:45
07-26-2022 10:04:45
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @novice03 @NielsRogge @alaradirik :) - what is the status of adding this model? @novice03 if you do not what to continue working on it I can pick it up from here.<|||||>Hi @Bearnardd, I'm afraid I don't have time in the next few weeks to continue working on this model. But, please pick it up from here if you can. Thanks!<|||||>Hi @alaradirik - I have just started working on this PR locally and I have applied most of the style changes that you have requested and in the following days I will apply the rest of the changes that you have asked for. Is there any possibility to catch you on Slack or on the other medium since I am pretty confident I will have some questions regarding this task. Thanks!<|||||>Hi @Bearnardd, of course, could you give me your email address? I can create a Slack channel and invite you. If possible, it'd be great if @novice03 could add you as a collaborator to his transformers repo and you work on the same repo/branch so that he gets credit for the PR too.<|||||>Hi @alaradirik I have sent you my email address on LinkedIn also great idea about the collaboration. @novice03 if it suits you well you can send me an invitation :)<|||||>Hi @novice03 ! - I have done some work on top of your changes. Not everything is ready but I think that I am in good spot to push the changes and get some review before going further. Would you mind adding me as a collaborator to your repo so I can push there directly or do you want me to open a new PR on top of yours?<|||||>Hello @Bearnardd, I added you as a collaborator on my repo. Thanks!
transformers
18,295
closed
Apply type correction to `TFSwinModelOutput`
# What does this PR do? ## Problem & Fix This PR fixes a small type error on `TFSwinModelOutput`. `TFSwinModelOutput`'s `pooler_output` is actually optional, which can be disabled by using the `add_pooling_layer=False` option on `TFSwinModel` or `TFSwinMainLayer`. This fix applies the `Optional` type to `pooler_output` to fix this error. ## Review This PR is related to the Swin Transformer. R: @amyeroberts, @NielsRogge
07-26-2022 09:08:43
07-26-2022 09:08:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,294
closed
Support NLLB's LID model
### Model description Thanks for supporting NLLB and closing this issue https://github.com/huggingface/transformers/issues/18043. I'm wondering if huggingface can further support the language identification model of NLLB? "LID (Language IDentification) model to predict the language of the input text." ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/facebookresearch/fairseq/tree/nllb#lid-model
07-26-2022 08:46:52
07-26-2022 08:46:52
I figured out how to use it. No issue now.<|||||>First download the model: `wget https://dl.fbaipublicfiles.com/nllb/lid/lid218e.bin`. Then use it for inference:
```python
import fasttext

pretrained_lang_model = "lid218e.bin"
model = fasttext.load_model(pretrained_lang_model)

text = "これ、浅草に、行きますか"
predictions = model.predict(text, k=1)
print(predictions)
```
transformers
18,293
closed
Error while loading a pre-trained wav2vec2 model
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-1085-azure-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.9.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @patrickvonplaten , @anton-l ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have been using the code from [this](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining) to pre-train a wav2vec2 large model (model_name_or_path : "facebook/wav2vec2-large-lv60"). After the training is completed and the model is saved, I am trying to load the saved model using `model = Wav2Vec2ForPreTraining.from_pretrained("/path/to/model", local_files_only=True,)` This results in an error: ``` RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForPreTraining: size mismatch for wav2vec2.feature_extractor.conv_layers.1.conv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 3]). size mismatch for wav2vec2.feature_extractor.conv_layers.2.conv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 3]). size mismatch for wav2vec2.feature_extractor.conv_layers.3.conv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 3]). size mismatch for wav2vec2.feature_extractor.conv_layers.4.conv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 3]). size mismatch for wav2vec2.feature_extractor.conv_layers.5.conv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 2]). size mismatch for wav2vec2.feature_extractor.conv_layers.6.conv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 2]). size mismatch for wav2vec2.feature_projection.projection.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 512]). size mismatch for wav2vec2.encoder.pos_conv_embed.conv.weight_v: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 64, 128]). size mismatch for wav2vec2.encoder.layers.0.attention.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). size mismatch for wav2vec2.encoder.layers.0.attention.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). size mismatch for wav2vec2.encoder.layers.0.attention.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). size mismatch for wav2vec2.encoder.layers.0.attention.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). 
size mismatch for wav2vec2.encoder.layers.0.feed_forward.intermediate_dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([4096, 1024]). size mismatch for wav2vec2.encoder.layers.0.feed_forward.output_dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 4096]). size mismatch for wav2vec2.encoder.layers.1.attention.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). . . . size mismatch for wav2vec2.encoder.layers.23.feed_forward.output_dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 4096]). size mismatch for quantizer.codevectors: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1, 640, 384]). size mismatch for quantizer.weight_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([640, 512]). size mismatch for project_hid.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 1024]). size mismatch for project_q.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method. ``` I am using a custom dataset. In the training I am calling a pre-trained model instead of initiating a new model using config file `model = Wav2Vec2ForPreTraining.from_pretrained(model_name_or_path)` ### Expected behavior The model is supposed to load properly with all layers and weights.
07-26-2022 08:42:03
07-26-2022 08:42:03
Hey @Aaryan369, Is there any way you could upload the checkpoint to the Hub (maybe as a private one if the weights are sensitive?) Happy to take a deeper look then, but in short the above error message shouldn't happen. There seems to be a mismatch with the config and the model weights<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,292
closed
Add TFAutoModelForImageClassification to pipelines.py
# What does this PR do? Add `TFAutoModelForImageClassification` to `pipelines.py`. Fix the test failure mentioned [here](https://github.com/huggingface/transformers/pull/18079#issuecomment-1194352739). Here is the [failed job run](https://github.com/huggingface/transformers/runs/7492640649?check_suite_focus=true)
07-26-2022 07:34:34
07-26-2022 07:34:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,291
closed
Couldn't run the run_clip.py successfully
### System Info I couldn't run the code successfully following the README.md (https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#readme)。 """ COCO_DIR = "data" ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir=COCO_DIR) """ """ python examples/pytorch/contrastive-image-text/run_clip.py \ --output_dir ./clip-roberta-finetuned \ --model_name_or_path ./clip-roberta \ --data_dir ./data \ --dataset_name ydshieh/coco_dataset_script \ --dataset_config_name=2017 \ --image_column image_path \ --caption_column caption \ --remove_unused_columns=False \ --do_train --do_eval \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="64" \ --learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \ --overwrite_output_dir \ --push_to_hub """ The errors are: FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/ydshieh/coco_dataset_script/resolve/main/data/train2017.zip - ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction """ COCO_DIR = "data" ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir=COCO_DIR) """ """ python examples/pytorch/contrastive-image-text/run_clip.py \ --output_dir ./clip-roberta-finetuned \ --model_name_or_path ./clip-roberta \ --data_dir ./data \ --dataset_name ydshieh/coco_dataset_script \ --dataset_config_name=2017 \ --image_column image_path \ --caption_column caption \ --remove_unused_columns=False \ --do_train --do_eval \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="64" \ --learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \ --overwrite_output_dir \ --push_to_hub """ The errors are: FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/ydshieh/coco_dataset_script/resolve/main/data/train2017.zip - ### Expected behavior run the code successfully
07-26-2022 07:31:35
07-26-2022 07:31:35
cc @ydshieh :)<|||||>Hi, @lchwhut I believe you already download the (real) coco datasets into `COCO_DIR = "data"`. In this case, `--data_dir ./data` should be changed to the **absolute path** of the data directory, instead of `./data`. Please let me know if this solves the issue, thank you. (you can use `--data_dir $PWD/data`) <|||||>> Hi, @lchwhut > > I believe you already download the (real) coco datasets into `COCO_DIR = "data"`. In this case, `--data_dir ./data` should be changed to the **absolute path** of the data directory, instead of `./data`. > > Please let me know if this solves the issue, thank you. > > (you can use `--data_dir $PWD/data`) thx, it works. <|||||>Hello, i think the "load_dataset" doesn't load the data i have downloaded unless "streaming=True" is set. But if i set that, i obtained IterableDataset(IterableDatasetDict) rather than Dataset(DatasetDict). How can i load the data i have downloaded? <img width="1244" alt="image" src="https://user-images.githubusercontent.com/27990344/186414346-37107f40-ac97-4745-95c0-6c136834d1c2.png"> @ydshieh<|||||>Hi @lchwhut , could you explain what kind of issue you have when you say `the "load_dataset" doesn't load the data i have downloaded?` <|||||>![image](https://user-images.githubusercontent.com/27990344/186424214-f1a81fc9-c3a6-43ff-9616-b0890123df67.png) Here<|||||>The train size of "ds" i got is 80, but the real size is at least greater than 20,000. As you see, the space occupied by the train2012.zip is 19GB<|||||>OK, will take a look<|||||>@lchwhut Before I take a more close look, could you try maybe `rm -rf` the directory `/root/.cache/huggingface/datasets/`. Then running `load_dataset` without `streaming`.<|||||>I tried it before and didn't find a difference<|||||>Hi @lchwhut I am not able to reproduce the issue. Could you share what's your `datasets` version? You can get it by ``` pip show datasets ``` Here is what I got with `print(ds)` ```bash >>> print(ds) DatasetDict({ train: Dataset({ features: ['image_id', 'caption_id', 'caption', 'height', 'width', 'file_name', 'coco_url', 'image_path'], num_rows: 591753 }) validation: Dataset({ features: ['image_id', 'caption_id', 'caption', 'height', 'width', 'file_name', 'coco_url', 'image_path'], num_rows: 25014 }) test: Dataset({ features: ['image_id', 'caption_id', 'caption', 'height', 'width', 'file_name', 'coco_url', 'image_path'], num_rows: 40670 }) }) ```<|||||> My datasets version is 2.0.0. The issue was solved when "rm -rf /root/.cache/huggingface/datasets/". I just tried "rm -rf /root/.cache/huggingface/datasets/ydshieh___coco_dataset/" before because there was other important data(310GB), really sorry about that Orz. If i specify another cache_dir in load_dataset(like: ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir=COCO_DIR, cache_dir="test_load")), The issue also would be solved. Actully, I follow these steps and the issue can reproduce: 1. download dataset from "https://huggingface.co/datasets/ydshieh/coco_dataset_script/tree/main/dummy_data" into data folder 2. "ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir=COCO_DIR)", and than get the "ds" of size 80 3. "rm data/*" and get the "ds" following the readme(https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/README.md) to get the 19GB dataset 4. rm -rf /root/.cache/huggingface/datasets/ydshieh___coco_dataset/ 5. 
"ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir=COCO_DIR)", and than get the "ds" of size 80 6. if specify another cache_dir or streaming=true, i will get the "ds" of real size(591753) So, rm -rf ydshieh___coco_dataset is not enough, the data configuration is kept and will be reused when COCO_DIR stay the same. The old data configuration isn't used when i specify another cache_dir or streaming=True. Thanks a lot for you help!<|||||>@lchwhut Thank you for the detailed information. Glad it works for you now. It's probably good for me to make a comment on my dataset page regarding this.
transformers
18,290
closed
Converting facebook/opt-13b to onnx
### System Info - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.4.0-72-generic-x86_64-with-debian-buster-sid - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @SaulLu @patrickvonplaten @Narsil @gante @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I used the PR [here](https://github.com/huggingface/transformers/pull/17771) to convert facebook/opt-13b to onnx. You can see the discussion I had [here](https://github.com/huggingface/optimum/issues/202). Now, when I want to do the conversion, after fixing some problems, I face a problem whith no error and I'm not sure how should I solve it. I got opt-13, finetuned it on a custom dataset and saved it locally. The model is saved in 3 parts: `pytorch_model-00001-of-00003.bin`, `pytorch_model-00002-of-00003.bin` , and `pytorch_model-00003-of-00003.bin`. ![image](https://user-images.githubusercontent.com/24753756/180838890-e90b1be6-b97e-4870-ab57-e2d5cee9b964.png) Now I want to convert it to onnx using `python -m transformers.onnx --model=local-pt-checkpoint --feature=causal-lm` but use the local saved model (local-pt-checkpoint) and not facebook/opt-13b from huggingfacehub. Here is the code in jupyter on a system with enough memory: ``` !pip install git+https://github.com/kargarisaac/transformers.git@opt_onnx_13b !pip install onnxruntime | tail -1 !pip install onnx | tail -1 !pip install "optimum[onnxruntime]==1.3.0" -q ``` ``` !python -m transformers.onnx --model=local-folder/ onnx/ --feature=causal-lm ``` Now I get the following message and no onnx file is saved. How can I solve this? It doesn't work for the main facebook/opt-13b model too. ``` Using framework PyTorch: 1.12.0 Overriding 1 configuration item(s) - use_cache -> False ``` ### Expected behavior I expect to save the converted onnx model. The same command works for all other opt versions but not for opt-13b. I forked the mentioned PR and changed it a bit, `tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)` because it was complaining about fast tokenizer. You can see the repo [here](https://github.com/kargarisaac/transformers.git) on branch `opt_onnx_13b`.
07-25-2022 22:31:31
07-25-2022 22:31:31
@mfuntowicz Maybe for visibility on ONNX export.<|||||>I think the problem was memory. Now it works but I get the same output about use_cache which I assume is just a warning. Anyway the model is now converted. I have another problem now. I quantize the model using: ```python from onnxruntime.quantization.calibrate import CalibrationMethod from onnxruntime.quantization import quantize_dynamic, QuantType, quantize_static quantize_dynamic( "onnx2/model.onnx", "onnx-quantized2/model-int8.onnx", weight_type=QuantType.QUInt8, use_external_data_format=True ) ``` The model output is not good at all now: ``` This is an award winning short story titled The Drive. This story titled The A A story A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A ``` It should be sth like this: ``` This is an award winning short story titled The Drive. This story is written with descriptive language, described in detail. This is the first chapter of The Drive.\n\nIn this chapter I am a professional driver. I am driving a car from San Francisco to Los Angeles. I have a female passenger who is a famous photographer. She is taking a photo of me as I drive.\n\n#action, #driving, #fiction, #funny, #funny, #driving,' ``` What do you think is the problem? <|||||>I cannot tell. Is the ONNX model correct before quantization ?<|||||>@Narsil yes the output before quantization is ok. Any other tool you suggest I can use? Is DeepSpeed an option here or I have to use it during training? I don't have access to data now. Just the mode.<|||||>The only thing that stands out right here is: ``` weight_type=QuantType.QUInt8, ``` It's not necessarily wrong, but I don't recall having using it. If it's the quantization that's causing a problem the best is to open an issue directly at onnxruntime I guess.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,289
closed
Converting batch into jax arrays during training is inefficient
### System Info The Flax Transformers examples use a custom dataloader which yields batches. This includes converting the batch into jax arrays: https://github.com/huggingface/transformers/blob/main/examples/flax/summarization/run_summarization_flax.py#L355 This operation is inefficient, it can take about 700ms to generate a batch during training. On A100 GPUs, this bottleneck leads to lower GPU utilization. The solution would be to set ds.set_format("jax") before loading batches for training. ### Who can help? @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction All examples in flax use this operation for batch generation: https://github.com/huggingface/transformers/blob/main/examples/flax/ ### Expected behavior The fix gives better GPU utilization and faster training (Tested locally on A100 GPUs).
07-25-2022 19:37:21
07-25-2022 19:37:21
Hi @isunitha98selvan. In the example linked, we actually force the batch indices to `numpy` arrays: https://github.com/huggingface/transformers/blob/286a18fa0080dd39bd373008d11d831fbb1a77f1/examples/flax/summarization/run_summarization_flax.py#L345-L346 And then slice the dataset according to these `numpy` arrays. When using JAX, we want to keep everything that's outside of the jit'd function as a `numpy` array, and everything that's inside as `jax.numpy` arrays. The reason being if everything outside of the jit'd function is a `numpy` array, we can make maximal use of JAX's asynchronous dispatch (https://jax.readthedocs.io/en/latest/async_dispatch.html): the CPU can run ahead and compute any `numpy` arrays ahead of time, whilst the GPUs/TPUs tackle the `jax.numpy` arrays inside the jit'd function. In the forward pass of the model, any `numpy` arrays are converted to `jax.numpy` arrays, putting the data on the accelerator device. So whilst it might be slower computing the batches on a per-batch basis like this, when we combine it with a jit'd training function we'll actually train much faster overall 🚀<|||||>Hey @isunitha98selvan, feel free to ask if you have any questions! Otherwise, closing this for now.
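A minimal sketch of the pattern described above: plain NumPy on the host, `jax.numpy` only inside the jit'd step. The `data_loader`/`train_step` names and the placeholder loss are illustrative assumptions, not code from the example script.

```python
import numpy as np
import jax
import jax.numpy as jnp

def data_loader(rng, dataset, batch_size):
    # Host-side batching stays in NumPy so the CPU can run ahead of the
    # accelerator (asynchronous dispatch).
    num_samples = len(dataset["input_ids"])
    perm = np.asarray(jax.random.permutation(rng, num_samples))
    for i in range(num_samples // batch_size):
        idx = perm[i * batch_size : (i + 1) * batch_size]
        yield {k: np.asarray(v)[idx] for k, v in dataset.items()}

@jax.jit
def train_step(params, batch):
    # NumPy inputs are placed on the device here and become jax.numpy arrays.
    loss = jnp.mean(batch["input_ids"] * 0.0)  # placeholder computation
    return params, loss

rng = jax.random.PRNGKey(0)
dataset = {"input_ids": np.ones((32, 8), dtype=np.int32)}
params = {}
for batch in data_loader(rng, dataset, batch_size=4):
    params, loss = train_step(params, batch)
```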
transformers
18,288
closed
[XLA] Improve t5 model performance
This PR improves the performance of t5 model on XLA device. I tested T5ForConditionalGeneration with t5-small config on both GPU and colab TPU and the performance on GPU was improved by >10% while the speedup on TPU is not significant (<1%). I'm not sure if it's ok to skip the `shifted_input_ids` checking on XLA device since this can trigger device host sync. The `torch.ones().to(device)` operation will trigger unnecessary data uploading to XLA device. cc @sgugger
07-25-2022 18:55:53
07-25-2022 18:55:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I've removed this check completely. I think maybe it's fine to skip it since `nn.Embed` will throw an error for negative indices.<|||||>Makes sense, LGTM! Wdyt @patrickvonplaten ?<|||||>Ok to skip this check for me!
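An illustrative sketch (with made-up values) of the kind of change this PR describes: constructing the decoder start tokens directly on the target device instead of creating them on the host and moving them, which avoids an extra host-to-device upload on XLA.

```python
import torch

batch_size, decoder_start_token_id = 4, 0  # illustrative values
device = torch.device("cpu")               # would be the XLA device in practice

# Before: built on the host, then uploaded to the device.
decoder_input_ids = torch.ones((batch_size, 1), dtype=torch.long).to(device) * decoder_start_token_id

# After: allocated directly on the device, no separate upload step.
decoder_input_ids = torch.full(
    (batch_size, 1), decoder_start_token_id, dtype=torch.long, device=device
)
```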
transformers
18,287
closed
Numpy arrays used instead of jax array in example
### System Info The example [here](https://github.com/huggingface/transformers/blob/main/examples/flax/text-classification/run_flax_glue.py#L286) should yield jax device arrays instead of numpy arrays. @patil-suraj ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://github.com/huggingface/transformers/blob/main/examples/flax/text-classification/run_flax_glue.py#L286 ### Expected behavior Should return batches with jax device arrays
07-25-2022 18:04:22
07-25-2022 18:04:22
Answered in https://github.com/huggingface/transformers/issues/18289#issuecomment-1198216304
transformers
18,286
closed
Fix TF bad words filter with XLA
null
07-25-2022 16:36:00
07-25-2022 16:36:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>And thank you for having a look at this issue <3
transformers
18,285
closed
Run_mlm.py for fine-tuning generator(mlm) of electra
Hi, I have a new corpus and want to fine-tune ELECTRA on it for better results. ELECTRA pre-training uses a generator (an MLM) followed by a discriminator. Can running run_mlm.py make my pre-trained ELECTRA work better?
07-25-2022 14:45:19
07-25-2022 14:45:19
Hi @ToanKGO, I think the latest update on the ELECTRA pretraining/fine-tuning is this comment from @LysandreJik :+1: https://github.com/huggingface/transformers/pull/4656#issuecomment-711082850 So you could use e.g. this implementation: https://github.com/richarddwang/electra_pytorch<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,284
closed
Initializing attention weights in T5
@patrickvonplaten @patil-suraj @craffel Excuse me if this question is repeated but I did not find an answer for it In these lines ``` elif isinstance(module, (LongT5Attention, LongT5LocalAttention, LongT5TransientGlobalAttention)): # Mesh TensorFlow attention initialization to avoid scaling before softmax # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/attention.py#L136 d_model = self.config.d_model key_value_proj_dim = self.config.d_kv n_heads = self.config.num_heads module.q.weight.data.normal_(mean=0.0, std=factor * ((d_model * key_value_proj_dim) ** -0.5)) module.k.weight.data.normal_(mean=0.0, std=factor * (d_model**-0.5)) module.v.weight.data.normal_(mean=0.0, std=factor * (d_model**-0.5)) module.o.weight.data.normal_(mean=0.0, std=factor * ((n_heads * key_value_proj_dim) ** -0.5)) if module.has_relative_attention_bias: module.relative_attention_bias.weight.data.normal_(mean=0.0, std=factor * ((d_model) ** -0.5)) if isinstance(module, LongT5TransientGlobalAttention): module.global_relative_attention_bias.weight.data.normal_( mean=0.0, std=factor * ((d_model) ** -0.5) ) ``` from t5 implementation https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/longt5/modeling_longt5.py#L1291 1) we notice that the factor is multiplied by ((d_model * key_value_proj_dim) ** -0.5) for just the query and the output , and with * (d_model**-0.5) for key and value, why? Is there a detailed explanation of that? and still the initial value of the factor is 1.0? 2) Also today I found this issue https://github.com/huggingface/transformers/issues/16749 According to my understanding to this issue and correct me if I am wrong : @patrickvonplaten corrects the initialization but still vague for me is the relation between tying word embedding initialization and language model head initialization in this line https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/t5/modeling_t5.py#L766 and why this condition in not included in longt5 implementation?
07-25-2022 14:31:36
07-25-2022 14:31:36
It also confuses me. @patrickvonplaten @patil-suraj @craffel<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @Arij-Aladel, The reason is that the first T5 model: - https://huggingface.co/t5-base does tie the word embeddings whereas the v1_1 version doesn't - https://huggingface.co/google/t5-v1_1-base => therefore we need to support both use cases in Transformers<|||||>@patrickvonplaten Is that the answer to the word embedding initialization (question 2)? If yes, then sorry, it is still not clear to me: the condition clearly states that the initialization happens only when there is no word tying. So where is the support? And what about the first question, please?<|||||>Sorry I won't have time to look into this anymore. Also note that the questions might find better answers in the forum: https://discuss.huggingface.co/<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
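The first question in this thread was never answered. One possible reading of the Mesh TensorFlow comment cited in the code, offered here purely as a sketch and not as a confirmed answer from the maintainers, is that Mesh TensorFlow's attention omits the usual 1/sqrt(d_kv) scaling before the softmax, so the query initialization absorbs that factor while key and value keep a plain fan-in scale; every standard deviation is additionally multiplied by the config's `initializer_factor`, which defaults to 1.0.

```latex
% Sketch under the assumption above:
\[
\sigma_q = \frac{1}{\sqrt{d_{\text{model}}\, d_{kv}}}
         = \underbrace{\frac{1}{\sqrt{d_{\text{model}}}}}_{\text{fan-in}}
           \cdot
           \underbrace{\frac{1}{\sqrt{d_{kv}}}}_{\text{absorbed softmax scale}},
\qquad
\sigma_k = \sigma_v = \frac{1}{\sqrt{d_{\text{model}}}},
\qquad
\sigma_o = \frac{1}{\sqrt{n_{\text{heads}}\, d_{kv}}}.
\]
```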
transformers
18,283
closed
Add Italian translation of converting_tensorflow_models.mdx
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Italian translation of converting_tensorflow_models See issue: https://github.com/huggingface/transformers/issues/17459 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/17459 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @mfumanelli <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-25-2022 14:22:30
07-25-2022 14:22:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @Xpiri you started a new line many times, could you check? e.g. A partire dalla versione 2.3.0 lo script di conversione è parte di transformers CLI (**transformers-cli**), disponibile in ogni installazione di transformers >=2.3.0 --> A partire dalla versione 2.3.0 lo script di conversione è parte di transformers CLI (**transformers-cli**), disponibile in ogni installazione di transformers >=2.3.0 And at line 1 È disponibile un'interfaccia per i comandi di linea --> È disponibile un'interfaccia a linea di comando Thanks <|||||>Hi @nickprock, yes, in the raw file I started a new line many times, this is something that was also done in some other docs that I found, and I believe it makes it easier to edit the documentation (such as [here](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/custom_models.mdx)) Line 1 was edited, thanks! <|||||>Thanks @Xpiri and @nickprock! 🇮🇹 This LGTM @sgugger.
transformers
18,282
closed
How to freeze GPT-2 model layers with Tensorflow/Keras?
### Feature request I read the post https://github.com/huggingface/transformers/issues/12881. It seems that no progress on freezing GPT layers with Keras has been reported. Are there any updates or workarounds for freezing GPT-2 model layers in TensorFlow? Thank you ### Motivation For fine-tuning, I want to update only some layers ### Your contribution I can test the solution
07-25-2022 13:50:52
07-25-2022 13:50:52
cc @Rocketknight1 @gante <|||||>Hi @kmkarakaya 👋 ⚠️ Disclaimer: this is untested with `.fit()`, please let me know if it works! I'll edit the comment afterwards depending on your answer. The weights can be frozen by setting a `Layer` attribute to `False`, as per [Keras' docs](https://keras.io/guides/transfer_learning/#freezing-layers-understanding-the-trainable-attribute). The original issue you linked attempts to freeze the `.weights` attribute of a model, which looks similar... but not the same :D To reach the layer you want to freeze, the best way is to navigate the code of the original model and find its attribute name. For instance, let's say you have to freeze GPT-2's word embeddings. If you check the [code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_tf_gpt2.py#L322), you see that it is a `Layer` inside `TFGPT2MainLayer` named `wte`. `TFGPT2MainLayer`, in its turn, is a `transformer` attribute in the instantiable classes. In summary, do NOT do this ```python from transformers import TFAutoModelForCausalLM model = TFAutoModelForCausalLM.from_pretrained('gpt2') model.weights[6]._trainable=False model.compile() model.summary() ``` (notice that ALL weights are trainable) and do this instead ```python from transformers import TFAutoModelForCausalLM model = TFAutoModelForCausalLM.from_pretrained('gpt2') model.transformer.wte.trainable=False model.compile() model.summary() ``` (a large amount of weights are now tagged as untrainable)<|||||>@gante thank you for the reply. Actually, I attempted reading the code but I could not locate the transformer block names to use them in freezing. As you suggested the "wte" name can be used to freeze the embedding layer. However, when I want to freeze let's say the first 3 transformer blocks how should I code it? I tried to freeze the first transformer block (model.transformer.h_._0.trainable=False) but generated the error msg: ![image](https://user-images.githubusercontent.com/41159849/185100015-a65423ee-5f23-409e-b383-2ca8c742aebd.png) However,[ in the code](https://github.com/huggingface/transformers/blob/c99e984657b64dd8f19de74405bbf13763ab4f2b/src/transformers/models/gpt2/modeling_tf_gpt2.py#L322), as far as I understand, the block names are like h__._x. ![image](https://user-images.githubusercontent.com/41159849/185100408-66854c2a-156b-4572-91c0-fac1a662fcc7.png) Also, I check these names by "model.transformer.trainable_variables". Could you help me to freeze transformer block weights so that I can partially fine-tune a transformer model? Thanks a lot. <|||||>@gante I now tried the below code: ![image](https://user-images.githubusercontent.com/41159849/185101607-8303dee4-e57e-49d7-a4a4-4738d23ed90b.png) It seems to be working. Do you think that it is the correct way of freezing a transformer block's weights in GPT2? Because, I'm not sure :) Thanks.<|||||>Hey @kmkarakaya 👋 That's true, some names are non-trivial to match, due to internal name mapping. In this case, the layer is named `model.transformer.h[0]`, so setting `model.transformer.h[0].trainable=False` will work. In general, if you see `layer_._{index}`, you can access them as `layer[index]`.<|||||>@gante Thank you for the reply, I noted the info. I checked if I can freeze the model by freezing all the embeddings and the blocks as below. But it seems there is something missing from my attention :) What else should I freeze in a GPT2 model other than the embedding and the blocks? 
![image](https://user-images.githubusercontent.com/41159849/185105549-5bbfe0a7-3c14-49cb-aab0-a4d49153bce4.png) <|||||>If you want to freeze everything, then you should freeze `model.transformer` (which is a layer). I don't know all parameters on top of my head, so if there are parameters remaining you should attempt to find the trainable parameters :)<|||||>I recognized that "wpe" is missing: ![image](https://user-images.githubusercontent.com/41159849/185109289-502c8721-77f3-4288-844b-33501bbc9cce.png) However, I cannot set it to untrainable. ![image](https://user-images.githubusercontent.com/41159849/185108667-b286fb87-fd5f-4eb5-a7b0-77a87151928f.png) Frankly, I don't know what this layer does. Do you have any clue? Thanks <|||||>`wpe` are the position embeddings. Their code is [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_tf_gpt2.py#L329) -- I don't know how to freeze these ones :(<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> `wpe` are the position embeddings. Their code is [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_tf_gpt2.py#L329) -- I don't know how to freeze these ones :( what about a new variable? `model.transformer.wpe= tf.Variable(model.transformer.wpe, trainable=False) `
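A sketch that pulls together the pieces suggested in this thread: freezing the token embeddings, freezing the first N transformer blocks, and (per the last, untested suggestion) wrapping the position embeddings in a non-trainable variable. The checkpoint name and N=3 are illustrative, and the `wpe` workaround is an assumption, not verified behavior.

```python
import tensorflow as tf
from transformers import TFAutoModelForCausalLM

model = TFAutoModelForCausalLM.from_pretrained("gpt2")

# Token embeddings are a Keras layer, so `.trainable` works directly.
model.transformer.wte.trainable = False

# Position embeddings are a raw weight; the workaround proposed above replaces
# them with a non-trainable variable (untested assumption).
model.transformer.wpe = tf.Variable(model.transformer.wpe, trainable=False)

# Freeze the first N transformer blocks; `model.transformer.h` is a list of layers.
N = 3
for block in model.transformer.h[:N]:
    block.trainable = False

model.compile(optimizer="adam")
model.summary()  # the non-trainable parameter count should now be much larger
```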
transformers
18,281
closed
T5 generate with do_sample doesn't work on DeepSpeed Stage 3
### System Info transformers == 4.20.1 python == 3.8.13 OS == ubuntu 20.4 DeepSpeed == 0.6.7 ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction https://github.com/microsoft/DeepSpeed/issues/2022#issuecomment-1158389764 ### Expected behavior All processes run and finish.
07-25-2022 12:47:54
07-25-2022 12:47:54
cc @stas00 <|||||>@lkm2835, looking at your code you linked to, you must use `generate(..., synced_gpus=True)` when using ZeRO stage-3<|||||>@stas00, Thank you for your advice! It works using `synced_gpus`! https://huggingface.co/docs/transformers/v4.20.1/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.synced_gpus<|||||>If possible, could you explain to me why it is not possible in T5 since it works in gpt2 and gptj without `synced_gpus`?<|||||>oops, apologies for the typo - fixed! glad you figured it out, @lkm2835 This has nothing to do with t5 specifically, but just how ZeRO stage3 works. It needs to have all gpus work in sync. So if one gpu finished generating, it has to continue running `forward` because ZeRO distributes all the weight shards to all gpus and if one stops the other gpus can't get the shards they are missing. So it really depends on the situations - sometimes all gpus generate the same output length and then it works w/o syncing, but that's just an accident and can easily break down the road. For more details please see: https://huggingface.co/docs/transformers/main/perf_train_gpu_many#zero-data-parallelism
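A minimal sketch of the fix described above. It assumes the script is launched with the DeepSpeed ZeRO stage 3 launcher (for example `deepspeed run.py`); `synced_gpus` performs a cross-rank synchronization, so it only makes sense in a distributed run where DeepSpeed wraps the model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # wrapped by DeepSpeed ZeRO-3 in the real script

input_ids = tokenizer("translate English to German: Hello world", return_tensors="pt").input_ids
outputs = model.generate(
    input_ids,
    do_sample=True,
    max_new_tokens=32,
    synced_gpus=True,  # keep every rank stepping through forward until all ranks finish
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```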
transformers
18,280
closed
Raise a TF-specific error when importing Torch classes
We've had a couple of reports from users that the class naming is confusing - they import `AutoModel` but don't realize that it's a PyTorch-only class, and the error message doesn't give them any guidance except to tell them to install PyTorch. This PR adds a check to `requires_backends`, so that if the user tries to import a class that requires PyTorch when they don't have it installed, but they do have TF installed instead, it will raise an error that explains the situation and directs them to the TF classes. Fixes #18220.
07-25-2022 12:28:45
07-25-2022 12:28:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>@LysandreJik I'm not sure! I think it's more obvious that "TFAutoModel" is TF-specific, and it's only the PyTorch classes that don't specify the framework in their names. But I guess it's okay to add the inverse error too, one sec!<|||||>@Rocketknight1 NICE! -- Thanks for implementing this. You rock!
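A minimal sketch of the idea behind this PR, not the merged implementation: if a PyTorch-backed class is requested while only TensorFlow is installed, point the user at the `TF*` classes. The function name and the message text are placeholders.

```python
from transformers import is_tf_available, is_torch_available

PYTORCH_IMPORT_ERROR_WITH_TF = (
    "{0} requires the PyTorch library, but it was not found in your environment. "
    "TensorFlow is installed, though, so you probably want the TensorFlow class, "
    "usually named with a 'TF' prefix (e.g. TFAutoModel instead of AutoModel)."
)

def requires_backends_sketch(obj, backends):
    name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
    if "torch" in backends and not is_torch_available() and is_tf_available():
        raise ImportError(PYTORCH_IMPORT_ERROR_WITH_TF.format(name))
    # ...otherwise fall through to the usual per-backend checks
```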
transformers
18,279
closed
'FlavaModelOutput' object has no attribute 'contrastive_logits_per_image'
Hi, I have used the FLAVA model code example from this link: ``` https://huggingface.co/docs/transformers/model_doc/flava#transformers.FlavaModel.forward.example ``` But I am getting the following error: ``` 'FlavaModelOutput' object has no attribute 'contrastive_logits_per_image' ``` Could you please help me solve this issue? Thank you.
07-25-2022 09:29:57
07-25-2022 09:29:57
Hey @ans92! It seems there's an error in the documentation indeed. This code example works with the `FlavaForPreTraining` model, rather than the `FlavaModel`. Would you like to open a PR to fix this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am experiencing this issue as well, and the documentation shows it with the `FlavaModel`. There is also a large number of warnings when loading the `FlavaModel`; is this normal?<|||||>This issue needs re-opening <|||||>Yes agreed, I just ran into the same issue and I also think it needs re-opening. Only following the doc here with `FlavaForPreTraining` works https://huggingface.co/facebook/flava-full
transformers
18,278
closed
Kindly provide a sample dataset used in layoutlmv3.
### Feature request Need a dataset sample to explore and know how it's been prepared. ### Motivation @NielsRogge ### Your contribution @NielsRogge
07-25-2022 09:14:44
07-25-2022 09:14:44
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, Check this thread for more info: https://github.com/NielsRogge/Transformers-Tutorials/issues/123
transformers
18,277
closed
Does 'convert_data2vec_text_original_pytorch_checkpoint_to_pytorch.py' transfer data2vec-text model's parameter well?
### System Info ```shell - `transformers` version: 4.21.0.dev0 - Platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.10.1+cu111 (False) ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am trying to convert own pretrained data2vec-text to Hugginface form, but it looks there are some parameter mismatches on code. In my opinion there seems some troubleshooting parts are included in 'convert_data2vec_text_original_pytorch_checkpoint_to_pytorch.py' 1. import error I pretrained my model with fairseq code, so that I should import it with fairseq one (doesn't work with suggested code with import Data2vecModel from transformers). I tweak this part to load my model with 'Data2vecTextModel' from fairseq, then it works. 2. 'lm_head'? IIRC, data2vec's pretraining procedure updates model parameters with embedding vector(the Avg. of top k output of Transformer Network). So, there should be no 'lm_head', I guess. As I guess, my pre-trained model only has 'regression_head' layer not 'lm_head' layer. However, Data2vecForMaskedLM's architecture (ofcoursely?) tries to transfer lm_head's parameters, which i don't have. To solve this problem, I tweak the code to import Data2vecModel from transformers to transfer headless parameters. ![스크린샷 2022-07-25 오후 4 09 54](https://user-images.githubusercontent.com/71030815/180718595-e31a0d8a-7eef-4e25-86d1-7869fa9a5eb4.png) As you can see, the upper on image is Huggingface's Data2vecTextModel architecture, and the below is Fairseq's one(my own pretrained model from fairseq). Can you guide me if there's something i misunderstood the module? There's no need to transfer 'regression_head', and it's fine to leave blank with 'Pooler'? ### Expected behavior ```shell ``` ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
07-25-2022 07:24:07
07-25-2022 07:24:07
cc @patrickvonplaten <|||||>I solved this issue myself by tweaking the code, leaving the pooler parameters blank and transferring the others. In my opinion, however, there is no MLM task in the data2vec text model's pretraining procedure, so the imported model should be 'Data2VecTextModel', not 'Data2VecTextForMaskedLM'. For data2vec, masked LM is one of the downstream tasks, not the pretraining objective. Pretraining only requires the top K outputs (the latent representations) of the Transformer network for distillation and parameter updates, which is different from the BERT-like approach. Anyway, no pretraining path is currently available in Hugging Face, so there's no need to transfer 'regression_head' if I understand correctly. I'd like to hear other opinions because I may have misunderstood something. Thanks<|||||>Hey @qwer4107, good analysis. We indeed haven't found the time yet to port the pretraining capabilities of the model (happy to help in adding such functionality if you'd be interested though!). The reason we added the class nevertheless is that we want to be able to add pretraining code in the future without breaking backwards compatibility. You're right that for now there is no need to convert the regression_head! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,276
closed
Debertav2 debertav3 TPU : socket closed
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (False) - Tensorflow version (GPU?): 2.8.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: TF : 2.8.2 colab / 2.4 Kaggle TPU : v2 and v3 ### Who can help? @Rocketknight1 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I tried to launch a script with a simple classification problem but got the error "socket close". I tried with deberta small and base so I doubt it is a memory error. Moreover I tried with Kaggle (TPUv3) and Colab (TPUv2). The same script with a roberta base model works perfectly fine. The length I used was 128. I created the model using this : ``` def get_model() -> tf.keras.Model: backbone = TFAutoModel.from_pretrained(cfg.model_name) input_ids = tf.keras.layers.Input( shape=(cfg.max_length,), dtype=tf.int32, name="input_ids", ) attention_mask = tf.keras.layers.Input( shape=(cfg.max_length,), dtype=tf.int32, name="attention_mask", ) x = backbone({"input_ids": input_ids, "attention_mask": attention_mask})[0] x = x[:, 0, :] # tf.concat([, feature], axis=1) outputs = tf.keras.layers.Dense(1, activation="sigmoid", dtype="float32")(x) return tf.keras.Model( inputs=[input_ids, attention_mask], outputs=outputs, ) ``` It also seems that Embedding is not compatible with bfloat16 : > > InvalidArgumentError: Exception encountered when calling layer "embeddings" (type TFDebertaV2Embeddings). > > cannot compute Mul as input #1(zero-based) was expected to be a bfloat16 tensor but is a float tensor https://colab.research.google.com/drive/1T4GGCfYy7lAFrgapOtY0KBXPcnEPeTQz?usp=sharing ### Expected behavior A regular training like training roberta. On GPU, the same script is working and use 3 or 4 GB.
07-25-2022 01:06:58
07-25-2022 01:06:58
Hi @Shiro-LK, we're seeing other reports of issues with DeBERTa running slowly on TPU with TF - see #18239. I'm not sure what the cause of the "socket closed" error is though - the other user got it to run, but just had a lot of slowdown on one of the layers.<|||||>@Rocketknight1 Thanks for the reply. Yes, I have just looked at it, but it does not seem to use the Keras function "model.fit", so I wonder if that's the issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,275
closed
My model's Hosted Inference API is returning "Internal Server Error" and when fetching the API it eternally loads.
### System Info - `transformers` version: 4.6.1 - Platform: Windows-10-10.0.19043-SP0 - Python version: 3.8.10 - PyTorch version (GPU?): 1.8.1 (True) - Tensorflow version (GPU?): 2.6.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Visit https://huggingface.co/SebastianS/DEL2 2. Try and use the Hosted Interface API. 3. It will return "Internal Server Error" ![image](https://user-images.githubusercontent.com/37946988/180653047-29db495d-30d6-4b36-8240-7e9ac9eb7933.png) ![image](https://user-images.githubusercontent.com/37946988/180653060-3d150330-9f72-4628-8ffd-cef58d672935.png) ### Expected behavior The model should be loaded and then generate text.
07-24-2022 14:57:05
07-24-2022 14:57:05
It's working again; it seems like something did happen with the servers.
transformers
18,274
closed
Define metric for save the best model
### Feature request I suggest adding a way to choose which metric is used to save the best model ### Motivation I use multiple metrics in the process of fine-tuning models through the Trainer and don't know how the metric is chosen to save the best model (I suppose it's the first metric in the dictionary?!). ### Your contribution -
07-24-2022 12:02:48
07-24-2022 12:02:48
If you are using the HF Trainer, it provides two arguments for selecting your preferred metric for choosing the best model. The first one is `metric_for_best_model`, which defaults to "loss". The second argument is `greater_is_better`. Its default value is `False`, but if `metric_for_best_model` is set to any value other than "loss" then it will default to `True`. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
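A small sketch of the arguments mentioned above; the output directory, metric name, and strategies are illustrative placeholders.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,   # reload the best checkpoint when training ends
    metric_for_best_model="f1",    # must match a key produced by compute_metrics (logged as "eval_f1")
    greater_is_better=True,        # higher F1 is better; use False for loss-like metrics
)
```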
transformers
18,273
closed
Generalize decay_mask_fn to apply mask to all LayerNorm params
# What does this PR do? Fixes the problem of `decay_mask_fn` not applying mask to all LayerNorm params. For example, when running `run_mlm_flax.py` with `roberta-base`'s config, current code fails to apply mask to the LayerNorm's `scale` param of the `lm_head`, ![Imgur](https://i.imgur.com/87Oy6QJ.png) This is because `("layer_norm", "scale")` is omitted from `decay_mask_fn`: ```python def decay_mask_fn(params): flat_params = traverse_util.flatten_dict(params) flat_mask = {path: (path[-1] != "bias" and path[-2:] != ("LayerNorm", "scale")) for path in flat_params} print('flat_mask: ', flat_mask) return traverse_util.unflatten_dict(flat_mask) ``` Another example, `run_t5_mlm_flax.py` with `t5-base`'s config omitted all the LayerNorm params, ![Imgur](https://i.imgur.com/isNQDSf.png) This is because `decay_mask_fn` only takes into account `scale` while the T5LayerNorm's param is `weight`. ```python def decay_mask_fn(params): flat_params = traverse_util.flatten_dict(params) flat_mask = { path: (path[-1] != "bias" and path[-2:] not in [("layer_norm", "scale"), ("final_layer_norm", "scale")]) for path in flat_params } print('flat_mask: ', flat_mask) return traverse_util.unflatten_dict(flat_mask) ``` ## Fix Generalize `decay_mask_fn` to apply mask to all params whose lowered name containing `layernorm`, `layer_norm` or `ln`. ## Who can review? potential reviewers: @patrickvonplaten, @sgugger, @patil-suraj
07-24-2022 11:55:55
07-24-2022 11:55:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sanchit-gandhi Thank you for pointing them out! It's been done.<|||||>Amazing, thanks for the PR @duongna21!
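A sketch of the generalized mask function this PR describes: no weight decay for biases or for any parameter whose parent module name contains `layernorm`, `layer_norm`, or `ln` when lower-cased. It mirrors the PR description; treat the exact helper below as a sketch rather than the merged diff.

```python
from flax import traverse_util

def decay_mask_fn(params):
    flat_params = traverse_util.flatten_dict(params)
    # Collect the (module, param) suffixes that belong to LayerNorm-style layers,
    # whatever they happen to be called in a given architecture.
    layer_norm_candidates = ["layernorm", "layer_norm", "ln"]
    layer_norm_named_params = {
        path[-2:]
        for layer_norm_name in layer_norm_candidates
        for path in flat_params
        if layer_norm_name in "".join(path).lower()
    }
    flat_mask = {
        path: (path[-1] != "bias" and path[-2:] not in layer_norm_named_params)
        for path in flat_params
    }
    return traverse_util.unflatten_dict(flat_mask)
```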
transformers
18,272
closed
Deberta V2: Fix critical trace warnings to allow ONNX export
# What does this PR do? This PR fixes some untraceable functions used in Deberta V2. Specifically, `math.sqrt`, `np.arange`, `np.tile` and `np.where` were replaced with their torch equivalents. I also applied some type conversions to make sure that the types are compatible with ONNX ops (opset 15 was the focus). The remaining trace warnings that I did not solve seem to concern configuration items, which should stay constant for any given model. Fixes #18237 ## Who can review? @LysandreJik
07-23-2022 20:12:42
07-23-2022 20:12:42
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @BigBird01 for knowledge, @michaelbenayoun <|||||>@michaelbenayoun I resolved all the comments. Could you verify and merge?
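An illustrative before/after of the kind of replacement described in this PR; the shapes and values are made up. Plain `math`/`np` calls are evaluated eagerly and end up as constants in the traced graph, while the `torch` equivalents stay part of the export.

```python
import math
import numpy as np
import torch

query_layer = torch.randn(2, 8, 16)
scale_factor = 3  # illustrative

# Before: Python/NumPy math is evaluated on the host and traced as a constant.
scale = math.sqrt(query_layer.size(-1) * scale_factor)
relative_pos = np.arange(0, 16)

# After: the same quantities computed with torch ops remain traceable/exportable.
scale = torch.sqrt(torch.tensor(query_layer.size(-1) * scale_factor, dtype=torch.float))
relative_pos = torch.arange(0, 16, device=query_layer.device)
```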
transformers
18,271
closed
[EncoderDecoder] Improve docs
# What does this PR do? As a follow-up of #17815, this PR improves the docs of `VisionEncoderDecoderModel` and `SpeechEncoderDecoderModel`. It also fixes some typos in the docs of `EncoderDecoderModel`.
07-23-2022 09:41:32
07-23-2022 09:41:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,270
closed
This code block will not be executed
### System Info python 3.9 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://github.com/huggingface/transformers/blob/main/src/transformers/models/detr/modeling_detr.py ` combined_attention_mask = None if attention_mask is not None and combined_attention_mask is not None: # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] combined_attention_mask = combined_attention_mask + _expand_mask( attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1] )` ### Expected behavior combined_attention_mask is set to None immediately before the check, so the if block can never execute because its condition requires combined_attention_mask is not None.
07-23-2022 07:22:13
07-23-2022 07:22:13
Where is that code block from?<|||||>The code block is from https://github.com/huggingface/transformers/blob/8e8384663d716d4b5a4f510070ff954fc0ba4a52/src/transformers/models/detr/modeling_detr.py#L1076<|||||>cc @NielsRogge <|||||>Hi @AlfredQin, thanks for spotting that. The `combined_attention_mask` was probably taken from another model, which is not relevant for DETR. Feel free to open a PR to remove that code block!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,269
closed
cannot import name 'TrainingArguments' from 'transformers'
### System Info Traceback (most recent call last): File "dv2xxl.py", line 30, in <module> from transformers import TrainingArguments,Trainer **ImportError: cannot import name 'TrainingArguments' from 'transformers'** (/lustre06/project/6005433/takfa/UW/ue/lib/python3.7/site-packages/transformers/__init__.py) ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Traceback (most recent call last): File "dv2xxl.py", line 30, in <module> from transformers import TrainingArguments,Trainer ImportError: cannot import name 'TrainingArguments' from 'transformers' (/lustre06/project/6005433/takfa/UW/ue/lib/python3.7/site-packages/transformers/__init__.py) ### Expected behavior looking for help
07-23-2022 07:07:52
07-23-2022 07:07:52
Hey! Could you please provide your transformers version? How did you install transformers?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
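While waiting for the version info, a generic way to diagnose this kind of import failure. The usual suspects are a very old `transformers` release or a local file/folder named `transformers` shadowing the installed package; the snippet just inspects what is actually being imported.

```python
import transformers

print(transformers.__version__)  # TrainingArguments/Trainer need a reasonably recent release
print(transformers.__file__)     # should point into site-packages, not a local transformers.py
print(hasattr(transformers, "TrainingArguments"))
```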
transformers
18,268
closed
OPT vocab size of model and tokenizer does not match
### System Info - `transformers` version: 4.19.2 - Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0+cu113 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: no ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained('facebook/opt-350m') tok = AutoTokenizer.from_pretrained('facebook/opt-350m', use_fast=False) print(model.config.vocab_size) # 50272 print(tok.vocab_size) # 50265 ``` ### Expected behavior Hello, I'm not sure whether this is a bug or if I am missing something. In the reproduction script above, the model has a bigger vocabulary than the tokenizer. In my project, the LM produces the token `50272`, which the tokenizer doesn't know and thus the decode() function fails. (I use my own text generation script, so is it by any chance that the model is not supposed to output the last 7 tokens that the tokenizer doesn't know?) Best, David
07-22-2022 20:15:01
07-22-2022 20:15:01
cc @ArthurZucker <|||||>Hey, thanks for noticing this! I am gonna add @younesbelkada to the loop. It seems that the original tokenizer [vocabulary](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/assets/gpt2-vocab.json) has 50264 words, with some "madeupwords". Let us have a look! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Duplicate of https://github.com/huggingface/transformers/issues/17431#issuecomment-1224231170<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
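A small check that makes the mismatch easier to reason about: `tokenizer.vocab_size` only counts the base vocabulary, `len(tokenizer)` also counts added/special tokens, and the model's embedding matrix can be allocated larger still (it matches `config.vocab_size`). This is a diagnostic sketch, not a fix.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
tok = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)

print(tok.vocab_size)                             # base vocabulary only
print(len(tok))                                   # base vocabulary + added tokens
print(model.get_input_embeddings().weight.shape)  # rows actually allocated (matches model.config.vocab_size)
```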
transformers
18,267
closed
Expected input batch_size (16) to match target batch_size (262144)
### System Info ENERATE_JPG_FILES = True # warning: generation takes ~ 1h slice_sum=0 if (GENERATE_JPG_FILES): path = Path(".") os.makedirs('train_images',exist_ok=True) os.makedirs('train_masks',exist_ok=True) for ii in tqdm(range(0,len(df_files))): # take 1/3 nii files for training curr_ct = read_nii(df_files.loc[ii,'dirname']+"/"+df_files.loc[ii,'filename']) curr_mask = read_nii(df_files.loc[ii,'mask_dirname']+"/"+df_files.loc[ii,'mask_filename']) curr_file_name = str(df_files.loc[ii,'filename']).split('.')[0] curr_dim = curr_ct.shape[2] # 512, 512, curr_dim slice_sum = slice_sum+curr_dim for curr_slice in range(0,curr_dim,1): # export every 2nd slice for training data = tensor(curr_ct[...,curr_slice].astype(np.float32)) mask = Image.fromarray(curr_mask[...,curr_slice].astype('uint8'), mode="L") data.save_jpg(f"train_images/{curr_file_name}_slice_{curr_slice}.jpg", [dicom_windows.liver,dicom_windows.custom]) mask.save(f"train_masks/{curr_file_name}_slice_{curr_slice}_mask.png") else: path = Path('C:/AML 2404 AI and ML Lab/Liver Tumor Segmentation/Liver Tumor Segmentation/new_images') # read jpg from saved kernel output print(slice_sum) bs = 16 im_size = 128 codes = np.array(["background","liver","tumor"]) def get_x(fname:Path): return fname def label_func(x): return path/'train_masks'/f'{x.stem}_mask.png' tfms = [IntToFloatTensor(),Normalize()] db = DataBlock(blocks=(ImageBlock(),MaskBlock(codes)), #codes = {"Backround": 0,"Liver": 1,"Tumor": 2} batch_tfms=tfms, splitter=RandomSplitter(), item_tfms=[Resize(im_size)], get_items=get_image_files, get_y=label_func ) # ../output/kaggle/working/train_images.zip # ds = db.datasets(source=path/'train_images.zip') ds = db.datasets(source='./train_images') print(len(ds)) print(ds) dls = db.dataloaders(path/'train_images',bs = bs) # num_workers=0 dls.show_batch() def foreground_acc(inp, targ, bkg_idx=0, axis=1): # exclude a background from metric "Computes non-background accuracy for multiclass segmentation" targ = targ.squeeze(1) mask = targ != bkg_idx return (inp.argmax(dim=axis)[mask]==targ[mask]).float().mean() def cust_foreground_acc(inp, targ): # # include a background into the metric return foreground_acc(inp=inp, targ=targ, bkg_idx=3, axis=1) learn = vision_learner(dls, resnet34, metrics =([foreground_acc,cust_foreground_acc])) learn.lr_find() ValueError: Expected input batch_size (16) to match target batch_size (262144). ### Who can help? @NielsRogge, @sgugger, @Rocketknight1 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction ENERATE_JPG_FILES = True # warning: generation takes ~ 1h slice_sum=0 if (GENERATE_JPG_FILES): path = Path(".") os.makedirs('train_images',exist_ok=True) os.makedirs('train_masks',exist_ok=True) for ii in tqdm(range(0,len(df_files))): # take 1/3 nii files for training curr_ct = read_nii(df_files.loc[ii,'dirname']+"/"+df_files.loc[ii,'filename']) curr_mask = read_nii(df_files.loc[ii,'mask_dirname']+"/"+df_files.loc[ii,'mask_filename']) curr_file_name = str(df_files.loc[ii,'filename']).split('.')[0] curr_dim = curr_ct.shape[2] # 512, 512, curr_dim slice_sum = slice_sum+curr_dim for curr_slice in range(0,curr_dim,1): # export every 2nd slice for training data = tensor(curr_ct[...,curr_slice].astype(np.float32)) mask = Image.fromarray(curr_mask[...,curr_slice].astype('uint8'), mode="L") data.save_jpg(f"train_images/{curr_file_name}_slice_{curr_slice}.jpg", [dicom_windows.liver,dicom_windows.custom]) mask.save(f"train_masks/{curr_file_name}_slice_{curr_slice}_mask.png") else: path = Path('C:/AML 2404 AI and ML Lab/Liver Tumor Segmentation/Liver Tumor Segmentation/new_images') # read jpg from saved kernel output print(slice_sum) bs = 16 im_size = 128 codes = np.array(["background","liver","tumor"]) def get_x(fname:Path): return fname def label_func(x): return path/'train_masks'/f'{x.stem}_mask.png' tfms = [IntToFloatTensor(),Normalize()] db = DataBlock(blocks=(ImageBlock(),MaskBlock(codes)), #codes = {"Backround": 0,"Liver": 1,"Tumor": 2} batch_tfms=tfms, splitter=RandomSplitter(), item_tfms=[Resize(im_size)], get_items=get_image_files, get_y=label_func ) # ../output/kaggle/working/train_images.zip # ds = db.datasets(source=path/'train_images.zip') ds = db.datasets(source='./train_images') print(len(ds)) print(ds) dls = db.dataloaders(path/'train_images',bs = bs) # num_workers=0 dls.show_batch() def foreground_acc(inp, targ, bkg_idx=0, axis=1): # exclude a background from metric "Computes non-background accuracy for multiclass segmentation" targ = targ.squeeze(1) mask = targ != bkg_idx return (inp.argmax(dim=axis)[mask]==targ[mask]).float().mean() def cust_foreground_acc(inp, targ): # # include a background into the metric return foreground_acc(inp=inp, targ=targ, bkg_idx=3, axis=1) learn = vision_learner(dls, resnet34, metrics =([foreground_acc,cust_foreground_acc])) learn.lr_find() ValueError: Expected input batch_size (16) to match target batch_size (262144). ### Expected behavior vision_learner modell should predict image processing
07-22-2022 19:05:47
07-22-2022 19:05:47
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,266
closed
Can't pickle local object when running official benchmark
### System Info

- `transformers` version: 4.11.3
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.13
- PyTorch version (GPU?): 1.12.0.post2 (False)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help?

@LysandreJik

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512])
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
print(results)
```

```
1 / 3
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/site-packages/transformers/benchmark/benchmark_utils.py", line 707, in run
    memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
  File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/site-packages/transformers/benchmark/benchmark_utils.py", line 676, in inference_memory
    return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
  File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/site-packages/transformers/benchmark/benchmark_utils.py", line 101, in multi_process_func
    p.start()
  File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'separate_process_wrapper_fn.<locals>.multi_process_func.<locals>.wrapper_func'
```

### Expected behavior

```
====================       INFERENCE - SPEED - RESULT       ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length     Time in s
--------------------------------------------------------------------------------
bert-base-uncased          8               8             0.006
bert-base-uncased          8              32             0.006
bert-base-uncased          8             128             0.018
bert-base-uncased          8             512             0.088
--------------------------------------------------------------------------------

====================      INFERENCE - MEMORY - RESULT       ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length    Memory in MB
--------------------------------------------------------------------------------
bert-base-uncased          8               8             1227
bert-base-uncased          8              32             1281
bert-base-uncased          8             128             1307
bert-base-uncased          8             512             1539
--------------------------------------------------------------------------------

====================        ENVIRONMENT INFORMATION         ====================
...
```
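For context on the traceback (an illustrative note, not part of the original report): the root cause is the generic CPython limitation that objects defined inside a function body cannot be pickled, and macOS starts new processes with the `spawn` method, which pickles the process target. A minimal sketch of the same failure mode, independent of `transformers`:

```python
import multiprocessing as mp

def make_worker():
    def worker():  # local (nested) function -> cannot be pickled
        print("hello")
    return worker

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)  # the default on macOS
    p = mp.Process(target=make_worker())
    p.start()  # AttributeError: Can't pickle local object 'make_worker.<locals>.worker'
    p.join()
```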
07-22-2022 18:25:26
07-22-2022 18:25:26
Hey @ryanrudes, we're in the process of deprecating and removing benchmarks from the library, so we unfortunately won't be able to help you out on this one.<|||||>Understood
transformers
18,265
closed
Allows `KerasMetricCallback` to use XLA generation
Updates the `KerasMetricCallback` with the ability to use XLA generation for a big speed boost! cc @merveenoyan
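A rough usage sketch of the feature described above — this is not code from the PR; the flag and argument names (`use_xla_generation`, `generate_kwargs`) are assumptions on my part, and `compute_metrics`, `tf_train_dataset`, and `tf_eval_dataset` are placeholders:

```python
from transformers.keras_callbacks import KerasMetricCallback

# Assumed API: `use_xla_generation` compiles model.generate with XLA for faster eval loops.
metric_callback = KerasMetricCallback(
    metric_fn=compute_metrics,        # placeholder: maps predictions -> metric dict
    eval_dataset=tf_eval_dataset,     # placeholder: a tf.data.Dataset
    predict_with_generate=True,
    use_xla_generation=True,
    generate_kwargs={"max_new_tokens": 64},
)
model.fit(tf_train_dataset, callbacks=[metric_callback])
```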
07-22-2022 17:51:19
07-22-2022 17:51:19
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,264
closed
Adding type hints of TF:CTRL
Related issue: #16059. As the title suggests, this PR adds type hints to the TensorFlow `CTRL` model class.
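For readers unfamiliar with the pattern, here is a hedged sketch of what such annotations typically look like on a TF model's `call` signature — the argument list below is illustrative only and does not reproduce the actual CTRL code:

```python
from typing import Optional, Tuple, Union
import numpy as np
import tensorflow as tf

# Illustrative only: the general shape of a typed TF `call` signature.
def call(
    self,
    input_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
    attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
    training: Optional[bool] = False,
) -> Union[Tuple, tf.Tensor]:
    ...
```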
07-22-2022 17:12:39
07-22-2022 17:12:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,263
closed
Adding type hints of TF:OpenAIGPT
Related issue: #16059. As the title suggests, this PR adds type hints to the TensorFlow `OpenAIGPT` model class.
07-22-2022 16:07:32
07-22-2022 16:07:32
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @Mathews-Tom, thanks for working on providing type hints for both OpenAIGPT and CTRL! To get a review quicker, don't hesitate to ping @Rocketknight1 directly :)
transformers
18,262
closed
[DETR] Improve code examples
# What does this PR do? As a follow-up of #17786, this PR improves the code examples of DETR to showcase the `post_process` methods.
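A rough sketch of the kind of post-processing usage such examples showcase — a generic illustration rather than the exact snippet added in the PR; the image path and score threshold are placeholders:

```python
import torch
from PIL import Image
from transformers import DetrFeatureExtractor, DetrForObjectDetection

feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

image = Image.open("example.jpg")  # placeholder path
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Convert raw logits/boxes into detections at the original image size.
target_sizes = torch.tensor([image.size[::-1]])
results = feature_extractor.post_process(outputs, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    if score > 0.9:
        print(model.config.id2label[label.item()], box.tolist())
```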
07-22-2022 15:28:54
07-22-2022 15:28:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,261
closed
Generate: validate `model_kwargs` (and catch typos in generate arguments)
# What does this PR do?

A common cause of issues with `generate` is that it does not behave as expected: arguments can be silently ignored by the selected generation sub-method (`greedy_search`, `sample`, ...). Typos also often fly under the radar, as the method accepts `**model_kwargs`, which are in turn passed to models that also accept `**kwargs`.

This PR solves the low-hanging fruit (derived from https://github.com/huggingface/transformers/pull/18218): it validates `model_kwargs`, which notifies users about problems in model arguments AND about typos, since typos end up in `model_kwargs`. I will open a PR with the TF and FLAX equivalents after this one gets merged. A solution for the other generation arguments is also on the way :)

Fixes https://github.com/huggingface/transformers/issues/18130

___________________

Here is an example of the output for

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

prompt = tokenizer(["hello world"], return_tensors="pt")
model.generate(**prompt, do_samples=True, foo="bar")  # note the deliberate typo: `do_samples`
```

![image](https://user-images.githubusercontent.com/12240844/180473721-1e216e12-b25a-470e-b4cd-2d3236f8e8ff.png)
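To make the mechanism concrete, here is a minimal sketch of how such validation could work — an illustration of the idea, not the code added in the PR; the helper name and the exact set of inspected functions are assumptions:

```python
import inspect

def _validate_model_kwargs_sketch(model, model_kwargs):
    """Illustrative only: flag kwargs that no model-facing function would consume."""
    unused = dict(model_kwargs)
    for fn in (model.prepare_inputs_for_generation, model.forward):
        for name in inspect.signature(fn).parameters:
            unused.pop(name, None)
    if unused:
        raise ValueError(
            f"The following `model_kwargs` are not used by the model: {list(unused)} "
            "(note: typos in the generate arguments will also show up in this list)"
        )
```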
07-22-2022 15:27:29
07-22-2022 15:27:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger ready for a final review :) [equivalent TF and FLAX changes will come in separate PRs, as they might need test corrections like this one]<|||||>Super-useful, thank you, @gante!
transformers
18,260
closed
Fix torch version check in Vilt
# What does this PR do?

This line fails when torch < 1.10.0:

https://github.com/huggingface/transformers/blob/1fc4b2a13223b9069f9969344117a2994261939c/src/transformers/models/vilt/modeling_vilt.py#L44

In this case, `torch.__version__` is of type `str` instead of `torch.torch_version.TorchVersion`, and a string can't be compared to a tuple. This leads to strange error messages when other models are being tested, for example through the use of `get_values` below:

```python
def test_training(self):
    for model_class in self.all_model_classes:
        ...
        if model_class in get_values(MODEL_MAPPING):
            continue
```
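As an aside — not taken from the PR itself — a common way to make such checks robust across torch versions is to parse the version string explicitly; a minimal sketch (the constant name is made up for illustration):

```python
import torch
from packaging import version

# Works whether torch.__version__ is a plain str or a TorchVersion object.
TORCH_AT_LEAST_1_10 = version.parse(torch.__version__) >= version.parse("1.10")
```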
07-22-2022 14:02:20
07-22-2022 14:02:20
cc @LysandreJik: This is *one* of the reasons why there are more failures in PyTorch past CI.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Oops, you approved before I tag 😄
transformers
18,259
closed
Replace false parameter by a buffer
# What does this PR do?

The weights of the sinusoidal embedding are defined as a parameter with no grad in M2M100 (and thus XGLM), and are never saved in the state dict. The problem is that when loading such a model with `low_cpu_mem_usage=True`, this "false" parameter is replaced by an empty weight on the meta device and is never re-initialized afterward (since it's not in the state dict). As a result, the model is not usable when `low_cpu_mem_usage=True` is used.

By replacing the parameter with a buffer, the weight is ignored by `init_empty_weights` and is thus preserved.
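To illustrate the distinction the PR relies on — a generic sketch, not the actual M2M100 code; the class, argument, and tensor names are placeholders:

```python
import torch
import torch.nn as nn

class SinusoidalEmbeddingSketch(nn.Module):
    def __init__(self, num_positions: int, dim: int):
        super().__init__()
        weights = torch.zeros(num_positions, dim)  # placeholder for the sinusoidal table

        # Before: a no-grad Parameter. It appears in module.parameters(), so utilities that
        # materialize parameters on the meta device empty it, and since it is excluded from
        # the state dict it is never refilled on load.
        # self.weights = nn.Parameter(weights, requires_grad=False)

        # After: a non-persistent buffer. Still excluded from the state dict, but no longer a
        # Parameter, so init_empty_weights-style context managers leave it untouched.
        self.register_buffer("weights", weights, persistent=False)
```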
07-22-2022 13:49:45
07-22-2022 13:49:45
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,258
closed
Fix dtype of input_features in docstring
# What does this PR do? Fix dtype in docstring for `input_features`: It should be `torch.FloatTensor`.
07-22-2022 13:35:44
07-22-2022 13:35:44
_The documentation is not available anymore as the PR was closed or merged._