Dataset columns:
repo: string (1 class)
number: int64 (1 – 25.3k)
state: string (2 classes)
title: string (length 1 – 487)
body: string (length 0 – 234k)
created_at: string (length 19)
closed_at: string (length 19)
comments: string (length 0 – 293k)
transformers
20,168
closed
Enable PyTorch 1.13
# What does this PR do? This PR enables PyTorch 1.13 for Transformers so we can start adding functionality like safer loading with `torch.load`. Since there are no wheels for torch scatter, this comes at the price of uninstalling `torch-scatter`. However the PR to move away from this dep and use the PyTorch core ops seems well under way, so skipping the TAPAS tests for now until the PR is merged does not seem like a heavy price to pay (cc @NielsRogge for information). A couple of tests are still failing, which are all torch FX tests (cc @michaelbenayoun, see failing job [here](https://app.circleci.com/pipelines/github/huggingface/transformers/51241/workflows/4a2c365b-8721-4c2c-adac-54f0fd7cf9c8/jobs/613048)). I'm skipping them and we can fix them next week in a followup PR.
11-10-2022 20:04:39
11-10-2022 20:04:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @sgugger, there is a PR https://github.com/huggingface/transformers/pull/20149 but we forgot to ping you. (Oh, I just saw you are aware of that PR)<|||||>Hi @sgugger :) - One question: you have removed the `torch-scatter` dependency, but at the same time you have added the decorator `require_scatter` for the `test_pt_tf_model_equivalence` test. Is that correct or am I just missing something?<|||||>The scatter dependency is only removed from the CPU runners on CircleCI; it's not removed from the library by this PR, that's your job ;-). When testing in our other setups that include the `scatter` dependency, the tests will be run (until your PR is merged and the dep is removed entirely).
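A minimal sketch of how a dependency-gated skip decorator such as `require_scatter` is typically written; the helpers below are illustrative and use only the standard library, they are not the exact `transformers.testing_utils` implementation.

```python
import importlib.util
import unittest


def is_scatter_available() -> bool:
    # True only if the optional torch-scatter package is installed.
    return importlib.util.find_spec("torch_scatter") is not None


def require_scatter(test_case):
    # Skip the decorated test on runners where torch-scatter has no wheel for
    # the installed PyTorch version (e.g. the CPU CircleCI jobs mentioned above).
    return unittest.skipUnless(is_scatter_available(), "test requires torch-scatter")(test_case)
```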
transformers
20,167
closed
Fix object-detection bug (height, width inversion).
# What does this PR do? Fixes a bug I didn't catch: height and width were inverted. https://huggingface.co/Narsil/layoutlm-funsd (Contains the fix) Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
11-10-2022 18:13:36
11-10-2022 18:13:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20167). All of your documentation changes will be reflected on that endpoint.
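An illustrative snippet (not the actual patch) of the kind of mix-up fixed here: PIL reports `image.size` as `(width, height)`, while many vision post-processing APIs expect `(height, width)`, so the two orders are easy to swap when converting detections.

```python
from PIL import Image

# A 640x480 image: PIL's size tuple is (width, height).
image = Image.new("RGB", (640, 480))
width, height = image.size            # correct unpacking
assert (width, height) == (640, 480)

# The inverted unpacking this PR guards against:
# height, width = image.size          # would silently give height=640, width=480
```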
transformers
20,166
closed
Fix arg names for our models
Some arg names on the `infer` method for `ESMFold` don't fit our port, and this PR updates/removes them. In future these methods will probably be moved to the tokenizer/processor but this quick fix will at least get them working for now! Fixes #20120
11-10-2022 16:30:17
11-10-2022 16:30:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,165
closed
Adding a siamese text similarity inference pipeline
### Feature request Siamese networks have been really popular for a variety of tasks in NLP. I am wondering if we can build an inference pipeline class for Siamese-like architectures. The pipeline could look something like:
```python
import torch
from transformers import Pipeline


class SiamesePipeline(Pipeline):
    def _sanitize_parameters(self):
        return {}, {}, {}

    def preprocess(self, texts):
        # Tokenize the two sides of the pair separately.
        return (
            self.tokenizer(texts['text'], return_tensors=self.framework),
            self.tokenizer(texts['text_pair'], return_tensors=self.framework),
        )

    def _forward(self, model_inputs):
        output_text = self.model(**model_inputs[0])
        output_text_pair = self.model(**model_inputs[1])
        # Use the first ([CLS]) token embedding of each side as the sentence embedding.
        sentence_embedding_text = output_text['last_hidden_state'][:, 0, :]
        sentence_embedding_text_pair = output_text_pair['last_hidden_state'][:, 0, :]
        cos = torch.nn.functional.cosine_similarity(sentence_embedding_text, sentence_embedding_text_pair)
        return cos

    def postprocess(self, model_outputs):
        return model_outputs.item()
```
Please note that `preprocess` takes in a dictionary of text pairs, e.g. `{'text': 'I like you.', 'text_pair': 'I love you.'}`. We would also need to change the data collator to handle the tuples for the [base](https://github.com/huggingface/transformers/blob/e0d7c831c7691a0069a57ba03993a8d531343de1/src/transformers/pipelines/base.py#L172) pipeline:
```python
def inner_wrapper(items):
    # `inner` is the existing collate function defined just above in pipelines/base.py.
    if isinstance(items[0], tuple):
        items_text, items_text_pair = zip(*items)
        return inner(items_text), inner(items_text_pair)
    else:
        return inner(items)

return inner_wrapper
```
Inference would look something like:
```python
tokenizer = AutoTokenizer.from_pretrained(input_path_model)
model = AutoModel.from_pretrained(input_path_model)
dataset = Dataset.from_pandas(dataset_df[['text', 'text_pair']])
pipe = SiamesePipeline(
    model=model,
    tokenizer=tokenizer,
    device=model_params_parsed['device'],
    num_workers=4,
)
score = list(
    tqdm(
        pipe(KeyPairDataset(dataset, 'text', 'text_pair'), batch_size=model_params_parsed['batch_size']),
        total=len(dataset),
    )
)
```
or, without a dataset, `pipe({'text':'I like you.', 'text_pair':'I love you.'})`. I would love to know if this approach is good and in line with pipeline conventions. If so, does it make sense to add this change permanently? ### Motivation I am using this solution for my Siamese experiments. ### Your contribution I can raise a PR.
11-10-2022 15:35:45
11-10-2022 15:35:45
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Narsil, please have a look 😇<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, sorry for failing to see those messages. In principle there's no issue with adding such a pipeline; however, does `transformers` have such models? Ideally pipelines do not reflect the architecture (here Siamese), only the input and outcome, so something like `text-similarity` comes to mind. (This is typically covered by `sentence-transformers`.)<|||||>I don't think transformers has any Siamese models right now. The package currently lacks a way to train `text-similarity` directly too. Sentence-transformers covers it, but it has its own issues: you cannot use the other pipeline abstractions or the Trainer API that are part of transformers when using sentence-transformers, and the package doesn't even support multi-GPU training out of the box. So I believe there is value in getting them into transformers in the future. We need to send a pair of texts instead of a single text, we need to tokenize both texts separately, and the data collator also needs to support text pairs. Since the pipeline supports these operations, I believe the pipeline would need to change. I haven't looked deeply into how `sentence-transformers` does it. For now, I have moved to using pytorch-lightning for my training and inference, so I don't have an immediate need for this. We can mark this issue closed if it doesn't align with the package direction right now. Otherwise, I can make a design and contribute a basic pipeline and training framework to add support after the design gets a go-ahead.<|||||>Tagging @sgugger for information.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,164
closed
doc comment fix: Args was in wrong place
# What does this PR do? Small fix for `Args:` being in the wrong place in the doc comments. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
11-10-2022 14:44:58
11-10-2022 14:44:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20164). All of your documentation changes will be reflected on that endpoint.
transformers
20,163
closed
OWLViT - Allow text model to compute text embeddings only once
### Feature request Hi again, Currently, the OWL-ViT pipeline works by having a list of queries **per example** (e.g. for classification problems with 10 classes there would be 10 queries for each image). I understand that the rationale behind this is to allow to train with multiple datasets simultaneously where each dataset has a different set of classes, and where images from different dataset may be in the batch at the same time. Nevertheless, in many settings (in most actually), we train/test using a single training/test set. And calculating the text embedding multiple times causes redundant computations. So, could you provide functionality for allowing the text embedding to be calculated only once? By once I mean **one time for each of the queries that correspond to the dataset classes**, and this to be used for all examples in the batch rather than calculating it for each batch element. Thank you! cc @alaradirik
11-10-2022 14:27:58
11-10-2022 14:27:58
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This is a great feature request and can help in improving the usability of OWL-ViT. Please do consider. cc @alaradirik <|||||>Hi @amitjena40, sorry for my late reply! We looked into this issue and while we think it would make it easier to use OWL-ViT at a larger scale, it is a big breaking change as it requires introducing new arguments and rearranging multiple components.
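A rough sketch of the behavior being requested: compute the text-query embeddings a single time with the public `OwlViTModel.get_text_features` method and cache them for all images, rather than recomputing them per example. The checkpoint name and query strings are placeholders, and actually wiring cached embeddings into the detection head would still require the refactoring discussed above.

```python
import torch
from transformers import OwlViTProcessor, OwlViTModel

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTModel.from_pretrained("google/owlvit-base-patch32")

queries = ["a photo of a cat", "a photo of a dog"]  # the dataset's class prompts
text_inputs = processor(text=queries, return_tensors="pt")

with torch.no_grad():
    # One forward pass over the queries; reuse this tensor for every batch of images.
    text_embeds = model.get_text_features(**text_inputs)

print(text_embeds.shape)  # (num_queries, projection_dim)
```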
transformers
20,162
closed
[WHISPER] Update modeling tests
# What does this PR do? Fixes the test after the update of the tokenizer, which added suffix and prefix tokens.
11-10-2022 13:50:36
11-10-2022 13:50:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint.<|||||>I agree, it is indeed better to just use `add_special_tokens=False`! Slipped my mind 😉 <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint.<|||||>Ping me again once when you think it's ready :) <|||||>Looks like we still need to add `add_special_tokens=False` to the tf test @ArthurZucker!<|||||>Yeah on it 🤗<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint.<|||||>Ah there's small bug with this, the kwargs is passed to `self.pad` in the feature extractor. Gonna fix that<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint.
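A tiny illustration of the `add_special_tokens=False` flag discussed in this thread; the checkpoint and input string are placeholders, not taken from the actual test.

```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")

with_specials = tokenizer(" Hello world").input_ids
without_specials = tokenizer(" Hello world", add_special_tokens=False).input_ids

# The default encoding includes the prefix/suffix special tokens the tests adjust
# for, while add_special_tokens=False returns only the raw text token ids.
assert len(without_specials) < len(with_specials)
```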
transformers
20,161
closed
how to fine tune custom dataset using coreference pretrained model
The pre-trained model "nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large" is available on the Hugging Face Hub. How can I fine-tune it with my own custom dataset?
11-10-2022 12:00:52
11-10-2022 12:00:52
Hi, For that I'll refer to the training guide of Sentence Transformers: https://www.sbert.net/docs/training/overview.html.<|||||>> Hi, > > For that I'll refer to the training guide of Sentence Transformers: https://www.sbert.net/docs/training/overview.html. Hi @NielsRogge, I have to use my own dataset for the coreference resolution task, so will the suggestion above work on the pretrained model "nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large"? After fine-tuning I got an output folder which contains 1_Pooling, config.json, config_sentence_transformers.json, eval, modules.json, pytorch_model.bin, README.md, sentence_bert_config.json, sentencepiece.bpe.model, special_tokens_map.json, tokenizer.json and tokenizer_config.json. I saved this folder as a zip file and loaded the path of the fine-tuned model for prediction, but when I use it for prediction it is not working.
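A condensed sketch of the Sentence Transformers training flow from the guide linked above; the example pairs and similarity labels are placeholders rather than a real coreference dataset, and the output path is arbitrary.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large")

# Placeholder training pairs with similarity labels in [0, 1].
train_examples = [
    InputExample(texts=["The CEO resigned.", "She stepped down last week."], label=0.9),
    InputExample(texts=["The CEO resigned.", "The weather was sunny."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
model.save("finetuned-minilm")  # reload with SentenceTransformer("finetuned-minilm") instead of zipping the folder
```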
transformers
20,160
closed
Add segmentation + object detection image processors
# What does this PR do? Adds image processors for DETR, Deformable DETR, Conditional DETR, YOLOS and Maskformer, as many of the image processors methods are copied from DETR. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
11-10-2022 11:13:54
11-10-2022 11:13:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20160). All of your documentation changes will be reflected on that endpoint.<|||||>@NielsRogge @sgugger @alaradirik Sorry for the previous issues with the docstrings. They should all be resolved now.
transformers
20,159
closed
Generate: fix TF doctests
# What does this PR do? Fixes doctests in `src/transformers/generation/tf_utils.py` that were not passing.
11-10-2022 11:08:20
11-10-2022 11:08:20
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20159). All of your documentation changes will be reflected on that endpoint.
transformers
20,158
closed
[MaskFormer] Add doc tests
# What does this PR do? This PR fixes all code snippets for MaskFormer, and makes sure they are tested. None of them actually ran without issues. It makes a distinction between semantic and panoptic segmentation. Fixes #20132
11-10-2022 09:56:01
11-10-2022 09:56:01
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20158). All of your documentation changes will be reflected on that endpoint.
transformers
20,157
closed
Update `OnnxConfig.generate_dummy_inputs` to check `ImageProcessingMixin`
# What does this PR do? As we have a new class `ImageProcessingMixin`, the method `OnnxConfig.generate_dummy_inputs` needs to check this case, otherwise it can't find an if/else branch to create dummy inputs. See https://github.com/huggingface/transformers/blob/7ec1dc8817a99d16e6f9e0ab94ce4027ef74b72d/src/transformers/onnx/config.py#L371 The current failing error is: ```bash AssertionError: beit, default -> Unable to generate dummy inputs for the model. Please provide a tokenizer or a preprocessor. ```
11-10-2022 09:27:12
11-10-2022 09:27:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>We still have `OwlViTFeatureExtractor` and no `OwlViTImageProcessor`. @amyeroberts I guess it is a miss, right? If so, could you work on adding `OwlViTImageProcessor` in another PR, thanks.<|||||>@ydshieh Yes - the `OwlViTImageProcessor` doesn't exist yet (along with those of other object detection / segmentation models). These will be added soon. There are two pending PRs: * Adding transforms: https://github.com/huggingface/transformers/pull/20003 * Adding these models' image processors: https://github.com/huggingface/transformers/pull/20160 I believe this should only affect OwlViT - as it has a processor class which contains both the feature extractor and the image processor. As you've done in the PR - I think maintaining a check for both `FeatureExtractionMixin` and `ImageProcessingMixin` should work. We can then remove the check for `elif isinstance(preprocessor, FeatureExtractionMixin) and preprocessor.model_input_names[0] == "pixel_values":`
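A schematic sketch of the extra branch described above, not the exact diff; the helper name is illustrative, while the `model_input_names[0] == "pixel_values"` condition is quoted from the existing check.

```python
from transformers.feature_extraction_utils import FeatureExtractionMixin
from transformers.image_processing_utils import ImageProcessingMixin


def preprocessor_handles_images(preprocessor) -> bool:
    # New-style image processors subclass ImageProcessingMixin; older feature
    # extractors subclass FeatureExtractionMixin. Both expose model_input_names.
    if isinstance(preprocessor, (ImageProcessingMixin, FeatureExtractionMixin)):
        return preprocessor.model_input_names[0] == "pixel_values"
    return False
```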
transformers
20,156
closed
Models can't be loaded after updating to Python 3.10
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.27 - Python version: 3.10.8 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @SaulLu ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I updated Python to version `3.10.8`. Note that I use JupyterLab. I had to re-install a lot of packages (not `transformers` is `4.24.0`). At first, I got an error message that the fast tokenizer couldn't be loaded (sorry to be this vague, I didn't track it), so I updated some packages. Now I get an error when I try to load the tokenizer of [this model](https://huggingface.co/deepset/xlm-roberta-large-squad2), and I am not able to overcome it. Steps to reproduce the behavior: 1. Update Python to 3.10.8 2. Update JupyterLab and related libraries 3. Run the following code: ``` # Import libraries from transformers import pipeline, AutoTokenizer # Define checkpoint model_checkpoint = 'deepset/xlm-roberta-large-squad2' # Tokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) ``` I tried several solutions ([this](https://stackoverflow.com/questions/70943244/attributeerror-module-collections-has-no-attribute-mutablemapping) and [this](https://stackoverflow.com/questions/69512672/getting-attributeerror-module-collections-has-no-attribute-mutablemapping-w)) but none seem to work. [Here](https://github.com/googleapis/google-auth-library-python/pull/419) they suggest I should change `collections.Mapping` into `collections.abc.Mapping`, but I wouldn't knwo where to do it. Another possible solution is downgrading Python to 3.9, but I would like to keep it as last resort. Many thanks for your help ### Expected behavior Tokenizer should be loaded. 
Instead, I get this error: ``` AttributeError Traceback (most recent call last) Cell In [3], line 2 1 # Tokenizer ----> 2 tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) File ~/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:637, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 635 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] 636 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None): --> 637 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 638 else: 639 if tokenizer_class_py is not None: File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1777, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1774 else: 1775 logger.info(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}") -> 1777 return cls._from_pretrained( 1778 resolved_vocab_files, 1779 pretrained_model_name_or_path, 1780 init_configuration, 1781 *init_inputs, 1782 use_auth_token=use_auth_token, 1783 cache_dir=cache_dir, 1784 local_files_only=local_files_only, 1785 _commit_hash=commit_hash, 1786 **kwargs, 1787 ) File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1932, in PreTrainedTokenizerBase._from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, local_files_only, _commit_hash, *init_inputs, **kwargs) 1930 # Instantiate tokenizer. 1931 try: -> 1932 tokenizer = cls(*init_inputs, **init_kwargs) 1933 except OSError: 1934 raise OSError( 1935 "Unable to load vocabulary from file. " 1936 "Please check that the provided vocabulary is accessible and not corrupted." 1937 ) File ~/.local/lib/python3.10/site-packages/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py:155, in XLMRobertaTokenizerFast.__init__(self, vocab_file, tokenizer_file, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, **kwargs) 139 def __init__( 140 self, 141 vocab_file=None, (...) 151 ): 152 # Mask token behave like a normal word, i.e. 
include the space before it 153 mask_token = AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token --> 155 super().__init__( 156 vocab_file, 157 tokenizer_file=tokenizer_file, 158 bos_token=bos_token, 159 eos_token=eos_token, 160 sep_token=sep_token, 161 cls_token=cls_token, 162 unk_token=unk_token, 163 pad_token=pad_token, 164 mask_token=mask_token, 165 **kwargs, 166 ) 168 self.vocab_file = vocab_file 169 self.can_save_slow_tokenizer = False if not self.vocab_file else True File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py:114, in PreTrainedTokenizerFast.__init__(self, *args, **kwargs) 111 fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file) 112 elif slow_tokenizer is not None: 113 # We need to convert a slow tokenizer to build the backend --> 114 fast_tokenizer = convert_slow_tokenizer(slow_tokenizer) 115 elif self.slow_tokenizer_class is not None: 116 # We need to create and convert a slow tokenizer to build the backend 117 slow_tokenizer = self.slow_tokenizer_class(*args, **kwargs) File ~/.local/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py:1162, in convert_slow_tokenizer(transformer_tokenizer) 1154 raise ValueError( 1155 f"An instance of tokenizer class {tokenizer_class_name} cannot be converted in a Fast tokenizer instance." 1156 " No converter was found. Currently available slow->fast convertors:" 1157 f" {list(SLOW_TO_FAST_CONVERTERS.keys())}" 1158 ) 1160 converter_class = SLOW_TO_FAST_CONVERTERS[tokenizer_class_name] -> 1162 return converter_class(transformer_tokenizer).converted() File ~/.local/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py:438, in SpmConverter.__init__(self, *args) 434 requires_backends(self, "protobuf") 436 super().__init__(*args) --> 438 from .utils import sentencepiece_model_pb2 as model_pb2 440 m = model_pb2.ModelProto() 441 with open(self.original_tokenizer.vocab_file, "rb") as f: File ~/.local/lib/python3.10/site-packages/transformers/utils/sentencepiece_model_pb2.py:20 18 from google.protobuf import descriptor as _descriptor 19 from google.protobuf import message as _message ---> 20 from google.protobuf import reflection as _reflection 21 from google.protobuf import symbol_database as _symbol_database 24 # @@protoc_insertion_point(imports) File /usr/lib/python3/dist-packages/google/protobuf/reflection.py:58 56 from google.protobuf.pyext import cpp_message as message_impl 57 else: ---> 58 from google.protobuf.internal import python_message as message_impl 60 # The type of all Message classes. 61 # Part of the public interface, but normally only used by message factories. 62 GeneratedProtocolMessageType = message_impl.GeneratedProtocolMessageType File /usr/lib/python3/dist-packages/google/protobuf/internal/python_message.py:69 66 import copyreg as copyreg 68 # We use "as" to avoid name collisions with variables. ---> 69 from google.protobuf.internal import containers 70 from google.protobuf.internal import decoder 71 from google.protobuf.internal import encoder File /usr/lib/python3/dist-packages/google/protobuf/internal/containers.py:182 177 collections.MutableMapping.register(MutableMapping) 179 else: 180 # In Python 3 we can just use MutableMapping directly, because it defines 181 # __slots__. --> 182 MutableMapping = collections.MutableMapping 185 class BaseContainer(object): 187 """Base container class.""" AttributeError: module 'collections' has no attribute 'MutableMapping' ```
11-10-2022 09:06:52
11-10-2022 09:06:52
Hi @NeuroinformaticaFBF, Thanks a lot for this issue! The issue seems to be coming from an external dependency [sentencepiece](https://github.com/google/sentencepiece) which is using protobuf. Can you share with us the versions of the following packages please :pray:? For example with: ``` pip freeze | grep -E "protobuf|sentencepiece|tokenizers" ```<|||||>Yes of course. These are the versions: - `protobuf` = `3.0.0` - `tokenizers` = `0.13.2` - `sentencepiece` = `0.1.97`<|||||>I can't have an env with Python 3.10.8 right now, but the first thing I would want to try is to upgrade protobuf to its latest version, which is `4.21.9` :relaxed: <|||||>I installed version `4.21.9`. That changed the error, which was: ``` TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0. If you cannot immediately regenerate your protos, some other possible workarounds are: 1. Downgrade the protobuf package to 3.20.x or lower. 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower). More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates ``` Downgrading `protobuf` to version `3.20.0` fixed it! Many thanks for the quick help 👍🏻
transformers
20,155
closed
Add to DeBERTa resources
# What does this PR do? Adds resources to DeBERTa. Relates to https://github.com/huggingface/transformers/issues/20055 @stevhliu Can you please take a look :-). I could not really find anything on DeBERTa but since DeBERTa builds upon RoBERTa, should I add the materials for RoBERTa? ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
11-10-2022 05:50:51
11-10-2022 05:50:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20155). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20155). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20155). All of your documentation changes will be reflected on that endpoint.
transformers
20,154
closed
Less noisy console output
### Feature request I have just started using HF transformers and am struck by the amount of text it dumps into the console. Just following the steps in the course, a simple script that loads a model and trains it spews out this: ![image](https://user-images.githubusercontent.com/4443482/200996914-b944f497-459b-4fbd-b47f-0d6bf10651b3.png) ...my monitor isn't big enough to screenshot it all :) Note that I'm using PyCharm, and specifically the built in "Python Console". ### Motivation This is a problem because now a user has two options: * every single time they run their code, scan the wall of text to see if there is any new information that they haven't seen 300 times before * not re-read the wall of text every time, and potentially miss out on _useful_ information that they need to attend to. Also, it's just not very pretty, and pretty is nice. ### Your contribution Only suggestions/questions: * Do any HF developers use PyCharm's Python Console? Maybe it's worth testing on this, it's flawed, but quite popular. * You can check whether the environment is a TTY with `sys.stdout.isatty()`. tqdm simply doesn't work well when not in a terminal (beyond a simple indicator, as long as you don't print anything while the indicator is active). So a good solution is to simply print the results at the end for these environments. * I don't think writing cache files is something to notify the user about. Perhaps it's worth thinking in terms of 'user personas': to the first-time user, this is useful information. For every other run, once the user knows that HF writes checkpoints in a certain place, it's no longer information that needs to be logged and so goes from helpful to detrimental, since it makes important info harder to spot. Maybe the problem is just that HF it setting the log level to "INFO" when the default Python level is "WARNING" and all you need to do is pick up the correct log level from the user's environment and most of the junk will disappear. * I don't know why it's all red, this also adds to the difficulty in seeing real errors (and also adds to the ugliness). I hope this is useful and not just me complaining...
11-10-2022 04:26:36
11-10-2022 04:26:36
You can adjust the logging level to your preferred value with `transformers.utils.logging.set_verbosity(log_level)`. Also cc @LysandreJik <|||||>Thanks @sgugger, I just tried that. There's still plenty of noise from code in HF (datasets) like this: ```py logger.warning(f"Loading cached processed dataset at {cache_file_name}") ``` Also, `Trainer` will call `args.get_process_log_level()` and overwrite whatever I've set with `logging.set_verbosity()`. I think there might be something wrong in `get_process_log_level` (after a quick glance). The [docstring says](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L205-L208) that the default log level is `passive` and that this won't change anything, but [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L1610) in the code explicitly sets the log level to `INFO`. Should that not default to `logger.getEffectiveLevel()`?<|||||>You'll need to do the same thing for `datasets` (same API I believe 🤞 ). As for the `Trainer` it's very possible that there is a bug. Do you want to suggest a fix in a PR?<|||||>I've got a full schedule with study at the moment, sorry. So in summary this looks like 2.5 issues: * messages like "loaded from cache" should be log level INFO, not WARNING * the default log level should read from the user's environment, or at least use the same default as Python, which is 30/WARNING, not 20/INFO * And a nice to have would be that all HF packages shared a log-level setting, although if the defaults are right this is not a big deal.<|||||>I disagree with the first two, and even if I did, it's too late to change it without surprising the whole user community. The only issue I see is the bug in `Trainer` you reported :-)<|||||>Ah, interesting, I thought the first one was quite clear cut. From the [Python logging how-to](https://docs.python.org/3/howto/logging.html): * INFO: Confirmation that things are working as expected. * WARNING: An indication that something unexpected happened, or indicative of some problem in the near future (e.g. ‘disk space low’). The software is still working as expected. But I do agree that too much change for minor things is not great, so fair enough if you want to stick with things the way they are. But for me, this makes Huggingface harder to work with than it needs to be, swamping out my own logging. I'm quite surprised it's intentional!<|||||>Ah sorry I misread, I do agree with you on the first comment, and this one might be something we can change as a warning is indeed too strong for this message.<|||||>Just dug in the code base and didn't find any obvious `logger.warning` that tells the user about something loaded from cache. Could you tell me which one you saw (or was it from the Datasets library?)<|||||>Oh good! :) Yes I just checked and this is actually coming from the datasets package. <|||||>Ok, so you should open an issue there. Agreed with you that those should be info (and it's the reason you will see most of our examples set the log level of datasets to Error, to avoid getting those warnings).<|||||>@sgugger something else I've just noticed is that sometimes transformers will set the log level to info. I can't pin down exactly when, but I see that there's lots of code that calls `logging.set_verbosity_info()` at the top level of the module. Is that intentional? 
I don't understand the logic of a module globally changing the log level to INFO.<|||||>Only scripts do this (mostly conversion scripts of models from their original repos to Transformers), not the module itself.<|||||>Hmm, are these scripts ever called from the application code? There's definitely _something_ that sets the log level to INFO, and I think it's related to loading a model for the first time (which is why it's hard to replicate).<|||||>Here's an example, I have this code that references a model not in my cache. ```py print(f"Verbosity: {transformers.logging.get_verbosity()}") conf = transformers.AutoModel.from_pretrained("distilgpt2") print(f"Verbosity: {transformers.logging.get_verbosity()}") ``` Interestingly, it ran and had the same log levels immediately before and after, but a second later, when I queried the log level in the console, it had changed. ![image](https://user-images.githubusercontent.com/4443482/213947678-76399f2d-384b-47cf-bb52-fdee00959ca7.png) I had a breakpoint on `transformers.logging.set_verbosity` that wasn't triggered, not sure why. So do you have some post-download steps running a script that changes the log level? Perhaps a good idea would be to move all those `logging.set_verbosity_info()` calls inside the `if __name__ == "__main__":` guards.<|||||>Actually I'm going to re-open this, since there's still the bug of `TrainingArguments` defaulting to log level INFO so that just the act of creating a `Trainer` changes the log level. Please do let me know if I'm wasting my time reporting these issues, maybe there's bigger fish to fry and no interest in fiddling with logging.<|||||>I'm not sure I understand the bug here. Creating the `Trainer` with `TrainingArguments` at logel level INFO will change the log level yes. If you want another log level you should select it.<|||||>No, if I have log level set to WARNING (the default) and create a `Trainer`, this _changes_ the log level to INFO. This code: ```py print(f"Verbosity: {transformers.logging.get_verbosity()}") trainer = transformers.Trainer( model=model, args=TrainingArguments( output_dir=dg.get_root_dir("logs/hf"), evaluation_strategy="epoch", report_to=None, fp16=True, ), train_dataset=dataset["train"], eval_dataset=dataset["validation"], data_collator=DataCollatorWithPadding(tokenizer=tokenizer), ) print(f"Verbosity: {transformers.logging.get_verbosity()}") ``` Results in this: ![image](https://user-images.githubusercontent.com/4443482/214172791-f287a1d7-d14d-4740-9e2e-68b3159b1c21.png) <|||||>Yes, because you have left the logging value of the `TrainingArguments` to its default value of info. Just so I understand better, you would like the `TrainingArguments` to defaults to `None` and only change the logging level if explicitly set to some value? I can get behind that if you want to make a PR.<|||||>`TrainingArguments` defaults to `passive`, doesn't it? See [here](https://github.com/huggingface/transformers/issues/20154#issuecomment-1310772907)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
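A small sketch pulling together the logging knobs discussed in this thread; the chosen levels and the output directory are illustrative, not a maintainer recommendation.

```python
import datasets.utils.logging
import transformers
from transformers import TrainingArguments

transformers.logging.set_verbosity_warning()   # quiet transformers itself
datasets.utils.logging.set_verbosity_error()   # silence "loaded from cache"-style messages from datasets

# log_level defaults to "passive"; setting it explicitly keeps the Trainer from
# switching the transformers logger to INFO when it is created.
args = TrainingArguments(output_dir="out", log_level="warning")
```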
transformers
20,153
closed
How to fine-tune a pre-trained protein language model on the protein folding task ?
### Feature request @Rocketknight1, Thanks for releasing the notebooks [How to fine-tune a pre-trained protein model](https://github.com/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) and [How to generate protein folds](https://github.com/huggingface/notebooks/blob/main/examples/protein_folding.ipynb). They help me quickly apply the SOTA protein structure prediction model. However, I wonder what the best way is to train ESMFold from scratch. To be specific, I want to fine-tune the protein language model ESM-2 on a large-scale protein sequence database (e.g. UniRef90), so that I can get a new model for the downstream protein folding task. But I don't know whether I can use the Transformers [Trainer](https://huggingface.co/docs/transformers/training) to implement this. I hope for your suggestions. Thanks in advance! ### Motivation The [HuggingFace Trainer](https://huggingface.co/docs/transformers/training) provides very convenient functionality (e.g. training on many GPUs) for fine-tuning a pre-trained model. Providing a notebook or example for fine-tuning the protein language model ESM-2 for the protein folding task may be very helpful for engineers who work on protein structure prediction. ### Your contribution I can work with you to contribute the notebook or example of fine-tuning the protein language model ESM-2 for the protein folding task if necessary.
11-10-2022 03:32:38
11-10-2022 03:32:38
Hi @pengshuang - right now our port of ESMFold is only really usable for inference, and is lacking some of the training code. This happened because we were rushing to launch simultaneously with the release by FAIR, and so we had to launch with a couple of bits missing! We're working with the team at FAIR to add this, though, and I'll let you know when we have anything to report.<|||||>@Rocketknight1 Thanks for your quick reply. Looking forward to your future work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Rocketknight1 Following up on @pengshuang request. Was hoping to finetune ESM 2 for Antibodies. How can we plug in the finetuned ESM-2 into ESMFold structure prediction
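A very rough sketch, in the spirit of the linked protein_language_modeling notebook, of masked-language-model fine-tuning of ESM-2 with the Trainer; the checkpoint is one of the public ESM-2 sizes, the two sequences are placeholders rather than UniRef90, and this does not cover the folding head, whose training code is the missing piece discussed above.

```python
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "facebook/esm2_t12_35M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

sequences = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MAHHHHHHVGTGAHAG"]  # placeholder proteins
dataset = Dataset.from_dict({"sequence": sequences})
dataset = dataset.map(lambda ex: tokenizer(ex["sequence"], truncation=True), remove_columns=["sequence"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="esm2-mlm", per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15),
)
trainer.train()
```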
transformers
20,152
closed
Fix typo (line 221) in portuguese translation. Documentation @sugger …
# What does this PR do? Fix typo (line 221). Fixes # (issue) Related to issue #19443 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? No ## Who can review? @sgugger @ydshieh Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
11-10-2022 01:21:05
11-10-2022 01:21:05
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20152). All of your documentation changes will be reflected on that endpoint.<|||||>@kant Thank you for the PR. However, as @sgugger mentioned before: > there is an issue with your CircleCI permissions, the tests won't run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? You can also try to push an empty commit first to see if it can trigger the CI. You can do it with ```bash git commit --allow-empty -m "push an empty commit to trigger CI" ``` Otherwise, could you try refreshing your CircleCI permissions as mentioned above? Thanks!<|||||>Done the steps via this [resource](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)<|||||>(Just waiting for the `check-quality` test to pass; not sure why it is not triggering.)<|||||>@kant We still need you to push an empty commit on this branch so that the tests are re-triggered with the appropriate permissions.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,151
closed
Add video classification pipeline
# What does this PR do? Adds a video classification pipeline using VideoMAE. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
11-10-2022 01:02:42
11-10-2022 01:02:42
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.<|||||>Holding off on this PR as we discuss https://github.com/huggingface/datasets/issues/5225 - I think I will update the PR here to use `av` instead of `decord` because of it. Feel free to join the conversation there. --- edit: wrong issue link<|||||>> Holding off on this PR as we discuss #5225 - I think I will update the PR here to use `av` instead of `decord` because of it. Feel free to join the conversation there. Your link is wrong I think, you meant https://github.com/huggingface/datasets/issues/5225 (Tip: GH will shorten the URL on its own so you don't have to care, just copy&paste raw URLs :) ) Maybe a core maintainer could jump in, but I feel like "blocking" PRs like this is not desirable, we should merge whatever is ready first, and hardmonize later. if this PRs code isolate the dependency enough, it should be a breeze to update. And if it's not it could be an argument in favor/defavor of some library. Real code always trumps whatever feelings about library X.<|||||>I agree the PR should not be held off until a feature is merged in Datasets. We can adapt to it later on when Datasets has the features.<|||||>Ok thanks for the advice @Narsil and @sgugger - in that case I'll just resolve all PR comments here and finish this out this week.<|||||>@Narsil is it ok to leave decord for now? I think its fine for this use case, and is just constrained to this pipeline. Later, we'll probably want to add some `video_utils.py` file, just as we do with image utils, where we can keep some more permanent video utilities. Based on the convo in the datasets repo, I think we'll end up using PyAV. 
To try this feature: ```python from transformers import pipeline pipe = pipeline('video-classification') pipe('https://huggingface.co/datasets/nateraw/video-demo/resolve/main/archery.mp4') # Result """ [{'score': 0.6418354511260986, 'label': 'archery'}, {'score': 0.0026529659517109394, 'label': 'riding unicycle'}, {'score': 0.00258301617577672, 'label': 'golf driving'}, {'score': 0.002545431721955538, 'label': 'throwing ball'}, {'score': 0.0023797585163265467, 'label': 'tobogganing'}] """ ```
transformers
20,150
closed
Typo fixed (line 219) in German translation. Documentation: @sgugger @ydshieh
# What does this PR do? Typo fixed (line 219) in German translation Fixes # (issue) No fix on previous issue, but related to this #19443 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
11-10-2022 00:16:08
11-10-2022 00:16:08
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20150). All of your documentation changes will be reflected on that endpoint.<|||||>Like in your other PRs, the tests are not run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>Done the steps. But failed from this side.<|||||>You might need to push an empty commit to retrigger the tests.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,149
closed
Fix tapas scatter
# What does this PR do? Changes the usage of scatter from torch_scatter to PyTorch's scatter thus removing the dependency on third party library * [x] remove torch_scatter dependency * [x] update `_segment_reduce` function in order to work with PyTorch's scatter * [x] update test case `test_reduce_sum_vectorized` Fixes # (issue) https://github.com/huggingface/transformers/issues/20101 ## Who can review? @NielsRogge
11-09-2022 21:56:55
11-09-2022 21:56:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20149). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @sgugger - Sure, I will remove it as a part of removing all "scatter" mentions requested by @NielsRogge.<|||||>Hi @NielsRogge - could you tell me what is the difference between `require_scatter` and `require_torch_scatter` in `transformers/src/transformers/testing_utils.py` since they are calling the same thing.<|||||>Good question, you can remove both ;)<|||||>@NielsRogge - I have removed "scatter" mentions from the code base. It will be good to double check the changes :). I have not changed `.circleci/create_circleci_config.py` since removing it is a part of the #20168 PR.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20149). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20149). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for all the work! Will rebase my PR on yours to finish the job :-)
transformers
20,148
closed
Add support for images embeddings as one of the **input parameters
### Feature request [Owl-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit) **Current flow** for inference for OWL ViT is to accept image and text, run it through a processor and then give that as an input to the Model for inference. Result: Getting detections with some confidences. ```python model = OwlViTForObjectDetection.from_pretrained(...) processor = OwlViTProcessor.from_pretrained(...) inputs = processor(text=texts, images=image, return_tensors="pt") outputs = model(**inputs) ``` **Additional flow** It would be useful to have additional capability to support images embeddings as an input (either for a processor or the model itself), so the additional flow would look like this: take 100 (e.g.) images -> run to calculate images embeddings, save (a dataframe, or files) -> use model(embeddings_dir, text_terms) to infer detections. ```python model = OwlViTForObjectDetection.from_pretrained(...) processor = OwlViTProcessor.from_pretrained(...) # using images_embeddings instead of images inputs = processor(text=texts, images_embeddings=image_embedding, return_tensors="pt") outputs = model(**inputs) ``` ### Motivation Creating a cache, precomputed set of images embeddings for faster inference / search by text. It is possible, that this functionality already exists, but because of the way owl-vit is structured, it might be tricky to perform: currently **inputs contain embeddings for bboxes and text. ### Your contribution Would be difficult, as I'm not familiar with the infra.
11-09-2022 18:58:28
11-09-2022 18:58:28
cc @alaradirik and @NielsRogge <|||||>Hi @ramanova, thanks for the suggestion! This is doable but might be a bit tricky. `OwlViTProcessor` preprocesses images (resizing, cropping, etc.) and doesn't compute embeddings so you would need compute the embeddings using the base `OwlViTModel`. @NielsRogge do you know if there are any other models that provide this kind of functionality?<|||||>Several NLP models, like BERT, provides the `inputs_embeds` argument as seen [here](https://github.com/huggingface/transformers/blob/d066c3731bed1755f93ea64f0f00981b805532de/src/transformers/models/bert/modeling_bert.py#L919), which allows you to provide embeddings yourself rather than `input_ids`. So the use case here is similar, I assume.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
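To illustrate the caching idea discussed above, here is a minimal sketch of precomputing image features with the base `OwlViTModel`. Note this yields the pooled, CLIP-style embedding per image, not the per-patch features the detection head consumes, which is why it is not a drop-in replacement for `OwlViTForObjectDetection` inputs; the checkpoint and image URL are just examples.

```python
import requests
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTModel

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTModel.from_pretrained("google/owlvit-base-patch32")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    # One pooled embedding per image; these could be cached to disk and reused across text queries.
    image_embeds = model.get_image_features(**inputs)
print(image_embeds.shape)  # (batch_size, projection_dim)
```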
transformers
20,147
closed
Fix `ImageSegmentationPipelineTests`
# What does this PR do? - If the concept is approved, I can apply the same changes to other places. - On `CircleCI`, we get `0.9469921875 not greater than or equal to 0.99`. Maybe I should lower the threshold..? ------- Fix `ImageSegmentationPipelineTests.test_small_model_pt`. Comparing hashes of the output/expected masks is too flaky. This PR: - gets the current output masks and uploads them to the Hub - uses the uploaded masks as the new expected values - only compares whether the output/expected masks match at 99% or above It should be safe to use the current output masks as the new expected masks, as the current output and expected masks seem to match closely: ```python [ { "label": "LABEL_88", "mask": {"hash": "4e2da4b9a4", "shape": (480, 640), "white_pixels": 11}, "score": None, }, { "label": "LABEL_101", "mask": {"hash": "9ec7310913", "shape": (480, 640), "white_pixels": 8946}, "score": None, }, { "label": "LABEL_215", "mask": {"hash": "21dcfdc10d", "shape": (480, 640), "white_pixels": 298243}, "score": None, }, ], ``` current expected values (before this PR): ```python [ { "label": "LABEL_88", "mask": {"hash": "7f0bf661a4", "shape": (480, 640), "white_pixels": 3}, "score": None, }, { "label": "LABEL_101", "mask": {"hash": "10ab738dc9", "shape": (480, 640), "white_pixels": 8948}, "score": None, }, { "label": "LABEL_215", "mask": {"hash": "b431e0946c", "shape": (480, 640), "white_pixels": 298249}, "score": None, }, ], ```
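For reference, the "match at 99% or above" comparison can be expressed as a simple pixel-agreement check between two masks; this is only a sketch, not the exact helper used in the test.

```python
import numpy as np
from PIL import Image

def mask_agreement(mask_a: Image.Image, mask_b: Image.Image) -> float:
    """Fraction of pixels on which two same-sized masks agree."""
    a, b = np.array(mask_a), np.array(mask_b)
    return float((a == b).mean())

# e.g. assert mask_agreement(output_mask, expected_mask) >= 0.99
```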
11-09-2022 17:32:17
11-09-2022 17:32:17
_The documentation is not available anymore as the PR was closed or merged._<|||||>Mark as draft in order to debug this super flaky test!<|||||>@sgugger @Narsil @alaradirik and others: ready for your review<|||||>@Narsil - no zip now - I decided to simply use (for now) `.../blob/...` instead of `.../resolve/...` to link to the pages (where we can visualize the images, although not the full size). ``` https://huggingface.co/datasets/hf-internal-testing/mask-for-image-segmentation-tests/blob/main/mask_0.png ``` I don't like much the usage of `datasets-server`: - too long - link strings not corresponding to file names I am going to merge if you are OK 🙏
transformers
20,146
closed
Make DummyObject more robust
# What does this PR do? Use `__getattribute__` instead of `__getattr__` in `DummyObject` to track the attribute access. Compared to `__getattr__` (invoked only if the attribute is missing), `__getattribute__` is invoked for every access, hence more robust. Fixes #20127 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
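A small illustration of the difference the PR relies on (plain Python, the class names below are made up and unrelated to any transformers class):

```python
class WithGetattr:
    existing = "defined on the class"

    def __getattr__(self, name):  # only invoked when normal lookup fails
        print(f"__getattr__ fired for {name!r}")

class WithGetattribute:
    existing = "defined on the class"

    def __getattribute__(self, name):  # invoked on every attribute access
        print(f"__getattribute__ fired for {name!r}")
        return super().__getattribute__(name)

WithGetattr().existing       # no hook: the attribute exists, lookup succeeds silently
WithGetattr().missing        # hook fires, because normal lookup failed
WithGetattribute().existing  # hook fires even though the attribute exists
```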
11-09-2022 17:23:37
11-09-2022 17:23:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20146). All of your documentation changes will be reflected on that endpoint.
transformers
20,145
closed
Set task and language tokens for whisper model
### Feature request Hi @ArthurZucker, thanks for the great work on whisper models. I would like to know if it's possible to also have a `set_prefix_tokens` function in `WhisperForConditionalGeneration`, which receives the language/task name and changes the language/task token in `model.config.forced_decoder_ids`, in order to run the ASR inference on languages other than EN. As far as I know it has by default `['<|en|>', '<|transcribe|>', '<|notimestamps|>']`, so I have to get the language token ID first and set it manually before running generate ### Motivation An easy API to set language/task would be useful. ### Your contribution Willing to do if this function doesn't exist
11-09-2022 16:51:11
11-09-2022 16:51:11
Hey! That's a good idea and also on my TODO! It is debatable whether we should have this in the `WhisperForConditionalGeneration` or add a new class for multilingual Decoding which would do this automatically. We could also add a `WhisperForSequenceClassification` class which would just detect the language. CC @sgugger and @patrickvonplaten as this is really a design question. <|||||>I think this is something we would want to solve with generate use-case specific configurations (cc @gante) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
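Until a convenience API lands, the manual route described in the issue looks roughly like the sketch below. It assumes a multilingual checkpoint whose tokenizer contains the language tokens; `<|de|>` is just an example target language, and the exact checkpoint name is illustrative.

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Position 0 is the decoder start token; positions 1-3 force language, task and timestamp behaviour.
lang_id = processor.tokenizer.convert_tokens_to_ids("<|de|>")
task_id = processor.tokenizer.convert_tokens_to_ids("<|transcribe|>")
notimestamps_id = processor.tokenizer.convert_tokens_to_ids("<|notimestamps|>")
model.config.forced_decoder_ids = [[1, lang_id], [2, task_id], [3, notimestamps_id]]
```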
transformers
20,144
closed
[OWL-ViT] Make model consistent with CLIP
# What does this PR do? This PR improves OWL-ViT by removing 3 arguments, to make the model more consistent with CLIP. All integration tests pass with these fixes.
11-09-2022 15:15:08
11-09-2022 15:15:08
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20144). All of your documentation changes will be reflected on that endpoint.<|||||>The arguments `return_base_image_embeds`, `use_hidden_state` and `return_projected` are not useful for users, and the number of people that could have used them until now should be marginal (I don't think anyone is using them at all). I really hope we proceed with this PR, I would not add deprecation here as it just cleans up the code, and there are no use cases with those arguments.
transformers
20,143
closed
Adding support for LayoutLMvX variants for `object-detection`.
# What does this PR do? Adding support for `layoutlm` to `object-detection`. LayoutLMv{2,3} can be used for `object-detection`. However, the classes are `ForTokenClassification`, which means not every class can support a vision + OCR type of inference (the model needs a `bbox` object even if we split out the OCR). The current implementation changes `object-detection` to `multimodal`, since the layoutlm variants now require a `tokenizer`. (This does not affect existing working pipelines.) Then it uses reflection at runtime to see if the model is using a `tokenizer`. This is not a great way to go about it, but it was the "simpler" change I could think of. As long as we don't have support for other model architectures, I'm hesitant to make "cleaner" modifications, since I don't know if other architectures will support the same invariants or not. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
11-09-2022 14:46:01
11-09-2022 14:46:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,142
closed
[DOCTEST] Fix the documentation of RoCBert
# What does this PR do? Fixes the documentation test.
11-09-2022 14:21:25
11-09-2022 14:21:25
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20142). All of your documentation changes will be reflected on that endpoint.<|||||>Sorry for the late fix, tests are passing locally.
transformers
20,141
closed
Add `RoCBertTokenizer` to `TOKENIZER_MAPPING_NAMES`
# What does this PR do? Add `RoCBertTokenizer` to `TOKENIZER_MAPPING_NAMES`.
11-09-2022 12:12:25
11-09-2022 12:12:25
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20141). All of your documentation changes will be reflected on that endpoint.
transformers
20,140
closed
[WIP] add the tokenizer for SMALL100 model
# What does this PR do? We propose SMaLL-100, which is a compact and fast massively multilingual machine translation model covering more than 10K language pairs, that achieves competitive results with M2M-100 while being much smaller and faster. It is introduced in [this paper](https://arxiv.org/abs/2210.11621) (accepted to EMNLP 2022), and initially released in [this repository](https://github.com/alirezamshi/small100). The model architecture and config are the same as the [M2M-100](https://huggingface.co/facebook/m2m100_418M/tree/main) implementation, but the tokenizer is modified to adjust the language codes. Compared to M2M-100, the target language code is added to the beginning of the source sequence (instead of the source language code), and the target language code is removed from the target side. I've added the usage instructions in the [model hub](https://huggingface.co/alirezamsh/small100). Adding this model to transformers will help the NMT community, especially for low-resource languages. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten @patil-suraj
11-09-2022 12:04:37
11-09-2022 12:04:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20140). All of your documentation changes will be reflected on that endpoint.<|||||>Since the only difference is in the tokenization code, maybe it would be more beneficial to add the custom code directly in the model repo (see [documentation here](https://huggingface.co/docs/transformers/custom_models#sharing-custom-models) ) and not add a new model to the library?<|||||>@sgugger Thanks for the comment. I currently put the model and tokenization code [here](https://huggingface.co/alirezamsh/small100) in model hub. Is it the standard way? as users have to download the code too. Another alternative is to add new options to [m2m-100 tokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/m2m_100/tokenization_m2m_100.py)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
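For reference, if the Hub repo registers the custom tokenizer class via an `auto_map` entry in its tokenizer config (an assumption — the repo linked above may currently require downloading the tokenization file manually), loading it without adding the model to the library would look like:

```python
from transformers import AutoTokenizer

# trust_remote_code opts in to executing the tokenization code stored in the repo.
tokenizer = AutoTokenizer.from_pretrained("alirezamsh/small100", trust_remote_code=True)
```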
transformers
20,139
closed
Update SwinForMaskedImageModeling doctest values
# What does this PR do? Doc test was failing because checkpoints were changed in #20034 which have different config values, resulting in different image sizes after preprocessing. * Previous config: https://huggingface.co/microsoft/swin-tiny-patch4-window7-224/blob/main/preprocessor_config.json * New config: https://huggingface.co/microsoft/swin-base-simmim-window6-192/blob/main/preprocessor_config.json Updates the test to reflect the new checkpoints. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
11-09-2022 12:00:31
11-09-2022 12:00:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,138
closed
Problems with layoutlm language model
### Feature request Hi, I was recently reading LayoutLM and its variants and I figured out that they do exist in Hugging Face, but there are several problems with it, or I am not sure I understand it just by reading the documentation. Q1) [Layoutlm](https://arxiv.org/abs/1912.13318) is a multimodal machine learning transformer, so why is it listed in the text transformers category on HuggingFace? Q2) Even if it is multimodal, LayoutLM does not use the image anywhere in the examples of [Layoutlm](https://huggingface.co/docs/transformers/main/en/model_doc/layoutlm)? Q3) For [Layoutlmv2](https://huggingface.co/docs/transformers/main/en/model_doc/layoutlmv2) there is no TensorFlow class like TFLayoutLM, as is available for version 1? Q4) There is no MLM head class, as is available for version 1, so if I want to pretrain this model from scratch, how do I do that? Q5) Same as Q4, there is no MLM head class, so if I have my own tokenizer and I want to pretrain LayoutLM from scratch and then simply change the transformer with a one-line code change, that's not possible because they have different heads? I am a bit new to the HF interface, so forgive me if I asked something super basic. I don't know if I have something wrong in my understanding of LayoutLM in the first place or whether these are valid questions, but I would be very happy if anyone can shed some light on this!! Once again, thanks for taking the time to read 😊😊, have a good day!! ### Motivation I just find it very difficult to understand the implementation of this specific model from the transformers library. ### Your contribution I can try to look into it, but first I need to know if the problem is really a problem or just my wrong understanding of the library.
11-09-2022 11:53:38
11-09-2022 11:53:38
Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep issues for bugs and feature requests only :-) cc @NielsRogge <|||||>Please link your question on the forum, I'll answer there!<|||||>@NielsRogge can you send me the link to access forums, I tried to look at the discord channel of hugging face but I am not exactly sure where to ask questions. Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,137
closed
Update VisionEncoderDecoder to use an image processor
# What does this PR do? Loading the TrOCR processor failed because, when checking the type of the feature extractor loaded, it was an image processor rather than a feature extractor. This PR: * Replaces the feature extractor with an image processor in `TrOCRProcessor` * Adds `AutoImageProcessor` to `AUTO_TO_BASE_CLASS_MAPPING` for the `ProcessorMixin` checks * Adds backwards compatibility in case `feature_extractor` is passed in as a kwarg when creating the processor. * Makes equivalent changes in the `VisionEncoderDecoder` model ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
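For illustration, the post-PR construction path would look roughly like the sketch below; the checkpoint name is just the public TrOCR example, and the exact keyword name is assumed to be `image_processor` after this change, with `feature_extractor=` still accepted for backwards compatibility.

```python
from transformers import AutoImageProcessor, AutoTokenizer, TrOCRProcessor

image_processor = AutoImageProcessor.from_pretrained("microsoft/trocr-base-handwritten")
tokenizer = AutoTokenizer.from_pretrained("microsoft/trocr-base-handwritten")

# Preferred path after this PR: pass an image processor instead of a feature extractor.
processor = TrOCRProcessor(image_processor=image_processor, tokenizer=tokenizer)
```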
11-09-2022 11:46:39
11-09-2022 11:46:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @amyeroberts Nothing extra (other than what Sylvain mentioned) from my side. One thing I think would be great if you can provide a link to the line for > when checking the type of the feature extractor loaded Like: See this line: https://github.com/huggingface/transformers/blob/c4cad8e3018e26f697f4ab0c5926e0c93aa0315b/src/transformers/processing_utils.py#L84 (For me, this way is easier to know what issue we have, and to see if the fix is good for the issue) (Probably not necessary for Sylvain, as they knows everything in mind)<|||||>@ydshieh That's a good point - thanks for the feedback! I'll make sure to add a link next time.
transformers
20,136
closed
Adds image-guided object detection support to OWL-ViT
# What does this PR do? Adds image-guided object detection method to `OwlViTForObjectDetection` class. This enables users to use a query image to search for similar objects in the input image. Co-Authored-By: Dhruv Karan [[email protected]](mailto:[email protected]) Fixes #18748 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/18748 - [X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X ] Did you write any new necessary tests?
11-09-2022 11:18:55
11-09-2022 11:18:55
@NielsRogge @sgugger sorry for the double PR, the upstream of the branch used in the other [PR](https://github.com/huggingface/transformers/pull/18891) points to huggingface/transformers:img_guided_obj_det instead of main and I couldn't change the upstream. The reviews in the other PR are addressed but there are two failing tests I couldn't debug: ``` FAILED tests/pipelines/test_pipelines_zero_shot_object_detection.py::ZeroShotObjectDetectionPipelineTests::test_pt_OwlViTConfig_OwlViTForObjectDetection_CLIPTokenizerFast_OwlViTFeatureExtractor - IndexError: tuple index out of range FAILED tests/pipelines/test_pipelines_zero_shot_object_detection.py::ZeroShotObjectDetectionPipelineTests::test_pt_OwlViTConfig_OwlViTForObjectDetection_CLIPTokenizer_OwlViTFeatureExtractor - IndexError: tuple index out of range ```<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.<|||||>Could you make sure to add @unography as co-author? I'd prefer to merge the original PR, but if it's not possible, I want to make sure the authorship is properly attributed.<|||||>Hi there! Maybe this is not the place to mention this, but just wanted to mention that the original implementation uses stochastic depth (https://github.com/google-research/scenic/blob/main/scenic/projects/owl_vit/clip/layers.py#L235). They set it to 0.2 and 0.1 for the vision and text encoders (https://github.com/google-research/scenic/blob/main/scenic/projects/owl_vit/configs/clip_b16.py#L132). I guess that's not really important if you guys don't plan to implement the training losses for detection, but if you do, maybe it's something to keep in mind :)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger @NielsRogge could you do a final review when you're available? All tests are passing and I think all issues are addressed.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). 
All of your documentation changes will be reflected on that endpoint.<|||||>It seems that running the example for image-guided od is still buggy: ``` import requests from PIL import Image import torch from transformers import OwlViTProcessor, OwlViTForObjectDetection import numpy as np import cv2 processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) query_url = "http://images.cocodataset.org/val2017/000000001675.jpg" query_image = Image.open(requests.get(query_url, stream=True).raw) inputs = processor(images=image, query_images=query_image, return_tensors="pt") with torch.no_grad(): outputs = model.image_guided_detection(**inputs) # Target image sizes (height, width) to rescale box predictions [batch_size, 2] target_sizes = torch.Tensor([image.size[::-1]]) # Convert outputs (bounding boxes and class logits) to COCO API results = processor.post_process_image_guided_detection( outputs=outputs, threshold=0.6, nms_threshold=0.3, target_sizes=target_sizes ) i = 0 # Retrieve predictions for the first image plot_image = np.array(image) boxes, scores = results[i]["boxes"], results[i]["scores"] score_threshold = 0.2 for box, score in zip(boxes, scores): if score < score_threshold: continue box = [int(i) for i in box.tolist()] plot_image = cv2.rectangle(plot_image, (box[0],box[1]), (box[0]+box[2], box[1]+box[3]), (0, 255, 0), 2) cv2.imshow("", plot_image) q = cv2.waitKey(0) ``` Upon plotting the boxes, it is very off. This target query pair should work as it works in the scenic repo. Edit: tried both patch-16 and 32 model, same results (bad box predictions on target image)<|||||>> Upon plotting the boxes, it is very off. This target query pair should work as it works in the scenic repo. What's your Pillow version? We've seen that using Pillow==7.1.2 is essential for getting the expected results (and cc @alaradirik we should make sure the model works on any pillow version)<|||||>@NielsRogge , ran `pip install Pillow==7.1.2` and got the same outputs in this example. output of models are as follows: ``` boxes: tensor([[ 7.6539, -0.9177, 646.1529, 474.4720]]) scores: tensor([1.0000]) ``` @alaradirik did you manage to run the example and get an appropriate prediction? Edit: You can see that y1 is 0 in this case which is already wrong if you look at the image, image shape is (480,640) so in this case the bbox is just covering the entire image.<|||||>Hey @timothylimyl, thanks for bringing this up. I was able to replicate the issue on my local and confirmed that it's not OpenCV or Pillow related and stems from the post-processing method. I think it's due to changed default behaviour between PyTorch versions, I'll open a fix PR once I confirm this. CC @NielsRogge <|||||>@timothylimyl sorry for the mixup, I thought this was a Pillow versioning issue we previously encountered and didn't realize the query image you are using is different . The post-process method returns coordinates in (x0, y0, x1, y1) format, the correct command to print the boxes is: `plot_image = cv2.rectangle(plot_image, box[:2], box[2:], (0, 255, 0), 2)` Note that this still returns a bounding box that covers the entire image. 
This is because OWL-ViT is a text-conditioned model that uses CLIP as its backbone, the image-guided object detection method repurposes the trained text-conditioned model with the assumption that the query image contains a single object. In this case, you are just getting results for an image that could be described with more general terms ("a photo of of a cat sitting on top of a ...."). Here are the results for a cropped version of the query image you are using: ![cropped](https://user-images.githubusercontent.com/8944735/205046498-63bf24d5-e7e0-4b31-8872-09e9300ce3f0.jpeg) <img width="638" alt="new_results" src="https://user-images.githubusercontent.com/8944735/205046902-b53e30b5-8a8f-4bfe-abd6-155624d8e734.png"> <|||||>hey @alaradirik in the other old PR I've uploaded an image and query (+ results) used in the official one. Maybe it's worth trying them as well since you can (subjectively) evaluate the result bboxes using the original results. I hope it helps :) <|||||>Hi @FrancescoSaverioZuppichini, I'm not sure what you mean by subjectively evaluating the bounding boxes or which PR you are referring to? <|||||>Hi @alaradirik, can you share your code that was used to generate the example? I tried cropping and basically I still just received one big bounding box: ``` import requests from PIL import Image import torch from transformers import OwlViTProcessor, OwlViTForObjectDetection import numpy as np import cv2 processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) query_url = "http://images.cocodataset.org/val2017/000000001675.jpg" query_image = Image.open(requests.get(query_url, stream=True).raw) query_image = np.array(query_image)[:280,:] query_image = Image.fromarray(query_image) inputs = processor(images=image, query_images=query_image, return_tensors="pt") with torch.no_grad(): outputs = model.image_guided_detection(**inputs) # Target image sizes (height, width) to rescale box predictions [batch_size, 2] target_sizes = torch.Tensor([image.size[::-1]]) # Convert outputs (bounding boxes and class logits) to COCO API results = processor.post_process_image_guided_detection( outputs=outputs, threshold=0.6, nms_threshold=0.3, target_sizes=target_sizes ) i = 0 # Retrieve predictions for the first image plot_image = np.array(image) boxes, scores = results[i]["boxes"], results[i]["scores"] score_threshold = 0.2 for box, score in zip(boxes, scores): if score < score_threshold: continue box = [int(i) for i in box.tolist()] plot_image = cv2.rectangle(plot_image, box[:2], box[2:], (0, 255, 0), 2) cv2.imshow("", plot_image) q = cv2.waitKey(0) ```<|||||>also, I was confused by the comment `COCO API` as I believe that coco bbox are in the format `x,y,w,h` while PASCAL VOC XML is `x1,y1,x2,y2` which is what we are expecting here. <|||||>@timothylimyl, you are right about the COCO API comment, we will update the docs shortly to reflect the correct returned data format. Here is the code I used and the resulting image but keep in mind that different crops can lead to different results and both text-guided and image-guided object detection requires experimentation. There is no need for the `score_threshold` variable, you can directly use the threshold argument of the post-processing method to filter out low probability bounding boxes. 
``` import requests import cv2 import torch import numpy as np from PIL import Image from transformers import OwlViTProcessor, OwlViTForObjectDetection processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) query_url = "http://images.cocodataset.org/val2017/000000001675.jpg" query_image = Image.open(requests.get(query_url, stream=True).raw) query_image =np.array(query_image)[:340] query_image = Image.fromarray(query_image) inputs = processor(images=image, query_images=query_image, return_tensors="pt") with torch.no_grad(): outputs = model.image_guided_detection(**inputs) # Target image sizes (height, width) to rescale box predictions [batch_size, 2] target_sizes = torch.Tensor([image.size[::-1]]) # Convert outputs (bounding boxes and class logits) to COCO API results = processor.post_process_image_guided_detection( outputs=outputs, threshold=0.6, nms_threshold=0.3, target_sizes=target_sizes ) img = cv2.cvtColor(np.array(image), cv2.COLOR_BGR2RGB) boxes, scores = results[0]["boxes"], results[0]["scores"] for box, score in zip(boxes, scores): box = [int(i) for i in box.tolist()] img = cv2.rectangle(img, box[:2], box[2:], (255, 0, 0), 5) cv2.imshow("", img) q = cv2.waitKey(0) ``` ![result](https://user-images.githubusercontent.com/8944735/205241604-16f45c14-1e25-4b70-8272-39e3055e3e33.jpeg) <|||||>Oh wow, that is very unexpected. Seems like the model is not very well trained/robust. The difference between your crop and mine is visually minimal yet the result differs by so much: ![1](https://user-images.githubusercontent.com/49274721/205543725-e5abb10f-f435-4a9e-8c4e-8ab4c2e78221.jpg) [Does not work] versus ![2](https://user-images.githubusercontent.com/49274721/205543747-5bf8fc08-a0ef-4bf1-86b0-0bae93b80377.jpg) [Works] If you crop slightly further up to `:360` then there will be no bounding boxes again (only the one covering the whole image). ![2](https://user-images.githubusercontent.com/49274721/205544357-e88d3c4e-9c27-4273-97c5-55f6dd2e7ff3.jpg) [Does not work!!!] Do you reckon there could be something buggy with the code or is the model fundamentally not robust and require pretty exact crops for matching? It does not make much sense to me that crops have to be so exact as the feature embedding matching won't be that poor. <|||||>@alaradirik to the "original" one https://github.com/huggingface/transformers/pull/18891<|||||>Any updates?<|||||>Hi @timothylimyl, feel free to open an issue with a reproducable code sample so we can discuss it there<|||||>Hi @NielsRogge @timothylimyl @alaradirik @sgugger I have found the issue that causes image conditioning to be so sensitive. There was a small bug in the query selection, please see my PR: https://github.com/huggingface/transformers/pull/23157 Best, Orr
transformers
20,135
closed
Update tokenizer_summary.mdx
# What does this PR do? Hi, thanks for this document. But I think the headings here are a little bit misunderstanding. I changed them to: ``` - Introduction - Subword tokenization - Byte-Pair Encoding (BPE) - Byte-level BPE - WordPiece - Unigram - SentencePiece ``` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? cc @LysandreJik
11-09-2022 11:14:01
11-09-2022 11:14:01
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20135). All of your documentation changes will be reflected on that endpoint.
transformers
20,134
closed
Update `CLIPSegModelTester`
# What does this PR do? To align with other CLIP-like model testers. See #20044.
11-09-2022 10:07:31
11-09-2022 10:07:31
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20134). All of your documentation changes will be reflected on that endpoint.
transformers
20,133
open
Compact Transformer
### Model description # Escaping the Big Data Paradigm with Compact Transformers Abstract : > With the rise of Transformers as the standard for language processing, and their advancements in computer vision, there has been a corresponding growth in parameter size and amounts of training data. Many have come to believe that because of this, transformers are not suitable for small sets of data. This trend leads to concerns such as: limited availability of data in certain scientific domains and the exclusion of those with limited resource from research in the field. In this paper, we aim to present an approach for small-scale learning by introducing Compact Transformers. We show for the first time that with the right size, convolutional tokenization, transformers can avoid overfitting and outperform state-of-the-art CNNs on small datasets. Our models are flexible in terms of model size, and can have as little as 0.28M parameters while achieving competitive results. Our best model can reach 98% accuracy when training from scratch on CIFAR-10 with only 3.7M parameters, which is a significant improvement in data-efficiency over previous Transformer based models being over 10x smaller than other transformers and is 15% the size of ResNet50 while achieving similar performance. CCT also outperforms many modern CNN based approaches, and even some recent NAS-based approaches. Additionally, we obtain a new SOTA result on Flowers-102 with 99.76% top-1 accuracy, and improve upon the existing baseline on ImageNet (82.71% accuracy with 29% as many parameters as ViT), as well as NLP tasks. Our simple and compact design for transformers makes them more feasible to study for those with limited computing resources and/or dealing with small datasets, while extending existing research efforts in data efficient transformers. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Paper : https://arxiv.org/pdf/2104.05704.pdf Github repository : https://github.com/SHI-Labs/Compact-Transformers
11-09-2022 09:06:06
11-09-2022 09:06:06
Are you willing to collaborate to make this available at HF transformers? @astariul . If so, please connect with me <|||||>Hi @astariul and @navinelahi, are there any updates on this issue? May I start working on this?
transformers
20,132
closed
maskformer sample code error
### System Info - `transformers` version: 4.24.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.12 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.0 (True) ### Who can help? @sgugger @patil-suraj Try running the sample code in maskformer [https://huggingface.co/docs/transformers/v4.24.0/en/model_doc/maskformer](url) ```python output = feature_extractor.post_process_instance_segmentation(outputs) ``` has a NoneType error. ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade") inputs = feature_extractor(images=image, return_tensors="pt") model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade") outputs = model(**inputs) # model predicts class_queries_logits of shape `(batch_size, num_queries)` # and masks_queries_logits of shape `(batch_size, num_queries, height, width)` class_queries_logits = outputs.class_queries_logits masks_queries_logits = outputs.masks_queries_logits # you can pass them to feature_extractor for postprocessing output = feature_extractor.post_process_semantic_segmentation(outputs) output = feature_extractor.post_process_instance_segmentation(outputs) output = feature_extractor.post_process_panoptic_segmentation(outputs) ``` Error Message: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [5], in <cell line: 1>() ----> 1 output = feature_extractor.post_process_instance_segmentation(outputs) File ~\anaconda3\lib\site-packages\transformers\models\maskformer\feature_extraction_maskformer.py:794, in MaskFormerFeatureExtractor.post_process_instance_segmentation(self, outputs, threshold, mask_threshold, overlap_mask_area_threshold, target_sizes, return_coco_annotation) 792 # Get segmentation map and segment information of batch item 793 target_size = target_sizes[i] if target_sizes is not None else None --> 794 segmentation, segments = compute_segments( 795 mask_probs_item, 796 pred_scores_item, 797 pred_labels_item, 798 mask_threshold, 799 overlap_mask_area_threshold, 800 target_size, 801 ) 803 # Return segmentation map in run-length encoding (RLE) format 804 if return_coco_annotation: File ~\anaconda3\lib\site-packages\transformers\models\maskformer\feature_extraction_maskformer.py:163, in compute_segments(mask_probs, pred_scores, pred_labels, mask_threshold, overlap_mask_area_threshold, label_ids_to_fuse, target_size) 161 for k in range(pred_labels.shape[0]): 162 pred_class = pred_labels[k].item() --> 163 should_fuse = pred_class in label_ids_to_fuse 165 # Check if mask exists and large enough to be a segment 166 mask_exists, mask_k = check_segment_validity( 167 mask_labels, mask_probs, k, mask_threshold, overlap_mask_area_threshold 168 ) TypeError: argument of type 'NoneType' is not iterable ### Expected behavior Checking logic shall add if label_ids_to_fuse is None like below. 
```python if label_ids_to_fuse is None: should_fuse = False elif pred_class in label_ids_to_fuse: should_fuse = True else: should_fuse = False ```
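Equivalently, the guard proposed above collapses to a single boolean expression (a sketch of the suggested check, not necessarily the merged fix):

```python
should_fuse = label_ids_to_fuse is not None and pred_class in label_ids_to_fuse
```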
11-09-2022 09:05:52
11-09-2022 09:05:52
cc @alaradirik and @NielsRogge <|||||>Hi @Tungway1990, thanks for pointing this out! You are right, the ` label_ids_to_fuse` argument is only used for panoptic segmentation and the logic should take None values into account. We'll open a PR to fix this shortly. cc @sgugger @NielsRogge <|||||>We should actually update that code example, as that particular checkpoint was fine-tuned on ADE20K Semantic Segmentation. Hence, it doesn't make sense to postprocess the outputs for instance or panoptic segmentation. Thanks for pointing out!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,131
closed
Failed to import
### System Info python 3.10.7 Target: arm64-apple-darwin21.6.0 Thread model: posix transformers==4.24.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `from transformers import AlbertTokenizer, AlbertModel` -> ``` RuntimeError: Failed to import transformers.models.albert.modeling_albert because of the following error (look up to see its traceback): libcublas.so.11: cannot open shared object file: No such file or directory ``` ### Expected behavior no error
11-09-2022 03:32:24
11-09-2022 03:32:24
Looks like a problem in your CUDA installation.<|||||>it works on my local, but it fails when I use it with docker, do I need to give any additional command in docker apart from installing requirements<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@ravi160822 I'm currently getting `libcublas.so.*[0-9] not found in the system path ['/app/src', '/usr/local/lib/python311.zip', '/usr/local/lib/python3.11', '/usr/local/lib/python3.11/lib-dynload', '/usr/local/lib/python3.11/site-packages', '/app/src']` Did you find a workaround? <|||||>I'm facing the same issue as @jagumpert, but only when running Github actions: ```libcublas.so.*[0-9] not found in the system path [(...)]``` This seems to be a PyTorch issue. I was using `torch` version 2.0.1, and downgrading to 2.0.0 fixed the issue.<|||||>I also have the same issue as @jagumpert when trying to build a docker image with --platform=linux/amd64 python:3.11 as the base image. @saattrupdan solution did not work for me ( downgrading torch to 2.0.0 did not fix the issue )<|||||>Same here @jose-arguelles when trying to build a docker image. Did you find any workaround?<|||||>@saattrupdan 's solution helped me, thank you. Downgrading torch to 2.0.0 version also helped me (`poetry add torch=2.0.0`) on Ubuntu without gpu on it!
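When debugging this inside a container, a quick check of which torch build is actually installed can narrow things down; this is a diagnostic sketch, not a fix (the workaround reported in the thread was pinning `torch==2.0.0`).

```python
import torch

print(torch.__version__)           # CUDA pip wheels typically look like "X.Y.Z+cuXXX"
print(torch.version.cuda)          # None for CPU-only builds
print(torch.cuda.is_available())   # False when no GPU or driver is visible to the process
```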
transformers
20,130
closed
Finetuning m2m100 with run_translation_no_trainer.py using ZeRO stage 3 hangs during evaluation after the first epoch
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.4.1 (gpu) - Jax version: 0.3.5 - JaxLib version: 0.3.5 - Using GPU in script?: <yes> - Using distributed or parallel set-up in script?: <yes> ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. accelerate config Accelerate configs as follows: ``` compute_environment: LOCAL_MACHINE deepspeed_config: gradient_accumulation_steps: 1 gradient_clipping: 1.0 offload_optimizer_device: cpu offload_param_device: cpu zero3_init_flag: true zero3_save_16bit_model: true zero_stage: 3 distributed_type: DEEPSPEED downcast_bf16: 'no' fsdp_config: {} machine_rank: 0 main_process_ip: null main_process_port: null main_training_function: main mixed_precision: 'no' num_machines: 1 num_processes: 4 use_cpu: false ``` 2. Run finetuning script with command: `accelerate launch run_translation_no_trainer.py --model_name_or_path facebook/m2m100_418M --source_lang ro --target_lang zh --train_file teddata/train.json --validation_file teddata/val.json --output_dir ./m2m100_418M --max_source_length 128 --max_target_length 128 --per_device_train_batch_size=8 --per_device_eval_batch_size=4 --forced_bos_token zh` Traing output infos: 11/09/2022 11:02:34 - INFO - __main__ - ***** Running training ***** 11/09/2022 11:02:34 - INFO - __main__ - Num examples = 1000 11/09/2022 11:02:34 - INFO - __main__ - Num Epochs = 3 11/09/2022 11:02:34 - INFO - __main__ - Instantaneous batch size per device = 8 11/09/2022 11:02:34 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32 11/09/2022 11:02:34 - INFO - __main__ - Gradient Accumulation steps = 1 11/09/2022 11:02:34 - INFO - __main__ - Total optimization steps = 94 33%|███████████████████████████ 32/94[18:31<39:25, 9.20s/it] Finetuning hangs here, all GPU-Util is almost 100%. While accelerate config set zero stage 2, finetuning is success . ### Expected behavior Success finish finetuning m2m100 with run_translation_no_trainer.py using ZERO stage 3.
11-09-2022 03:28:45
11-09-2022 03:28:45
cc @pacman100 <|||||>Hello @cokuehuang, please provide a minimal script along with the minimal dataset in order to reproduce this issue. I am unable to reproduce this using below steps: 1. run `accelerate env` command to see the config being used: ``` - `Accelerate` version: 0.15.0.dev0 - Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31 - Python version: 3.10.4 - Numpy version: 1.23.1 - PyTorch version (GPU?): 1.12.1 (True) - `Accelerate` default config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - mixed_precision: no - use_cpu: False - dynamo_backend: NO - num_processes: 4 - machine_rank: 0 - num_machines: 1 - gpu_ids: None - main_process_ip: None - main_process_port: None - rdzv_backend: static - same_network: True - main_training_function: main - deepspeed_config: {'gradient_accumulation_steps': 1, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'cpu', 'offload_param_device': 'cpu', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3} - fsdp_config: {} - megatron_lm_config: {} - downcast_bf16: no - tpu_name: None - tpu_zone: None - command_file: None - commands: None ``` 2. Run below command: ``` accelerate launch run_translation_no_trainer.py --model_name_or_path facebook/m2m100_418M --source_lang en --target_lang ro --dataset_name wmt16 --output_dir ./m2m100_418M --max_source_length 128 --max_target_length 128 --per_device_train_batch_size 8 --per_device_eval_batch_size 4 --dataset_config_name ro-en ``` For this to work, change the following line in `run_translation_no_trainer.py` : ```diff - if isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast)): + if isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast, M2M100Tokenizer)): ``` 3. Output logs: ``` 11/10/2022 07:21:52 - INFO - __main__ - ***** Running training ***** 11/10/2022 07:21:52 - INFO - __main__ - Num examples = 610320 11/10/2022 07:21:52 - INFO - __main__ - Num Epochs = 3 11/10/2022 07:21:52 - INFO - __main__ - Instantaneous batch size per device = 8 11/10/2022 07:21:52 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32 11/10/2022 07:21:52 - INFO - __main__ - Gradient Accumulation steps = 1 11/10/2022 07:21:52 - INFO - __main__ - Total optimization steps = 57219 0%|▎ | 209/57219 [05:48<26:29:02, 1.67s/it] ``` What I think is happening is that at step 32 epoch 1 is over and now the eval loop starts which is using `accelerator.unwrap_model(model).generate()`. Now, this might be taking long time when being offloaded to CPU and as a result one might feel like code has hanged. Can you try ZeRO Stage-3 without offloading anything to `CPU` and let us know if that resolves the issue? <|||||>@pacman100 1. accelerate env: `- `Accelerate` version: 0.12.0 - Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Numpy version: 1.23.3 - PyTorch version (GPU?): 1.12.0+cu113 (True) - `Accelerate` default config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - mixed_precision: no - use_cpu: False - num_processes: 4 - machine_rank: 0 - num_machines: 1 - main_process_ip: None - main_process_port: None - main_training_function: main - deepspeed_config: {'gradient_accumulation_steps': 1, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'cpu', 'offload_param_device': 'cpu', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3} - fsdp_config: {} - downcast_bf16: no ` 2. 
My training script and datas: [scriptanddatas.zip](https://github.com/huggingface/transformers/files/9979662/scriptanddatas.zip) 3. Yes you're right, by adding log , at step 32, it's start eval loop and 'hangs' at generate(). I'll try ZeRO Stage-3 without offloading cpu later.<|||||>1. Without cpu offloading : `- `Accelerate` version: 0.12.0 - Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Numpy version: 1.23.3 - PyTorch version (GPU?): 1.12.0+cu113 (True) - `Accelerate` default config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - mixed_precision: no - use_cpu: False - num_processes: 4 - machine_rank: 0 - num_machines: 1 - main_process_ip: None - main_process_port: None - main_training_function: main - deepspeed_config: {'gradient_accumulation_steps': 1, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3} - fsdp_config: {} - downcast_bf16: no ` trainging output infos: 11/10/2022 18:21:48 - INFO - __main__ - ***** Running training ***** 11/10/2022 18:21:48 - INFO - __main__ - Num examples = 1000 11/10/2022 18:21:48 - INFO - __main__ - Num Epochs = 3 11/10/2022 18:21:48 - INFO - __main__ - Instantaneous batch size per device = 8 11/10/2022 18:21:48 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32 11/10/2022 18:21:48 - INFO - __main__ - Gradient Accumulation steps = 1 11/10/2022 18:21:48 - INFO - __main__ - Total optimization steps = 96 33%|████████████████████████████████████████████████▎ | 32/96 [03:06<06:42, 6.29s/it] Hangs already 1hour at 33% and eval data size is only 200.<|||||>Hello @cokuehuang, Thank you for giving the minimal script and data for reproducing the issue on our end. When using ZeRO stage-3 following needs to passed to `generate` function call: ``` if accelerator.state.deepspeed_plugin.zero_stage == 3: gen_kwargs["synced_gpus"] = True #required for ZeRO Stage 3 ``` after adding it, everything should work just fine when using DS ZeRO-3 with/without cpu offloading ``` 11/10/2022 14:09:03 - INFO - __main__ - ***** Running training ***** 11/10/2022 14:09:03 - INFO - __main__ - Num examples = 1000 11/10/2022 14:09:03 - INFO - __main__ - Num Epochs = 3 11/10/2022 14:09:03 - INFO - __main__ - Instantaneous batch size per device = 16 11/10/2022 14:09:03 - INFO - __main__ - Total train batch size (w. 
parallel, distributed & accumulation) = 32 11/10/2022 14:09:03 - INFO - __main__ - Gradient Accumulation steps = 1 11/10/2022 14:09:03 - INFO - __main__ - Total optimization steps = 96 33%|█████████████████████ | 32/96 [01:14<02:28, 2.32s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True} 33%|█████████████████████ | 32/96 [01:14<02:28, 2.32s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True} 11/10/2022 14:13:04 - INFO - __main__ - {'bleu': 6.697252711851462} 67%|██████████████████████████████████████████ | 64/96 [05:13<01:14, 2.32s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True} 67%|██████████████████████████████████████████ | 64/96 [05:13<01:14, 2.32s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True} 11/10/2022 14:16:52 - INFO - __main__ - {'bleu': 6.944214970589274} 100%|███████████████████████████████████████████████████████████████| 96/96 [09:02<00:00, 2.33s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True} 100%|███████████████████████████████████████████████████████████████| 96/96 [09:02<00:00, 2.33s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True} 11/10/2022 14:20:52 - INFO - __main__ - {'bleu': 6.8998500689065} Configuration saved in ./m2m100_418M/config.json 100%|███████████████████████████████████████████████████████████████| 96/96 [11:48<00:00, 7.38s/it] Model weights saved in ./m2m100_418M/pytorch_model.bin tokenizer config file saved in ./m2m100_418M/tokenizer_config.json Special tokens file saved in ./m2m100_418M/special_tokens_map.json 100%|███████████████████████████████████████████████████████████████| 96/96 [11:48<00:00, 7.38s/it] ```<|||||>@pacman100 Yes! It works! Thanks very much!
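For reference, a minimal sketch of how the `synced_gpus` flag from the fix above slots into the evaluation loop of `run_translation_no_trainer.py`. The variable names (`accelerator`, `model`, `eval_dataloader`) follow the example script, but this is an illustration rather than a verbatim excerpt of it.

```python
import torch


def generate_predictions(accelerator, model, eval_dataloader, max_length=128, num_beams=None):
    gen_kwargs = {"max_length": max_length, "num_beams": num_beams}
    ds_plugin = getattr(accelerator.state, "deepspeed_plugin", None)
    if ds_plugin is not None and ds_plugin.zero_stage == 3:
        # Under ZeRO Stage 3 every rank must keep calling forward until all ranks
        # have finished generating, otherwise the faster ranks hang at generate().
        gen_kwargs["synced_gpus"] = True

    model.eval()
    all_tokens = []
    for batch in eval_dataloader:
        with torch.no_grad():
            generated = accelerator.unwrap_model(model).generate(
                batch["input_ids"], attention_mask=batch["attention_mask"], **gen_kwargs
            )
        all_tokens.append(generated)
    return all_tokens
```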
transformers
20,129
closed
[testing doc-build from fork]
null
11-09-2022 01:03:36
11-09-2022 01:03:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,128
closed
[testing doc-build]
null
11-09-2022 00:57:51
11-09-2022 00:57:51
transformers
20,127
closed
Improvement to error handling in subclasses
### Feature request I encountered a fascinating (though very frustrating :-)) scenario about error handling in `transformers`. When subclassing a tokenizer that relies on `sentencepiece`, and not having it installed, you will get an unhelpful error message that sends you down a lot of rabbit holes. Consider this minimal example: ```python from transformers import MBartTokenizer class CustomMBartTokenizer(MBartTokenizer): @classmethod def from_pretrained(cls, *args, **kwargs): inst = super().from_pretrained(*args, **kwargs) # Do other stuff with it... a = CustomMBartTokenizer.from_pretrained("facebook/mbart-large-cc25") ``` If you run this in a new environment where sentencepiece is not installed, you get the following error: > AttributeError: 'super' object has no attribute 'from_existing' This error message had me comparing Windows vs. Linux and python 3.8 vs 3.9 vs 3.10 because I could not figure out why it was working on my home machine and not on our cluster. In the end, the reason was that `sentencepiece` was not yet installed on the cluster **but the error message does not show that**. It seems that the sentencepiece error does not show or does not stop execution, which then leads the class to not be successfully initialized. Although admittedly I have not dug much farther. ### Motivation The error message does not seem to correctly propagate when subclassing a tokenizer. The error message that indicates that sentencepiece is not installed and needs to be installed is not correctly shown. Instead the user gets a vague error message about the `from_pretrained` call. While this may be an exceptional case, I have found that subclassing tokenizers for a specific task is common in research. ### Your contribution I do not have the time to work on figuring out what the exact cause is unfortunately. Posting this here for posterity. For anyone getting this issue: **you probably just need to make sure all necessary third party libraries (such as sentencepiece) are installed.**
11-08-2022 17:12:27
11-08-2022 17:12:27
Interesting. It seems like Python gobbles the error raised by the super class (which does tell you to install sentencepiece) and decides it has no `from_pretrained` attribute instead. If anyone has any idea to fix our metaclass `DummyObject` so it works on subclasses, I'm all ears!<|||||>Pinging here as we are seeing the same issue. ``` E ImportError: E XLMRobertaTokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the E installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones E that match your environment. Please note that you may need to restart your runtime after installation. ```<|||||>No this is not the same issue. The error message is clearly indicating that your need to install `sentencepiece`.<|||||>Yet it doesn't come up as a requirement from transformers - as described in the original post. The sentencepiece library has to be added manually. Did I miss something about having to do that otherwise in the release notes? This was discovered after updating to the latest version of transformers 2.24.0. site-packages\transformers\models\xlm_roberta\tokenization_xlm_roberta.py https://github.com/huggingface/transformers/blob/f3d99e49d459f9d1cc7544352041b3a64d68c734/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L22<|||||>Also, just a side note here, this occurred in multiple environments - Linux and Windows based. I hope this helps.<|||||>And when running "pip show sentencepiece" locally, it shows as having no requires/requires-by - very odd.<|||||>I think `DummyObject` should override `__getattribute__` instead of `__getattr__` to get the expected error. EDIT: Tested locally, and it works. I've linked a PR with the fix.<|||||>Thanks @mariosasko! @jacwalte That is the expected behavior. sentencepiece is not installed by default because not all tokenizers need it. You'll get the error message if you need it for your use-case and then you just have to install it manually. Or you can install transformers with the extra `sentencepiece`, `transformers[sentencepiece]`. The problem in my case was that the error message did not show up. This has now been quickly fixed by @mariosasko!<|||||>> Thanks @mariosasko! > > @jacwalte That is the expected behavior. sentencepiece is not installed by default because not all tokenizers need it. You'll get the error message if you need it for your use-case and then you just have to install it manually. Or you can install transformers with the extra `sentencepiece`, `transformers[sentencepiece]`. The problem in my case was that the error message did not show up. This has now been quickly fixed by @mariosasko! Thanks! - will update the requirements with that
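A minimal sketch of the behaviour discussed in this thread, with the backend check reduced to a plain `ImportError` (the real `DummyObject` calls `requires_backends` and whitelists a few underscore-prefixed names). Because the metaclass is inherited by user subclasses, switching to `__getattribute__` makes the check fire on any public attribute access on the subclass, so the informative error surfaces instead of the opaque `AttributeError` raised by the `super()` lookup under the old `__getattr__` version.

```python
class DummyObject(type):
    def __getattribute__(cls, key):
        if key.startswith("_"):
            return super().__getattribute__(key)
        raise ImportError(f"{cls.__name__} requires the sentencepiece library but it was not found.")


class MBartTokenizer(metaclass=DummyObject):
    pass


class CustomMBartTokenizer(MBartTokenizer):
    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        return super().from_pretrained(*args, **kwargs)


# CustomMBartTokenizer.from_pretrained("facebook/mbart-large-cc25") now raises
# "ImportError: CustomMBartTokenizer requires the sentencepiece library but it was not found."
# before the subclass method even runs, instead of the confusing AttributeError.
```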
transformers
20,126
open
Add pop2piano
### Model description - Introduce a large amount of paired and synchronised {pop, piano cover} data using an automated pipeline. - Pop2Piano, a Transformer network that generates piano covers given waveforms of pop music. - First model to directly generate a piano cover from pop audio without melody and chord extraction modules. - Uses a T5 model so should be straightforward. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation - Weights : https://github.com/sweetcocoa/pop2piano/releases/download/dpi_2k_epoch/model-1999-val_0.67311615.ckpt - Code : https://github.com/sweetcocoa/pop2piano/ - Paper : https://arxiv.org/abs/2211.00895 - Colab : https://colab.research.google.com/drive/1rBAs2TkryDnnQOhcM-mtlrgtL2h3ekml?usp=sharing
11-08-2022 16:05:52
11-08-2022 16:05:52
Hi @ArthurZucker could you please share the progress of the model addition?(I am asking because the last time this Issue has had any actions was in Nov 8, 2022). I tried this model on colab and really loved it. I want to add / help adding this model to HF. Is it possible that we can collaborate in this addition? <|||||>Hey! I did not start at all! Feel free to open a PR and ping me for pointers/help! I won't have time to do it alone but would love to collaborate ! 😉 <|||||>@ArthurZucker Ok, my term exams are till 3rd March so I will start working from 4th, in meantime I will open a PR. Do you want to continue communicating through that PR or how about communicating through slack/discord if it's possible? <|||||>Sure, will invite you to slack if you can share your email, [email protected]! <|||||>> Sure, will invite you to slack if you can share your email, [[email protected]](mailto:[email protected])! My email is - [email protected] @ArthurZucker <|||||>Just invited you! Good luck on your mid terms 😉 <|||||>Also if anyone want to tackle this before, ping me and will add you to the channel
transformers
20,125
closed
Update github pr docs actions
null
11-08-2022 14:41:09
11-08-2022 14:41:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20125). All of your documentation changes will be reflected on that endpoint.
transformers
20,124
closed
Remove BertConfig inheritance from RobertaConfig
# What does this PR do? Removes BertConfig dependencies from RobertaConfig Related to https://github.com/huggingface/transformers/issues/19303 @sgugger can I please get some feedback on this. Thanks 😄 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
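For context, a compressed illustration of what removing the `BertConfig` parent amounts to: the config inherits straight from `PretrainedConfig` and carries its own attributes. Only a handful of arguments are shown and the class is renamed, so this is a sketch of the shape of the change rather than the PR's actual diff.

```python
from transformers import PretrainedConfig


class StandaloneRobertaConfig(PretrainedConfig):
    model_type = "roberta"

    def __init__(
        self,
        vocab_size=50265,
        hidden_size=768,
        num_hidden_layers=12,
        num_attention_heads=12,
        pad_token_id=1,
        bos_token_id=0,
        eos_token_id=2,
        **kwargs,
    ):
        # No BertConfig in the chain: defaults live here instead of being inherited.
        super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
```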
11-08-2022 14:18:12
11-08-2022 14:18:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks again for your contribution!
transformers
20,123
closed
Whisper: incorrect list of non speech tokens
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu102 (True) ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The lists `NON_SPEECH_TOKENS` and `NON_SPEECH_TOKENS_MULTI` contain the tokens 6 and 12 that are not suppressed by default in the [reference implementation](https://github.com/openai/whisper/). Consider the following example using the reference `whisper` module: ```python import transformers from whisper.tokenizer import get_tokenizer tokenizer = get_tokenizer(multilingual=True, task="transcribe", language="fr") suppress_tokens = list( sorted( tokenizer.non_speech_tokens + (tokenizer.sot, tokenizer.sot_prev, tokenizer.sot_lm, tokenizer.no_speech) ) ) config = transformers.WhisperConfig.from_pretrained("openai/whisper-tiny") print(suppress_tokens == config.suppress_tokens) # prints False config.suppress_tokens.remove(6) config.suppress_tokens.remove(12) print(suppress_tokens == config.suppress_tokens) # prints True ``` ### Expected behavior The list of suppressed tokens should match the reference implementation.
11-08-2022 12:40:02
11-08-2022 12:40:02
Hey! thanks for pointing that out. Will open a PR on the online models and on the repo, just gotta make sure this is not backward incompatible 🤗 <|||||>Thanks for the fix and the configurations update! However, there are still 2 pull requests to merge: * https://huggingface.co/openai/whisper-small/discussions/4 * https://huggingface.co/openai/whisper-medium/discussions/5 <|||||>Thanks for the notice! 🤗
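Until the remaining Hub configurations are merged, a small stopgap sketch is to patch the two extra ids out of the loaded config before building the model; the token ids (6 and 12) and checkpoint name follow the discussion above, and the guard keeps the snippet harmless on already-fixed configs.

```python
from transformers import WhisperConfig, WhisperForConditionalGeneration

config = WhisperConfig.from_pretrained("openai/whisper-medium")
for token_id in (6, 12):
    # Tokens 6 and 12 are not suppressed by the reference implementation.
    if token_id in config.suppress_tokens:
        config.suppress_tokens.remove(token_id)

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium", config=config)
```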
transformers
20,122
closed
Why is CLIPImageProcessor not in general init?
### System Info - `transformers` version: 4.25.0.dev0 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - Huggingface_hub version: 0.11.0.dev0 - PyTorch version (GPU?): 1.11.0+cpu (False) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu) - Jax version: 0.3.16 - JaxLib version: 0.3.15 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger maybe ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction In diffusers we load transformers classes according to a model index, which *e.g.* looks as follows: ```bash { "_class_name": "StableDiffusionPipeline", "_diffusers_version": "0.7.0.dev0", "feature_extractor": [ "transformers", "CLIPFeatureExtractor" ], "scheduler": [ "diffusers", "PNDMScheduler" ], "text_encoder": [ "transformers", "CLIPTextModel" ], "tokenizer": [ "transformers", "CLIPTokenizer" ], "unet": [ "diffusers", "UNet2DConditionModel" ], "vae": [ "diffusers", "AutoencoderKL" ] } ``` The important part is: ``` "feature_extractor": [ "transformers", "CLIPFeatureExtractor" ], ``` Now what is happening then is that we load a component that we call `"feature_extractor"` from `"transformers"` and the `"CLIPFeatureExtractor"` class. Then when saving the model we save it with `type(feature_extractor)` which is now though `CLIPImageProcessor` and then we want to load it again from `transformers`, but we cannot import it from transformers. E.g. `from transformers import CLIPImageProcessor` doesn't work. Could we add `CLIPImageProcessor` to the init? ### Expected behavior I think we should put `CLIPImageProcessor` in the init no?
11-08-2022 12:17:36
11-08-2022 12:17:36
Will be fixed by #20111<|||||>I have the same problem... God...When will the CLIPImageProcessor be set up in general init?<|||||>@ZoeyyHz, which version of transformers are you using? I'm able to run the following without issue: ``` from transformers import CLIPImageProcessor ```<|||||>Hi, there. I have the same problem too. which version of transformers do I have to use?<|||||>@BigTail375 The CLIPImageProcessor has been available to import from the public init from v4.25.1
transformers
20,121
closed
Cannot load CLIPProcessor / CLIPFeatureExtractor locally
### System Info - `transformers` version: 4.25.0.dev0 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - Huggingface_hub version: 0.11.0.dev0 - PyTorch version (GPU?): 1.11.0+cpu (False) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu) - Jax version: 0.3.16 - JaxLib version: 0.3.15 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger maybe ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Is it expected that the following doesn't work on main? ```python from transformers import CLIPFeatureExtractor, AutoFeatureExtractor, AutoProcessor feature_extractor = CLIPFeatureExtractor() id_name = "./clip_feat" feature_extractor.save_pretrained(id_name) print("load from CLIPFeatureExtractor") feature_extractor = CLIPFeatureExtractor.from_pretrained(id_name) #print("load from CLIPImageProcessor") #feature_extractor = CLIPImageProcessor.from_pretrained(id_name) print("load from AutoFeatureExtractor") feature_extractor = AutoFeatureExtractor.from_pretrained(id_name) print("load from AutoProcessor") feature_extractor = AutoProcessor.from_pretrained(id_name) ``` We can see that I can load the feature extractor directly from the class but not from `AutoFeatureExtractor` or `AutoProcessor` even though we save the feature extractor type in the `preprocessor_config.json` file. ### Expected behavior I would have expected that the code snippet above works.
11-08-2022 12:11:09
11-08-2022 12:11:09
Will be fixed by #20111
transformers
20,120
closed
ESM esmfold_v1 infer_pdbs method gives TypeError
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.4.0-105-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import EsmForProteinFolding model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1").cuda() pdbs = model.infer_pdbs(["MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"]) ``` gives ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In [12], line 1 ----> 1 pdbs = model.infer_pdbs(["MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"]) File ~/PycharmProjects/esm/venv/lib/python3.9/site-packages/transformers/models/esm/modeling_esmfold.py:2318, in EsmForProteinFolding.infer_pdbs(self, seqs, *args, **kwargs) 2316 def infer_pdbs(self, seqs: List[str], *args, **kwargs) -> List[str]: 2317 """Returns the pdb (file) string from the model given an input sequence.""" -> 2318 output = self.infer(seqs, *args, **kwargs) 2319 return self.output_to_pdb(output) File ~/PycharmProjects/esm/venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File ~/PycharmProjects/esm/venv/lib/python3.9/site-packages/transformers/models/esm/modeling_esmfold.py:2280, in EsmForProteinFolding.infer(self, seqs, residx, with_mask) 2278 if residx.ndim == 1: 2279 residx = residx.unsqueeze(0) -> 2280 return self.forward( 2281 aatype, 2282 mask, 2283 mask_aa=with_mask is not None, 2284 masking_pattern=with_mask, 2285 residx=residx, 2286 ) TypeError: forward() got an unexpected keyword argument 'mask_aa' ``` ### Expected behavior pdb will be calculated correctly.
11-08-2022 11:56:11
11-08-2022 11:56:11
cc @Rocketknight1 <|||||>Hi @maxjeblick, this is caused by those methods being ported directly from `ESMFold` and not being updated to match our implementation. I'm working on a fix now! In the meantime you can use the code from our [example notebook for protein folding](https://github.com/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) to convert model outputs to PDB.<|||||>@maxjeblick fixed on main now!
transformers
20,119
closed
Improve tiny model creation script
# What does this PR do? - Add the option to upload the created tiny models to the Hub. - Make the tiny config correspond better to the (reduced) tokenizer. - This gets quite complicated, but it basically just makes sure the `vocab_size` and `xxx_token_ids` in the tiny config correspond to what we have in the (reduced) tokenizer. Once this PR is approved, should I upload them to `hf-internal-testing`? (I remember you said yes a few months ago, but want to be sure :-))
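The kind of alignment described above can be sketched as a small helper: shrink the tokenizer first, then make the tiny config agree with it. The function below is illustrative only and not the actual code in the script, which also has to handle model-specific attribute names.

```python
def align_config_with_tokenizer(config, tokenizer):
    # Keep the tiny config's vocabulary and special token ids consistent with the
    # reduced tokenizer so generated ids always fall inside the embedding table.
    config.vocab_size = len(tokenizer)
    for name in ("pad_token_id", "bos_token_id", "eos_token_id"):
        token_id = getattr(tokenizer, name, None)
        if token_id is not None and hasattr(config, name):
            setattr(config, name, token_id)
    return config
```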
11-08-2022 11:52:19
11-08-2022 11:52:19
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20119). All of your documentation changes will be reflected on that endpoint.
transformers
20,118
closed
[CLIPSeg] Add resources
# What does this PR do? This PR adds resources for CLIPSeg.
11-08-2022 11:25:20
11-08-2022 11:25:20
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20118). All of your documentation changes will be reflected on that endpoint.
transformers
20,117
closed
[processor] Add 'model input names' property
# What does this PR do? Adds the `model_input_names` property to the processor class. Related to #20058. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-08-2022 09:43:16
11-08-2022 09:43:16
Currently, I've only applied the change to the Wav2Vec2 Processor - once we're happy with the design I'll copy it to all other processor classes (both audio and vision). This should make the preliminary review much easier!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20117). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for the review! > Just one thing: should this be overriden for CLIP models and the like that combine the inputs of the tokenizer and feature extractor? That's a very good point. Perhaps we can add a generic property method to `ProcessorMixin` that returns the `model_input_names` for the first attribute (feature extractor), and override it for the models that combine the inputs of the tokenizer and feature extractor? Or, we can add the property method to **each individual** processor class, tailored to return the expected inputs for the given model (as is currently done with Wav2Vec2Processor, and modified accordingly for CLIP etc).<|||||>Great point @sgugger! Have quickly cleaned-up the PR to try and remove any ad-hocery in the tests: - Single modality models: assert that the model input names of the processor and feature extractor match - Multi modal models: assert that the model input names of the processor match the keys of the inputs dict<|||||>Very much agreed - if we implement a common tester for the processor this all collapses into one / two tests max. Will leave this for a follow-up PR as it's quite a significant refactor!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20117). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20117). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20117). All of your documentation changes will be reflected on that endpoint.
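Roughly, the two flavours discussed here look like the sketch below: a default property that delegates to the first attribute (usually the feature extractor), and an override for multimodal processors that merges tokenizer and feature-extractor inputs. Class and attribute names are simplified for illustration and do not match the real `ProcessorMixin` exactly.

```python
class ProcessorSketch:
    attributes = ["feature_extractor", "tokenizer"]

    @property
    def model_input_names(self):
        # Default behaviour: expose whatever the first component (the feature
        # extractor) feeds to the model, e.g. ["input_values", "attention_mask"].
        first_attribute = getattr(self, self.attributes[0])
        return getattr(first_attribute, "model_input_names", None)


class MultimodalProcessorSketch(ProcessorSketch):
    @property
    def model_input_names(self):
        # CLIP-like processors combine both components' inputs, de-duplicated
        # while preserving order.
        tokenizer_names = self.tokenizer.model_input_names
        extractor_names = self.feature_extractor.model_input_names
        return list(dict.fromkeys(tokenizer_names + extractor_names))
```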
transformers
20,116
closed
Fix gradient clipping on XLA device
# What does this PR do? This PR fixed the gradient clipping logic on XLA device. We found all_reduce is wrongfully disabled in fp16 mode when `max_grad_norm=0` on XLA gpu device. To be consistent with native pytorch behavior, all_reduce should be placed immediately after calling `self.training_step`. Tested with the run_mlm.py example script on 8 Nvidia V100 GPU: ```sh GPU_NUM_DEVICES=8 python -m torch_xla.distributed.xla_spawn --num_gpus 8 run_mlm.py \ --model_name_or_path bert-base-uncased \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --overwrite_output_dir true \ --output_dir /tmp/test-mlm \ --per_gpu_train_batch_size 16 \ --do_eval \ --fp16 true \ --max_grad_norm 0 \ --do_train \ --num_train_epochs 3 ``` Results of final training losses: | Backend | fp16 | fp32 | fp16+max_grad_norm=0| fp32+max_grad_norm=0| | --- | ----------- | --------| ----------- | --------| | pytorch cuda| 1.8649 | 1.8575 | 1.8753 | 1.8694 | | torch_xla cuda| 1.86 | 1.8576 | 1.8694 | 1.867| cc @sgugger
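A rough sketch of the reordering this PR describes, written as a simplified training substep rather than the real `Trainer` internals; `xm._fetch_gradients` and `xm.all_reduce` are the torch_xla primitives involved, but the surrounding structure and names are illustrative assumptions.

```python
import torch
import torch_xla.core.xla_model as xm


def xla_training_substep(trainer, model, inputs):
    loss = trainer.training_step(model, inputs)

    # All-reduce the gradients right after the backward pass, on every step,
    # instead of only inside the clipping branch guarded by max_grad_norm > 0.
    gradients = xm._fetch_gradients(trainer.optimizer)
    xm.all_reduce("sum", gradients, scale=1.0 / xm.xrt_world_size())

    # Clipping stays optional and no longer controls whether the all-reduce happens.
    if trainer.args.max_grad_norm is not None and trainer.args.max_grad_norm > 0:
        torch.nn.utils.clip_grad_norm_(model.parameters(), trainer.args.max_grad_norm)
    return loss
```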
11-07-2022 22:11:16
11-07-2022 22:11:16
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20116). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,115
closed
Roberta model seemingly unable to take embeddings as input
### System Info Python 3.9 on linux, transformers 4.24.0 ### Who can help? @lysandrejik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Trying to implement cross encoder with RoBERTa. I embed token ids separate from the model forward call: ```python model = AutoModel.from_pretrained("roberta-base") text_features : TensorType["batch", "sequence_length", "d_model"] = model.embeddings(text.input_ids) embs = .... # same shape as text features ``` AFAIK this incorporates word embeddings, positional embeddings and special token embeddings, since those are all in the models embeddings. Sequence length here is max_posititon_embeddings from the models config. I then add on ViT embeddings to start of sequence and truncate accordingly (shape of tensor stays the same) ```python out = model( inputs_embeds=embs, attention_mask=attn_mask, output_hidden_states=True, return_dict=True ) ``` This gives error: ```python IndexError: index out of range in self ``` The traceback (image linked [here](https://media.discordapp.net/attachments/738882678711123980/1039276109423984711/unknown.png?width=719&height=146)) suggests it is trying to perform word and position embeddings again, and an error is occurring when position embeddings are called. If I'm understanding correctly, since I already did embeddings, and am providing them rather than tokens, it should not be calling on position embeddings again, yes? Running the same code with "bert-base-uncased" functions as expected with no errors. ### Expected behavior Expect a successful forward call through model using the embeddings provided
11-07-2022 21:24:11
11-07-2022 21:24:11
Even if you provide input embeddings, the RoBERTa model will still add the token type embeddings and position embeddings. As for the exact reason you get an error, we would need a full reproducer to be able to investigate.<|||||>> Even if you provide input embeddings, the RoBERTa model will still add the token type embeddings and position embeddings. Is there any way to circumvent this? In my use-case position embeddings would probably be harmful to training. <|||||>You'll need to modify the model code for that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
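A minimal sketch of the length constraint behind the `IndexError`, assuming `roberta-base` defaults (`max_position_embeddings=514`, `pad_token_id=1`). Note that `inputs_embeds` only replaces the word embeddings — position and token-type embeddings are still added inside the model — so position ids for a sequence of length L run from `pad_token_id + 1` to `L + pad_token_id`, and L must stay at or below `max_position_embeddings - pad_token_id - 1` (512 here).

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("roberta-base")

# Largest sequence length for which the generated position ids stay in range.
max_len = model.config.max_position_embeddings - model.config.pad_token_id - 1  # 512

embs = torch.randn(1, max_len, model.config.hidden_size)
attn_mask = torch.ones(1, max_len, dtype=torch.long)

out = model(inputs_embeds=embs, attention_mask=attn_mask, return_dict=True)
print(out.last_hidden_state.shape)  # torch.Size([1, 512, 768])
```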
transformers
20,114
closed
Add CV + audio labels to glossary
This is a [follow-up PR](https://github.com/huggingface/transformers/pull/20051#discussion_r1013978694) to expand the `labels` definition to include expected labels for model heads from other modalities.
11-07-2022 21:07:24
11-07-2022 21:07:24
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20114). All of your documentation changes will be reflected on that endpoint.
transformers
20,113
closed
Adapt has_labels test when no labels were found
# What does this PR do? As #20105 highlights, the new way we infer default label names for models might not work for models outside of Transformers. This PR reverts to the old default for non-`PreTrainedModel` models.
11-07-2022 20:30:46
11-07-2022 20:30:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20113). All of your documentation changes will be reflected on that endpoint.
transformers
20,112
closed
PyTorch type hints
# What does this PR do? Added Type hints ## Who can review? @Rocketknight1
11-07-2022 19:56:38
11-07-2022 19:56:38
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20112). All of your documentation changes will be reflected on that endpoint.<|||||>Sure, I will add all type hints for pytorch and ping you<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20112). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20112). All of your documentation changes will be reflected on that endpoint.<|||||>@Rocketknight1 I think I've covered all PyTorch models!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20112). All of your documentation changes will be reflected on that endpoint.<|||||>@IMvision12 Amazing, thank you! I just finished reviewing and it all looks good, so I'm going to merge now. Once we have full type hint coverage we can add tests to ensure that it stays that way in future, and then start using the type hints in library checking, so this should help a lot!
transformers
20,111
closed
AutoImageProcessor
# What does this PR do? Adds the `AutoImageProcessor` class and makes model image processors available to import. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
11-07-2022 19:12:55
11-07-2022 19:12:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20111). All of your documentation changes will be reflected on that endpoint.<|||||>Still reviewing, but in `processing_auto.py/from_pretrained`, the beginning part https://github.com/huggingface/transformers/blob/1ebc7bb995c5e43961a7c8079ca3bf29f06f2411/src/transformers/models/auto/processing_auto.py#L197 around this, there is no `ImageProcessingMixin`. I feel this is a miss and should appear here? <|||||>> Still reviewing, but in `processing_auto.py/from_pretrained`, the beginning part > > https://github.com/huggingface/transformers/blob/1ebc7bb995c5e43961a7c8079ca3bf29f06f2411/src/transformers/models/auto/processing_auto.py#L197 > > around this, there is no `ImageProcessingMixin`. I feel this is a miss and should appear here? @ydshieh Yes, you're right. I've added a check now [here](https://github.com/amyeroberts/transformers/blob/2e08d16f7758889fe0cb203091d292c968f067b0/src/transformers/models/auto/processing_auto.py#L194). Can you confirm if this matches with what you think should have been added? <|||||>> Can you confirm if this matches with what you think should have been added? Yes! <|||||>One comment (no need to be done in this PR): I think it would be great if we can remove the `feature_extractor_type` key after loading the image processor. ```python from transformers import CLIPModel, AutoProcessor, CLIPProcessor, CLIPImageProcessor, CLIPFeatureExtractor, AutoImageProcessor p = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32") print(p.feature_extractor_type) p.save_pretrained("temp-clip") ``` gives ```bash CLIPFeatureExtractor ``` on the terminal, and in the output file `preprocessor_config.json`, we have ```python "feature_extractor_type": "CLIPFeatureExtractor", "image_processor_type": "CLIPImageProcessor", ```
transformers
20,110
closed
Fix AutoTokenizer with subfolder passed
# What does this PR do? As reported in #20108, the `AutoTokenizer` API does not properly work with the `subfolder` argument, because it is not consumed by the `get_tokenizer_config` function. This PR fixes that.
11-07-2022 18:57:04
11-07-2022 18:57:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,109
closed
docs: Replace awkward `timm` link with the expected one
# What does this PR do? Replace https://github.com/rwightman/pytorch-image-models/tree/master/timm with https://github.com/rwightman/pytorch-image-models. ## Reasoning 1. The URL hardcodes the `master` branch, even though the `timm` default branch has since been renamed to `main`. 2. The URL points to the `timm` folder for some reason, when linking to the root, i.e. where a README is visible, is much more sensible. ## Before submitting - [x] This PR fixes a typo or improves the docs ## Who can review? Documentation: @sgugger --- I just keep running into small issues here and there! More than happy to help fix them, though. - Tom Aarsen
11-07-2022 18:35:01
11-07-2022 18:35:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,108
closed
Using `subfolder` with AutoTokenizer.from_pretrained doesn't work
### System Info - `transformers` version: 4.23.1 - Platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.31 - Python version: 3.9.5 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? Not sure; tagging @lvwerra and @osanseviero ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Using `subfolder` when loading a trained tokenizer seems to fail because it expects a `config.json` whereas saving a tokenizer results in a `tokenizer_config.json`. `tok = AutoTokenizer.from_pretrained("cakiki/bytelevel-dropout-0.1-50K")` This loads **successfully** but the following **fails** even though it's the same exact tokenizer files: `tok = AutoTokenizer.from_pretrained("bigcode/tokenizer", subfolder="bytelevel-dropout-0.1-50K")` The latter results in the following trace: ```python --------------------------------------------------------------------------- HTTPError Traceback (most recent call last) File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:213, in hf_raise_for_status(response, endpoint_name) 212 try: --> 213 response.raise_for_status() 214 except HTTPError as e: File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/requests/models.py:1021, in Response.raise_for_status(self) 1020 if http_error_msg: -> 1021 raise HTTPError(http_error_msg, response=self) HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/bigcode/tokenizer/resolve/main/bytelevel-dropout-0.1-50K/config.json The above exception was the direct cause of the following exception: EntryNotFoundError Traceback (most recent call last) File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/utils/hub.py:409, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash) 407 try: 408 # Load from URL or cache if already cached --> 409 resolved_file = hf_hub_download( 410 path_or_repo_id, 411 filename, 412 subfolder=None if len(subfolder) == 0 else subfolder, 413 revision=revision, 414 cache_dir=cache_dir, 415 user_agent=user_agent, 416 force_download=force_download, 417 proxies=proxies, 418 resume_download=resume_download, 419 use_auth_token=use_auth_token, 420 local_files_only=local_files_only, 421 ) 423 except RepositoryNotFoundError: File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/huggingface_hub/file_download.py:1053, in hf_hub_download(repo_id, filename, subfolder, repo_type, revision, library_name, library_version, cache_dir, user_agent, force_download, force_filename, proxies, etag_timeout, resume_download, use_auth_token, local_files_only, legacy_cache_layout) 1052 try: -> 1053 metadata = get_hf_file_metadata( 1054 url=url, 1055 use_auth_token=use_auth_token, 1056 proxies=proxies, 1057 timeout=etag_timeout, 1058 ) 1059 except EntryNotFoundError as http_error: 1060 # Cache the non-existence of the 
file and raise File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/huggingface_hub/file_download.py:1359, in get_hf_file_metadata(url, use_auth_token, proxies, timeout) 1350 r = _request_wrapper( 1351 method="HEAD", 1352 url=url, (...) 1357 timeout=timeout, 1358 ) -> 1359 hf_raise_for_status(r) 1361 # Return File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:231, in hf_raise_for_status(response, endpoint_name) 226 message = ( 227 f"{response.status_code} Client Error." 228 + "\n\n" 229 + f"Entry Not Found for url: {response.url}." 230 ) --> 231 raise EntryNotFoundError(message, response) from e 233 elif error_code == "RepoNotFound" or response.status_code == 401: EntryNotFoundError: 404 Client Error. (Request ID: vwMqo-SJymZZUmQ2bMoXA) Entry Not Found for url: https://huggingface.co/bigcode/tokenizer/resolve/main/bytelevel-dropout-0.1-50K/config.json. During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) Cell In [18], line 1 ----> 1 tok = AutoTokenizer.from_pretrained("bigcode/tokenizer", subfolder="bytelevel-dropout-0.1-50K") File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py:566, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 564 if config_tokenizer_class is None: 565 if not isinstance(config, PretrainedConfig): --> 566 config = AutoConfig.from_pretrained( 567 pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs 568 ) 569 config_tokenizer_class = config.tokenizer_class 570 if hasattr(config, "auto_map") and "AutoTokenizer" in config.auto_map: File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py:770, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 768 kwargs["name_or_path"] = pretrained_model_name_or_path 769 trust_remote_code = kwargs.pop("trust_remote_code", False) --> 770 config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 771 if "auto_map" in config_dict and "AutoConfig" in config_dict["auto_map"]: 772 if not trust_remote_code: File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/configuration_utils.py:558, in PretrainedConfig.get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 556 original_kwargs = copy.deepcopy(kwargs) 557 # Get config dict associated with the base config file --> 558 config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) 559 if "_commit_hash" in config_dict: 560 original_kwargs["_commit_hash"] = config_dict["_commit_hash"] File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/configuration_utils.py:613, in PretrainedConfig._get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 609 configuration_file = kwargs.pop("_configuration_file", CONFIG_NAME) 611 try: 612 # Load from local folder or from cache or download from model Hub and cache --> 613 resolved_config_file = cached_file( 614 pretrained_model_name_or_path, 615 configuration_file, 616 cache_dir=cache_dir, 617 force_download=force_download, 618 proxies=proxies, 619 resume_download=resume_download, 620 local_files_only=local_files_only, 621 use_auth_token=use_auth_token, 622 user_agent=user_agent, 623 revision=revision, 624 subfolder=subfolder, 625 
_commit_hash=commit_hash, 626 ) 627 commit_hash = extract_commit_hash(resolved_config_file, commit_hash) 628 except EnvironmentError: 629 # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted to 630 # the original exception. File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/utils/hub.py:454, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash) 452 if revision is None: 453 revision = "main" --> 454 raise EnvironmentError( 455 f"{path_or_repo_id} does not appear to have a file named {full_filename}. Checkout " 456 f"'[https://huggingface.co/{path_or_repo_id}/{](https://huggingface.co/%7Bpath_or_repo_id%7D/%7Brevision)[revision](https://huggingface.co/%7Bpath_or_repo_id%7D/%7Brevision)}' for available files." 457 ) 458 except HTTPError as err: 459 # First we try to see if we have a cached version (not up to date): 460 resolved_file = try_to_load_from_cache(path_or_repo_id, full_filename, cache_dir=cache_dir, revision=revision) OSError: bigcode/tokenizer does not appear to have a file named bytelevel-dropout-0.1-50K/config.json. Checkout 'https://huggingface.co/bigcode/tokenizer/main' for available files. ``` ### Expected behavior The tokenizer to load from the subfolder
11-07-2022 18:30:56
11-07-2022 18:30:56
Indeed, thanks for the clear issue. The `subfolder` argument is not properly passed along in the utils that get the tokenizer config, and since that tokenizer config is then not found, `AutoTokenizer` then tries to find a config (which does not exist here). Will send a fix shortly.
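For versions without the fix, one hedged workaround is to bypass the Auto class and load the concrete tokenizer class directly, since that path does forward `subfolder` to the download helpers. This assumes the subfolder holds a tokenizers-library `tokenizer.json`; substitute the specific tokenizer class if `tokenizer_config.json` names one.

```python
from transformers import PreTrainedTokenizerFast

# Loading the concrete class avoids AutoTokenizer's config lookup, which is
# where the subfolder argument was being dropped.
tok = PreTrainedTokenizerFast.from_pretrained(
    "bigcode/tokenizer", subfolder="bytelevel-dropout-0.1-50K"
)
```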
transformers
20,107
closed
Fix tapas scatter
# What does this PR do? Fix tapas scatter
11-07-2022 15:31:58
11-07-2022 15:31:58
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,106
closed
Give `t5` the `prune_heads` method
# What does this PR do? Give `t5` the `prune_heads` #19975 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue #19625) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @patrickvonplaten @patil-suraj Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-07-2022 15:19:02
11-07-2022 15:19:02
Hi @ArthurZucker, let's try this one! <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20106). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @ArthurZucker, I have passed the tests! <|||||>Hi @ArthurZucker, this version is clean. Could you please give it a review?<|||||>Gentle ping @patrickvonplaten, maybe you are also interested. Many thanks!<|||||>Hi @ArthurZucker, could you please give me a review? Many thanks!<|||||>> Thanks for the PR! We'll need tests before we can merge this. > > Could you set this to `True`: > > https://github.com/huggingface/transformers/blob/07b8f249cdb07a5e6697b379cc6db705a9eb15f1/tests/models/t5/test_modeling_t5.py#L521 > > and then see if the tests all pass? Sure, I have modified it manually in my repo<|||||>Hi @patrickvonplaten, it seems some of the tests failed, because I did not modify `T5Stack` but `T5ForConditionalGeneration`<|||||>> Hi @patrickvonplaten, it seems some of the tests failed, because I did not modify `T5Stack` but `T5ForConditionalGeneration` Could you try to make the tests work - we need those to pass before we're able to merge this PR :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
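For context, a minimal sketch of the API this PR aims to enable for T5 — the layer/head indices below are arbitrary, and which attention stack they address depends on how `_prune_heads` ends up being implemented:

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
# Prune heads 0 and 1 in layer 0, and head 2 in layer 1 (illustrative indices only).
# This call only works once the model implements _prune_heads, which is what the PR adds.
model.prune_heads({0: [0, 1], 1: [2]})
```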
transformers
20,105
closed
default value for default_label_names incorrectly causes has_label in trainer.py to be true
### System Info - `transformers` version: 4.24.0 - Platform: Linux-4.18.0-372.26.1.el8_6.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.11.0+cu113 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @sgugger This is a bug in `trainer.py` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I've upgraded transformers from 4.17.0 to 4.24.0. In the earlier version(s), the default value for `default_label_names` was set to `["labels"]`. In the current version the default is set to an empty array `[]` 4.17.0: ```python default_label_names = ( ["start_positions", "end_positions"] if type(self.model).__name__ in MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES.values() else ["labels"] ) ``` 4.24.0: ```python default_label_names = find_labels(self.model.__class__) ``` `default_label_names` is used to set `self.label_names` if no labels are supplied: ```python self.label_names = default_label_names if self.args.label_names is None else self.args.label_names ``` It will then be used to decide the value of `has_labels` as `True` or `False`: ```python has_labels = all(inputs.get(k) is not None for k in self.label_names) ``` In 4.17.0 with the default value of `["labels"]` has_labels was `False`. In 4.24.0 with the default value of `[]` has_labels is `True` causing the program to fail in `compute_loss` during eval: ``` E ValueError: The model did not return a loss from the inputs, only the following keys: start_logits,end_logits,target_type_logits. For reference, the inputs it received are input_ids,attention_mask. ``` ### Expected behavior The default behavior when no labels are provided should cause `has_labels` to be `False`
11-07-2022 15:07:18
11-07-2022 15:07:18
Thanks for opening this issue. Could you please explain which code sample stopped working for you with this change?<|||||>This is using `do_eval` in our own code `run_mrc.py` in our PrimeQA code base: https://github.com/primeqa/primeqa/tree/main/primeqa/mrc However, I believe this issue will happen in any instance where labels are not provided through `self.arg.label_names` and `label` is not found in `self.model.__class__`<|||||>This is too vague for us to act upon. Do you have a reproducer of the problem?<|||||>I'm not sure what the best way is to reproduce it for you. It occurs because we have our own model wrapper. I see that in the signature parameters there is a `labels` parameter, but this does not exist in our signature. Perhaps it is expected to always be there? Using AutoModelForSequenceClassification: ``` mappingproxy(OrderedDict([('self', <Parameter "self">), ('input_ids', <Parameter "input_ids: Union[torch.LongTensor, NoneType] = None">), ('attention_mask', <Parameter "attention_mask: Union[torch.FloatTensor, NoneType] = None">), ('token_type_ids', <Parameter "token_type_ids: Union[torch.LongTensor, NoneType] = None">), ('position_ids', <Parameter "position_ids: Union[torch.LongTensor, NoneType] = None">), ('head_mask', <Parameter "head_mask: Union[torch.FloatTensor, NoneType] = None">), ('inputs_embeds', <Parameter "inputs_embeds: Union[torch.FloatTensor, NoneType] = None">), ('labels', <Parameter "labels: Union[torch.LongTensor, NoneType] = None">), ('output_attentions', <Parameter "output_attentions: Union[bool, NoneType] = None">), ('output_hidden_states', <Parameter "output_hidden_states: Union[bool, NoneType] = None">), ('return_dict', <Parameter "return_dict: Union[bool, NoneType] = None">)])) ``` Our model wrapper: ``` mappingproxy(OrderedDict([('self', <Parameter "self">), ('input_ids', <Parameter "input_ids=None">), ('attention_mask', <Parameter "attention_mask=None">), ('token_type_ids', <Parameter "token_type_ids=None">), ('position_ids', <Parameter "position_ids=None">), ('head_mask', <Parameter "head_mask=None">), ('inputs_embeds', <Parameter "inputs_embeds=None">), ('output_attentions', <Parameter "output_attentions=None">), ('output_hidden_states', <Parameter "output_hidden_states=None">), ('return_dict', <Parameter "return_dict=None">), ('kwargs', <Parameter "**kwargs">)])) ```<|||||>Can you try if the PR above would solve your problem?<|||||>That did not work. Our model inherits from PreTrainedModel so it is still an instance of it.<|||||>Ah in this case, it will need to have the label names in the signature like the models in the library. Or you can always pass along the `label_names` you want in the training arguments to override the defaults.<|||||>Thanks! I am using the training arg to override the default right now. I'm just wondering whether when `self.label_names=[]` causes `has_labels` to be `True` is the expected behavior. ``` has_labels = all(inputs.get(k) is not None for k in self.label_names) ``` <|||||>Ah yes, I understand your concern better, thanks! Will adapt the PR.<|||||>Can you try again the PR above?<|||||>This worked! Thank you!!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Fixed by #20113
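A minimal sketch of the `label_names` override mentioned in the thread — the names below are illustrative, borrowed from the question-answering-style signature in the report:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    # Explicitly tell the Trainer which input keys are labels, so it does not
    # fall back to inspecting the custom model wrapper's forward signature.
    label_names=["start_positions", "end_positions"],
)
```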
transformers
20,104
closed
Adding chunking for whisper (all seq2seq actually). Very crude matching algorithm.
# What does this PR do? This adds `chunk_length_s` support to seq2seq pipelines. ## Approach Since we have no way of finding a matching between output and input with seq2seq models, this is an alternative route. We run the pipeline on the various chunks and collect all generated output, then try to find the longest sequence of non-special ids that the overlapping subsequences within the batch could correspond to (a toy sketch of this stitching idea follows this description). ## Pros - It should work on *any* seq2seq model. - It should work decently when the stride is long enough to have good overlap of tokens so that the stitching can work correctly. - It should be slightly robust to a few token errors. ## Cons - This method is **unsound** and will fail under some circumstances. - It will fail when there is silence in the overlap. If there is silence then there are no overlapping tokens, and the algorithm might get lost during the stitching process. By default it will concatenate, but it might be thrown off by boundaries in the stride. - It will fail spectacularly when someone repeats a single word over and over: the overlap might then be TOO large. This is impossible to distinguish without access to timestamps (which only `whisper` can currently produce, and that comes with caveats). The current algorithm will favor long chains of matching tokens. - It will have issues with capitalization and out-of-domain areas. For instance, "Yes, sir." and "Sir Thomas" might be 2 chunks with different capitalization. Since the current algorithm works at the token level, the 2 tokens `"sir"` and `"Sir"` are different and will fail to match, leading to a "Yes, sir. Sir Thomas" stitch instead of the intended "Yes, Sir Thomas." ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
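A toy sketch of the stitching idea described above — not the actual pipeline code, and without the fault tolerance discussed in the PR; it only greedily merges exact token overlaps between consecutive chunks:

```python
def stitch(chunks):
    # Greedily append each chunk, dropping the longest prefix that exactly
    # matches a suffix of the output built so far (the overlap from the stride).
    out = list(chunks[0])
    for chunk in chunks[1:]:
        overlap = 0
        for k in range(min(len(out), len(chunk)), 0, -1):
            if out[-k:] == list(chunk[:k]):
                overlap = k
                break
        out.extend(chunk[overlap:])
    return out


print(stitch([[5, 8, 13, 21], [13, 21, 34, 55]]))  # [5, 8, 13, 21, 34, 55]
```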
11-07-2022 14:58:42
11-07-2022 14:58:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20104). All of your documentation changes will be reflected on that endpoint.<|||||>> Thanks for working on this. Not sure if the PR is ready for (at least core maintainer) review yet? Yup sorry it was slightly early for you. The core idea is still there. We chunk with stride. and we make a hopeful stitch to find the longest sequence from all the subsequences. PROs: - It's extremely generic. - It should work in a lot of scenarios including repeating tokens CONs: - It's technically unsound. Meaning if the model infers widely varying tokens, there's no way to reconstruct what the model would actually predict on the whole file. - I expect it can fail spectacularly in well crafted examples where someone repeats the same word over and over, where the longest match will be MUCH longer than the original voices thing. <|||||>As we discussed offline with @Narsil , will be implementing the `find_conmmon_sequence` in `O(N)` 😉 Will open a new PR! <|||||>> As we discussed offline with @Narsil , will be implementing the `find_conmmon_sequence` in `O(N)` wink Will open a new PR! Seems it's going to be complex because of fault tolerance which does seem to be important. You can try doing something like ```python #!wget https://www.archive.org/download/around_world_80_days_mfs_librivox/around_world_in_80_days_01_verne.mp3 from transformers import pipeline speech_recognizer = pipeline( task="automatic-speech-recognition", model="openai/whisper-small", framework="pt", batch_size=2, device=0, chunk_length_s=30, generate_kwargs={"max_new_tokens": 1024}, ) out = speech_recognizer(["around_world_in_80_days_01_verne.mp3"]) print(out) ``` This will required some suboptimal stitches to work.<|||||>@sgugger it's now ready for review. The TODO is left intentionnally. It might really become relevant on hour+ long files where the current naive algorithm might become too slow. However the code is likely to be orders of magnitude more complex (if a O(n) solution exists, I'm pretty sure we could find an expected O(n) algorithm, but not sure about worst case). The current code works correctly, has the fault tolerance we need to be useful. I added a warning because the current code **Will** fail in some know circumstances. I updated the PR description to reflect those. If those tradeoffs are not good enough, I'm happy to not merge this PR in this state. The only other option I see is whisper specific with timestamps and it would only alleviate *some* of the issues. <|||||>Before merging, would love to try a little bit, otherwise LGTM (looking for a solution to the faults) <|||||>@ArthurZucker What are your conclusions ?<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20104). All of your documentation changes will be reflected on that endpoint.<|||||>I think that including timestamp tokens in the process could help with the error tolerance as they are consistently predicted at the end of pauses in the speech. If the stride is big enough not at least include pauses in speech, it boils down to matching these. Moreover, given that we know approximately the time between each tokens, we can use this information as some kind of guiding information. I am working on something, but we can merge for now and have an improved PR later on 😉 <|||||>@sgugger would like your opinion on this if possible. The results are pretty decent imo on regular speech. 
I'm still mentionning the caveats because they are real.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20104). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20104). All of your documentation changes will be reflected on that endpoint.<|||||>Has this been added to the current transformers version? I am getting the "ValueError: `chunk_length_s` is only valid for CTC models, use other chunking options for other models".<|||||>There hasn't been a release yet, you must use `main` if you want to use it now.<|||||>Using chunk_length_s=10 without stride_length_s=(4, 2) looses a rather large part of the transcription. It works pretty nice with stride :) but I get a lot of repetitions despite setting condition_on_previous_text=0 Is there an alternative way to transcribe large audio files when I am using a fine-tuned whisper model?<|||||>> looses a rather large part of the transcription. It works pretty nice with stride :) but I get a lot of repetitions despite setting condition_on_previous_text=0 `condition_on_previous_text` ? What is that ? Could you provide an example with the repetitions ? There might be some optimizations to be made on the tying of the chunks. As I mentioned in this PR, the tying of inferred audio can definitely create repeitions, but with better examples, we might be able to figure out better heuristics.<|||||>condition_on_previous_text: bool if True, the previous output of the model is provided as a prompt for the next window; disabling may make the text inconsistent across windows, but the model becomes less prone to getting stuck in a failure loop, such as repetition looping or timestamps going out of sync. I think it's true by default and it uses something like GPT to "verify" the transcription. When I simply use whisper with medium or large model, I prefer to set it to False I get a lot of repetitions even with small samples, like the audio from this commercial https://www.youtube.com/watch?v=LkllgKVgz8o or this trailer https://www.youtube.com/watch?v=xncMdIGR2pk I have [fine tuned whisper for Greek](https://huggingface.co/emilios/whisper-medium-el) and I am trying to use it with the following lines (after of course loading transformers and the model, etc) from transformers import pipeline transcript = pipeline( task="automatic-speech-recognition", model = model, feature_extractor = feature_extractor, tokenizer=tokenizer, framework="pt", batch_size=16, device='cuda:0', #generate_kwargs={"max_new_tokens": 1024}, #max_new_tokens = 1024, chunk_length_s=10, stride_length_s=(4, 2), # must have with chunk_length_s condition_on_previous_text=0, compression_ratio_threshold=2.5 )<|||||>hi, I think `condition_on_previous_text` (and `initial_prompt`) is a decoding option used in the original OpenAI's version, not (yet?) implemented in HF's version. cc @ArthurZucker <|||||>We use a different decoding strategy here, because `openai/whisper` is not stateless which is kind of a requirement of `pipeline`. (It means you can actually do batching, which is not possible with original whisper.)<|||||>Did you try using `chunk_length_s=30`. By default it uses `1/6=5s` of chunking on each sides, which should be plenty. I'm getting for the first example: ``` {'text': " The dance is like life. You don't need to know the steps. You just need to hear the beat of your heart. You don't need rules to make the right move. 
Your consciousness is enough. Zagori. We have the good in us."} ``` Which seems corect to me.<|||||>``` {'text': ' The test is ready. Rachel wrote Ross a letter and demanded he read it before they got back together. How many pages was that letter? 18 pages! 18 pages. Front and back! Front and back is correct! Wait, wait, go one more time! Oh my god. Here we go. Where\'s the tissue box? The cast of Friends. Wow. It\'s cool. her lines written on the table? We\'ve literally just slipped right back. We regret. We have such a bond from this show. Were Ross and Rachel on a break? Yes. Yes. Yes. Yes. Bullshit. table read, that\'s the first time I laid eyes on any of you. Everyone was so perfectly cast. Yeah. This is from the one where everyone finds out. I remember I went to the producer of the show I was on and he he said, "That show\'s not gonna make you a star." [laughing] I remember one time I happened to have the news on, and on the TV was an aerial shot of each of our houses. - Oh, jeez. - And I remember looking at it, going, "What the--?" My roof is a mess. [laughing] It was an incredible time. We became best friends. Yeah, I\'m going to cry now. When I watch the episodes, I\'m laughing out loud, because you all make me laugh so hard. I know you know how big the show is. What you have given so many people is an experience of huge comfort. like we had these friends. I love you guys so much.'} ``` For the second.<|||||>Yes that was good What value should I use for stride_length_s with chunk 30? Can you please tell me if this is the only way to transcribe large audio files with pipeline? Thank you all :) <|||||>> What value should I use for stride_length_s with chunk 30? The stride defaults are `chunk_length_s / 6` on each sides so here, 5s, 5s. It's important to have something significant on both sides I think (more overlap will reduce the chances for the algorithm to get it wrong). <|||||>when I ! pip install git+https://github.com/openai/whisper.git and import whisper, all is fine with medium model I have tried pipeline with chunk_length_s=30, stride_length_s=(5, 5), and still I get repetitions both with openai/whisper-medium openai/whisper-large and emilios/whisper-medium-el I 've tried other bigger videos (well, ok audio) and it is not working as supposed to :( <|||||>I've just noticed that translated is ok, but the transcription in the original language has repetitions https://www.youtube.com/watch?v=e_eCryyPRus model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language = "el", task = "transcribe") greek transcript with repetitions, removed model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language = "el", task = "translate") transcript translated into english, removed<|||||>It is possible that the model is in cause then ? ML generative models are know to be repetitive. And the kin dof repetition I'm seeing here really looks like bad model generation more than erroneous stitching. <|||||>Nope, I think the translation engine fixes (or hides) the repetitive phrases <|||||>I confirm it is the model. Take the audio of the video you linked. 
``` ffmpeg -ss 140 -i out.mp3 -c copy -t 20 out_repete.mp3 ``` Then do the inference: ```python from transformers import pipeline, AutoProcessor processor = AutoProcessor.from_pretrained("emilios/whisper-medium-el") pipe = pipeline(model="emilios/whisper-medium-el") pipe.model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="el", task="transcribe") out = pipe("out_repete.mp3") print(out) ``` And you will see that the model goes looping all by itself. This is not the chunking's doing. <|||||>But I get the same problem with openai model too [{'text': ' [μουσική] Εμείς σήμερα, εγώ του λαμβάνω,πικά, αλλά φαντάζομαι όσοι από μας προσπαθούν να σκεφτούν σοβαρά, μέσα σε όλο αυτό το χάος του ιστορικού υλικού που έχουμε μπροστά μας, επιλέγουμε μια παράδοση. Αυτό δεν σημαίνει ότι την επιλέγουμε για να σημαίνουμε δούλοι.. Επιλέγουμε ακριβώς την παράδοση εκείνη, δηλαδή αυτήν που ονομάζω Έλληνοδυτική, μέσα στην οποία η αμφισβήτηση της παράδοσης είναι ένα βασικό στοιχείο. Η αμφισβήτηση όχι για την ευχαρίστηση της αμφισβήτησης, Η αμφισβήτηση όταν υπάρχει λόγος, η δυνατότητα της αμφισβήτησης, η δυνατότητα του να σκεφτώ αλλιώς, του να μιλήσω αλλιώς από τη σκέφτετη. Η πλειοψηφία, η εκκλησία, το κράτος, το κόμμα κτλ. Δεν είναι έτσι; Ο δάσκαλος, οι γονείς ενδεχομένως. Και από εκεί και πέρα η δυνατότητα να βάλω σαν άτομο ή να βάλει μια κοινωνική ομάδα ή μια πολιτική κίνηση ερωτήματα σχετικά με το αν η σημερινή θέσμη της κοινωνίας είναι δίκαιη ή δεν είναι δίκαιη, εάν η ισότητα εντός εισαγωγικών, την οποία επαγγέλλεται το Σύνταγμα και ο νόμος για τους πολίτες, τα βασικά χαρακτηριστικά αυτής της παράδοσης, πιο όχι άλλο νόμο. Κάθε κοινωνία δημιουργεί τους θεσμούς της, αλλά η ιδέα ότι η θεσμία αυτή είναι η δική της δημιουργία ακριβώς δε είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρονομία της κοινωνίας δεν είναι μόνο και δεν είναι τόσο η εκμετάλλευση, η καταπίεση, η υπάρξη μιας εξουσίας χωρισμένης από την κοινωνία. Είναι η ιδέα ότι οι θεσμοί ήρθαν απαλού.ει και σε πρωτόγωνες κοινωνίες, στις οποίες δεν βλέπουμε αυτά τα συνόμενα. Η ετερονομία της κοινωνίας είναι το γεγονός ακριβώς ότι η κοινωνία αλοτριώνεται στους θεσμούς της οποίες η ίδια η δημιούργησε, διότι δεν ξέρει ότι η ίδια τους η δημιούργησε, Αν δεν υπήρχε Θεός, όλα θα ήσουν αυτοί που θα έρθουν.ημειωταίων δεν ανήκει στον Ντοστογεύσκη, αλλά μπορεί να το πάει κανείς πίσω, ως τουλάχιστον με έκακε τον Πλάτονα. 
Και το οποίον εγώ θεωρώ επιχείρημα υπαστηνό μου βήτα, δηλαδή ότι χρειάζεται ένας Θεός, διότι αλλιώς όλα αυτά τα ρεμάλια θα κάνουν, τους κατεύαιναν, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, η αρχαία αθηνή, ναι. Τους κάνουμε τους νόμους μας και όσον δεν τους έχουμε αλλάξει, τους ευώμαστε. Αυτό είναι το πράγμα που πρέπει να δίνει. Από αυτή την άψη, αυτό το οποίο ενεργώ εγώ ως αυτώνομη κοινωνία, είναι μια κοινωνία, όχι οποία είναι διαφανής, αλλά είναι μια κοινωνία η οποία ξέρει ότι δεν υπάρχει υπερβατικότητα, ότι δεν υπάρχει υπερβατική πηγή των θεσμών και των νόμων, ότι δεν υπάρχει μεταθάνατον ζωή αυτό που ξέραν οι αρχαίοι Έλληνες, οι οποίοι δεν επίστευαν σε μεταθάνατο ζωή,υτό μας, στους εαυτούς μας, σαν κοινωνικό σύνολο, κανόνες και νόμιες, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να δούμε, να δούμε ότι όσο πρέπει να δούμε, να δούμε ότι όσο πρέπει να δούμε, να δούμε ότι όσο πρέπει να δούμε, να δούμες έχουμε να το κάνουμε και έχουμε να δώσουμε στον εαυτό μας, στους εαυτούς μας σαν κοινωνικό σύνολο, κανόνες και νόμους που να μας επιτρέπουν να υπάρχουμε σαν αυτώνομη κοινωνία και σαν αυτώνομα άτομα μέσα σε αυτή την κοινωνία.'}] Please check it with the following code in a notepad when you can !pip install git+https://github.com/huggingface/transformers !pip install pytube from pytube import YouTube mymodel = "openai/whisper-medium" #mymodel = "openai/whisper-large" #mymodel = "emilios/whisper-medium-el" #lang="English" lang="Greek" from transformers import WhisperForConditionalGeneration model = WhisperForConditionalGeneration.from_pretrained( mymodel) from transformers import WhisperTokenizer tokenizer = WhisperTokenizer.from_pretrained( mymodel, language=lang, task="transcribe") from transformers import WhisperProcessor processor = WhisperProcessor.from_pretrained( mymodel, language=lang, task="transcribe") from transformers import WhisperFeatureExtractor feature_extractor = WhisperFeatureExtractor.from_pretrained( mymodel, language=lang, task="transcribe") link = 'https://www.youtube.com/watch?v=e_eCryyPRus' try: yt = YouTube(link) except: print("Connection Error") yt.streams.filter(file_extension='mp4') stream = yt.streams.get_by_itag(139) stream.download('',"YouTube.mp4") model.config.forced_decoder_ids = 
processor.get_decoder_prompt_ids(language = "el", task = "transcribe") model.config.suppress_tokens = [] #model.config.max_new_tokens = 1024 from transformers import pipeline transcript = pipeline( task="automatic-speech-recognition", model = model, feature_extractor = feature_extractor, tokenizer=tokenizer, framework="pt", batch_size=16, device='cuda:0', #generate_kwargs={"max_new_tokens": 1024}, #max_new_tokens = 1024, chunk_length_s=30, # 12 stride_length_s=(5, 5), # must have with chunk_length_s condition_on_previous_text=0, compression_ratio_threshold=2.4 ) out = transcript(["YouTube.mp4"]) print(out) <|||||>Yes, this is what I'm saying. The model is repeating itself, there's not much we can do about it. If you could fine tune it even more, or on more data, or more diverse data, that could probably help. For faster solutions, you could try and reduce amount of repetition, with `repetition_penalty` (there's actually several options for it) https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate That should help you get started. But please bear in mind it's only a temporary solution, the real solution is fixing the model itself I'm afraid. (But all models end up doing repetition when out of domain).<|||||>I thought that when I use pipeline and hf whisper medium model from openai mymodel = "openai/whisper-medium" from transformers import WhisperForConditionalGeneration model = WhisperForConditionalGeneration.from_pretrained( mymodel) is exactly the same model with the following code import whisper model = whisper.load_model("medium") result = model.transcribe("GoogleImagen.mp4", language= "el", fp16=True) print(result['text']) Which does not have any repetitions What am I missing? Transcript of the last video Η τ Playstation Εμείς σήμερα, εγώ τουλάχιστον προσωπικά, αλλά φαντάζομαι όσοι από μας προσπαθούν να σκεφτούν σοβαρά, μέσα σε όλο αυτό το χάος του ιστορικού υλικού που έχουμε μπροστά μας, επιλέγουμε μια παράδοση. Αυτό δεν σημαίνει ότι την επιλέγουμε για να τσιμίνουμε δούλοι. Επιλέγουμε ακριβώς την παράδοση εκείνη, δηλαδή αυτή που ονομάζω Έλληνο-Δυτική, μέσα στην οποία η αμφισβήτηση της παράδοσης είναι ένα βασικό στοιχείο. Η αμφισβήτηση όχι για την ευχαρίστηση της αμφισβήτησης, η αμφισβήτηση όταν υπάρχει λόγος, η δυνατότητα της αμφισβήτησης, η δυνατότητα του να σκεφτώ αλλιώς, του να μιλήσω αλλιώς από τι σκέφτεται η πλειοψηφία, η εκκλησία, το κράτος, το κόμμα κτλ. Δεν είναι έτσι? Ο δάσκαλος, οι γονείς ενδεχομένως. Και από εκεί και πέρα η δυνατότητα να βάλω σαν άτομο ή να βάλει μια κοινωνική ομάδα ή μια πολιτική κίνηση ερωτήματα σχετικά με το αν η σημερινή θέσμηση της κοινωνίας είναι δίκαιη ή δεν είναι δίκαιη, εάν η ισότητα εντός αορικών, την οποία αν επαγγέλεται το σύνταγμα και ο νόμος για τους πολίτες, υπάρχει στην πραγματικότητα ή δεν υπάρχει, αυτή η δυνατότητα είναι συμφύσης επίσης με τα βασικά χαρακτηριστικά αυτής της παράδοσης, πιο όχι άλλο νόμο. Κάθε κοινωνία δημιουργεί τους θεσμούς της, αλλά η ιδέα ότι η θεσμία αυτή είναι η δική της δημιουργία ακριβώς δεν υπάρχει στις περισσότερες κοινωνίες. Γι' αυτό και οι θεσμοί μένουν άθηκτοι. Υπάρχει η ιδέα ότι οι θεσμοί ήρθαν απαλού. Η ετερονομία της κοινωνίας δεν είναι μόνο και δεν είναι τόσο η εκμετάλλευση, η καταπίεση, η υπάρξη μιας εξουσίας χωρισμένης από την κοινωνία και τα λοιπά, γιατί η ετερονομία της κοινωνίας υπάρχει και σε πρωτόγωνες κοινωνίες, εσείς όπου δεν βλέπουμε αυτά τα φαινόμενα. 
Η ετερονομία της κοινωνίας είναι το γιόνωσό ακριβώς ότι η κοινωνία αλοτριώνεται στους θεσμούς τις οποίες η ίδια η δημιούργησε, διότι δεν ξέρει ότι η ίδια τους η δημιούργησε και κατά κάποιο τρόπο δεν μπορεί να το ξέρει, γιατί είναι τρομερά δύσκολο να το ξέρει. Και αυτό το περίφουμο επιχείρημα του Ντοστογεύσκη, το οποίο τόσο έχει εκθιαστεί, ότι εάν δεν υπήρχε Θεός όλα θα ήσανε επιτρεπτά, το οποίο σημειωταίων δεν ανήκει στον Ντοστογεύσκη, αλλά μπορεί να το πάει κανείς πίσω, ως τουλάχιστον με έκανε και τον Πλάτονα, και το οποίο εγώ θεωρώ επιχείρημα υπαστεινό μου βήτα, δηλαδή ότι χρειάζεται ένας Θεός, όλα αυτά τα ρεμάλια θα κάνουν ότι τους κατέβαιναν, παρά τη χειδαιότητα του επιχείρημα του Ντοστογεύσκη, παρά τη χειδαιότητά του εκφράζει μια βασική αλήθεια της θέσμης των ετερονόμων κοινωνιών. Δηλαδή χρειάζεται να λεχτεί ότι ο θεσμός έχει έρθει από αλλού, για να μπορεί να κατοχυρωθεί ο θεσμός. Εάν οι άνθρωποι ξέραν ότι κάνουν ήδη τους νόμους τους, θα τους ευβόντουσαν. Εσ' αυτό απαντάνε οι αρχαίοι Έλληνες και οι αρχαίοι Αθηνέοι, ναι, τους κάνουμε τους νόμους μας και όσον δεν τους έχουμε αλλάξει, τους ευόμαστε. Και σ' αυτό, κατά κάποιο τρόπο, προσπάθησε να απαντήσει το νεότερο δημοκρατικό και παναστατικό κίνημα στο μέτρο που απάντησε, προσπαθώντας να βάλει μπροστά την ιδέα ότι τους νόμους τους δημιουργεί ο λαός και ότι αυτό δεν είναι λόγος να μην είναι σεβαστοί αυτή η νόμη, παραδείγματι, δεν είναι έτσι. Από αυτή την άψη, αυτό το οποίο ενεργώ εγώ ως αυτώνομη κοινωνία, είναι μια κοινωνία όχι η οποία είναι διαφανής αλλά είναι μια κοινωνία η οποία ξέρει ότι δεν υπάρχει υπερβατικότητα, ότι δεν υπάρχει υπερβατική πηγή των θεσμών και των νόμων, ότι δεν υπάρχει μεταθάνατο ζωή αυτό που ξέραν οι αρχαίοι Έλληνες οι οποίοι δεν επίστευαν σε μεταθάνατο ζωή ή αν επίστευαν σε μεταθάνατο ζωή της δίνα ένα περιεχόμενο όπως φαίνεται στην Οδύσσια που ήταν 100 φορές χειρότερο από την επίγελ ζωή η οποία ξέρει συναιπώς ότι ό,τι γίνεται γίνεται εδώ κάτω και ότι ό,τι έχουμε να κάνουμε εμείς έχουμε να το κάνουμε και έχουμε να δώσουμε στον εαυτό μας, στους εαυτούς μας σαν κοινωνικό σύνολο κανόνες και νόμους που να μας επιτρέπουν να υπάρχουμε σαν αυτόνομη κοινωνία και σαν αυτόνομα άτομα μέσα σε αυτή την κοινωνία<|||||>> What am I missing? Both methods are different, and it's just luck based I think into how the split occurs. If you check the split I suggest, you can see the duplication, but it doesn't if you shift by 10s left or right. If OpenAI splits at different locations, then it will work better. But I think it could easily be the other way around. (Unless something else goes around within the model about timestamps within OpenAI, but afaik it's all really just luck based)
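For readers landing here, a hedged sketch of the `repetition_penalty` suggestion from earlier in the thread — the penalty value is a guess to tune, the model name is the user's checkpoint, and the audio file is hypothetical:

```python
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="emilios/whisper-medium-el",
    chunk_length_s=30,
    # repetition_penalty is forwarded to model.generate(); 1.5 is only a starting point.
    generate_kwargs={"repetition_penalty": 1.5, "max_new_tokens": 1024},
)
out = pipe("audio.mp3")
print(out["text"])
```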
transformers
20,103
closed
Fix generate_dummy_inputs for ImageGPTOnnxConfig
# What does this PR do? `ImageGPT` ONNX tests fail with ```bash TypeError: __call__() takes 2 positional arguments but 3 were given ``` This happens because calling `preprocessor(input_image, framework)` with a positional second argument no longer works after the changes introduced in the image processor PR #19796. This PR updates the call to use keyword arguments.
11-07-2022 14:25:24
11-07-2022 14:25:24
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,102
closed
Can't load FSMT model after resizing token embedding
### System Info **Environment info:** - transformers: 4.19.2 - Platform: Linux elementary OS 6.1 Jólnir - Python version: 3.8.10 - PyTorch version: 1.12.1+cu113 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No @stas00 ### Who can help? @stas00 ### Expected behavior / Issue I am having issues to reload a saved FSMT model when the token embedding has been resized. This error doesn't appear with other models such as T5 or MT5. The similar error occured previously for other models as well but has been fixed (-> #9055 or #8706). However it doesn't seem to be fixed for the FSMT model. Currently I receive the following error: ``` RuntimeError: Error(s) in loading state_dict for FSMTForConditionalGeneration: size mismatch for model.encoder.embed_tokens.weight: copying a param with shape torch.Size([42026, 1024]) from checkpoint, the shape in current model is torch.Size([42024, 1024]). size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([42026, 1024]) from checkpoint, the shape in current model is torch.Size([42024, 1024]). ``` Any idea how to solve this? Thanks a lot and all the best! ### Reproduction ``` from transformers import FSMTForConditionalGeneration, FSMTTokenizer SAVING_PATH = "/tmp/test_model_fsmt" model_class = FSMTForConditionalGeneration model_path = "facebook/wmt19-de-en" model = model_class.from_pretrained(model_path) tokenizer = FSMTTokenizer.from_pretrained(model_path) tokenizer.add_tokens(['test1', 'test2']) model.resize_token_embeddings(len(tokenizer)) model.save_pretrained(SAVING_PATH) tokenizer.save_pretrained(SAVING_PATH) new_model = model_class.from_pretrained(SAVING_PATH) ```
11-07-2022 14:13:23
11-07-2022 14:13:23
Thanks for the clear reproducer. Looking at the code, it looks like FSMT in general does not properly support the `resize_token_embeddings` API: it's not using the same config names for the vocab size (easily fixable) but also the method resizes both the encoder and decoder embeddings and in this case, it should only resize the encoder embedding probably. In any case, I don't know the model as well as @stas00 so let's wait for him to chime in and advise on the best fix!<|||||>@alex96k, would you by chance would like to tackle that? The main difficulty with FSMT is that it has 2 unique dictionaries for many models, so some generic functionality is either not possible out of the box or requires some very careful thinking in order not to break other things. I think it's the only model of this kind in HF models. There is an outstanding PR that was trying to bring FSMT in sync with the rest of the models: https://github.com/huggingface/transformers/pull/11218 but it proved to cause a speed regression so it was never merged, but perhaps it had this resolved already? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,101
closed
Replace scatter operations in TAPAS by native PyTorch
### Feature request TAPAS (my first 🤗 contribution 😄 ) still relies on the [torch_scatter](https://github.com/rusty1s/pytorch_scatter) library, as the model uses some scatter operations on tensors. Back then PyTorch didn't have these operations available. Now they have: https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html#torch.Tensor.scatter_. So we should replace [this line](https://github.com/huggingface/transformers/blob/b8112eddecfd524038e3c10970c06a444a32aa9d/src/transformers/models/tapas/modeling_tapas.py#L1800) by native PyTorch. To confirm everything is working fine, one should run the following tests and make sure they pass (to be run from the root of this repository): ``` RUN_SLOW=yes pytest tests/models/tapas/test_modeling_tapas.py ``` Subsequently, all `is_scatter_available` mentions can be removed from the code base. ### Motivation By replacing this, our TAPAS implementation doesn't rely on a third-party library anymore. ### Your contribution I can look into this, but marking it as a good first/second issue. Update: I have an attempt here: https://github.com/huggingface/transformers/compare/main...NielsRogge:transformers:fix_tapas_scatter?expand=1. However, training tests didn't pass due to an issue in the backward pass.
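A small sketch of what the replacement could look like — not taken from `modeling_tapas.py`, tensor names and shapes are illustrative — comparing the third-party op with its native counterpart available since PyTorch 1.12:

```python
import torch

src = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
index = torch.tensor([[0, 0, 1], [1, 1, 0]])  # the native op wants index with the same ndim as src

# Third-party torch_scatter (what TAPAS currently uses), roughly:
#   out = torch_scatter.scatter(src, index, dim=1, reduce="mean")
# Native PyTorch:
out = torch.zeros(2, 2).scatter_reduce(1, index, src, reduce="mean", include_self=False)
print(out)  # per-segment means, no extra dependency needed
```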
11-07-2022 14:00:50
11-07-2022 14:00:50
@NielsRogge I would love to pick up this issue from where you left if that is okay for you<|||||>Awesome, feel free to take over my branch and see whether you can make all tests pass<|||||>@Bearnardd Thank you! You can start from this (updated) branch/PR (which is updated with the `main` branch with a tiny fix). If you want to start from @NielsRogge branch, you have to rebase on (updated) `main` first - there will be a few conflicts to resolve. https://github.com/huggingface/transformers/pull/20107 https://github.com/huggingface/transformers/tree/fix_tapas_scatter <|||||>@ydshieh Thanks for the guidance! I will start working on this problem after the work :)<|||||>Hi @NielsRogge - I have done a bit of debugging today and I have found the following roots of failing tests: 1. `tests/models/tapas/test_modeling_tapas.py::TapasUtilitiesTest::test_reduce_sum_vectorized` fails because in pytorch version of `scatter_reduce` the `src` and `index` tensors are required to have the same number of dimensions which is not the case in the above test. 2. `tests/models/tapas/test_modeling_tapas.py::TapasModelTest::test_training` fails in backward pass with the following error: `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation` which is happening because inside `segment_reduce` function we are making a view of the `segment_means` variable (which does not copy it) which results in modification in place. Simply cloning segment_means before viewing it seems to be working but I do not know what is your policy about cloning tensors in regard to memory management<|||||>Hi @Bearnardd - that's awesome! Thanks a lot for looking into it. Regarding point 1 - you can update that test accordingly to account for the PyTorch version of `scatter_reduce`. Regarding point 2 - you can clone the tensor and open a PR, we can discuss it there ;)<|||||>Hi @NielsRogge - I guess this issue should be closed?<|||||>Yes, thanks for reminding us :-) @Bearnardd Closed by #20149
transformers
20,100
closed
Fix MaskformerFeatureExtractor
# What does this PR do? This PR fixes MaskFormer's feature extractor. PR #18997 introduced a bug which made the feature extractor create the same binary mask for all segments/instances in an image, hence making it impossible to fine-tune the model. This PR fixes it and makes sure the model can be properly fine-tuned on instance, semantic and panoptic segmentation datasets. To do: - [x] add tests
11-07-2022 13:03:01
11-07-2022 13:03:01
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20100). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20100). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger could you approve this PR, so that I can merge this critical fix to MaskFormerFeatureExtractor? I'll open a separate PR for improving the docs around image segmentation.
transformers
20,099
closed
Add RetroPrompt in research_projects as example
# What does this PR do? - This PR adds example code for the NeurIPS 2022 paper "[Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning](https://arxiv.org/pdf/2205.14704.pdf)" using huggingface's `transformers` library. - The folder `GLUE_task` includes three single-sentence tasks (SST-2, MR, CR), three sentence-pair classification tasks (MNLI, QNLI, QQP) and one information extraction task (Few-NERD), and the folder `RE_task` includes two information extraction tasks (SemEval, TACRED). - Our original implementation can be viewed at [https://github.com/zjunlp/PromptKG/tree/main/research/RetroPrompt](https://github.com/zjunlp/PromptKG/tree/main/research/RetroPrompt). ## Overview RetroPrompt constructs an open-book knowledge store from training instances and implements a retrieval mechanism during input, training and inference, thus equipping the model with the ability to retrieve related contexts from the training corpus as cues for enhancement. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
11-07-2022 12:43:47
11-07-2022 12:43:47
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20099). All of your documentation changes will be reflected on that endpoint.<|||||>Hi there! Thanks for using Transformers in your research! It looks like your proposed example contains too many modifications of the library with 47 new files, so it's probably best to leave it in your repo? We're happy to link to it from our community page.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,098
closed
Skip 2 tests in `VisionTextDualEncoderProcessorTest`
# What does this PR do? These 2 tests in `VisionTextDualEncoderProcessorTest` will be fixed when we add the new `AutoImageProcessor`. Current error is ```bash AttributeError: 'NoneType' object has no attribute 'from_dict ```
11-07-2022 12:37:43
11-07-2022 12:37:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,097
closed
README in Hindi 🇮🇳
# What does this PR do? Fixes PR #19903
11-07-2022 12:04:46
11-07-2022 12:04:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hello @sgugger, we are good to go? <|||||>Done.
transformers
20,096
closed
Generate: move generation_*.py src files into generation/*.py
# What does this PR do? Moves `generation_*.py` source files into `generation/*.py`. I tried a few slow tests locally, no problems were raised. ⚠️ the link to the docs seems broken, can't validate their correctness 🤔
11-07-2022 11:16:29
11-07-2022 11:16:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger now with `~/generation/__init__.py` populated with lazy references (as in most `__init__.py` files). I've updated all references to objects outside `/generation` from `generation.file_name.ObjectName` to `generation.ObjectName`, including in the docs. That does make tracking external references easier -- if an object is in `~/generation/__init__.py` it means that it is likely used somewhere else, and should be treated with extra care.<|||||>Could you please update `optimum` to reflect these changes? ``` from optimum.onnxruntime import ORTModelForSeq2SeqLM site-packages/transformers/generation_utils.py:27: FutureWarning: Importing `GenerationMixin` from `src/transformers/generation_utils.py` is deprecated and will be removed in Transformers v5. Import as `from transformers import GenerationMixin` instead. FutureWarning, ```<|||||>> Could you please update `optimum` to reflect these changes? > > ``` > from optimum.onnxruntime import ORTModelForSeq2SeqLM > site-packages/transformers/generation_utils.py:27: FutureWarning: Importing `GenerationMixin` from `src/transformers/generation_utils.py` is deprecated and will be removed in Transformers v5. Import as `from transformers import GenerationMixin` instead. > FutureWarning, > ``` The PR https://github.com/huggingface/optimum/pull/536 has just been merged and solves this issue.
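For downstream code hitting the deprecation warning quoted above, the change is only the import path — a quick sketch:

```python
# Deprecated after this PR (scheduled for removal in Transformers v5):
# from transformers.generation_utils import GenerationMixin

# Recommended replacement:
from transformers import GenerationMixin
# The submodule import also works after the move:
# from transformers.generation import GenerationMixin
```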
transformers
20,095
closed
Transformers documentation translation to Chinese (Simplified)
Hi! Let's bring the documentation to all the Chinese-speaking community :) Who would want to translate? Please follow our [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know here if you'd like to translate any and we'll add your name to the list. Some notes: - Add your translations to the folder [source/zh/](https://github.com/huggingface/transformers/blob/main/docs/source/zh/) - Register your translation in [zh/_toctree.yml](https://github.com/huggingface/transformers/blob/main/docs/source/zh/_toctree.yml); please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). - Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. - 🙋 If you'd like others to help you with the translation, you can also post in our [forums](https://discuss.huggingface.co/) or tag [@espejelomar](https://twitter.com/espejelomar) on Twitter to gain some visibility. ## Get Started section - - [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx). @bfss - [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/quicktour.mdx). @bfss
11-07-2022 11:09:18
11-07-2022 11:09:18
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,094
closed
[Don't merge] Check CircleCI against PyTorch 1.13
# What does this PR do? **[Don't merge]** Check CircleCI against PyTorch 1.13
11-07-2022 10:24:58
11-07-2022 10:24:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20094). All of your documentation changes will be reflected on that endpoint.
transformers
20,093
closed
docs: Replace unsupported `facebookresearch/bitsandbytes`
# What does this PR do? * Replace unsupported https://github.com/facebookresearch/bitsandbytes with https://github.com/TimDettmers/bitsandbytes, which is by the same author and still being maintained and updated. For reference, the latter repository is the one mentioned in your blog post https://huggingface.co/blog/hf-bitsandbytes-integration, which was co-written by Tim Dettmers. ## Before submitting - [x] This PR fixes a typo or improves the docs ## Who can review? Documentation: @sgugger @TimDettmers --- - Tom Aarsen
11-07-2022 09:29:24
11-07-2022 09:29:24
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,092
closed
[wip doc buidler test]
null
11-07-2022 08:43:59
11-07-2022 08:43:59
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20092). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,091
closed
OWL-ViT training / fine-tuning code
### Feature request Hi, I've noticed that recently Google Research added the training and fine-tuning code for OWL-ViT in Scenic. Are you planning to integrate it in HuggingFace Transformers? Thank you!
11-07-2022 08:36:04
11-07-2022 08:36:04
cc @alaradirik <|||||>Hi @ekazakos, thanks for the suggestion! And yes, we are planning to integrate it to transformers shortly. @sgugger @NielsRogge in the paper, authors first train a base CLIP model using an image size of 224 × 224 and then resize the image position embeddings with linear interpolation to 768 x 768 before fine-tuning the whole model on the object detection task. Is it a good idea to have separate sections in the configuration file for the inference and training modes?<|||||>Thank you so much!! <|||||>> in the paper, authors first train a base CLIP model using an image size of 224 × 224 and then resize the image position embeddings with linear interpolation to 768 x 768 before fine-tuning the whole model on the object detection task. Is it a good idea to have separate sections in the configuration file for the inference and training modes? No I wouldn't do that, I'd just refer to [this script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) if people are interested in training a CLIP model themselves. Afterwards, they can load the weights into the `OwlViTForObjectDetection` model using a conversion script. This conversion script should include the interpolation of the position embeddings.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>May I ask for about how long will you release the training and/or finetuning code of OWL-ViT? I personally think this feature will boost the usage of the model very much.<|||||>@alaradirik yeah, we need the fine-tuning code :)<|||||>I also would like the fine-tuning code example.<|||||>If anyone is interested I have a repo here: https://github.com/stevebottos/owl-vit-object-detection which is based on the huggingface implementation. It's still a WIP though and more of an experiment, but it works. If @alaradirik is interested I can help get this in.
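A hypothetical sketch of the position-embedding interpolation step discussed above — not the actual conversion script; the patch size, grid sizes, bilinear mode, and CLIP-style class token are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed, old_size=224, new_size=768, patch_size=32):
    # pos_embed: (1, 1 + old_grid**2, dim); keep the class token, resize the patch grid.
    cls_pos, patch_pos = pos_embed[:, :1], pos_embed[:, 1:]
    old_grid, new_grid = old_size // patch_size, new_size // patch_size
    dim = patch_pos.shape[-1]
    patch_pos = patch_pos.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=(new_grid, new_grid), mode="bilinear", align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)
    return torch.cat([cls_pos, patch_pos], dim=1)

print(interpolate_pos_embed(torch.randn(1, 50, 768)).shape)  # torch.Size([1, 577, 768])
```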
transformers
20,090
closed
Fix overflow images in layoutxlm and layoutlmv2
# What does this PR do? `LayoutXLMProcessor` and `LayoutLMv2Processor` accept `return_tensors` as a parameter, so the overflowing images they return should match the requested tensor type instead of always being a plain list; currently `get_overflowing_images` always returns a `list`. ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
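To illustrate the expectation, a sketch only — the keys and kwargs below are written from memory rather than lifted from the PR, and the document image path is hypothetical:

```python
from PIL import Image
from transformers import LayoutLMv2Processor

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
image = Image.open("document.png").convert("RGB")
encoding = processor(
    image, truncation=True, max_length=512, padding="max_length",
    return_overflowing_tokens=True, return_tensors="pt",
)
# With return_tensors="pt", the overflow-duplicated images should come back
# batched as a tensor rather than as a plain Python list.
print(type(encoding["image"]))
```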
11-07-2022 06:26:09
11-07-2022 06:26:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20090). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Thanks for adding! Can you update LayoutLMv3's processor as well?

Of course.<|||||>> Thanks for adding! Can you update LayoutLMv3's processor as well?

@NielsRogge I have fixed LayoutLMv3's processor.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
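For context on what this change affects, below is a rough sketch of a processor call that produces overflowing samples; the checkpoint name, the document image, and the stride value are placeholders rather than part of the PR. After the fix, the returned `image` entry is expected to follow `return_tensors` instead of always being a Python list.

```python
from PIL import Image
from transformers import LayoutLMv2Processor

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
document = Image.open("example_form.png").convert("RGB")  # placeholder document image

encoding = processor(
    document,
    truncation=True,
    padding=True,
    stride=128,
    return_overflowing_tokens=True,
    return_tensors="pt",
)
# With the fix, encoding["image"] should respect return_tensors instead of being a list.
print(type(encoding["image"]), encoding["input_ids"].shape)
```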
transformers
20,089
closed
Regression: TorchIterableDataset doesn't have __len__
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu113 (False) - Tensorflow version (GPU?): 2.9.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no (n/a) - Using distributed or parallel set-up in script?: no ### Who can help? * Git blame suggests @sgugger, @anton-l * Would be great if @patil-suraj tries the whole pipeline linked because there are some other issues downstream ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the `Trainer` with a dataset with `streaming=True`, making it iterable. To make the `Trainer` `train_dataset` work with streaming, use `.with_format("torch")` (as suggested in https://github.com/huggingface/datasets/issues/2583#issuecomment-874078780 and [here](https://discuss.huggingface.co/t/using-iterabledataset-with-trainer-iterabledataset-has-no-len/15790/2?u=maxkriegers)). A simple repro is below and in [**this colab**](https://colab.research.google.com/drive/1V5F5ut410hiJgS6c5us5iVz9Fk6JwOF1?usp=sharing) ``` model_name = "gpt2" output_dir = "." from transformers import GPT2Tokenizer, GPT2Model from transformers import Trainer, TrainingArguments from datasets import load_dataset from transformers.data.data_collator import DataCollatorForLanguageModeling dataset = load_dataset("rotten_tomatoes", split="train", streaming=True).shuffle(seed=42) tokenizer = GPT2Tokenizer.from_pretrained(model_name) data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) model = GPT2Model.from_pretrained(model_name) training_args = TrainingArguments( output_dir=output_dir, num_train_epochs=5, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset.with_format("torch") ) trainer.train() trainer.save_model() ``` which yields ``` ValueError Traceback (most recent call last) <ipython-input-8-de14de894c00> in <module> 8 args=training_args, 9 data_collator=data_collator, ---> 10 train_dataset=dataset.with_format("torch") 11 ) /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics) 504 505 if train_dataset is not None and not has_length(train_dataset) and args.max_steps <= 0: --> 506 raise ValueError("train_dataset does not implement __len__, max_steps has to be specified") 507 508 if ( ValueError: train_dataset does not implement __len__, max_steps has to be specified ``` ### Expected behavior `TorchIterableDataset` should implement `__len__` but doesn't. It instead has a `.dataset_size` method.
11-07-2022 04:56:15
11-07-2022 04:56:15
Thanks for opening the issue. What exactly is the regression here? On which version of Transformers did it work and when did it stop working? As the error clearly states (copying the full error message would be helpful by the way), you need to use `max_steps` in your training arguments instead of `num_train_epochs` since your dataset doesn't have a length.<|||||>Hey @sgugger, sorry for the missing information. According to [this forum post](https://discuss.huggingface.co/t/using-iterabledataset-with-trainer-iterabledataset-has-no-len/15790/2), the fix for this exact error is to cast the dataset to a torch dataset as described above. However, the error persists. I have not bisected to figure out if it really is a regression, but it seems so from the code online.<|||||>Hi @maxkrieger - in the forum post you provided, you can see that `training_args` already contains the argument `max_steps=1e6`. In order for your sample to work correctly, you need to both set the `max_steps` argument and format your dataset for PyTorch.<|||||>Additionally, if I am not mistaken, after specifying the `max_steps` argument you can drop `num_train_epochs`, since it will be overridden anyway.<|||||>Aahh 🤦 somehow missed that parameter while reading the snippets @Bearnardd. Apologies for the lack of diligence, this is resolved.
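Putting the resolution of this thread together, here is a minimal sketch of a setup that works with a streaming dataset: keep `.with_format("torch")` and set `max_steps` instead of `num_train_epochs`. The tokenization step, the pad-token handling, and the concrete `max_steps` value are illustrative additions, not part of the original repro.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("rotten_tomatoes", split="train", streaming=True).shuffle(seed=42)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default


def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=128)


dataset = dataset.map(tokenize, remove_columns=["text", "label"])

trainer = Trainer(
    model=AutoModelForCausalLM.from_pretrained("gpt2"),
    # max_steps replaces num_train_epochs because an iterable dataset has no length
    args=TrainingArguments(output_dir=".", max_steps=1000),
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
    train_dataset=dataset.with_format("torch"),
)
trainer.train()
```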
transformers
20,088
closed
docs: Resolve many typos in the English docs
# What does this PR do? * Fixes typo in `python -m transformers.onnx --help`. * Fixes many typos in the English documentation via `codespell`. - [x] This PR fixes a typo or improves the docs ## Who can review? Documentation: @sgugger - Tom Aarsen
11-06-2022 20:11:24
11-06-2022 20:11:24
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,087
closed
docs: Fixed variables in f-strings
# What does this PR do? Fixes unknown variables in some documentation code blocks' f-strings. - [x] This PR fixes a typo or improves the docs ## In addition... The following code block, which can be found at https://huggingface.co/docs/transformers/custom_models#writing-a-custom-model & [custom_models.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/custom_models.mdx), uses `torch` without `import torch` being performed in any prior docstring. ```python class ResnetModelForImageClassification(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor, labels=None): logits = self.model(tensor) if labels is not None: loss = torch.nn.cross_entropy(logits, labels) return {"loss": loss, "logits": logits} return {"logits": logits} ``` If preferred, I can add `import torch` in this code block in this PR, or make a new one for it. ## Who can review? Documentation: @sgugger - Tom Aarsen
11-06-2022 19:41:53
11-06-2022 19:41:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>Done & done!<|||||>Thanks again for your contribution!
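As a side note, here is a minimal sketch of the preamble that would make the quoted docs block self-contained. Only `import torch` is the point raised in this PR; the remaining imports and the `BLOCK_MAPPING` definition are assumptions based on the surrounding documentation example.

```python
import torch
from timm.models.resnet import BasicBlock, Bottleneck, ResNet
from transformers import PreTrainedModel

# Assumed to mirror the docs example that defines ResnetConfig and BLOCK_MAPPING.
BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck}
```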
transformers
20,086
closed
autogenerated files based on vit model
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-06-2022 11:45:06
11-06-2022 11:45:06
I opened a pull request in the library when I was supposed to do it directly in the fork. Closing this one and doing it in my fork
transformers
20,085
closed
ZeroDivisionError: integer division or modulo by zero
### System Info
- `transformers` version: 4.21.1
- Platform: Linux-4.15.0-42-shopee-generic-x86_64-with-glibc2.23
- Python version: 3.9.7
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes

### Who can help?
@NielsRogge

### Information
- [X] The official example scripts
- [X] My own modified scripts

### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction
I used the script (https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mim.py) to continue pretraining a Swin Transformer on my own dataset. I only modified the way the data is read, as shown below:
```python
ds = {}
if 'train' in data_args.data_files.keys():
    train_images = os.listdir(data_args.data_files['train'])
    train_images_files = [os.path.join(data_args.data_files['train'], image) for image in train_images]
    ds['train'] = Dataset.from_dict({'image': train_images_files}).cast_column("image", Image())
if 'validation' in data_args.data_files.keys():
    val_images = os.listdir(data_args.data_files['validation'])
    val_images_files = [os.path.join(data_args.data_files['validation'], image) for image in val_images]
    ds['validation'] = Dataset.from_dict({'image': val_images_files}).cast_column("image", Image())
```
The launch command is:
```bash
python /data/min.ming/project_user_gender_optimization/main/model_further_pretrain.py \
    --model_name_or_path /data/min.ming/project_user_gender_optimization/res/swin-tiny-patch4-window7-224 \
    --train_dir /data/min.ming/project_user_gender_optimization/data/avatar/train/ID_TRAIN_100000 \
    --do_train \
    --per_device_train_batch_size 8 \
    --num_train_epochs 5 \
    --output_dir /data/min.ming/project_user_gender_optimization/tmp/swintransformer_test \
    --overwrite_output_dir \
    --report_to none
```
However, I get `ZeroDivisionError: integer division or modulo by zero`.

### Expected behavior
Do you know where the problem is? Training should run without the error.
11-06-2022 09:53:41
11-06-2022 09:53:41
Full error log ``` File "/data/min.ming/project_user_gender_optimization/main/model_further_pretrain.py", line 437, in <module> main() File "/data/min.ming/project_user_gender_optimization/main/model_further_pretrain.py", line 411, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/transformers/trainer.py", line 1498, in train return inner_training_loop( File "/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/transformers/trainer.py", line 1714, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 681, in __next__ data = self._next_data() File "/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 721, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2165, in __getitem__ return self._getitem( File "/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2149, in _getitem pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) File "/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 491, in query_table pa_subtable = _query_table_with_indices_mapping(table, key, indices=indices) File "/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 57, in _query_table_with_indices_mapping return _query_table(table, key) File "/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 81, in _query_table return table.fast_slice(key % table.num_rows, 1) ZeroDivisionError: integer division or modulo by zero ```<|||||>@mm1352363 how did you solved the problem? I'm facing the same issue.<|||||>I got the exact same error when I wrapped the model in `model = torch.nn.DataParallel(model)`, passed the wrapped model to `Trainer` and called `trainer.evaluate()` or `trainer.train()`. Turned out the `Trainer` handles multiple GPUs automatically and wrapping the model in `DataParallel` is not necessary. Removing `model = torch.nn.DataParallel(model)` solved the issue.
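Drawing on the last comment above, here is a minimal, self-contained sketch of the working pattern: hand the bare model to `Trainer` and let it manage multiple GPUs itself. The checkpoint and the dummy dataset are placeholders for illustration, not the Swin MIM setup from the original issue.

```python
import torch
from torch.utils.data import Dataset
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments


class DummyDataset(Dataset):
    """A tiny stand-in dataset so the sketch runs end to end."""

    def __init__(self, n=16, seq_len=8):
        self.x = torch.randint(0, 1000, (n, seq_len))
        self.y = torch.randint(0, 2, (n,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        return {
            "input_ids": self.x[i],
            "attention_mask": torch.ones_like(self.x[i]),
            "labels": self.y[i],
        }


model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
# model = torch.nn.DataParallel(model)  # <- wrapping like this triggered the error; leave it out

trainer = Trainer(
    model=model,  # pass the unwrapped model; Trainer sets up multi-GPU training itself
    args=TrainingArguments(output_dir="./out", per_device_train_batch_size=4, num_train_epochs=1),
    train_dataset=DummyDataset(),
)
trainer.train()
```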
transformers
20,084
closed
[Docs] Add resources of OpenAI GPT
# What does this PR do? Adds resources of OpenAI GPT according to [this issue](https://github.com/huggingface/transformers/issues/20055) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #20055 (partially) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-06-2022 07:56:41
11-06-2022 07:56:41
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20084). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @stevhliu, I added relevant scripts and notebooks so please have a look!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20084). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20084). All of your documentation changes will be reflected on that endpoint.<|||||>@stevhliu I changed the doc following your comments! Please have a look<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20084). All of your documentation changes will be reflected on that endpoint.<|||||>@stevhliu @sgugger Thanks for your review! Hope to contribute more to transformers!
transformers
20,083
closed
Where is the Translation template?
I want to translate the docs in my leisure time, and I followed the guide, but I could not find the "Translation template"...
11-06-2022 06:44:12
11-06-2022 06:44:12
There's some relevant issues on this where people encountered the same issue: * #17404 * #17028 For context, @bfss is stating that the [TRANSLATING.md](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md) documentation says: > To get started, navigate to the [Issues](https://github.com/huggingface/transformers/issues) page of this repo and check if anyone else has opened an issue for your language. If not, open a new issue by selecting the "Translation template" from the "New issue" button. However, no such issue template exists. --- That said, there are also some larger issues tracking the progress of the translation of a certain language. Perhaps these can be used as an informal template or guide if no issue for your language exists. * https://github.com/huggingface/transformers/issues?q=Tranformers+documentation+translation+to+<|||||>Thank you for your reply~ @tomaarsen <|||||>Trying to add the template right now. https://github.com/huggingface/transformers/pull/20199
transformers
20,082
closed
Models trained using DeepSpeed ZeRO stage 3 have corrupted model weight shapes
### System Info transformers version: 4.21.1 | 4.24.0 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am currently trying to use deepspeed to finetune a AutoModelForCausalLM model (facebook/opt1.3b) on a multi-GPU instance with ZeRO optimization with the unmodified `run_clm_no_trainer.py` script from the examples. When I use ZeRO stage 2 to train the model, the model weights can be loaded normally. However, when I try using ZeRO stage 3 with CPU offloads for the optimizer weights, the model training proceeds normally with loss values and metrics that make sense. But I get the follow error when I try loading the weights. ``` RuntimeError: Error(s) in loading state_dict for OPTForCausalLM: size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([50272, 2560]). size mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2050, 2560]). size mismatch for model.decoder.layers.0.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]). size mismatch for model.decoder.layers.0.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]). size mismatch for model.decoder.layers.0.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]). size mismatch for model.decoder.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]). ... size mismatch for model.decoder.layers.31.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]). size mismatch for model.decoder.layers.31.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]). size mismatch for lm_head.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([50272, 2560]). You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method. ``` This is very strange as the `torch.Size([0])` error seems to be pervasive across all layers of the model, suggesting the weights are just empty and uninitialized. This is just speculation as the documentation does not seem to address the specifics of training with different ZeRO stages. I have tried the loading the model manually using `AutoModelForCausalLM.from_pretrained('./model_dir')` where `model_dir` is where the weights were saved after training, yet the same error is still thrown. 
I am not sure whether this is a bug or whether using ZeRO stage 3 is simply unsupported at the moment. Any help would be much appreciated.

### Expected behavior
Models trained using ZeRO stage 3 should load correctly.
11-05-2022 20:15:01
11-05-2022 20:15:01
Full error message with all the incorrect layers is below. The same `torch.Size([0])` mismatch is reported for every weight, so only the head and tail of the log are kept here:
```
RuntimeError: Error(s) in loading state_dict for OPTForCausalLM:
size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([50272, 2560]).
size mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2050, 2560]).
size mismatch for model.decoder.layers.0.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).
size mismatch for model.decoder.layers.0.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).
size mismatch for model.decoder.layers.0.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).
size mismatch for model.decoder.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).
size mismatch for model.decoder.layers.0.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).
size mismatch for model.decoder.layers.0.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).
[... the same size mismatch is reported for every attention and feed-forward weight of decoder layers 1-31; only the end of the log follows ...]
size mismatch for model.decoder.layers.31.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]). size mismatch for model.decoder.layers.31.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]). size mismatch for model.decoder.layers.31.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]). size mismatch for model.decoder.layers.31.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]). size mismatch for lm_head.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([50272, 2560]). You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method. ```<|||||>cc @pacman100 <|||||>Hello @JohnnyRacer, can you provide script of how you are saving and loading weights post training? Also, what is the accelerate config? <|||||>Hey @pacman100 . I am using the unmodified `run_clm_no_trainer.py` from the latest commit of `transformers`, with the following training commands : ``` accelerate launch run_clm_no_trainer.py \ --model_name_or_path facebook/opt-1.3b \ --dataset_name wikitext \ --num_train_epochs 6 \ --block_size 128 \ --output_dir ./opt-1.3b-wikitext ``` Accelerate config from config file : ```json { "compute_environment": "LOCAL_MACHINE", "deepspeed_config": { "gradient_accumulation_steps": 1, "offload_optimizer_device": "cpu", "offload_param_device": "none", "zero3_init_flag": false, "zero3_save_16bit_model": false, "zero_stage": 3 }, "distributed_type": "DEEPSPEED", "downcast_bf16": "no", "fsdp_config": {}, "gpu_ids": null, "machine_rank": 0, "main_process_ip": null, "main_process_port": null, "main_training_function": "main", "mixed_precision": "fp16", "num_machines": 1, "num_processes": 2, "rdzv_backend": "static", "same_network": true, "use_cpu": false } ``` The config doesn't show it but I have it configured for a multi-GPU setup on a single local instance with 2 accelerators. And I have tried loading the models with the following commands using `transfomers.AutoModelForCausalLM` ```python import torch from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b") model.load_state_dict(torch.load("./opt-1.3b-wikitext/pytorch_model.bin")) ``` As well as loading the model directly from the directory via the model config by using: ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("opt-1.3b-wikitext") ``` Both of these methods results in the same error that I have described above.<|||||>Hello @JohnnyRacer , please refer below code snippet on changes required when saving deepspeed ZeRO-3 model. The example can be found here: [deepspeed_with_config_support.py](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/deepspeed_with_config_support.py) https://github.com/huggingface/accelerate/blob/cea6aaa1161d45f7f23ef33fcc3b0a5999ebb5a1/examples/by_feature/deepspeed_with_config_support.py#L712-L723<|||||>Thanks @pacman100 . Just finished training the model and can confirm loading works correctly with the script you have linked. 
However, I still had to modify the script to include the fix for an issue I hit earlier so that the weights load correctly: [link to issue](https://github.com/huggingface/transformers/issues/19959). <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
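For reference, a minimal sketch of the ZeRO-3 saving pattern from the linked `deepspeed_with_config_support.py` example (a sketch, not the verbatim snippet; `accelerator`, `model`, and `tokenizer` are the objects already created in `run_clm_no_trainer.py`, and the output directory is the one used above):

```python
output_dir = "./opt-1.3b-wikitext"

accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
# Under ZeRO-3 the parameters are partitioned across ranks, so the full state
# dict has to be gathered through the accelerator rather than read directly
# from the wrapped module; reading it directly is what produces the empty
# torch.Size([0]) tensors in the saved checkpoint.
unwrapped_model.save_pretrained(
    output_dir,
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    state_dict=accelerator.get_state_dict(model),
)
if accelerator.is_main_process:
    tokenizer.save_pretrained(output_dir)
```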
transformers
20,081
closed
Discrepancy between PegasusTokenizer and PegasusTokenizerFast
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.8.0-51-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patil-suraj ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import PegasusTokenizerFast,PegasusTokenizer toker_fast = PegasusTokenizerFast.from_pretrained("/data/pretrained_model/pegasus_large/") toker = PegasusTokenizer.from_pretrained("/data/pretrained_model/pegasus_large/") print(toker_fast.decode(toker_fast.encode("huggingface/transfomer"),skip_special_tokens=False)) ## huggingface/transfomer</s> print(toker.decode(toker.encode("huggingface/transfomer"),skip_special_tokens=False)) ## huggingface/transfomer ``` ### Expected behavior These two should be the same. I suppose this is the problem of `PegasusTokenizer`. Because EOS token is needed in Generation Task.
11-05-2022 15:06:50
11-05-2022 15:06:50
I think this may be the problem of `PegasusTokenizer.decode()`. It indeed adds `</s>` to the end of the sentence, but the decoder fails to keep it even when `skip_special_tokens=False`. <img width="819" alt="draft ipynb — selfmem SSH: wict3090 2022-11-06 10-19-50" src="https://user-images.githubusercontent.com/38466901/200150974-1bde2180-8c6a-4ea0-94f8-6e36a20fceca.png"> <|||||>I think this was recently fixed by #15775. At least on `google/pegasus-xsum` and the main branch of Transformers, I don't see any differences in the outputs. Not sure if this is the model you were using since yours is local. Could you give us a repo ID on the Hub if the issue persists on your side?<|||||>After updating transformers to the latest version with ```shell pip uninstall transformers pip install transformers ``` the problem still exists on `google/pegasus-large` and `google/pegasus-xsum`: ![draft ipynb — selfmem SSH: 45a3159k71 zicp vip 2022-11-07 23-31-11](https://user-images.githubusercontent.com/38466901/200349559-03707957-2f9c-4a80-9f06-909d27b79585.png) ![draft ipynb — selfmem SSH: 45a3159k71 zicp vip 2022-11-07 23-32-12](https://user-images.githubusercontent.com/38466901/200349758-2f9a86a2-0aa0-433d-aa14-a307c164d2d5.png) <|||||>Sorry, I didn't notice that this PR had not been merged yet. https://github.com/huggingface/transformers/pull/15775 indeed solves this problem. Thanks! ![draft ipynb — selfmem SSH: 45a3159k71 zicp vip 2022-11-07 23-45-40](https://user-images.githubusercontent.com/38466901/200352923-40e6ef4e-d60e-4441-98ca-ecd9aad2c061.png)
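A quick way to confirm the two tokenizers agree once the fix is in (a minimal sketch; it assumes a `transformers` version that includes #15775 and access to `google/pegasus-xsum`):

```python
from transformers import PegasusTokenizer, PegasusTokenizerFast

slow = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
fast = PegasusTokenizerFast.from_pretrained("google/pegasus-xsum")

text = "huggingface/transformer"
slow_out = slow.decode(slow.encode(text), skip_special_tokens=False)
fast_out = fast.decode(fast.encode(text), skip_special_tokens=False)
# Both should now end with the EOS token, e.g. "huggingface/transformer</s>"
assert slow_out == fast_out, (slow_out, fast_out)
```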
transformers
20,080
closed
[Doctest] Add configuration_dpr.py
# What does this PR do? Adds configuration_dpr.py to utils/documentation_tests.txt Based on https://github.com/huggingface/transformers/issues/19487 @ydshieh can you please have a look? thanks :D <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-05-2022 13:19:33
11-05-2022 13:19:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,079
closed
Exception on saving results in official glue example scripts
### System Info - `transformers` version: 4.25.0.dev0 - Platform: Linux-4.14.81.bm.22-amd64-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger, @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I was running the official glue example script `transformers/examples/pytorch/text-classification/run_glue_no_trainer.py` on STS-B task. ```sh export TASK_NAME=stsb python run_glue_no_trainer.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --max_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ ``` The training went well, but on saving the results it raised the error below: ``` Configuration saved in /tmp/stsb/config.json Model weights saved in /tmp/stsb/pytorch_model.bin tokenizer config file saved in /tmp/stsb/tokenizer_config.json Special tokens file saved in /tmp/stsb/special_tokens_map.json Traceback (most recent call last): File "run_glue_no_trainer.py", line 633, in <module> main() File "run_glue_no_trainer.py", line 629, in main json.dump({"eval_accuracy": eval_metric["accuracy"]}, f) KeyError: 'accuracy' ``` ### Expected behavior Some of the glue tasks (STS-B, CoLA) don't use "accuracy" as metric. Maybe need to check the metric keys before accessing `eval_metric`. https://github.com/huggingface/transformers/blob/504db92e7da010070c36e185332420a1d52c12b2/examples/pytorch/text-classification/run_glue_no_trainer.py#L627-L629 BTW, I have noticed that this block of code also appears in lots of other example scripts like multiple-choice, semantic-segmentation, etc. I'm not sure whether those scripts have the same issue.
11-05-2022 08:03:59
11-05-2022 08:03:59
Yes, the whole `eval_metric` dict should probably be dumped without accessing keys. Do you want to open a PR with this change? cc @muellerzr who wrote this.<|||||>Yeah, I'd like to help. The `eval_metric` should be dumped with all its keys prefixed by `eval_`, just like what `run_glue.py` does. https://github.com/huggingface/transformers/blob/504db92e7da010070c36e185332420a1d52c12b2/examples/pytorch/text-classification/run_glue.py#L573 I happen to find an example script that already fixed this issue by prefixing all keys in `eval_metric` before saving it. https://github.com/huggingface/transformers/blob/6cc06d17394f5715cdf2d13a1ef7680bedaee9e2/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py#L66-L86 I will create a PR to migrate this solution to all remaining unfixed examples. Is it ok?<|||||>That would be great, yeah!
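For reference, a minimal sketch of the approach from the linked `run_qa_beam_search_no_trainer.py` applied here (`eval_metric` and `args.output_dir` are the existing variables in `run_glue_no_trainer.py`):

```python
import json
import os

# STS-B reports pearson/spearmanr and CoLA reports matthews_correlation, so dump
# the whole metric dict with an "eval_" prefix instead of reading "accuracy".
if args.output_dir is not None:
    all_results = {f"eval_{k}": v for k, v in eval_metric.items()}
    with open(os.path.join(args.output_dir, "all_results.json"), "w") as f:
        json.dump(all_results, f)
```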
transformers
20,078
closed
Converting CLIPText Model (transformers.CLIPTextModel) Embeddings Back to Text
### Feature request Is there a method for converting CLIPText model (transformers.CLIPTextModel) embeddings back to text? I looked in the documentation but could not find anything that addresses this specific query. I am also interested in finding out the following: - Are there tools in the Hugging Face ecosystem for calculating the weighted average of embeddings? - Is there a way to query the CLIP model using embeddings rather than text or image inputs? ### Motivation I would like to edit prompts mathematically, and the easiest way to do this would be to get the vector embeddings and apply the desired mathematical transformations to them. ### Your contribution I have no contribution beyond my question.
11-05-2022 06:15:02
11-05-2022 06:15:02
Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.
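For the weighted-average part of the question, a minimal sketch with plain PyTorch; the checkpoint name and the mixing weights are only placeholders, and this covers averaging embeddings, not decoding them back to text:

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a cat", "a photo of a dog"]
weights = torch.tensor([0.7, 0.3])  # placeholder mixing weights

inputs = tokenizer(prompts, padding=True, return_tensors="pt")
with torch.no_grad():
    pooled = model(**inputs).pooler_output  # (num_prompts, hidden_size)

# Weighted average of the pooled text embeddings.
blended = (weights[:, None] * pooled).sum(dim=0) / weights.sum()
```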
transformers
20,077
closed
Use huggingface_hub.model_info() to get pipeline_tag
# What does this PR do? This PR replaces raw HTTP GET with `huggingface_hub.model_info()` to get `pipeline_tag` of model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Narsil @LysandreJik
11-04-2022 23:50:28
11-04-2022 23:50:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR @y-tag!
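For context, a minimal sketch of the `huggingface_hub` call this PR switches to (the repo id is just an example):

```python
from huggingface_hub import model_info

info = model_info("gpt2")
print(info.pipeline_tag)  # e.g. "text-generation"
```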
transformers
20,076
closed
[Minor change] Remove mention of paying subscription
# What does this PR do? Sorry for the two pull requests, I used the online edit function and wasn't sure how to group the two commits into one PR. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-04-2022 21:11:19
11-04-2022 21:11:19
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,075
closed
[Minor change] Remove mention of paying subscription
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Maybe @sgugger ? I'm not sure Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-04-2022 21:10:02
11-04-2022 21:10:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,074
closed
Add SpA-Former
# What does this PR do? This PR adds the SpA-Former model to the 🤗 repository. I also opened an Issue for adding the model https://github.com/huggingface/transformers/issues/19971 # Who can review? @NielsRogge
11-04-2022 20:56:32
11-04-2022 20:56:32
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @shivance do you want to proceed with this?
transformers
20,072
closed
save_pretrained not working correctly when using device_map="auto" for big models in from_pretrained
### System Info - `transformers` version: 4.21.3 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.5 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.0 (True) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <True> - Using distributed or parallel set-up in script?: <True> ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import T5ForConditionalGeneration ML_MODEL = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto") ML_MODEL.save_pretrained("custom_path") ``` 1) Load Flan-T5-xxl using from_pretrained with device_map="auto". In my case the model is loaded into 24 GB of GPU RAM (RTX 3090) and the rest of the model is loaded into CPU RAM. 2) Save the model to a custom path using save_pretrained. In my case save_pretrained seems to save only the model weights held in GPU RAM: checking the file sizes in the custom directory, only chunks 1 and 2 of the model.bin files have the expected 10 GB; chunk 3 is only partly saved and chunks 4 and 5 have only a few kB. ### Expected behavior When using from_pretrained without device_map="auto", the model is completely loaded into CPU RAM and also completely saved using save_pretrained: checking the file sizes in the custom path, chunks 1 to 4 have the expected 10 GB and chunk 5 has the expected 6 GB, the same file sizes as in the Transformers cache directory. I expect the same complete save when using from_pretrained with device_map="auto".
11-04-2022 20:48:25
11-04-2022 20:48:25
Thanks for the report! Using `device_map="auto"` is only for inference, and it indeed does not work with `save_pretrained` yet, especially with offloaded weights. We will look at adding support for this in the future!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
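Until saving with offloaded weights is supported, a possible workaround sketch based on the expected behaviour described above: load the checkpoint without `device_map` (which needs enough CPU RAM for the full model), save it, and keep `device_map="auto"` for inference only.

```python
from transformers import T5ForConditionalGeneration

# Loading without device_map keeps the full state dict on CPU, so every
# checkpoint shard is written out completely by save_pretrained.
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl")
model.save_pretrained("custom_path")

# For inference, reload the saved checkpoint with dispatched/offloaded weights.
model = T5ForConditionalGeneration.from_pretrained("custom_path", device_map="auto")
```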
transformers
20,071
closed
Transformer is not compatible with Python 3.11.0
### System Info ``` Microsoft Windows [Version 10.0.22621.674] (c) Microsoft Corporation. All rights reserved. C:\Users\donhu>wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py 'wget' is not recognized as an internal or external command, operable program or batch file. C:\Users\donhu># For security purposes, please check the contents of collect_env.py before running it. '#' is not recognized as an internal or external command, operable program or batch file. C:\Users\donhu>python collect_env.py python: can't open file 'C:\\Users\\donhu\\collect_env.py': [Errno 2] No such file or directory C:\Users\donhu>cd d: D:\ C:\Users\donhu>cd /d D: D:\>python collect_env.py Collecting environment information... PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 Pro GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22621-SP0 Is CUDA available: N/A CUDA runtime version: 11.7.64 CUDA_MODULE_LOADING set to: N/A GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1660 SUPER Nvidia driver version: 512.77 cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin\cudnn_ops_train64_8.dll HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A Versions of relevant libraries: [pip3] numpy==1.23.4 [conda] Could not collect D:\> ``` ### Who can help? @donhuvy ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use project https://github.com/donhuvy/vy_thesis Set up project. Transformer is not compatible with Python 3.11.0 ### Expected behavior Transformer is compatible with Python 3.11.0
11-04-2022 20:30:47
11-04-2022 20:30:47
Hey @donhuvy, I don't see anything specific to `transformers`, only to `torch`. How is this related to transformers?<|||||>It is at https://github.com/donhuvy/vy_thesis/blob/main/source/train.py#L22 ``` tokenizer=transformers.AutoTokenizer.from_pretrained(hyps_file["encoder"], use_fast=False), ``` From your experience, are you sure Transformers and its other dependencies work OK with Python 3.11.0?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Is support for Python 3.11 available now? Please share details. Thanks<|||||>As mentioned in the comments above, the problem came from a soft dependency of Transformers (PyTorch). I think they support Python 3.11 now, but it is best to ask on their repo/forums if you are encountering any issue :-)<|||||>According to my tests, the latest `huggingface/transformers` installation on Python 3.11 fails because it tries to install `sentencepiece==0.1.97`. ![Screenshot 2023-07-25 at 8 54 47](https://github.com/huggingface/transformers/assets/12232897/f92d74d5-213d-4295-a900-40b46fdf12b9) [sentencepiece==0.1.98 is enabled for Python 3.11](https://github.com/google/sentencepiece/issues/810) Could you please change the `huggingface/transformers` requirement to `sentencepiece==0.1.98`?<|||||>We do not pin [`sentencepiece`](https://github.com/huggingface/transformers/blob/ee1eb3b325ce360bbd6c910c1402bca9dfb418f9/setup.py#L165), so the upper bound comes from something in your environment, not Transformers. In fact our CI (which runs `pip install transformers[all]`) installs `sentencepiece==0.1.99`.
transformers
20,070
closed
IndexError running ESMFold
### System Info CentOS 7, transformers-4.25.0.dev0, Python 3.10.6 ### Who can help? @Rocketknight1 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running the following example script (added in #20000) is throwing an IndexError for me. I tried both the `Rocketknight1/esmfold_v1` and `facebook/esmfold_v1` model repositories ```python from transformers import AutoTokenizer, EsmForProteinFolding model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1") tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1") inputs = tokenizer(["MLKNVQVQLV"], return_tensors="pt") # A tiny random peptide outputs = model(**inputs) folded_positions = outputs.positions ``` ``` Traceback (most recent call last): File "test_esmfold.py", line 7, in <module> outputs = model(**inputs) File "torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "transformers/models/esm/modeling_esmfold.py", line 2121, in forward esmaa = self.af2_idx_to_esm_idx(aa, attention_mask) File "transformers/models/esm/modeling_esmfold.py", line 2211, in af2_idx_to_esm_idx return self.af2_to_esm[aa] IndexError: index 24 is out of bounds for dimension 0 with size 22 ``` ### Expected behavior No IndexError
11-04-2022 17:54:33
11-04-2022 17:54:33
I guess you need to call ``` inputs = tokenizer(["MLKNVQVQLV"], return_tensors="pt", add_special_tokens=False) ``` The code example was already updated: https://github.com/huggingface/transformers/blob/main/src/transformers/models/esm/modeling_esmfold.py#L2103 <|||||>Thanks @maxjeblick! And yes - the `ESMFold` tokenizer doesn't use special tokens, but the other `ESM` tokenizers do. I'll see if I can set this in the config so that users don't have to keep remembering it, because I kept getting errors from forgetting it too!
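The full reproduction with that change applied, for reference (a sketch; loading `facebook/esmfold_v1` needs a sizeable amount of memory):

```python
from transformers import AutoTokenizer, EsmForProteinFolding

model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")
tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1")

# The folding head maps tokens straight to residues, so BOS/EOS must be skipped.
inputs = tokenizer(["MLKNVQVQLV"], return_tensors="pt", add_special_tokens=False)
outputs = model(**inputs)
folded_positions = outputs.positions
```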
transformers
20,069
closed
Show installed libraries and their versions
# What does this PR do? Similar to #20026, to make this information easier to access.
11-04-2022 15:35:07
11-04-2022 15:35:07
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,068
closed
Update documentation on absolute position embed seq2seq models
Update documentation on seq2seq models with absolute positional embeddings to be in line with BERT and GPT2. Issue #19581. For models with absolute positional embeddings, it is usually not a good idea to left-pad: if the positional embeddings are not shifted by the right amount for each element in the batch, the results will not be correct. Further work may be required to add a `position_ids` kwarg (possibly for both the encoder and decoder) to these models, similar to BERT and GPT2. However, at the very least the documentation should be updated to be consistent with BERT/GPT2 and provide a warning.
11-04-2022 15:18:32
11-04-2022 15:18:32
_The documentation is not available anymore as the PR was closed or merged._
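A minimal illustration of the padding concern described in the PR body (the model name is only an example of a seq2seq model with learned absolute position embeddings):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
# With absolute position embeddings, right padding keeps every real token at
# the position index it was trained with; left padding would shift them.
tokenizer.padding_side = "right"
batch = tokenizer(
    ["a short input", "a noticeably longer input sentence"],
    padding=True,
    return_tensors="pt",
)
```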