| column | dtype | values / lengths |
| --- | --- | --- |
| repo | string (categorical) | 1 distinct value |
| number | int64 | 1 to 25.3k |
| state | string (categorical) | 2 distinct values |
| title | string | length 1 to 487 |
| body | string (nullable) | length 0 to 234k |
| created_at | string | length 19 |
| closed_at | string | length 19 |
| comments | string | length 0 to 293k |
transformers
17,857
closed
TF: XLA beam search + most generation-compatible models are now also XLA-generate-compatible
# What does this PR do? The much-awaited PR -- beam search is now XLA compatible. GPT2 is the only model with XLA beam search tests; more models will follow in subsequent PRs 🎊 Preliminary tests on my machine show that XLA beam search on GPU is ~26x faster (greedy search and sample are ~30x faster). Slow tests have been run for the usual generate models (gpt2, t5, rag, speech_to_text, encoder_decoder, vision_encoder_decoder, bart). EDIT: I've also generalized a few functions, and now ALL models that are compatible with generate are also compatible with XLA generate (with a few exceptions, when the models have no past cache support) __________________________________ A hard-earned lesson which is kinda obvious in hindsight: `if` branches can confuse the XLA compiler about variable shapes, tagging their shape as `<unknown>`, which in turn causes all sorts of exceptions. Out of curiosity, I tried replacing the `if` with `tf.cond`, but the `<unknown>` shape persisted (because the tensor could indeed have a different shape at tracing time, depending on the branch taken).
06-23-2022 19:58:17
06-23-2022 19:58:17
_The documentation is not available anymore as the PR was closed or merged._<|||||>Very cool! Can we try it out on at least one Encoder-Decoder architecture as well (just to know that this code holds true here)?<|||||>@patrickvonplaten @Rocketknight1 now with encoder-decoder tests, and ready for review -- I was working on it on a separate branch, so I've merged it into this one. Now, this PR standardizes the XLA model kwargs preparation, and most models can use the XLA functionality. Some models were incompatible for different reasons, so there is a new flag to gate XLA generation (and the flag is set in the problematic architectures). Finally, I'm also considering adding a general test like `test_xla_generate_fast`, but with `@slow`, beam search, and >100 tokens. It will probably break for a few models (like T5), but at least we would be able to automatically track which models are reliable with XLA beam search -- WDYT? <|||||>Note: as per the comment above, if this PR gets merged as it is, I will open an issue to track issues regarding XLA generation (relevant models failing fast tests, as well as models failing the slow tests)
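For readers following the XLA generation work above, here is a minimal sketch of how XLA-compiled generation is typically invoked (the model name, padded length, beam count, and token budget below are illustrative and not taken from the PR; exact keyword arguments can vary across library versions):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

# Compiling generate with XLA is where the reported ~26x/~30x speedups come from.
# Padding inputs to a fixed length keeps shapes static, so the function is traced once.
xla_generate = tf.function(model.generate, jit_compile=True)

inputs = tokenizer(["TensorFlow is"], return_tensors="tf", padding="max_length", max_length=32)
outputs = xla_generate(**inputs, num_beams=4, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```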
transformers
17,856
closed
Properly calculate the total train iterations and recalculate num epochs in no_trainer scripts
# What does this PR do? This PR fixes a situation where your `max_train_steps` was less than that of a single epoch, but yet the script would still continue through the entire first epoch due to how the number of batches recalculation was performed. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
06-23-2022 18:21:30
06-23-2022 18:21:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger new timings compared to the old ones. The main decreases were consistently seen in image_classification, swag, and squad (though squad is an iffy one; the previous two I can guarantee):

| Example | Before | After |
| --- | --- | --- |
| image_classification | 99.7 | 40.55 |
| swag | 65.21 | 55.31 |
| squad | 59.45 | 41.67 |
| clm | 37.45 | 35.69 |
| ner | 28.34 | 25.51 |
| glue | 21.88 | 19.35 |
| mlm | 18.52 | 15.47 |
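As a rough illustration of the recalculation order the PR description refers to, here is a small sketch (the numbers and variable names are made up for the example; this is not the scripts' actual code):

```python
import math

num_batches = 1000                 # len(train_dataloader)
gradient_accumulation_steps = 4
max_train_steps = 100              # user-supplied cap, smaller than one epoch here

num_update_steps_per_epoch = math.ceil(num_batches / gradient_accumulation_steps)  # 250
# Deriving the epoch count from the capped step budget lets the loop stop after
# max_train_steps instead of silently running through the whole first epoch.
num_train_epochs = math.ceil(max_train_steps / num_update_steps_per_epoch)         # 1
print(num_update_steps_per_epoch, num_train_epochs)  # 250 1
```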
transformers
17,855
closed
LayoutLMv2 training on sagemaker error: undefined value has_torch_function_variadic
### System Info ```shell transformer: 4.17.0 torch: 1.10.2 Platform: Sagemaker Deep Learning Container ``` ### Who can help? @NielsRogge ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The error only comes when training on Sagemaker using Huggingface. Scripts to start training on Sagemaker: Folder organization: ``` ./ ----sg_training.py ----scripts -------requirements.txt -------train.py ``` sg_training.py: ``` import boto3 import sagemaker from sagemaker.huggingface import HuggingFace if __name__ == "__main__": iam_client = boto3.client(...) role = iam_client.get_role(...)['Role']['Arn'] sess = sagemaker.Session() sagemaker_session_bucket = 's3-sagemaker-session' hyperparameters = {'epochs': 20, 'train_batch_size': 1, 'model_name': "microsoft/layoutxlm-base", 'output_dir': '/opt/ml/model/', 'checkpoints': '/opt/ml/checkpoints/', 'combine_train_val': True, 'exp_tracker': "all", 'exp_name': 'Sagemaker Training' } huggingface_estimator = HuggingFace(entry_point='train.py', source_dir='scripts', instance_type='ml.p3.2xlarge', instance_count=1, role=role, transformers_version='4.17.0', pytorch_version='1.10.2', py_version='py38', hyperparameters=hyperparameters, environment={'HF_TASK': 'text-classification'}, code_location='s3://dummy_code_location') huggingface_estimator.fit() ``` Entrypoint scripts folder: requirements.txt: ``` git+https://github.com/facebookresearch/detectron2.git ``` train.py: ``` import argparse import logging import os import sys from transformers import LayoutLMv2ForSequenceClassification def run(): model = LayoutLMv2ForSequenceClassification.from_pretrained('microsoft/layoutxlm-base', num_labels=5) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--epochs", type=int, default=3) parser.add_argument("--exp_name", type=str, default="Sagemaker Training") parser.add_argument("--train-batch-size", type=int, default=2) parser.add_argument("--eval-batch-size", type=int, default=1) parser.add_argument("--warmup_steps", type=int, default=500) parser.add_argument("--model_name", type=str) parser.add_argument("--learning_rate", type=str, default=1e-5) parser.add_argument("--combine_train_val", type=bool, default=False) # Data, model, and output directories parser.add_argument("--output-data-dir", type=str, default=os.environ["SM_OUTPUT_DATA_DIR"]) parser.add_argument("--checkpoints", type=str, default="/opt/ml/checkpoints") parser.add_argument("--model-dir", type=str, default='/opt/ml/code/model') parser.add_argument("--n_gpus", type=str, default=os.environ["SM_NUM_GPUS"]) args, _ = parser.parse_known_args() logger = logging.getLogger(__name__) logging.basicConfig( level=logging.getLevelName("INFO"), handlers=[logging.StreamHandler(sys.stdout)], format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", ) run() ``` ### Expected behavior ```shell Here the log on the error from AWS Cloud Watch: Invoking script with the following command: /opt/conda/bin/python3.8 train.py --checkpoints /opt/ml/checkpoints/ --combine_train_val True --epochs 20 --exp_name Sagemaker_Training_doc_cls --exp_tracker all --model_name microsoft/layoutxlm-base --output_dir /opt/ml/model/ --train_batch_size 1 Traceback (most recent call last): File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 2777, in _get_module return importlib.import_module("." 
+ module_name, self.__name__) File "/opt/conda/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 848, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/opt/conda/lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 48, in <module> from detectron2.modeling import META_ARCH_REGISTRY File "/opt/conda/lib/python3.8/site-packages/detectron2/modeling/__init__.py", line 2, in <module> from detectron2.layers import ShapeSpec File "/opt/conda/lib/python3.8/site-packages/detectron2/layers/__init__.py", line 2, in <module> from .batch_norm import FrozenBatchNorm2d, get_norm, NaiveSyncBatchNorm, CycleBatchNormList File "/opt/conda/lib/python3.8/site-packages/detectron2/layers/batch_norm.py", line 4, in <module> from fvcore.nn.distributed import differentiable_all_reduce File "/opt/conda/lib/python3.8/site-packages/fvcore/nn/__init__.py", line 4, in <module> from .focal_loss import ( File "/opt/conda/lib/python3.8/site-packages/fvcore/nn/focal_loss.py", line 52, in <module> sigmoid_focal_loss_jit: "torch.jit.ScriptModule" = torch.jit.script(sigmoid_focal_loss) File "/opt/conda/lib/python3.8/site-packages/torch/jit/_script.py", line 1310, in script fn = torch._C._jit_script_compile( File "/opt/conda/lib/python3.8/site-packages/torch/jit/_recursive.py", line 838, in try_compile_fn return torch.jit.script(fn, _rcb=rcb) File "/opt/conda/lib/python3.8/site-packages/torch/jit/_script.py", line 1310, in script fn = torch._C._jit_script_compile( RuntimeError: undefined value has_torch_function_variadic: File "/opt/conda/lib/python3.8/site-packages/torch/utils/smdebug.py", line 2962 >>> loss.backward() """ if has_torch_function_variadic(input, target, weight, pos_weight): ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE return handle_torch_function( binary_cross_entropy_with_logits, 'binary_cross_entropy_with_logits' is being compiled since it was called from 'sigmoid_focal_loss' File "/opt/conda/lib/python3.8/site-packages/fvcore/nn/focal_loss.py", line 36 targets = targets.float() p = torch.sigmoid(inputs) ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none") ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE p_t = p * targets + (1 - p) * (1 - targets) loss = ce_loss * ((1 - p_t) ** gamma) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "train.py", line 6, in <module> from transformers import LayoutLMv2ForSequenceClassification File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 2768, in __getattr__ value = getattr(module, name) File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 2767, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 2779, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.layoutlmv2.modeling_layoutlmv2 because of the following error (look up to see its 
traceback): undefined value has_torch_function_variadic: File "/opt/conda/lib/python3.8/site-packages/torch/utils/smdebug.py", line 2962 >>> loss.backward() """ if has_torch_function_variadic(input, target, weight, pos_weight): ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE return handle_torch_function( binary_cross_entropy_with_logits, 'binary_cross_entropy_with_logits' is being compiled since it was called from 'sigmoid_focal_loss' File "/opt/conda/lib/python3.8/site-packages/fvcore/nn/focal_loss.py", line 36 targets = targets.float() p = torch.sigmoid(inputs) ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none") ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE p_t = p * targets + (1 - p) * (1 - targets) loss = ce_loss * ((1 - p_t) ** gamma) ``` ```
06-23-2022 18:09:49
06-23-2022 18:09:49
cc @philschmid (hope I am tagging correctly)<|||||>@Natlem could you try adding `debugger_hook_config=False` to the `HuggingFace` estimator? ```python huggingface_estimator = HuggingFace(entry_point='train.py', source_dir='scripts', instance_type='ml.p3.2xlarge', instance_count=1, role=role, transformers_version='4.17.0', pytorch_version='1.10.2', py_version='py38', hyperparameters=hyperparameters, environment={'HF_TASK': 'text-classification'}, code_location='s3://dummy_code_location', debugger_hook_config=False, ) ```<|||||>Hi @philschmid , I added `debugger_hook_config=False` and the error is gone now. Thanks!<|||||>Awesome, closing the issue. Feel free to reopen if you have more issues.<|||||>@Natlem I forwarded the error to the AWS team so that the debugger can be used soon. <|||||>@philschmid Thanks!<|||||>@philschmid do you have any idea why this solves the problem? Is it documented by AWS anywhere? SageMaker Debugger has cost me multiple days of time with the mysterious problems it produces, far more than anything else on SageMaker. I posted an issue on awslabs about this a while back and never got a reply. I would really like to know what is going on here. **For anyone encountering this while using a HyperparameterTuner** Passing `debugger_hook_config=False` in the `Estimator` will not solve the problem. Further, passing `environment={'USE_SMDEBUG':0}` also will not solve the problem. Somehow these settings never make it to a tuner's constituent training jobs. The only way to solve it is to set `ENV USE_SMDEBUG="0"` in the docker container that will be running the constituent training jobs.<|||||>> Somehow these settings never make it to a tuner's constituent training jobs. Are you using the `HuggingFace` estimator or the `HyperparameterTuner`?
transformers
17,854
closed
Fix Splinter test
# What does this PR do? `test_multi_gpu_data_parallel_forward` is not meant to run for `SplinterForQuestionAnswering`, because the number of question tokens might differ across replicas. This PR skips this test for `SplinterForQuestionAnswering`.
06-23-2022 18:00:47
06-23-2022 18:00:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,853
closed
Fail when using pipeline for the inference of DeBERTa-Vx ORTModels
### System Info ```shell - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.4.0-1080-aws-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0+cu102 (True) ``` ### Who can help? @LysandreJik @Narsil ### Reproduction ### Context The DeBERTa tokenizers output `token_type_ids` by default. However, in the use case of using `transformers.pipeline` for inference of `ORTModelForXXX` in Optimum. Depending on `config.type_vocab_size` the exported IR doesn't always take `token_type_ids` as input, and this will lead to failure as `onnxruntime.InferenceSession` is not tolerant of invalid inputs. * PR #17617 - Support of DeBERTa onnx * PR [#225](https://github.com/huggingface/optimum/pull/225) - Discussion in Optimum ```python from transformers import AutoTokenizer, pipeline from optimum.onnxruntime import ORTModelForSequenceClassification model = ORTModelForSequenceClassification.from_pretrained("microsoft/deberta-base",from_transformers=True) tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base") onnx_classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) text = "Hello, my dog is cute" pred = onnx_classifier(text) ``` Error Message: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/transformers/pipelines/text_classification.py", line 138, in __call__ result = super().__call__(*args, **kwargs) File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1043, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1050, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/transformers/pipelines/base.py", line 959, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/transformers/pipelines/text_classification.py", line 163, in _forward return self.model(**model_inputs) File "/home/ubuntu/optimum/optimum/modeling_base.py", line 31, in __call__ return self.forward(*args, **kwargs) File "/home/ubuntu/optimum/optimum/onnxruntime/modeling_ort.py", line 520, in forward outputs = self.model.run(None, onnx_inputs) File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 192, in run return self._sess.run(output_names, input_feed, run_options) onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:token_type_ids ``` ### Ideas To enable Pipeline for DeBERTa-Vxxx ONNX model, I am thinking of configuring `return_token_type_ids` argument in the [`preprocess` method](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/token_classification.py#L195-L200) of `XXXPipeline.preprocess` depending on `self.model.config.type_vocab_size`, which can remove `token_type_ids` from inputs when it is unused. WDYT?
06-23-2022 16:22:19
06-23-2022 16:22:19
Hi @JingyaHuang , The use case seems solid. Ideally we should be able to run the code as-is. But as far as I understand it's not really doable because `token_type_ids` **are** used in some models but not others, so finding a all covering solution is tricky. Am I correct ? Then, for adding the argument instead of adding `return_token_type_ids` I suggest adding `tokenizer_args` as a dict where you could pass `tokenizer_args={"return_token_type_ids": False}` . We can always think about promoting this particular argument to first class, but it seems that going more explicit is better here WDYT ? <|||||>Hi @Narsil, Exactly, the pipelines work well for many other ort models except for DeBERTa(s). As the exported DeBERTa ONNX model with `token_type_ids` can only be traced by `torch.jit.trace` when `model.config.type_vocab_size`>0. And `token_type_ids` are not traced thus not a valid input when `model.config.type_vocab_size`=0(default), it is definitely tricky. `tokenizer_kwargs` sounds good to me! We might want to enable users to do something like this: ```python tokenizer = AutoTokenizer.from_pretrained("{checkpoint}") model = ORTModelForSequenceClassification.from_pretrained("{checkpoint}") onnx_classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, return_token_type_ids=False) text = "Hello, my dog is cute" pred = onnx_classifier(text) ``` This snippet works already as [`TextClassificationPipeline.preprocess` takes `tokenizer_kwargs` as input](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_classification.py#L147), however it is not yet supported for other tasks(e.g. [token-classification](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/token_classification.py#L193), `FeatureExtractionPipeline`...) Is it something that we want to apply to other pipelines or there might be some other considerations? <|||||>Pinging @mfuntowicz to get his input on wether it's a tracing issue or a pipeline issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
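A minimal sketch of the idea discussed in this thread (the function and variable names are illustrative and this is not the pipeline's actual code): drop `token_type_ids` at preprocessing time when the exported graph was not traced with them.

```python
def prepare_onnx_inputs(text, tokenizer, model):
    inputs = tokenizer(text, return_tensors="np")
    # A model exported with type_vocab_size == 0 has no token_type_ids input,
    # and onnxruntime rejects unexpected feeds, so remove the key if present.
    if getattr(model.config, "type_vocab_size", 0) == 0:
        inputs.pop("token_type_ids", None)
    return dict(inputs)
```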
transformers
17,852
closed
Index RNG states by global rank in saves
# What does this PR do? As pointed out in #17829, in a multi-node training the `Trainer` saves the RNG states with the same filenames in the various nodes. This causes problems when the nodes share the same file system, so it's easier to just save each file indexed by global rank instead. Fixes #17829
06-23-2022 16:21:57
06-23-2022 16:21:57
_The documentation is not available anymore as the PR was closed or merged._
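A sketch of the saving scheme the PR describes (simplified, not the Trainer's exact code; the file-name pattern and use of the `RANK` environment variable are illustrative):

```python
import os
import random

import numpy as np
import torch

def save_rng_state(checkpoint_dir: str, global_rank: int) -> None:
    rng_states = {
        "python": random.getstate(),
        "numpy": np.random.get_state(),
        "cpu": torch.get_rng_state(),
    }
    if torch.cuda.is_available():
        rng_states["cuda"] = torch.cuda.get_rng_state()
    # Indexing by global rank means nodes sharing a filesystem write distinct files
    # instead of overwriting each other's RNG checkpoints.
    torch.save(rng_states, os.path.join(checkpoint_dir, f"rng_state_{global_rank}.pth"))
```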
transformers
17,851
closed
Add `trace_device` argument to `smp.DistributedModel` call in `Trainer()`
### Feature request Allow `trace_device` to be passed to `smp.DistributedModel` so that `Trainer` jobs that use SageMaker Model Parallel can support models that exceed a single GPU's memory. Tagging @philschmid and @patil-suraj, because I think you both are involved in the SageMaker work? ### Motivation The `Trainer` class provides native support for the SageMaker Model Parallel (smp) library; however, it does not support specifying the device where model tracing is conducted at the beginning of a smp training job. Because the default trace device is GPU, it is not possible to train a model that cannot fit in (a single) GPU memory, which, of course exactly when you would want to use model parallelism. This can be fixed by passing [trace_device](https://sagemaker.readthedocs.io/en/v2.20.0/api/training/smd_model_parallel_pytorch.html) to the `smp.DistributedModel` [call](https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/trainer.py#L1236) in `Trainer()`. The value of `trace_device` could be specified in the [same way other smp parameters are specified](https://github.com/huggingface/transformers/blob/acb709d55150501698b5b500ca49683b913d4b3d/src/transformers/training_args.py#L904), which is via a json string of smp parameters, `mp_parameters`. As noted [elsewhere](https://github.com/huggingface/transformers/issues/14851#issuecomment-1013422175_), `trace_device` is not currently supported in the HF DLC, but it is apparently roadmapped. Accordingly, my current workaround is to use a SageMaker pytorch estimator and a custom Trainer class with the necessary overrides to ensure that `trace_device` is passed. It would be nice to be able to use the base trainer class rather than this workaround. ### Your contribution If this seems reasonable, I'd be happy to open a PR.
06-23-2022 16:13:53
06-23-2022 16:13:53
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,850
closed
ViTForImageClassification
null
06-23-2022 15:53:57
06-23-2022 15:53:57
@lyutovad What's the issue?<|||||>Please follow the template when opening issues.
transformers
17,849
closed
Fix: torch.utils.checkpoint import error.
# What does this PR do? missing import statements still happens on training deberta models with gradient checkpointing. Fixes #17848 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik
06-23-2022 15:24:04
06-23-2022 15:24:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,848
closed
Raise AttributeError on training deberta models with gradient checkpointing
### System Info ```shell - `transformers` version: 4.20.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ``` ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction - deberta models and `Trainer` with `gradient_checkpointing=True` - `trainer.train()` - raise `AttributeError: module 'torch.utils' has no attribute 'checkpoint'` Refs #9617 ### Expected behavior ```shell nothing raised on training with `gradient_checkpointing=True`. ```
06-23-2022 15:22:22
06-23-2022 15:22:22
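A small illustration of the failure mode and the one-line fix. Whether the bare `import torch` case actually fails depends on the torch version and what else has been imported, so treat the first comment below as an assumption rather than a guarantee:

```python
import torch
# On affected versions, `torch.utils.checkpoint` is not populated by `import torch`
# alone, so the checkpoint call below would raise AttributeError without this import.
import torch.utils.checkpoint

x = torch.ones(2, 2, requires_grad=True)
out = torch.utils.checkpoint.checkpoint(lambda t: t * 2, x)
out.sum().backward()
print(x.grad)  # tensor of 2s: gradients flow through the checkpointed function
```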
transformers
17,847
closed
Troubleshooting.mdx Translation Italian
# What does this PR do? Fixes # (issue) Translation of Troubleshooting.mdx (english) to Italian and Update toctree #17459 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @mfumanelli
06-23-2022 14:49:35
06-23-2022 14:49:35
@F02934 Hi, I remade the translation because the previous PR had problem with the commits history. You can check if everything good with the translation and with the toctree<|||||>Hi @F02934! Sorry for the late reply. No worries! You simply have to edit the _toctree file, removing the empty sections, and adding a title, your _toctree file should look like this: ``` yaml - sections: - local: index title: πŸ€— Transformers - local: quicktour title: Tour rapido - local: installation title: Installazione title: Iniziare - sections: - local: pipeline_tutorial title: Pipeline per l'inferenza - local: autoclass_tutorial title: Carica istanze pre-allenate con AutoClass title: Esercitazione - sections: - local: troubleshooting title: Risoluzione dei problemi title: Guide pratiche ``` I forgot to add a section and to translate the last title!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @F02934 could you tanslate *How-to Gudes* --> Guide pratiche like in the @mfumanelli example? @sgugger @omarespejel <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,846
closed
Update modeling_cvt.py type hints
As shown in the colab notebook I added the missing type hints for " CvtForImageClassification CvtModel " # What does this PR do? Add missing type hints for CTV pytorch. #16059 following [this Colab notebook](https://colab.research.google.com/drive/1EvZTslb50yfRqIcXjCZFrbod4HrPdA0G?usp=sharing) Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Rocketknight1
06-23-2022 14:27:25
06-23-2022 14:27:25
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,845
closed
add MobileNetV2 model
# What does this PR do? Adds MobileNetV2 to the Transformers library. This includes an image classification head and a basic DeepLabV3+ semantic segmentation head. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-23-2022 13:45:31
06-23-2022 13:45:31
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17845). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17845). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17845). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17845). All of your documentation changes will be reflected on that endpoint.<|||||>Good to merge @hollance ?<|||||>@sgugger It seems to fail some tests but they don't look like they're from my changes? If you're OK with that test failing then I think this is ready to get merged.<|||||>Failure is a flaky test, unrelated to this PR.
transformers
17,844
closed
Update run_mlm.py
made comments consistent with run_glue.py # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> improves the comments to make more coherent with run_glue.py ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-23-2022 13:45:16
06-23-2022 13:45:16
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17844). All of your documentation changes will be reflected on that endpoint.<|||||>Your PR now touches 518 files, which is not what you intended probably. Make sure to use the specific versions of `black` we do by doing `pip install -e . [quality]` in the repo.<|||||>> Your PR now touches 518 files, which is not what you intended probably. Make sure to use the specific versions of `black` we do by doing `pip install -e . [quality]` in the repo. Sorry accidentally pressed requested a review <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,843
closed
[Flax] Add remat (gradient checkpointing)
# What does this PR do? Adds gradient checkpointing in Flax (_c.f._ #17399). The API currently takes the form of a method: ```python from transformers import BertConfig, FlaxBertModel model = FlaxBertModel(BertConfig()) model.enable_gradient_checkpointing() ``` Note: checkpointing has currently only been implemented for FlaxBert. Implementing for all Flax models is a TODO. TODO: - [x] Add checkpointing to `init` - [x] Add checkpointing to `from_pretrained` - [x] Add model tests for FlaxBert in `test_modeling_flax_bert` - [x] Decide on API: checkpointing with a kwarg (`gradient_checkpointing=True`) or a method (`model.gradient_checkpointing_enable()`)? - [x] Add API functionality for remat policies (c.f. https://github.com/google/jax/blob/636345fd67758c19c5345bee2301df34b6f1c540/jax/_src/ad_checkpoint.py#L44) - [ ] Copy checkpointing logic to all Flax models - [ ] Move model tests to `test_modeling_flax_common` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc @borisdayma
06-23-2022 13:39:22
06-23-2022 13:39:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>Is there an inconvenient in adding it to all layers? In my case I used it only on transformers blocks (attention + feed forward).<|||||>> Is there an inconvenient in adding it to all layers? By wrapping `FlaxBertLayer` in a `remat` operation, each Bert layer (attention, intermediate FF, final FF + optional cross-attention layers) has `remat` applied to it: https://github.com/huggingface/transformers/blob/ea8150a5f932ab28efbe2e7a31fee1ca77c289a5/src/transformers/models/bert/modeling_flax_bert.py#L555 We then use this remat'd layer to construct the Transformer block (layers collection): https://github.com/huggingface/transformers/blob/ea8150a5f932ab28efbe2e7a31fee1ca77c289a5/src/transformers/models/bert/modeling_flax_bert.py#L559-L562 Meaning that each component of the Bert layer is checkpointed, and that **all** Bert layers in the Transformer block (layers collection) are checkpointed. Would you like to see `remat` on the embeddings and pooler layers too? Imagine this wouldn't make a huge difference to performance at train time vs just checkpointing the entire Transformer block? <|||||>> Would you like to see `remat` on the embeddings and pooler layers too? Imagine this wouldn't make a huge difference to performance at train time vs just checkpointing the entire Transformer block? No actually Iβ€―thought it was on all layers but the way you did is great!<|||||>Cool! Once the tests are green, happy to merge it here :-)
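For readers unfamiliar with `remat`, here is a toy example of the underlying JAX primitive that the PR wraps around `FlaxBertLayer` (the shapes and the toy function are arbitrary, chosen only to show the mechanism):

```python
import jax
import jax.numpy as jnp

def block(x, w):
    return jnp.tanh(x @ w)

# jax.checkpoint (a.k.a. jax.remat) drops the block's intermediate activations in the
# forward pass and recomputes them during backprop, trading compute for memory.
checkpointed_block = jax.checkpoint(block)

w = jnp.ones((128, 128))
loss = lambda x: checkpointed_block(x, w).sum()
grads = jax.grad(loss)(jnp.ones((4, 128)))
print(grads.shape)  # (4, 128)
```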
transformers
17,842
closed
Fix FlaxBigBirdEmbeddings
# What does this PR do? Current `FlaxBigBirdEmbeddings` applies layer norm before dropout, while `BigBirdEmbeddings` and Google's original `BigBird` applies dropout first. This PR fixes this inconsistency. Flax (layernorm --> dropout) https://github.com/huggingface/transformers/blob/6f29029b05df221c0c37fd2e87aeadc9cb6ce5d7/src/transformers/models/big_bird/modeling_flax_big_bird.py#L232-L233 PyTorch (dropout immediately after embedding) https://github.com/huggingface/transformers/blob/6f29029b05df221c0c37fd2e87aeadc9cb6ce5d7/src/transformers/models/big_bird/modeling_big_bird.py#L311-L312 Google (dropout immediately after embedding) https://github.com/google-research/bigbird/blob/5f2a5aa7fbab23e32e0e0b41c5f0192f0c023e05/bigbird/core/utils.py#L565-L566
06-23-2022 13:36:44
06-23-2022 13:36:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>I will merge this PR today.
transformers
17,841
closed
Fix broken test for models with batchnorm
One of the Keras tests assumed that fitting a model for one iteration with a learning rate of zero would not change any weights. This is not true for `BatchNorm`, which updates its running means and variances regardless! As a result, the model after the iteration had slightly different outputs, which caused the test to be very flaky. We now reinitialize the model after the single training epoch to make sure this doesn't happen.
06-23-2022 13:33:25
06-23-2022 13:33:25
Related to #17427, cc @amyeroberts .<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>🧠 <|||||>Give @amyeroberts the brain emoji for that one - she identified the whole problem, I just fixed the test!
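A small stand-alone demonstration of the pitfall this PR works around, using a toy model and random data (it typically prints `False`, because the moving statistics shift even though the learning rate is zero):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential(
    [tf.keras.layers.BatchNormalization(input_shape=(4,)), tf.keras.layers.Dense(1)]
)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.0), loss="mse")

x = np.random.randn(32, 4).astype("float32")
y = np.random.randn(32, 1).astype("float32")

before = model(x, training=False).numpy()
model.fit(x, y, epochs=1, verbose=0)       # lr = 0, yet BatchNorm's moving mean/variance update
after = model(x, training=False).numpy()
print(np.allclose(before, after))          # usually False
```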
transformers
17,840
closed
Fix an error message in `BigBird`
# What does this PR do? Fix an error message in the `BigBird` model file, so we can see the actual/correct difference.
06-23-2022 11:15:31
06-23-2022 11:15:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,839
closed
CLI: handle multimodal inputs
# What does this PR do? Adds support for multimodal inputs (for models like CLIP), and adds the special input case for Wav2Vec2 (different audio input name).
06-23-2022 11:04:23
06-23-2022 11:04:23
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,838
closed
Get different weights from model.get_input_embeddings()
### System Info ```shell @patil-suraj, @patrickvonplaten - `transformers` version: 4.10.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.3 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```py from transformers import GPT2Config, GPT2LMHeadModel config = GPT2Config.from_pretrained("gpt2") model = GPT2LMHeadModel(config=config) # print (model1.get_input_embeddings().weight.shape) arr = model.get_input_embeddings().weight.detach().numpy() print (arr) print (arr.shape) ``` <img width="573" alt="image" src="https://user-images.githubusercontent.com/27731754/175279258-f0551493-2c42-4dbd-a17f-257d8298ae7c.png"> ### Expected behavior ```shell I think `model.get_input_embeddings()` should give same weights every time when it was called ```
06-23-2022 10:35:09
06-23-2022 10:35:09
Sorry, I made a mistake. I got the answer from the documentation. > config (:class:`~transformers.GPT2Config`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights. I need to use `GPT2LMHeadModel.from_pretrained("gpt2")` rather than `GPT2LMHeadModel(config=config)`
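To make the resolution concrete, a short sketch: with `from_pretrained` the embedding weights are loaded from the checkpoint once, so repeated calls return the same values (the shape shown assumes the standard `gpt2` checkpoint).

```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
emb = model.get_input_embeddings().weight.detach().numpy()
print(emb.shape)  # (50257, 768) for the gpt2 checkpoint
print((emb == model.get_input_embeddings().weight.detach().numpy()).all())  # True
```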
transformers
17,837
closed
BLOOM - Fix mask creation by default
- Fix a niche case where the mask is not fed to the forward function. cc @justheuristic
06-23-2022 09:45:50
06-23-2022 09:45:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>This happens if a user decides to run `model(input_ids=some_tensor)`, which originally happened during prototyping (in a notebook). It is indeed a niche case; however, the fact that the mask _is_ optional can hint to other users that they can keep it None - and it would be more intuitive if passing mask=None were equivalent to passing all ones, like in GPT2Model for example.
transformers
17,836
closed
replace `Python-base tokenizer` by `non-fast tokenizer` in error message
# What does this PR do? As one user rightly pointed out in an issue #17809, when a user receives the error `"tokens() is not available when using Python-based tokenizers"` it is not obvious that a python-based tokenizer refers to a tokenizer class without the term Fast at the end. I therefore propose to change the error messages using this term to refer to the term fast which is more easily understood by users. Fixes #17809 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Would love to have the approval of @sgugger , @LysandreJik , @patrickvonplaten or @patil-suraj :hugs:
06-23-2022 09:06:06
06-23-2022 09:06:06
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,835
closed
Batch mismatch in the given course example
### System Info ```shell torch 1.11.0 cuda 0.13 transformers 4.20.1 linux box , ubuntu 20.04 ``` ### Who can help? @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Go to The example provided. [Course fine-tune ](https://huggingface.co/course/chapter3/3?fw=pt) 2. Open a python file and dump all the cell provided as is. (no changes). UP To the `trainer.train()` 3. error ***** Running training ***** Num examples = 3668 Num Epochs = 3 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 1377 0%| | 0/1377 [00:00<?, ?it/s]Traceback (most recent call last): File "~/.conda/envs/hugg/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3457, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-10d9618241c7>", line 55, in <module> trainer.train() File " ~/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 1413, in train ignore_keys_for_eval=ignore_keys_for_eval, File " ~/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 1651, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "~/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 2345, in training_step loss = self.compute_loss(model, inputs) File "~/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 2377, in compute_loss outputs = model(**inputs) File "~/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "~/.conda/envs/hugg/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1775, in forward loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) File "~/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "~/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 1165, in forward label_smoothing=self.label_smoothing) File "~/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/functional.py", line 2996, in cross_entropy return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) ValueError: Expected input batch_size (624) to match target batch_size (8). ### Expected behavior ```shell Training the model. No errors. ``` ### My comments The local env matches that of colab in terms of package versions. I used python3.7 as the provided colab has, and the machine has a 1 GPU (cuda 11.3).
06-23-2022 08:01:32
06-23-2022 08:01:32
Restarted JetBrains and it worked! Hmmmm. Can I delete this issue?
transformers
17,834
closed
Issue in wav2vec2ForPretraining
### System Info ```shell In the example mentioned in the doc, when I print the loss, it is "None" ``` ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction import torch from transformers import AutoFeatureExtractor, Wav2Vec2ForPreTraining from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices from datasets import load_dataset feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base") ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 # compute masked indices batch_size, raw_sequence_length = input_values.shape sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length) mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2) mask_time_indices = torch.tensor(mask_time_indices, device=input_values.device, dtype=torch.long) with torch.no_grad(): outputs = model(input_values, mask_time_indices=mask_time_indices) # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states) cosine_sim = torch.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1) # show that cosine similarity is much higher than random cosine_sim[mask_time_indices.to(torch.bool)].mean() > 0.5 # for contrastive loss training model should be put into train mode model = model.train() loss = model(input_values, mask_time_indices=mask_time_indices).loss ### Expected behavior ```shell Modify the example doc ```
06-23-2022 07:27:24
06-23-2022 07:27:24
Hey @annihi1ation, what `transformers` version are you using?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,833
closed
LayoutLMv3Model output shape is different
### System Info ```shell - `transformers` version: 4.20.1 - Platform: Linux-4.4.0-62-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.2.1 - PyTorch version (GPU?): 1.10.0+cu102 (True) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @NielsRogge ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoProcessor, AutoModelForTokenClassification, AutoModel from datasets import load_dataset processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False) model = AutoModel.from_pretrained("microsoft/layoutlmv3-base", num_labels=7) dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train") example = dataset[0] image = example["image"] words = example["tokens"] boxes = example["bboxes"] word_labels = example["ner_tags"] encoding = processor(image, words, boxes=boxes, return_tensors="pt") outputs = model(**encoding) encoding.input_ids.shape, outputs.last_hidden_state.shape ``` outputs ``` (torch.Size([1, 208]), torch.Size([1, 405, 768])) ``` ### Expected behavior ``` (torch.Size([1, 208]), torch.Size([1, 208, 768])) ``` Hi! Thank you very much for contributing layoutlmv3 model to huggingface. While using the model, I think I found out that the model has some different parts from the specs. https://github.com/huggingface/transformers/blob/v4.20.1/src/transformers/models/layoutlmv3/modeling_layoutlmv3.py#L1043 https://github.com/microsoft/unilm/blob/master/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py#L1070 This huggingface implementation has different output shape than original implementation. In the [documentation](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/layoutlmv3#transformers.LayoutLMv3Model), it says last_hidden_state has shape of `(torch.FloatTensor of shape (batch_size, sequence_length, hidden_size))` but it does not. (original implementation has, but huggingface implementation does not) Sequence length includes (Presumably because of that) It makes different training result on FUNSD dataset. In summary, - `LayoutLMv3Model` outputs different shape (sequence length) than that is written in documentation. - and that is different from the original implementation. Thank you.
06-23-2022 05:48:37
06-23-2022 05:48:37
Hi, The sequence length of the last hidden states of LayoutLMv3 equals the number of text tokens + image tokens. If you have a text of 208 tokens (as is the case in the code example above), LayoutLMv3 also appends 197 image tokens to it. There are 197 image tokens because `LayoutLMv3Processor` resizes images to 224x224, which, at a patch resolution of 16x16 gives (224/16)**2 = 196 tokens, and one adds one for the CLS token. This is also what is done in the original implementation. The docstrings of the last hidden states can be improved, definitely. Feel free to open a PR regarding this.<|||||>Oops my bad. I confirmed that both have the same shapes. I thought two would have different shape because of these: - transformers https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/layoutlmv3/modeling_layoutlmv3.py#L1041-L1043 - original https://github.com/microsoft/unilm/blob/4301ebe1a832b7bcb33be0ab3a460306d467a912/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py#L1070-L1073 ```python sequence_output = outputs[0] sequence_output = self.dropout(sequence_output) logits = self.classifier(sequence_output) ``` I will try to work on updating the documentation. Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
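A quick back-of-the-envelope check of the shapes discussed above (the numbers are taken from the thread):

```python
image_size, patch_size, text_len = 224, 16, 208
num_image_tokens = (image_size // patch_size) ** 2 + 1  # 196 patches + 1 CLS = 197
print(text_len + num_image_tokens)                      # 405, matching last_hidden_state
```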
transformers
17,832
closed
Multi-modal VisualBERT can be used for classification task?
Hi, I have images and descriptions for products and want to train a multi-modal model using image and text embeddings. I just came across the VisualBERT model and was wondering whether VisualBERT can be used for a classification task that takes an image and text as input. Also, could any other multi-modal algorithm be recommended, apart from VisualBERT, for training a multi-modal image-and-text classifier? Thanks
06-23-2022 05:06:55
06-23-2022 05:06:55
Hi, VisualBERT can probably be used for this, but I'd recommend checking out [ViLT](https://huggingface.co/docs/transformers/model_doc/vilt), which is a very simple extension of ViT (or BERT) for multimodal tasks. The benefit of ViLT over VisualBERT is that one doesn't need to prepare image embeddings, as the ViLT model creates them by itself internally. You can just feed `input_ids` and `pixel_values` to it. To use ViLT for multimodal classification, you can create a class like so: ``` from transformers import ViltPreTrainedModel, ViltModel from transformers.modeling_outputs import SequenceClassifierOutput from torch import nn class MultimodalClassifier(ViltPreTrainedModel): def __init__(self, config): super().__init__(config) self.config = config self.vilt = ViltModel(config) self.classifier = nn.Linear(config.hidden_size, config.num_labels) def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, pixel_values=None, pixel_mask=None, head_mask=None, inputs_embeds=None, image_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=True, ): outputs = self.vilt( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, pixel_values=pixel_values, pixel_mask=pixel_mask, head_mask=head_mask, inputs_embeds=inputs_embeds, image_embeds=image_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) pooler_output = outputs.pooler_output if return_dict else outputs[1] logits = self.classifier(pooler_output) loss = None if labels is not None: loss_fct = nn.CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1)) if not return_dict: output = (logits,) + outputs[2:] return ((loss,) + output) if loss is not None else output return SequenceClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) ``` Creating this model (with a pre-trained base) can then be done as follows: ``` model = MultimodalClassifier.from_pretrained("dandelin/vilt-b32-mlm", num_labels=10) ``` Doing a forward pass on a batch of image+text pairs can be done as follows: ``` from transformers import ViltProcessor import torch import requests from PIL import Image processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm") url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) text = "this is an image of two cats" inputs = processor(image, text, return_tensors="pt") outputs = model(**inputs, labels=torch.tensor([1])) ```<|||||>> Hi, > > VisualBERT can probably be used for this, but I'd recommend checking out [ViLT](https://huggingface.co/docs/transformers/model_doc/vilt), which is a very simple extension of ViT (or BERT) for multimodal tasks. The benefit of ViLT over VisualBERT is that one doesn't need to prepare image embeddings, as the ViLT model creates them by itself internally. You can just feed `input_ids` and `pixel_values` to it. 
> > To use ViLT for multimodal classification, you can create a class like so: > > ``` > from transformers import ViltPreTrainedModel, ViltModel > from transformers.modeling_outputs import SequenceClassifierOutput > from torch import nn > > class MultimodalClassifier(ViltPreTrainedModel): > def __init__(self, config): > super().__init__(config) > self.config = config > self.vilt = ViltModel(config) > self.classifier = nn.Linear(config.hidden_size, config.num_labels) > > def forward( > self, > input_ids=None, > attention_mask=None, > token_type_ids=None, > pixel_values=None, > pixel_mask=None, > head_mask=None, > inputs_embeds=None, > image_embeds=None, > labels=None, > output_attentions=None, > output_hidden_states=None, > return_dict=True, > ): > outputs = self.vilt( > input_ids, > attention_mask=attention_mask, > token_type_ids=token_type_ids, > pixel_values=pixel_values, > pixel_mask=pixel_mask, > head_mask=head_mask, > inputs_embeds=inputs_embeds, > image_embeds=image_embeds, > output_attentions=output_attentions, > output_hidden_states=output_hidden_states, > return_dict=return_dict, > ) > > pooler_output = outputs.pooler_output if return_dict else outputs[1] > > logits = self.classifier(pooler_output) > > loss = None > if labels is not None: > loss_fct = nn.CrossEntropyLoss() > loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1)) > > if not return_dict: > output = (logits,) + outputs[2:] > return ((loss,) + output) if loss is not None else output > > return SequenceClassifierOutput( > loss=loss, > logits=logits, > hidden_states=outputs.hidden_states, > attentions=outputs.attentions, > ) > ``` > > Creating this model (with a pre-trained base) can then be done as follows: > > ``` > model = MultimodalClassifier.from_pretrained("dandelin/vilt-b32-mlm", num_labels=10) > ``` > > Doing a forward pass on a batch of image+text pairs can be done as follows: > > ``` > from transformers import ViltProcessor > import torch > import requests > from PIL import Image > > processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm") > > url = 'http://images.cocodataset.org/val2017/000000039769.jpg' > image = Image.open(requests.get(url, stream=True).raw) > text = "this is an image of two cats" > > inputs = processor(image, text, return_tensors="pt") > > outputs = model(**inputs, labels=torch.tensor([1])) > ``` @NielsRogge Thanks for your reply. I will definitely look into ViLT for the IMAGE+ TEXT classification tasks. There are following two things I want to highlight if you can suggest me: 1. I have text in the Spanish language, How I can get the Bert model pre-trained in the Spanish language incorporated here in ViLT implementation? 2. While classifying, i just don't want to classify with single label. Classification is more of an Multi-Label. How I can incorporate this multi-label output in ViLT? Your suggestion on these two will help me to understand the things properly. Thanks again. Waiting for your reply.<|||||>> I have text in the Spanish language, How I can get the Bert model pre-trained in the Spanish language incorporated here in ViLT implementation? ViLT was pre-trained on English text only, unfortunately. In that case, a better alternative might be to just forward the text through a BERT-like model pre-trained on Spanish text (could be multilingual), forward the image through a ViT-like model, and simply concatenate the hidden states of both modalities, which are then fed to a classifier. > While classifying, i just don't want to classify with single label. 
Classification is more of an Multi-Label. How I can incorporate this multi-label output in ViLT? Multi-label classification requires to use the `BCEWithLogitsLoss` as seen [here](https://github.com/huggingface/transformers/blob/1dfa03f12b3748dc7e9c2b5ada40c3401ada23a5/src/transformers/models/bert/modeling_bert.py#L1592) for example. The labels need to be a tensor of shape (batch_size, num_labels), containing the one-hot encoded labels for a batch.<|||||>> > I have text in the Spanish language, How I can get the Bert model pre-trained in the Spanish language incorporated here in ViLT implementation? > > ViLT was pre-trained on English text only, unfortunately. In that case, a better alternative might be to just forward the text through a BERT-like model pre-trained on Spanish text (could be multilingual), forward the image through a ViT-like model, and simply concatenate the hidden states of both modalities, which are then fed to a classifier. > > > While classifying, i just don't want to classify with single label. Classification is more of an Multi-Label. How I can incorporate this multi-label output in ViLT? > > Multi-label classification requires to use the `BCEWithLogitsLoss` as seen [here](https://github.com/huggingface/transformers/blob/1dfa03f12b3748dc7e9c2b5ada40c3401ada23a5/src/transformers/models/bert/modeling_bert.py#L1592) for example. The labels need to be a tensor of shape (batch_size, num_labels), containing the one-hot encoded labels for a batch. Thanks @NielsRogge for replying. Can I pass pre-trained tokenizer (pretrained on spanish language) to ViLT processor ? Also, which model in ViLT shall i use to extract image features in combination with pretrained spanish bert model? please advise.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
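Returning to the two suggestions above (a Spanish text encoder fused with an image encoder, plus multi-label training with `BCEWithLogitsLoss`), here is a rough sketch; the checkpoint names and the CLS-pooling choice are assumptions for illustration, not recommendations from this thread:

```python
import torch
from torch import nn
from transformers import AutoModel, ViTModel


class LateFusionMultiLabelClassifier(nn.Module):
    """Concatenates pooled text and image features and trains with a multi-label loss."""

    def __init__(self, num_labels,
                 text_model="dccuchile/bert-base-spanish-wwm-cased",
                 image_model="google/vit-base-patch16-224-in21k"):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained(text_model)
        self.image_encoder = ViTModel.from_pretrained(image_model)
        fused_size = self.text_encoder.config.hidden_size + self.image_encoder.config.hidden_size
        self.classifier = nn.Linear(fused_size, num_labels)

    def forward(self, input_ids, attention_mask, pixel_values, labels=None):
        # Pool each modality with its [CLS] token, then concatenate
        text_feat = self.text_encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        image_feat = self.image_encoder(pixel_values).last_hidden_state[:, 0]
        logits = self.classifier(torch.cat([text_feat, image_feat], dim=-1))

        loss = None
        if labels is not None:
            # labels: float tensor of shape (batch_size, num_labels) with multi-hot entries
            loss = nn.BCEWithLogitsLoss()(logits, labels.float())
        return loss, logits
```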
transformers
17,831
closed
DisjunctiveConstraint fails in corner case
### System Info ```shell - `transformers` version: 4.20.1 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.8.12 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The current **trie-only** implementation failed to handle the second corner case introduced in the `Figure 1b` of [Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting](https://aclanthology.org/N19-1090/), where the prefix of one constraint is not prefix but subsequence of another. The minimal code snippet to reproduce is following: ```python3 >>> import transformers >>> c = transformers.ConstraintListState([transformers.DisjunctiveConstraint( >>> [[1, 2, 3, 4], >>> [2, 3, 5], >>> [3, 6], >>> [7]] >>> )]) >>> c.reset([1, 2, 3, 5]) >>> print(c.completed) False >>> c.reset([1, 2, 3, 6]) >>> print(c.completed) False >>> c.reset([1, 2, 3, 7]) >>> print(c.completed) False >>> c.reset([1, 2, 3, 4]) >>> print(c.completed) True ``` ### Expected behavior ```shell all print statements should output True instead. The [`AC automaton`](https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm) is the desired algorithm to supersede Trie here. I can prepare a PR to do the upgrade and fix if necessary. ```
06-22-2022 23:29:15
06-22-2022 23:29:15
Interesting edge case! @cwkeam have you encountered this before?<|||||>Hey @boy2000-007man, Sorry to reply so late here. I'm a bit hesitant to add so much new code. Could you maybe show a case with input strings and `generate`, and how the current implementation fails? E.g. above I see with some abstract numbers how it fails, but could you maybe also show how the current `generate(...)` method fails for an edge case?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
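For reference, a sketch of how disjunctive options like the ones in the report would typically be passed to `generate(...)`; the token ids below are the abstract ones from the reproduction and would need to be replaced by real vocabulary ids (and a suitable prompt) to build a concrete string-level failing case:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DisjunctiveConstraint

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Same disjunctive options as in the reproduction above (placeholder ids)
constraint = DisjunctiveConstraint([[1, 2, 3, 4], [2, 3, 5], [3, 6], [7]])

inputs = tokenizer("translate English to German: The constraint test.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    constraints=[constraint],
    num_beams=4,
    max_new_tokens=32,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```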
transformers
17,830
closed
hf_BigBird failing on torchdynamo
### System Info ```shell - `transformers` version: 4.12.1 - Platform: Linux-5.13.0-44-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyTorch version (GPU?): 1.13.0.dev20220609 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? @ydshieh ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ./torchbench.py --only hf_BigBird --speed-ts ### Expected behavior ```shell This model should run just fine. But right now I am seeing: File "/home/xx/anaconda3/envs/torchdynamo/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 607, in bigbird_block_sparse_attention gathered_key = self.torch_gather_b2(blocked_key_matrix, rand_attn) File "/home/xx/anaconda3/envs/torchdynamo/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 969, in torch_gather_b2 raise ValueError( File "/home/xx/anaconda3/envs/torchdynamo/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 964, in torch_gather_b2 @staticmethod ValueError: Make sure that the first two dimensions of params and indices are identical, but they are params: (1, 12) vs. indices: (1, 12) ``` ```
06-22-2022 23:12:49
06-22-2022 23:12:49
Hi @shingjan The error message is very strange as it says `they are params: (1, 12) vs. indices: (1, 12)`, so the two shapes look identical. So far I am not able to reproduce, as the installation of `benchmark` and `torchdynamo` gives several errors. Could you try to identify the input that is passed to the model in `modeling_big_bird.py` when `torchbench.py` is running that causes the issue? Also, you might try to check with the latest stable `transformers` + `PyTorch` version. Thanks! <|||||>Hi @ydshieh Thanks for your prompt reply! I am using pytorch nightly, version `1.13.0.dev20220609`, as it is a requirement for `torchdynamo` and `torchbench`, so I can't really fall back to a stable pytorch like 1.11.0. This error seems odd to me as well, since those two shapes look identical. This is the setup I have for repro: https://github.com/pytorch/torchdynamo#minimal-developer-setup I will dig in and see if there is more I can provide you with about a repro.<|||||>The two values seem identical because there was a bug in the error message, see #17840. You can re-run your code to see what the actual values are. If you can only use pytorch nightly, could you also check what your `torchvision`, `torchaudio` and `torchtext` versions are? Also, for these (torch), did you install the version with CUDA, or the version with CPU only? 
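For anyone hitting this before upgrading, the guard presumably looks roughly like the following (a hypothetical reconstruction, not the exact library code); the #17840 fix presumably makes the message interpolate both tensors' leading dimensions, so a real mismatch no longer prints as two identical tuples:

```python
# Hypothetical sketch of the shape guard discussed above
def check_gather_shapes(params, indices):
    if params.shape[:2] != indices.shape[:2]:
        raise ValueError(
            "Make sure that the first two dimensions of params and indices are identical, "
            f"but they are params: {params.shape[:2]} vs. indices: {indices.shape[:2]}"
        )
```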
transformers
17,829
closed
RNG states in checkpoint corrupted
### System Info ```shell transformers-cli env WARNING:tensorflow:From /autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2022-06-22 16:37:33.424936: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /device:GPU:0 with 14042 MB memory: -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0035:03:00.0, compute capability: 7.0 Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.19.2 - Platform: Linux-4.18.0-193.46.1.el8_2.ppc64le-ppc64le-with-glibc2.28 - Python version: 3.9.7 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.10.0 (True) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ``` ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction In distributed training, I cannot restart from a checkpoint due to a corrupted RNG state archive. ``` File "/gpfs/alpine/bif136/world-shared/contact_pred_pair_update/train/../train.py", line 447, in <module> main() File "/gpfs/alpine/bif136/world-shared/contact_pred_pair_update/train/../train.py", line 434, in main train_result = trainer.train(resume_from_checkpoint=last_checkpoint) File "/autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/transformers/trainer.py", line 1317, in train return inner_training_loop( File "/autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/transformers/trainer.py", line 1525, in _inner_training_loop self._load_rng_state(resume_from_checkpoint) File "/autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/transformers/trainer.py", line 1826, in _load_rng_state checkpoint_rng_state = torch.load(rng_file) File "/autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/torch/serialization.py", line 600, in load with _open_zipfile_reader(opened_file) as opened_zipfile: File "/autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/torch/serialization.py", line 242, in __init__ super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer)) RuntimeError: PytorchStreamReader failed reading zip archive: invalid header or archive is corrupted ``` I am training with 96 ranks (6 local ranks/node), and it looks like the zip files `rng_state_1.pth` are named by local rank, but written by **every rank**. This would explain that there is a write conflict to a shared filesystem (GPFS in this case) from multiple ranks, resulting in corrupted data. 
The rng files should either ``` - only be created by the first node - or created and be named differently by all `torch.distributed` ranks ``` This is the problematic line https://github.com/huggingface/transformers/blob/df8e6804c004903753d3e635d85f32694e3d2c39/src/transformers/trainer.py#L2074 ### Expected behavior ```shell No write conflict, restarting from checkpoint works as advertised For now, removing the `rng_state_?.pth` files manually from the checkpoint seems to be the only solution ```
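A minimal sketch of the second option (naming the file by the global rank); this is illustrative, not the actual Trainer code, and the exact set of RNG states saved here is an assumption:

```python
import os
import random

import numpy as np
import torch
import torch.distributed as dist


def save_rng_state(output_dir: str) -> None:
    # Illustrative subset of the RNG states the Trainer checkpoints
    rng_states = {
        "python": random.getstate(),
        "numpy": np.random.get_state(),
        "cpu": torch.random.get_rng_state(),
    }
    if torch.cuda.is_available():
        rng_states["cuda"] = torch.cuda.get_rng_state_all()

    # Key point: name the file by the *global* rank, so that processes sharing a
    # local rank on different nodes never write to the same file on a shared
    # filesystem such as GPFS.
    global_rank = dist.get_rank() if dist.is_available() and dist.is_initialized() else 0
    torch.save(rng_states, os.path.join(output_dir, f"rng_state_{global_rank}.pth"))
```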
06-22-2022 20:43:23
06-22-2022 20:43:23
transformers
17,828
closed
Italian/model sharing
# What does this PR do? Italian translation of model_sharing.mdx See issue: [17459](https://github.com/huggingface/transformers/issues/17459) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). ## Who can review? @omarespejel @sgugger
06-22-2022 20:00:31
06-22-2022 20:00:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,827
closed
Update type hints modeling_yoso.py
# What does this PR do? Type hints for modelling yoso PYTORCH #16059 Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Rocketknight1
06-22-2022 17:49:59
06-22-2022 17:49:59
Hi @Rocketknight1, this should be it; you can check if everything is good. If there's a problem I will resolve it tonight!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Nope, this is perfect. Thanks for the PR, and sorry for the confusion with the last one!
transformers
17,826
closed
Add Jukebox model (replaces #16875)
This is a draft pull request. # What does this PR do? This PR will progressively add the [Jukebox](https://openai.com/blog/jukebox/) model to the hub. It is linked to [#16870](https://github.com/huggingface/transformers/issues/16870). # Currently planned steps (WIP) - [x] Create template files with `transformeres-cli add-new-model-like` - [x] `src/transformers/tokenization_jukebox.py` - [x] `src/transformers/test_tokenization_jukebox.py` - [x] `src/transformers/configuration_jukebox.py` - [x] `src/transformers/modeling_jukebox.py` - [x] `src/transformers/configuration_jukebox.py` - [x] `docs/source/model_doc/jukebox.rst` - [ ] `src/transformers/tokenization_jukebox_fast.py` (will most probably use WordLevel tokenizer). Also requires to implement a converter function `class JukeboxConverter(Converter):`
06-22-2022 17:43:08
06-22-2022 17:43:08
Replaces (#16875) <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17826). All of your documentation changes will be reflected on that endpoint.<|||||>Okay, the `1b lyrics` and `5b lyrics` match the original code. Just need refactoring to have better variable names and wrap the sampling kwargs for easy use. <|||||>@ArthurZucker let me know when this is ready for the final review<|||||>@ArthurZucker do you want to let me know once you want to have a final review? Let's try to not let it hang around for too long<|||||>Apart from the `kwargs` I think it is done! @patrickvonplaten feel free to review<|||||>Slow tests are now passing, the only issue left to attend is the memory. The slow tests need a lot of RAM, and running inference with the model should also automatically send the unused `Priors` and `VQVAE` to the `cpu`. The documentation and models will be ready soon.<|||||>As it was previously requested, you can now instantiate `JukeboxVQVAE` and `JukeboxPrior` individually. This is convenient if people only want to use the VQVAE or generate form juste the top level prior. <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17826). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17826). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17826). All of your documentation changes will be reflected on that endpoint.
transformers
17,825
closed
[Closed - code changes not shown on GH for unknown reason] replace `Python-base tokenizer` by `non-fast tokenizer` in error message
# What does this PR do? As one user rightly pointed out in an issue #17809, when a user receives the V error it is not obvious that a python-based tokenizer refers to a tokenizer class without the term Fast at the end. I therefore propose to change the error messages using this term to refer to the term fast which is more easily understood by users. Fixes #17809 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
06-22-2022 17:28:39
06-22-2022 17:28:39
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17825). All of your documentation changes will be reflected on that endpoint.
transformers
17,824
closed
fix type of None special tokens in not verbose mode (duplicate of #17797)
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #17796 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-22-2022 17:16:58
06-22-2022 17:16:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>Duplicate. Closing in favor of #17797
transformers
17,823
closed
BLOOM minor changes on tokenizer
# What does this PR do? - Attempts to fix minor issues with the BloomTokenizer - Remove unused args - Added a new attribute on `TokenizerTesterMixin` for models that are not agnostic to sequence length (typically models that use ALiBi positional embeddings) Still need to discuss if it is worth it to force the padding side to the left cc @SaulLu
06-22-2022 15:40:25
06-22-2022 15:40:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>We may merge this at the same time as https://github.com/huggingface/transformers/pull/17837 to make it a patch release of minor fixes<|||||>Let's not add a new attribute and change the common tests for one model only. You can override the test in the subclass of the main model tester, leaving the common tests as they are.
transformers
17,822
closed
Use higher value for hidden_size in Flax BigBird test
# What does this PR do? #17658 changed `hidden_size` from `32` to `4` in `FlaxBigBirdModelTester`, which caused the PT/Flax difference to increase by ~50 times. This PR changes it back (but keeps the other changes untouched). We can therefore use `1e-5` instead of `5e-5`. (`hidden_size=4` with `num_attention_heads=2` is likely to introduce some edge cases in random init.) The testing time (on GCP CPU VM) is 66 vs. 64 seconds (with `-n 1`) and 46 vs. 44 seconds (with `-n 2`), so this change doesn't make the test slower.
06-22-2022 13:02:57
06-22-2022 13:02:57
_The documentation is not available anymore as the PR was closed or merged._<|||||>@patil-suraj If you have one minute to review this one, I can merge it today πŸ˜„ πŸ™ but not urgent
transformers
17,821
closed
Add VideoMAE
# What does this PR do? This PR adds [VideoMAE](https://github.com/MCG-NJU/VideoMAE), which extends [ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae) to videos. The only difference between VideoMAE and ViT is that you need to replace `nn.Conv2d` by `nn.Conv3d` in the patch embedding class. πŸ˜‚ To do: - [ ] Decide on a name for `VideoMAEFeatureExtractor` (should we keep it, or rename to `VideoMAEProcessor`, `VideoMAEPreprocessor`?) - [ ] Decide on the input format for video models; currently I've chosen `pixel_values` of shape (batch_size, num_frames, num_channels, height, width). The original implementation uses (B, C, T, H, W) - [ ] Doc examples + tests - [x] Incorporate changes of #17731 - [ ] Make VideoMAEFeatureExtractor robust with return_tensors="np" by default, better tests
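Regarding the `nn.Conv2d` -> `nn.Conv3d` swap mentioned above, a rough sketch of what the video patch embedding looks like (the class name, the default sizes and the tubelet size of 2 are assumptions for illustration):

```python
import torch
from torch import nn


class VideoPatchEmbeddings(nn.Module):
    """Turns (batch, num_frames, channels, height, width) videos into a sequence of patch tokens."""

    def __init__(self, patch_size=16, tubelet_size=2, num_channels=3, hidden_size=768):
        super().__init__()
        self.projection = nn.Conv3d(
            num_channels,
            hidden_size,
            kernel_size=(tubelet_size, patch_size, patch_size),
            stride=(tubelet_size, patch_size, patch_size),
        )

    def forward(self, pixel_values):
        # (B, T, C, H, W) -> (B, C, T, H, W), which is what Conv3d expects
        pixel_values = pixel_values.permute(0, 2, 1, 3, 4)
        embeddings = self.projection(pixel_values)      # (B, hidden, T', H', W')
        return embeddings.flatten(2).transpose(1, 2)    # (B, T' * H' * W', hidden)


video = torch.randn(1, 16, 3, 224, 224)      # 16 frames of 224x224
print(VideoPatchEmbeddings()(video).shape)   # torch.Size([1, 1568, 768])
```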
06-22-2022 12:17:02
06-22-2022 12:17:02
@NielsRogge do you have any ETA on this feature? I am developing a video classification fine-tuning framework, would love to use this model if it gets merged into main! Currently only video model is PerceiverIO, right?<|||||>There seems to remain an issue with the docs: ``` Traceback (most recent call last): File "/usr/local/bin/doc-builder", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.8/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main args.func(args) File "/usr/local/lib/python3.8/site-packages/doc_builder/commands/build.py", line 96, in build_command build_doc( File "/usr/local/lib/python3.8/site-packages/doc_builder/build_doc.py", line 405, in build_doc sphinx_refs = check_toc_integrity(doc_folder, output_dir) File "/usr/local/lib/python3.8/site-packages/doc_builder/build_doc.py", line 460, in check_toc_integrity raise RuntimeError( RuntimeError: The following files are not present in the table of contents: - model_doc/videomae Add them to ../transformers/docs/source/en/_toctree.yml. ```<|||||>@LysandreJik yes I was aware of that, should be fixed now. Don't merge already please, I'm transferring checkpoints and updating the conversion script.<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
17,820
closed
How to use LayoutLMv3 for Document Layout Detection task?
### System Info ```shell transformers = 4.20.1 Models: layoutlmv3 How can LayoutLMv3 be used for Document Layout Detection, as in Microsoft unilm (https://github.com/microsoft/unilm/tree/master/layoutlmv3)? I do not find Document Layout Detection task information in the examples or modeling code. The typical dataset for Document Layout Detection is called PubLayNet (https://github.com/ibm-aur-nlp/PubLayNet). example: https://github.com/huggingface/transformers/tree/main/examples/research_projects/layoutlmv3 modeling: https://github.com/huggingface/transformers/tree/main/src/transformers/models/layoutlmv3 ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction How can LayoutLMv3 be used for Document Layout Detection, as in Microsoft unilm (https://github.com/microsoft/unilm/tree/master/layoutlmv3)? I do not find Document Layout Detection task information in the examples or modeling code. The typical dataset for Document Layout Detection is called PubLayNet (https://github.com/ibm-aur-nlp/PubLayNet). ### Expected behavior ```shell How can I use the LayoutLMv3 model in Transformers for Document Layout Detection? I would really appreciate it if you could give me example code. ```
06-22-2022 10:35:08
06-22-2022 10:35:08
Hi, If you read a bit more closely: https://github.com/microsoft/unilm/tree/master/layoutlmv3#document-layout-analysis-on-publaynet You'll see they provide a guide regarding fine-tuning LayoutLMv3 on PubLayNet. The Mask R-CNN framework is leveraged. This framework currently is not available in Huggingface Transformers, so you'll need to use the unilm repo for that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,819
closed
Cannot import name 'load_offloaded_weights' from 'accelerate.utils'
### System Info ```shell >>> import transformers >>> transformers.__version__ '4.20.1' >>> import accelerate >>> accelerate.__version__ '0.8.0' >>> import sys >>> sys.platform 'linux' >>> sys.version '3.8.10 (default, Mar 15 2022, 12:22:08) \n[GCC 9.4.0]' >>> import os >>> os.system("lsb_release -a") No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 20.04.4 LTS Release: 20.04 Codename: focal ``` ### Who can help? @99991 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Execute the example from "Getting started" https://github.com/UKPLab/sentence-transformers#getting-started 2. Observe crash. Error message: ``` Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback): cannot import name 'load_offloaded_weights' from 'accelerate.utils' (/home/username/.local/lib/python3.8/site-packages/accelerate/utils/__init__.py) The above exception was the direct cause of the following exception: File "/home/username/Desktop/computing_embeddings.py", line 2, in main model = SentenceTransformer('all-MiniLM-L6-v2') ``` ### Expected behavior It should not crash.
06-22-2022 08:00:23
06-22-2022 08:00:23
The solution is to upgrade the `accelerate` library. I had version `0.8.0` and upgraded to `0.10.0`. ``` pip install --upgrade accelerate ``` Version `0.10.0` has the missing function `load_offloaded_weights`: https://github.com/huggingface/accelerate/commit/8b8c5345cd84ba96cca810b677601204e06853ba#diff-331ffa5527e400ee60607a11481feb0197abd8492c000255c337d0bf4312c0c0R43
transformers
17,818
closed
Clean modeling utils, linked to #17760 and #17713
# What does this PR do? The recently introduced `TF` and `FLAX` sharding scripts #17760 and #17713 both use ` convert_file_size_to_int, get_checkpoint_shard_files`, which were moved to `transformers.utils.hub`. This PR just removed the definition of these two function and imports them form `transformers.utils.hub`
06-22-2022 07:22:28
06-22-2022 07:22:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,817
closed
Bump numpy from 1.21.0 to 1.22.0 in /examples/research_projects/lxmert
Bumps [numpy](https://github.com/numpy/numpy) from 1.21.0 to 1.22.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/numpy/numpy/releases">numpy's releases</a>.</em></p> <blockquote> <h2>v1.22.0</h2> <h1>NumPy 1.22.0 Release Notes</h1> <p>NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:</p> <ul> <li>Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.</li> <li>A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across application such as CuPy and JAX.</li> <li>NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.</li> <li>New methods for <code>quantile</code>, <code>percentile</code>, and related functions. The new methods provide a complete set of the methods commonly found in the literature.</li> <li>A new configurable allocator for use by downstream projects.</li> </ul> <p>These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.</p> <p>The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.</p> <h2>Expired deprecations</h2> <h3>Deprecated numeric style dtype strings have been removed</h3> <p>Using the strings <code>&quot;Bytes0&quot;</code>, <code>&quot;Datetime64&quot;</code>, <code>&quot;Str0&quot;</code>, <code>&quot;Uint32&quot;</code>, and <code>&quot;Uint64&quot;</code> as a dtype will now raise a <code>TypeError</code>.</p> <p>(<a href="https://github-redirect.dependabot.com/numpy/numpy/pull/19539">gh-19539</a>)</p> <h3>Expired deprecations for <code>loads</code>, <code>ndfromtxt</code>, and <code>mafromtxt</code> in npyio</h3> <p><code>numpy.loads</code> was deprecated in v1.15, with the recommendation that users use <code>pickle.loads</code> instead. <code>ndfromtxt</code> and <code>mafromtxt</code> were both deprecated in v1.17 - users should use <code>numpy.genfromtxt</code> instead with the appropriate value for the <code>usemask</code> parameter.</p> <p>(<a href="https://github-redirect.dependabot.com/numpy/numpy/pull/19615">gh-19615</a>)</p> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/numpy/numpy/commit/4adc87dff15a247e417d50f10cc4def8e1c17a03"><code>4adc87d</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20685">#20685</a> from charris/prepare-for-1.22.0-release</li> <li><a href="https://github.com/numpy/numpy/commit/fd66547557f57c430d41be2fc0764f74a62e8ccf"><code>fd66547</code></a> REL: Prepare for the NumPy 1.22.0 release.</li> <li><a href="https://github.com/numpy/numpy/commit/125304b035effcd82e366e601b102e7347eaa9ba"><code>125304b</code></a> wip</li> <li><a href="https://github.com/numpy/numpy/commit/c283859128b1a4b57014581570a23ed7950a24ea"><code>c283859</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20682">#20682</a> from charris/backport-20416</li> <li><a href="https://github.com/numpy/numpy/commit/5399c03d4a069fe81a1616be0184c9749d7271ee"><code>5399c03</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20681">#20681</a> from charris/backport-20954</li> <li><a href="https://github.com/numpy/numpy/commit/f9c45f8ebf31340b1a5a0371bfca25afcfc4794e"><code>f9c45f8</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20680">#20680</a> from charris/backport-20663</li> <li><a href="https://github.com/numpy/numpy/commit/794b36f7e1bf2a8c42774ab0db86a74bd32f674b"><code>794b36f</code></a> Update armccompiler.py</li> <li><a href="https://github.com/numpy/numpy/commit/d93b14e3d7abaa1d837825e51671f817788e120f"><code>d93b14e</code></a> Update test_public_api.py</li> <li><a href="https://github.com/numpy/numpy/commit/7662c0789cc6a70d5ad4d950ee2e95f3afef7df6"><code>7662c07</code></a> Update <strong>init</strong>.py</li> <li><a href="https://github.com/numpy/numpy/commit/311ab52488a7d096ac3bc4c2de0fdae17ecd13ef"><code>311ab52</code></a> Update armccompiler.py</li> <li>Additional commits viewable in <a href="https://github.com/numpy/numpy/compare/v1.21.0...v1.22.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=numpy&package-manager=pip&previous-version=1.21.0&new-version=1.22.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
06-22-2022 04:57:03
06-22-2022 04:57:03
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,816
closed
Bump numpy from 1.21.0 to 1.22.0 in /examples/research_projects/visual_bert
Bumps [numpy](https://github.com/numpy/numpy) from 1.21.0 to 1.22.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/numpy/numpy/releases">numpy's releases</a>.</em></p> <blockquote> <h2>v1.22.0</h2> <h1>NumPy 1.22.0 Release Notes</h1> <p>NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:</p> <ul> <li>Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.</li> <li>A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across application such as CuPy and JAX.</li> <li>NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.</li> <li>New methods for <code>quantile</code>, <code>percentile</code>, and related functions. The new methods provide a complete set of the methods commonly found in the literature.</li> <li>A new configurable allocator for use by downstream projects.</li> </ul> <p>These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.</p> <p>The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.</p> <h2>Expired deprecations</h2> <h3>Deprecated numeric style dtype strings have been removed</h3> <p>Using the strings <code>&quot;Bytes0&quot;</code>, <code>&quot;Datetime64&quot;</code>, <code>&quot;Str0&quot;</code>, <code>&quot;Uint32&quot;</code>, and <code>&quot;Uint64&quot;</code> as a dtype will now raise a <code>TypeError</code>.</p> <p>(<a href="https://github-redirect.dependabot.com/numpy/numpy/pull/19539">gh-19539</a>)</p> <h3>Expired deprecations for <code>loads</code>, <code>ndfromtxt</code>, and <code>mafromtxt</code> in npyio</h3> <p><code>numpy.loads</code> was deprecated in v1.15, with the recommendation that users use <code>pickle.loads</code> instead. <code>ndfromtxt</code> and <code>mafromtxt</code> were both deprecated in v1.17 - users should use <code>numpy.genfromtxt</code> instead with the appropriate value for the <code>usemask</code> parameter.</p> <p>(<a href="https://github-redirect.dependabot.com/numpy/numpy/pull/19615">gh-19615</a>)</p> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/numpy/numpy/commit/4adc87dff15a247e417d50f10cc4def8e1c17a03"><code>4adc87d</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20685">#20685</a> from charris/prepare-for-1.22.0-release</li> <li><a href="https://github.com/numpy/numpy/commit/fd66547557f57c430d41be2fc0764f74a62e8ccf"><code>fd66547</code></a> REL: Prepare for the NumPy 1.22.0 release.</li> <li><a href="https://github.com/numpy/numpy/commit/125304b035effcd82e366e601b102e7347eaa9ba"><code>125304b</code></a> wip</li> <li><a href="https://github.com/numpy/numpy/commit/c283859128b1a4b57014581570a23ed7950a24ea"><code>c283859</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20682">#20682</a> from charris/backport-20416</li> <li><a href="https://github.com/numpy/numpy/commit/5399c03d4a069fe81a1616be0184c9749d7271ee"><code>5399c03</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20681">#20681</a> from charris/backport-20954</li> <li><a href="https://github.com/numpy/numpy/commit/f9c45f8ebf31340b1a5a0371bfca25afcfc4794e"><code>f9c45f8</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20680">#20680</a> from charris/backport-20663</li> <li><a href="https://github.com/numpy/numpy/commit/794b36f7e1bf2a8c42774ab0db86a74bd32f674b"><code>794b36f</code></a> Update armccompiler.py</li> <li><a href="https://github.com/numpy/numpy/commit/d93b14e3d7abaa1d837825e51671f817788e120f"><code>d93b14e</code></a> Update test_public_api.py</li> <li><a href="https://github.com/numpy/numpy/commit/7662c0789cc6a70d5ad4d950ee2e95f3afef7df6"><code>7662c07</code></a> Update <strong>init</strong>.py</li> <li><a href="https://github.com/numpy/numpy/commit/311ab52488a7d096ac3bc4c2de0fdae17ecd13ef"><code>311ab52</code></a> Update armccompiler.py</li> <li>Additional commits viewable in <a href="https://github.com/numpy/numpy/compare/v1.21.0...v1.22.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=numpy&package-manager=pip&previous-version=1.21.0&new-version=1.22.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
06-22-2022 04:56:47
06-22-2022 04:56:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,815
closed
Improve encoder decoder model docs
# What does this PR do? This PR improves the documentation of encoder decoder model. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Issues [link](https://github.com/huggingface/transformers/issues/16135) ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ -->
06-22-2022 04:50:48
06-22-2022 04:50:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>Agree with the reviews of @ydshieh and @NielsRogge ! @Threepointone4 do you want to apply them ? Think we can merge after :-)<|||||>Great job @Threepointone4 ! Merging :-)
transformers
17,814
closed
Fix Constrained beam search duplication and weird output issue
# What does this PR do? - prevent duplicates between *(topk) generic beam search best model next tokens* and *(advance) constraints forcing the next token* - ensure unfulfilled hypothesis will advance with correct beam score instead of wrong token score <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) #17812 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-22-2022 04:25:53
06-22-2022 04:25:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>@cwkeam - do you have an idea here?<|||||>Good catch @boy2000-007man !! @patrickvonplaten I just read through the issue and saw the code changes and they're all right-on. Thanks for finding these problems!<|||||>Thank you for this fix! I found this pull request while I was searching to figure out why constrained beam search was churning out such repetitive results on 4.20.1. Installing the current repo of transformers fixed this immediately!
transformers
17,813
closed
push_to_hub returns "OSError: error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408"
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.0-96-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Following the docs at https://huggingface.co/course/chapter5/5#uploading-the-dataset-to-the-hugging-face-hub. In my case I try to upload 413299 smaller jsonl files that are separate tasks that get joined depending on the data subset in the dataset setup later (which is why I would like to keep them separated). This happens after some time when I run `repo.push_to_hub()` but nothing shows up on the dataset site on the HF hub (currently private until everything is finished): ``` Several commits (5) will be pushed upstream. The progress bars may be unreliable. error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408 fatal: the remote end hung up unexpectedly fatal: the remote end hung up unexpectedly Everything up-to-date --------------------------------------------------------------------------- CalledProcessError Traceback (most recent call last) File ~/miniconda3/envs/lmproj2/lib/python3.8/site-packages/huggingface_hub/repository.py:1201, in Repository.git_push(self, upstream, blocking, auto_lfs_prune) 1200 if return_code: -> 1201 raise subprocess.CalledProcessError( 1202 return_code, process.args, output=stdout, stderr=stderr 1203 ) 1205 except subprocess.CalledProcessError as exc: CalledProcessError: Command '['git', 'push', '--set-upstream', 'origin', 'main']' returned non-zero exit status 1. During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) Input In [100], in <cell line: 1>() ----> 1 repo.push_to_hub() File ~/miniconda3/envs/lmproj2/lib/python3.8/site-packages/huggingface_hub/repository.py:1475, in Repository.push_to_hub(self, commit_message, blocking, clean_ok, auto_lfs_prune) 1473 self.git_add(auto_lfs_track=True) 1474 self.git_commit(commit_message) -> 1475 return self.git_push( 1476 upstream=f"origin {self.current_branch}", 1477 blocking=blocking, 1478 auto_lfs_prune=auto_lfs_prune, 1479 ) File ~/miniconda3/envs/lmproj2/lib/python3.8/site-packages/huggingface_hub/repository.py:1206, in Repository.git_push(self, upstream, blocking, auto_lfs_prune) 1201 raise subprocess.CalledProcessError( 1202 return_code, process.args, output=stdout, stderr=stderr 1203 ) 1205 except subprocess.CalledProcessError as exc: -> 1206 raise EnvironmentError(exc.stderr) 1208 if not blocking: 1210 def status_method(): OSError: error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408 fatal: the remote end hung up unexpectedly fatal: the remote end hung up unexpectedly Everything up-to-date ``` ### Expected behavior ```shell Data is completely uploaded and shows up on the dataset site on the HF hub. ```
06-22-2022 04:21:59
06-22-2022 04:21:59
Maybe there is also a git command line workaround?<|||||>Yes, can you try with a `git push` command line to see if it works that way?<|||||>Hi @julien-c, thank you for the fast feedback, I just tried it two times, at the beginning it looks like it works but then gets stuck again: ``` > git push Enumerating objects: 413419, done. Counting objects: 100% (413419/413419), done. Delta compression using up to 6 threads Compressing objects: 100% (410375/410375), done. error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408 fatal: the remote end hung up unexpectedly Writing objects: 100% (413417/413417), 561.79 MiB | 1.75 MiB/s, done. Total 413417 (delta 3098), reused 413302 (delta 3042) fatal: the remote end hung up unexpectedly Everything up-to-date > git push Enumerating objects: 413419, done. Counting objects: 100% (413419/413419), done. Delta compression using up to 6 threads Compressing objects: 100% (410375/410375), done. error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408 fatal: the remote end hung up unexpectedly3 MiB | 239.00 KiB/s Writing objects: 100% (413417/413417), 561.79 MiB | 1.81 MiB/s, done. Total 413417 (delta 3098), reused 413302 (delta 3042) fatal: the remote end hung up unexpectedly Everything up-to-date ``` The total size of the data directory is around 5GB.<|||||>@julien-c It seems that I run all the time into the issue from above. I guess the best workaround would be to go for a single jsonl files setup, or what do you think?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, it worked fine with me when i connected to internet via my mobile hotspot . i don't know what was the reason but this was the solution<|||||>I also had the same problem, switching to a Wi-Fi network worked for me.<|||||>Switching to a Wi-Fi network also worked for me.<|||||>I also had the same problem, switching to a Wi-Fi network worked for me. but i don't know y if any one know the answer please tell me<|||||>I also had the same problem, switching to a Wi-Fi network also worked for me. If anybody know why please post it.<|||||>error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408 I tried all possible method but it still fails. <|||||>It didn't worked for me<|||||>switching wifi does not work for me.<|||||>So for those of you still suffering from this issue and for those in the future, here is a detailed log of what I have tried: entering git config http.postBuffer <number of bytes, default = 1 MiB> into the terminal can work, but its a one-time fix and should be avoided. This is a good option if you've included large files, like zip files, into your commit history and dont want to fix that issue. Remember to set it back later. In my case, it just lead to another issue. It could also be the time out feature when pushing/pulling. This could be the case if you are working with large files. Here you'd want to edit the http.lowSpeedLimit, http.lowSpeedTime config values. Documentation for this and http.postBuffer can be found here: https://git-scm.com/docs/git-config Changing wiFi can work for some people, but depending on the root cause of the issue this wont work. 
If you sometimes get this error and sometimes don't regardless of what you are pushing/pulling, then this is most likely the solution for you. I was finally able to fix my issue. First, I changed http.postBuffer to be 50MiB. This then showed me that lfs files were still being pushed despite being tracked by lfs. I had to do git lfs migrate import --everything --verbose --include="<file extension>" to fix it for me. I hope this helps! <|||||>> I also had the same problem, switching to a Wi-Fi network also worked for me. If anybody know why please post it. Yes,for me also it had the same issue. Thanks.<|||||>Increase the buffer size by typing git config http.postBuffer 9999999999999999 and use the SSH link instead of HTTP as SSH is more stable git remote set-url origin **[email protected]:username/repository.git**<|||||>> Increase the buffer size by typing > > git config http.postBuffer 9999999999999999 > > and use the SSH link instead of HTTP as SSH is more stable > > git remote set-url origin **[[email protected]](mailto:[email protected]):username/repository.git** Thank you. It worked for me. 9999999999999999 was too big that it caused an error in my case. The solution was for me to type : **git config http.postBuffer 99999999**
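A minimal sketch of the per-repository workarounds discussed in the comments above, wrapped in Python so it can run from the same script that calls `repo.push_to_hub()`. The repository path and the concrete values (buffer size, speed thresholds) are illustrative assumptions, not recommendations confirmed by the maintainers.

```python
import subprocess

def relax_git_transfer_limits(repo_dir: str) -> None:
    """Apply the git workarounds mentioned above inside a local clone (values are illustrative)."""
    # Allow larger HTTP POST bodies (http.postBuffer is in bytes; 500 MiB here).
    subprocess.run(["git", "config", "http.postBuffer", str(500 * 1024 * 1024)], cwd=repo_dir, check=True)
    # Only abort when the transfer stays below 1 KB/s (lowSpeedLimit) for 600 seconds (lowSpeedTime).
    subprocess.run(["git", "config", "http.lowSpeedLimit", "1000"], cwd=repo_dir, check=True)
    subprocess.run(["git", "config", "http.lowSpeedTime", "600"], cwd=repo_dir, check=True)

relax_git_transfer_limits("./my-dataset-clone")  # hypothetical path to the local dataset clone
```

If large files were committed outside LFS, rewriting them into LFS (e.g. `git lfs migrate import --everything --include="*.jsonl"`, as in the comment above) is usually a more durable fix than only raising the buffer.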
transformers
17,812
closed
Constrained Beam Search outputs duplication and weird results
### System Info ```shell - `transformers` version: 4.20.1 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.8.12 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. duplication case: the outputs should not contain the same sequence. ```python3 >>> from transformers import GPT2LMHeadModel, GPT2Tokenizer >>> model = GPT2LMHeadModel.from_pretrained("gpt2") >>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2") >>> force_word = "are" >>> force_words_ids = [ >>> tokenizer([force_word], add_prefix_space=True, add_special_tokens=False).input_ids, >>> ] >>> starting_text = ["The soldiers"] >>> input_ids = tokenizer(starting_text, return_tensors="pt").input_ids >>> outputs = model.generate( >>> input_ids, >>> max_new_tokens=3, >>> force_words_ids=force_words_ids, >>> num_beams=10, >>> num_return_sequences=10, >>> no_repeat_ngram_size=1, >>> remove_invalid_values=True, >>> ) >>> outputs = outputs[:, input_ids.shape[-1]:] >>> print("Output:\n" + 100 * '-') >>> for s in tokenizer.batch_decode(outputs, skip_special_tokens=True): >>> print(s) >>> import collections >>> print(collections.Counter(map(tuple, outputs.tolist())).most_common(1)) Output: ---------------------------------------------------------------------------------------------------- , who are , who were who are in , who had , who are who are fighting who are still who are killed who are not who are on [((11, 508, 389), 2)] ``` 2. weird case: the output looks weird, repeat progressing constraints with unreasonable tokens ```python3 >>> from transformers import GPT2LMHeadModel, GPT2Tokenizer >>> model = GPT2LMHeadModel.from_pretrained("gpt2") >>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2") >>> force_word = "were not allowed" >>> force_words_ids = [ >>> tokenizer([force_word], add_prefix_space=True, add_special_tokens=False).input_ids, >>> ] >>> starting_text = ["The soldiers"] >>> input_ids = tokenizer(starting_text, return_tensors="pt").input_ids >>> outputs = model.generate( >>> input_ids, >>> force_words_ids=force_words_ids, >>> num_beams=10, >>> num_return_sequences=10, >>> no_repeat_ngram_size=1, >>> remove_invalid_values=True, >>> ) >>> print("Output:\n" + 100 * '-') >>> for s in tokenizer.batch_decode(outputs, skip_special_tokens=True): >>> print(s) Output: ---------------------------------------------------------------------------------------------------- The soldiers, who were wearing were not in were not allowed to leave the barracks. The commander of The soldiers, who were wearing were not in were not allowed to leave the barracks. The military said The soldiers, who were wearing were not in were not allowed to enter the building. The military said The soldiers, who were wearing were not in were not allowed to enter the building. The police said The soldiers, who were wearing were not in were not allowed to leave the barracks. The commander said The soldiers, who were wearing were not in were not allowed to enter the building. 
The police had The soldiers, who were wearing were not in were not allowed to leave the barracks. The army said The soldiers, who were wearing were not in were not allowed to enter the building. The police then The soldiers, who were wearing were not in were not allowed to leave the barracks. The commander told The soldiers, who were wearing were not in were not allowed to enter the building. The police and ``` ### Expected behavior ```shell 1. I believe the bug is due to the insufficient initialization of [`track_new["new_seqs"]`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L658). It appears if the top-k hypothesis also advances the constraints. Then this hypothesis may appear multiple times in the beam as the example output. The fix is one line in-place. Updated to `track_new = {"new_seqs": full_hypotheses.tolist(), "new_states": [], "new_indices": [], "new_tokens": [], "new_scores": []}` 2. I believe the bug is due to the incorrect value assignment of [`scores_for_all_vocab`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_utils.py#L3223), what stores the next **token scores**. `scores_for_all_vocab` is first passed to [`constrained_beam_scorer.process()`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_utils.py#L3262), and later passed to [`constrained_beam_scorer.step_sentence_constraint()`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L596) as `vocab_scores`. Within its scope, `vocab_scores` is sliced to [`this_batch_token_scores`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L654). `this_batch_token_scores` is finally added to [`track_new["new_scores"]`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L686). However, the derived [`new_scores`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L727) is concated with [`sent_beam_scores`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L731), reveal it should represent **beam scores**. The **token scores** is larger than the expected **beam scores** as ignoring past token scores. So the unfulfilled hypothesis will advance with an unexpected higher score, and dominate the beam as the example output. The fix is also straightforward in-place. # scores_for_all_vocab = next_token_scores_processed.clone() next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores) scores_for_all_vocab = next_token_scores.clone() ```
06-22-2022 01:57:30
06-22-2022 01:57:30
transformers
17,811
closed
Fix GPT-NeoX-20B past handling, attention computation
# What does this PR do? * Fixes GPT-NeoX-20B handing of the past object to correctly be used in .generate * Swaps attention computation for one more similar in the original training code, to hopefully avoid NaNs * Update docstring, removed unnecessary dropout configs in config object <!-- Remove if not applicable --> Fixes # (issue) https://github.com/huggingface/transformers/issues/17632 https://github.com/huggingface/transformers/issues/17452 (Hopefully) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
06-21-2022 22:30:58
06-21-2022 22:30:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>There are a few equivalence tests failing with the PR, if you can dive into it. Let us know if you need any help!<|||||>I've run the tests locally and they pass, so I can't seem to reproduce the test errors. Can someone else give them a try?<|||||>The tests pass on GPU but not on CPU on my side. So doing ``` CUDA_VISIBLE_DEVICES="" pytest tests/models/gpt_neox/test_modeling_gpt_neox.py ``` reproduces the failure.<|||||>Thanks again! Nice to be able to use GPT-Neo-X in float16 for generations :-)
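A minimal sketch of the generation path this PR fixes (past-key-value reuse inside `generate`). The checkpoint name and float16 setting follow the comments above, but the prompt and generation parameters are arbitrary assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "EleutherAI/gpt-neox-20b"  # needs roughly 40GB of GPU memory in float16
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("GPT-NeoX-20B is a 20 billion parameter", return_tensors="pt").to(0)
# use_cache=True exercises the past handling corrected by this PR
outputs = model.generate(**inputs, max_new_tokens=30, use_cache=True, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```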
transformers
17,810
closed
Offload fixes
# What does this PR do? This PR fixes a few bugs in the current offload to disk implementation via Accelerate. 1. The `offload_folder` is not created if it doesn't exists, loading to cryptic errors about missing files. 2. When the model is a task model and the checkpoint one of the base model (like for OPT), there are two issues arising: - if `offload_state_dict=True`, the weights should be reloaded in `model_to_load` from the temporary offload - all the weights offloaded to disk are missing the `base_model_cls` prefix since they were offloaded as weights of `model_to_load` and not of `model`.
06-21-2022 21:58:36
06-21-2022 21:58:36
_The documentation is not available anymore as the PR was closed or merged._
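For reference, a minimal sketch of the call path these fixes touch. The checkpoint and folder names are illustrative, and the memory ceiling is an assumption chosen only to force disk offload.

```python
from transformers import AutoModelForCausalLM

# "facebook/opt-13b" is a task model loaded from a base-model checkpoint,
# i.e. exactly the prefix situation described in point 2 above.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-13b",
    device_map="auto",
    max_memory={0: "10GiB", "cpu": "20GiB"},  # assumed limits, only to trigger offloading
    offload_folder="offload",                 # created automatically after fix 1
    offload_state_dict=True,                  # exercises the reload path addressed by fix 2
)
```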
transformers
17,809
closed
AutoTokenizer vs. BertTokenizer
### System Info ```shell - `transformers` version: 4.20.1 - Platform: Linux-5.17.4-200.fc35.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.7 - Huggingface_hub version: 0.1.0 - PyTorch version (GPU?): 1.9.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? With transformers-4.20.1 and tokenizers-0.12.1, I get the following behaviour: ```python In [1]: from transformers import AutoTokenizer, BertTokenizer In [2]: auto_tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased') In [3]: auto_tokens = auto_tokenizer('This is a sentence.'.split(), is_split_into_words=True) In [4]: auto_tokens.word_ids() Out[4]: [None, 0, 1, 2, 3, 3, None] In [7]: bert_tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') In [9]: bert_tokens = bert_tokenizer('This is a sentence.'.split(), is_split_into_words=True) In [10]: bert_tokens.word_ids() --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-10-d69d0750fb87> in <module> ----> 1 bert_tokens.word_ids() /mount/arbeitsdaten33/projekte/tcl/Users/nikolady/embedalign/lib/python3.9/site-packages/transformers/tokenization_utils_base.py in word_ids(self, batch_index) 350 """ 351 if not self._encodings: --> 352 raise ValueError("word_ids() is not available when using Python-based tokenizers") 353 return self._encodings[batch_index].word_ids 354 ValueError: word_ids() is not available when using Python-based tokenizers ``` Regardless of whether this is expected or not, this is unintuitive and confusing. E.g., am I even getting correct tokenisation by using a more general tokeniser class? @SaulLu @LysandreJik ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction See above. ### Expected behavior ```shell Word ids from BertTokenizer or a more informative error message. ```
06-21-2022 20:48:04
06-21-2022 20:48:04
Hi, The `AutoTokenizer` defaults to a fast, Rust-based tokenizer. Hence, when typing `AutoTokenizer.from_pretrained("bert-base-uncased")`, it will instantiate a `BertTokenizerFast` behind the scenes. Fast tokenizers support `word_ids`. Here you're comparing it to a `BertTokenizer`, which is a slow, Python-based tokenizer. So the behaviour is expected, and the error message pretty self-explanatory if you ask me.<|||||>The docs for AutoTokenizer say, > The tokenizer class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it’s missing, by falling back to using pattern matching on `pretrained_model_name_or_path`. <...> > > bert β€” [BertTokenizer](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/bert#transformers.BertTokenizer) or [BertTokenizerFast](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/bert#transformers.BertTokenizerFast) (BERT model). I do not pass a config, so I would assume that AutoTokenizer would instantiate `BertTokenizer`, which goes first in the list of options. Moreover, the docs for `BertTokenizer` and `BertTokenizerFast` do not mention that they are Python and Rust based respectively, so the user cannot really figure this out.<|||||>Hi @macleginn , Thanks for letting us know that this behavior isn't intuitive for you! Regarding the fact that `AutoTokenizer.from_pretrained` loads a fast tokenizer by default, we have in [the documentation](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/auto#transformers.AutoTokenizer.from_pretrained) a line for the `use_fast` argument that you can change in the `from_pretrained` method. As indicated in the documentation, this argument is set to `True`: > use_fast (bool, optional, defaults to True) β€” Whether or not to try to load the fast version of the tokenizer. Do you think we should do something differently to make it clearer? Regarding the error message that you're getting, do you think it would have been clearer to have: > ValueError: word_ids() is not available when using non-fast tokenizers (e.g. `XxxTokenizerFast`)<|||||>Hi @SaulLu, > Regarding the error message that you're getting, do you think it would have been clearer to have: >> ValueError: word_ids() is not available when using non-fast tokenizers (e.g. XxxTokenizerFast) Yes, sure. Given this message, I would realise, first, that I need to use `BertTokenzerFast` if I want `word_id`s, and second, that this is what `AutoTokenizer` most likely resolved to. > Do you think we should do something differently to make it clearer? Perhaps mention this in the preamble to the model list? Something along the lines of > Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary. > > The tokenizer class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it’s missing, by falling back to using pattern matching on `pretrained_model_name_or_path`. The fast version of the tokenizer will be selected by default when available (see the `use_fast` parameter above). 
But if you assume that the user should familiarise themselves with the params, it's okay as it is, as long as the error message points to something that can be found in the docs.<|||||>Hi, it seems the `AutoTokenizer` class has a problem with the character-based model _google/canine-s_. Even though I set `use_fast` to True, I got the value error `word_ids() is not available when using non-fast tokenizers`.<|||||>Hi, CANINE is a bit of a special model: it doesn't have a fast implementation since it's character-based (Rust implementations only exist for subword tokenization algorithms like WordPiece, BPE, etc.). I'd recommend just using `CanineTokenizer`.<|||||>Hello, using `CanineTokenizer` doesn't solve the problem... it doesn't have a "Fast" version with `word_ids()` implemented.
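To summarise the thread, a short sketch contrasting the two code paths (the model name follows the report above):

```python
from transformers import AutoTokenizer

fast = AutoTokenizer.from_pretrained("bert-large-uncased")                   # BertTokenizerFast by default
slow = AutoTokenizer.from_pretrained("bert-large-uncased", use_fast=False)   # BertTokenizer (Python-based)

words = "This is a sentence.".split()
enc = fast(words, is_split_into_words=True)
print(enc.word_ids())  # [None, 0, 1, 2, 3, 3, None] — only fast (Rust-based) tokenizers track word ids

slow_enc = slow(words, is_split_into_words=True)
# slow_enc.word_ids() would raise the ValueError discussed above, since no Rust encodings exist
```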
transformers
17,808
closed
Unable to Load the Pre-Trained Model using Spark-Submit
Hi Team, I am trying to trigger spark-submit and passing the pre-trained model with --archives ; however Spark is not able to locate the model while executing the BertTokenizer.from_pretrained(BERT_MODEL_NAME). Please advise how to I get the spark to point to the BERT Model ; I tried placing in HDFS as well as Local Linux path - but same error. `22/06/20 17:09:23 INFO e: Traceback (most recent call last): File "etl.py", line 509, in run self.ingest() File "etl.py", line 315, in ingest BERT_MODEL_NAME="score/bert-base-cased/", File "/data-12/hadoop/yarn/local/usercache/svc/appcache/application_1641/container_e397_000001/e_process-0.0.1-py3.7.egg/classification/pipelinescore.py", line 30, in __init__ self.tokenizer = BertTokenizer.from_pretrained(BERT_MODEL_NAME,cache_dir="/cache") File "/data-12/hadoop/yarn/local/usercache/svc/appcache/application_1641/container_e397_00001/dep.tar.gz/transformers/tokenization_utils_base.py", line 1773, in from_pretrained f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from " OSError: Can't load tokenizer for 'classification/score/bert-base-cased/'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'classification/score/bert-base-cased/' is the correct path to a directory containing all relevant files for a BertTokenizerFast tokenizer. `
06-21-2022 20:23:34
06-21-2022 20:23:34
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,807
closed
Add Spanish translation of custom_models.mdx
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add the Spanish translation for `custom_models.mdx` as part of the #15947 issue. Changes include the Spanish version of the original document and the updated `_toctree.yml` file. Tried to follow the [guideline](https://github.com/huggingface/transformers/blob/26a6a426087582c48593f8be980603951a7bcddd/CONTRIBUTING.md#start-contributing-pull-requests) to generate the Markdown files and check them before submitting this PR but the command `doc-builder build transformers docs/source/ --build_dir ~/tmp/test-build` fails since the `_toctree.yml` file is no longer in `./docs/source/` (but in `./docs/source/en/`). First contribution to the πŸ€— Transformers project! ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Task assignment [here](https://github.com/huggingface/transformers/issues/15947#issuecomment-1161854258). - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] ~~Did you write any new necessary tests?~~ ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Tagging @omarespejel, @osanseviero, or @sgugger to review or assign reviewers :) <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-21-2022 20:13:52
06-21-2022 20:13:52
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@omarespejel could you take a look at this PR Please?<|||||>Hola @donelianc! Thank you very much for your translation. Sorry for the delay in replying; it won't happen again. I made some small comments to be applied. Thank you!<|||||>@omarespejel I committed the suggested changes :) If you don't mind, can you assign me the translation of [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx)? I'll be happy to help with one more doc.
transformers
17,806
closed
Add TF DeiT implementation
# What does this PR do? Adds the TF implementations of DeiT ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
06-21-2022 19:10:28
06-21-2022 19:10:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>Please also incorporate the updates of #17731 <|||||>> Please also incorporate the updates of https://github.com/huggingface/transformers/pull/17731 @NielsRogge Will do! I originally added, but it resulted in lots of changes because of all the `Copied From` statements, so will wait until your PR is merged and those updates finalised. <|||||>@sgugger could you maybe give it a quick review as it's vision? (otherwise happy to do it if you're busy)
transformers
17,805
closed
Add logits_processor parameter, used by `generate`, to `Seq2SeqTrainer` methods `evaluate` and `predict`
# What does this PR do? Following the discussion in #17748, this PR adds the `logits_processor` param to Seq2SeqTrainer `predict` and `evaluate` methods, for easy extensibility. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante @sgugger
06-21-2022 16:42:21
06-21-2022 16:42:21
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger What are your thoughts about adding to the `test_finetune_bert2bert` a call to `trainer.evaluate()` and `trainer.predict(eval_dataset)`?<|||||>Sure, we can add this. But is it to test any of this functionality? If not, it should go in its own PR.<|||||>It somewhat tests this functionality by calling the methods I changed, but not directly. I can add it later in its own PR.<|||||>Merging this one then, thanks again for your contribution!
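A short sketch of the intended usage after this PR. It assumes a `Seq2SeqTrainer` instance (`trainer`), a `tokenizer`, and the evaluation/test datasets already exist from a regular fine-tuning setup; the min-length processor is just an illustrative choice.

```python
from transformers import LogitsProcessorList, MinLengthLogitsProcessor

# The processors are forwarded to model.generate() during evaluation/prediction.
processors = LogitsProcessorList(
    [MinLengthLogitsProcessor(10, eos_token_id=tokenizer.eos_token_id)]
)

metrics = trainer.evaluate(eval_dataset, logits_processor=processors)
predictions = trainer.predict(test_dataset, logits_processor=processors)
```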
transformers
17,804
closed
`seed_generator` would cause TensorFlow GPU memory growth immediately
https://github.com/huggingface/transformers/blob/7cced021fa8ddc59f0f77384300760d34545394e/src/transformers/generation_tf_utils.py#L349 In transformers version >=4.18.0, importing anything from `generation_tf_utils.py` causes GPU memory growth immediately unless I set the env var `TF_FORCE_GPU_ALLOW_GROWTH=true` before importing modules from this file. I have located the problem at this line (linked above). Making the `seed_generator` attribute lazily loaded would solve this. ``` # ... class TFGenerationMixin: """ A class containing all of the functions supporting generation, to be used as a mixin in [`TFPreTrainedModel`]. """ # seed_generator = tf.random.Generator.from_non_deterministic_state() _seed_generator = None @property def seed_generator(self): if self._seed_generator is None: self._seed_generator = tf.random.Generator.from_non_deterministic_state() return self._seed_generator # ... ```
06-21-2022 16:41:21
06-21-2022 16:41:21
Hey @Atakey πŸ‘‹ Great finding! Would you be able to open a PR? The suggestion you gave looks great!
transformers
17,803
closed
Fix test for BF16 detection
# What does this PR do? When `no_cuda=True`, we shouldn't try to detect the support for BF16 GPU.
06-21-2022 16:07:13
06-21-2022 16:07:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,802
closed
Properly check for a TPU device in is_torch_tpu_available
# What does this PR do? This PR adds a better check to see if a TPU is available in the environment when checking `is_torch_tpu_available` by adding a quick import check for `xm.tpu_device()`. This raises a `RuntimeError` if a TPU device is not found. Mimics solution in https://github.com/huggingface/accelerate/pull/456 Fixes # (issue) https://github.com/huggingface/transformers/issues/17752 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
06-21-2022 15:49:09
06-21-2022 15:49:09
_The documentation is not available anymore as the PR was closed or merged._<|||||>I have the same problem in Vertex AI. It is not solved even with `pip install git+https://github.com/huggingface/transformers`. The weird thing is that I don't even use a TPU! I made it work with `!pip uninstall torch-xla` (at least on Vertex, I don't know about other environments). Thanks :)
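A rough sketch of the stricter detection pattern described above — the exact helper used inside `transformers` may differ, so treat the function below as an illustration rather than the library's implementation:

```python
import importlib.util

def torch_tpu_seems_available() -> bool:
    # Having torch_xla installed is not enough (e.g. some cloud images ship it by default);
    # a device request must actually succeed.
    if importlib.util.find_spec("torch_xla") is None:
        return False
    import torch_xla.core.xla_model as xm
    try:
        xm.xla_device()  # raises RuntimeError when no XLA/TPU device can be reached
        return True
    except RuntimeError:
        return False
```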
transformers
17,801
closed
TF: generate without `tf.TensorArray`
# What does this PR do? Some models, like XLNet, need more than just the previous token when `past` is used. This PR solves this problem with the help of some refactoring -- we no longer use `TensorArray`, instead we scatter updates into a fixed-size tensor. This refactor simplifies `generate`, especially `beam_search`, which may prove to be helpful in enabling XLA. Slow tests have been run for the usual generate models (gpt2, t5, rag, speech_to_text, encoder_decoder, vision_encoder_decoder, bart). ### Why was this refactor needed? As it can be read in [this issue](https://github.com/tensorflow/tensorflow/issues/56272), `TensorArray` is meant to be used as a write-once array, anything else falls in the unexpected behavior domain -- in other words, our use was dangerous. The original solution to the XLNet problem was to read all existing tokens from the `TensorArray`, using the same logic as in this PR, but it failed with XLA -- and the behavior depended on what was written into the variable on its first write. Since we use fixed-size tensors, a normal tensor works just fine, and with simpler code (assuming the reader is familiar with how scatter works :D ).
06-21-2022 15:02:28
06-21-2022 15:02:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @ydshieh -- this PR fixes the XLNet generate error we have been seeing :)<|||||>Cool!<|||||>@Rocketknight1 no differences in terms of execution speed πŸ‘ `GPT2` `sample` on a 3090, average of 10 runs (excluding compilation time) - Eager: 884 ms -> 888 ms - XLA: 29.2 ms -> 29.3 ms - JAX: 19.7 ms
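A toy sketch of the scatter-into-a-fixed-size-buffer pattern this PR switches to (shapes and token values are made up):

```python
import tensorflow as tf

batch_size, max_length = 2, 8
generated = tf.zeros((batch_size, max_length), dtype=tf.int32)  # fixed-size buffer, XLA-friendly

def write_column(buffer, tokens, position):
    # Plays the role of TensorArray.write(position, tokens): returns a new buffer whose
    # column `position` holds `tokens`, using one (row, position) index per batch entry.
    rows = tf.range(tf.shape(tokens)[0])
    indices = tf.stack([rows, tf.fill(tf.shape(rows), position)], axis=1)
    return tf.tensor_scatter_nd_update(buffer, indices, tokens)

generated = write_column(generated, tf.constant([42, 7], dtype=tf.int32), position=3)
print(generated)  # zeros everywhere except column 3, which holds [42, 7]
```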
transformers
17,800
closed
Fix forward reference imports in DeBERTa configs
# What does this PR do? #17617 introduced some imports that will create cyclical references, just for type hinting. This PR makes them only imported in a type checking block.
06-21-2022 14:57:44
06-21-2022 14:57:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>FYI @JingyaHuang @michaelbenayoun I'll take a look at the slow test tomorrow :)
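For context, a generic sketch of the pattern this PR applies (the function and import below are illustrative, not the DeBERTa config code itself):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only evaluated by static type checkers, so this import cannot create a runtime cycle.
    from transformers import PreTrainedTokenizerBase


def count_special_tokens(tokenizer: "PreTrainedTokenizerBase") -> int:
    # The quoted annotation is a forward reference; nothing extra is imported when this runs.
    return len(tokenizer.all_special_tokens)
```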
transformers
17,799
closed
add MobileNetV1 model
# What does this PR do? Adds the MobileNet V1 model to the library. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-21-2022 14:29:36
06-21-2022 14:29:36
_The documentation is not available anymore as the PR was closed or merged._<|||||>Note: The checkpoints still point to my own account, but should be changed to `google` once the changes are approved.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17799). All of your documentation changes will be reflected on that endpoint.<|||||>I've rebased it so that it can be merged. Perhaps @sgugger or @NielsRogge could merge it?
transformers
17,798
closed
Update CodeParrot readme to include training in Megatron
This PR updates the README to explain how to train CodeParrot with [Megatron](https://github.com/NVIDIA/Megatron-LM), and redirects model and dataset imports to [CodeParrot organization](https://huggingface.co/codeparrot).
06-21-2022 13:57:11
06-21-2022 13:57:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,797
closed
Fix properties of unset special tokens in non verbose mode
# What does this PR do? Fixes #17796. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @n1t0, @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-21-2022 13:03:00
06-21-2022 13:03:00
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,796
closed
Properties of unset special tokens return the string 'None' in non-verbose tokenizers
### System Info ```shell - `transformers` version: 4.20.0 - Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.27 - Python version: 3.8.0 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? @SaulLu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction For example, `MarianTokenizer` does not set `bos_token`. The corresponding property returns the string `'None'`: ```python >>> import transformers >>> tokenizer = transformers.MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de", verbose=False) >>> tokenizer.bos_token 'None' ``` ### Expected behavior ```shell The property should return None, not the string 'None'. ```
06-21-2022 12:15:33
06-21-2022 12:15:33
Hi @guillaumekln , Thanks a lot for letting us know about this issue! ~~I'm planning to fix it in the PR #17824~~ EDIT: sorry, I haven't seen your PR. Closing mine in favor of yours :hugs: <|||||>Thanks for the update! I also suggested a fix here #17797. Feel free to close this PR if needed.<|||||>Let's keep yours!
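A minimal sketch of the failure mode (not the library's exact code): the early `return None` only happens on the verbose branch, so in non-verbose mode an unset token falls through to `str(None)`.

```python
class TinyTokenizer:
    def __init__(self, bos_token=None, verbose=False):
        self._bos_token = bos_token
        self.verbose = verbose

    @property
    def bos_token(self):
        # Buggy shape: the None short-circuit is gated on verbose, so the non-verbose
        # path stringifies the unset attribute into the literal string 'None'.
        if self._bos_token is None and self.verbose:
            print("Using bos_token, but it is not set yet.")
            return None
        return str(self._bos_token)


print(repr(TinyTokenizer(verbose=False).bos_token))  # 'None' — a string, the reported bug
print(repr(TinyTokenizer(verbose=True).bos_token))   # None — the expected value
```

As its title suggests, the linked fix (#17797) makes the property return a real `None` for unset tokens regardless of verbosity.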
transformers
17,795
closed
Big Model Inference: OOM with simple forward pass
### System Info ```shell - `transformers` version: 4.20.0 - Platform: Linux-5.11.0-40-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 GeForce RTX 208... Off | 00000000:01:00.0 Off | N/A | | 30% 50C P8 13W / 250W | 5MiB / 11019MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 1 GeForce RTX 208... Off | 00000000:02:00.0 Off | N/A | | 29% 42C P8 27W / 250W | 5MiB / 11019MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 1237 G /usr/lib/xorg/Xorg 4MiB | | 1 N/A N/A 1237 G /usr/lib/xorg/Xorg 4MiB | +-----------------------------------------------------------------------------+ ``` ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Loading `T03B` for big model inference: ```python $ python3 Python 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B", device_map="auto") >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B") ``` 2. The model is distributed over two GPUs: ```python >>> model.hf_device_map {'shared': 0, 'decoder': 0, 'encoder.embed_tokens': 0, 'encoder.block.0': 0, 'encoder.block.1': 0, 'encoder.block.2': 0, 'encoder.block.3': 0, 'encoder.block.4': 0, 'encoder.block.5': 0, 'encoder.block.6': 0, 'encoder.block.7': 0, 'encoder.block.8': 0, 'encoder.block.9': 0, 'encoder.block.10': 0, 'encoder.block.11': 0, 'encoder.block.12': 0, 'encoder.block.13': 0, 'encoder.block.14': 0, 'encoder.block.15': 0, 'encoder.block.16': 0, 'encoder.block.17': 0, 'encoder.block.18': 0, 'encoder.block.19': 1, 'encoder.block.20': 1, 'encoder.block.21': 1, 'encoder.block.22': 1, 'encoder.block.23': 1, 'encoder.final_layer_norm': 1, 'encoder.dropout': 1, 'lm_head': 1} ``` 3. Generation works fine: ```python >>> inputs = tokenizer("Task: copy but say the opposite. PSG won its match against Barca.", return_tensors="pt") >>> inputs = inputs.to(0) >>> tokenizer.decode(model.generate(inputs['input_ids'])[0].tolist()) '<pad> Paris St-Germain beat Barcelona 1-0 in their Champions League Group B match on Tuesday.</s>' ``` 4. 
But a simple forward pass throws an OOM error: ```python >>> inputs = tokenizer("Task: copy but say the opposite. PSG won its match against Barca.", return_tensors="pt") >>> inputs = inputs.to(0) >>> labels = tokenizer("PSG lost its match against Barca.", return_tensors="pt") >>> labels = labels.to(0) >>> output = model(**inputs, labels=labels['input_ids']) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/accelerate/hooks.py", line 148, in new_forward output = old_forward(*args, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1601, in forward encoder_outputs = self.encoder( File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1033, in forward layer_outputs = layer_module( File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/accelerate/hooks.py", line 148, in new_forward output = old_forward(*args, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 716, in forward hidden_states = self.layer[-1](hidden_states) File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/accelerate/hooks.py", line 148, in new_forward output = old_forward(*args, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 326, in forward forwarded_states = self.DenseReluDense(forwarded_states) File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/accelerate/hooks.py", line 148, in new_forward output = old_forward(*args, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 305, in forward hidden_gelu = self.act(self.wi_0(hidden_states)) File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/transformers/activations.py", line 34, in forward return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 10.76 GiB total capacity; 9.72 GiB already allocated; 3.69 MiB free; 9.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ``` ### Expected behavior ```shell There should be no OOM error, especially so as generation works fine. ```
06-21-2022 11:38:51
06-21-2022 11:38:51
You need to put your forward pass inside a `torch.no_grad()` context manager, otherwise you will get memory used for the activations saved for the backward pass (which is not supported anyway for big model inference), which is why you get OOM.<|||||>Thanks for the quick reply, @sgugger! With the context manager I get another error (which seems to relate to the fact the model resides on two GPUs?): ```python # inputs and labels are on cuda:0 >>> inputs {'input_ids': tensor([[16107, 10, 2405, 68, 497, 8, 6401, 5, 276, 9945, 751, 165, 1588, 581, 1386, 658, 5, 1]], device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0')} >>> labels {'input_ids': tensor([[ 276, 9945, 1513, 165, 1588, 581, 1386, 658, 5, 1]], device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0')} # forward: >>> with torch.no_grad(): ... output = model(**inputs, labels=labels['input_ids']) ... Traceback (most recent call last): File "<stdin>", line 2, in <module> File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/accelerate/hooks.py", line 148, in new_forward output = old_forward(*args, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1671, in forward loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1)) File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 1163, in forward return F.cross_entropy(input, target, weight=self.weight, File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/functional.py", line 2996, in cross_entropy return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument target in method wrapper_nll_loss_forward) ```<|||||>Indeed, your labels will need to be on the same device as the last layer of the model if you want to compute the loss inside the model (here device 1).<|||||>Thanks so much, that makes sense -- and works: ```python >>> labels = labels.to(1) >>> with torch.no_grad(): ... output = model(**inputs, labels=labels['input_ids']) >>> output[0].tolist() 0.9372970461845398 ``` (As far as I'm concerned, this is solved.)
transformers
17,794
closed
Token indices sequence length is longer than the specified maximum sequence length for this model (821 > 512). Running this sequence through the model will result in indexing errors
I am trying to generate a Boolean question using the T5 transformer, using [this script](https://github.com/ramsrigouthamg/generate_boolean_questions_using_T5_transformer/blob/master/t5_inference.py) as a reference. ![jupyter error](https://user-images.githubusercontent.com/72002381/174787727-ebc22c3d-0785-45b6-a5bd-5d0d652adb7e.PNG) ``` def topkp_decoding (inp_ids,attn_mask): topkp_output = model.generate(input_ids=inp_ids, attention_mask=attn_mask, max_length=256, do_sample=True, top_k=40, top_p=0.80, num_return_sequences=(len(z)*2), no_repeat_ngram_size=2, early_stopping=True ) Questions = [tokenizer.decode(out, skip_special_tokens=True,clean_up_tokenization_spaces=True) for out in topkp_output] return [Question.strip().capitalize() for Question in Questions] ``` ``` start = time.time() passage =wo truefalse ="no" + "yes" #text = "truefalse: %s passage: %s </s>" % (passage, truefalse) max_len = 256 encoding = tokenizer.encode_plus(text, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device) #print ("Context: ",passage) global output output = beam_search_decoding(input_ids,attention_masks) #print ("\nBeam decoding [Most accurate questions] ::\n") #for out in output: #print(out) global outputs outputs = topkp_decoding(input_ids,attention_masks) #print ("\nTopKP decoding [Not very accurate but more variety in questions] ::\n") #for out in outputs: #print (out) end = time.time() #print ("\nTime elapsed ", end-start) #print ("\n") ``` But I am getting the error **Token indices sequence length is longer than the specified maximum sequence length for this model (821 > 512). Running this sequence through the model will result in indexing errors** while running this code in Jupyter. How can I solve this issue?
06-21-2022 11:21:46
06-21-2022 11:21:46
Hi, This question is better suited for our [forum](https://discuss.huggingface.co/), as we'd like to keep Github issues for bugs/feature requests. This one is not a bug, you just need to truncate the inputs as they are longer than what the model expects: ```` encoding = tokenizer(text, truncation=True, return_tensors="pt") ```` Thanks!<|||||>Hey @NielsRogge, shouldn't an error ideally be triggered in such a case? Or, even if only a warning is triggered, shouldn't the tokenization process forcefully truncate the input? Left unchecked, this could cause errors on a production-grade system.<|||||>No we don't truncate by default, that would break the design of the library quite a bit. Users need to specify the truncation or padding behaviour themselves, as it can happen from the right or the left for instance, the max length can differ, etc. See here for a guide on ["everything you always wanted to know about tokenization"](https://huggingface.co/docs/transformers/v4.15.0/preprocessing#everything-you-always-wanted-to-know-about-padding-and-truncation). <|||||>Okay sure @NielsRogge, I understand now.
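Building on the answer above, a hedged sketch of spelling out the truncation behaviour explicitly instead of relying on defaults; the `t5-base` checkpoint, the 512-token limit, and the toy passage are placeholders.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")  # placeholder checkpoint

text = "truefalse: yes passage: " + "a very long passage " * 200  # deliberately too long

# Cap the input at the model's limit instead of letting it overflow.
encoding = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
print(encoding["input_ids"].shape)  # no more than 512 tokens

# If the end of the passage matters more, truncate from the left instead (recent tokenizer versions).
tokenizer.truncation_side = "left"
encoding = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
```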
transformers
17,793
closed
IndexError with Reformer Model when padding the sequence
Hi, I am trying to use Reformer for text classification, but I am getting the following RuntimeError. I tried both using a custom head and Huggingface's Reformer for sequence classification. The code runs fine when I swap Reformer for BigBird. I am using the google/reformer-crime-and-punishment checkpoint and have padded the text to max_length. ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) [<ipython-input-27-5c75e8f9ad52>](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in <module>() 9 for batch in training_loader: 10 batch = {k: v.to(config['device']) for k, v in batch.items()} ---> 11 outputs = model(**batch) 12 loss = criterion(outputs, batch['labels']) 13 loss.backward() 8 frames [/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] [/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input_ids, position_ids, attention_mask, head_mask, inputs_embeds, num_hashes, labels, output_hidden_states, output_attentions, return_dict) 2554 output_hidden_states=output_hidden_states, 2555 output_attentions=output_attentions, -> 2556 return_dict=return_dict, 2557 ) 2558 [/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] [/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input_ids, attention_mask, position_ids, head_mask, inputs_embeds, num_hashes, past_buckets_states, use_cache, output_hidden_states, output_attentions, return_dict) 2102 position_ids=position_ids, 2103 inputs_embeds=inputs_embeds, -> 2104 start_idx_pos_encodings=start_idx_pos_encodings, 2105 ) 2106 [/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks 
= [], [] [/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input_ids, position_ids, inputs_embeds, start_idx_pos_encodings) 253 254 if inputs_embeds is None: --> 255 inputs_embeds = self.word_embeddings(input_ids) 256 257 if position_ids.shape[-1] > self.max_position_embeddings: [/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] [/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input) 158 return F.embedding( 159 input, self.weight, self.padding_idx, self.max_norm, --> 160 self.norm_type, self.scale_grad_by_freq, self.sparse) 161 162 def extra_repr(self) -> str: [/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2181 # remove once script supports set_grad_enabled 2182 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2183 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2184 2185 RuntimeError: CUDA error: device-side assert triggered ``` I have tried the following example on CPU and I think padding is the problem: ``` This works: from transformers import ReformerTokenizer, ReformerModel import torch tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment") tokenizer.add_special_tokens({'pad_token': '[PAD]'}) model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment") inputs = tokenizer(["Hello, my dog is cute", "Hello, my dog is cute"], return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` This doesn't: ``` from transformers import ReformerTokenizer, ReformerModel import torch tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment") tokenizer.add_special_tokens({'pad_token': '[PAD]'}) model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment") inputs = tokenizer(["Hello, my dog is cute", "Hello, my dog is cute"], return_tensors="pt", padding='max_length', max_length=524288) outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` Error: ``` IndexError Traceback (most recent call last) [<ipython-input-68-d9b515284501>](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in <module>() 7 8 inputs = tokenizer(["Hello, my dog is cute", "Hello, my dog is cute"], return_tensors="pt", padding='max_length', max_length=524288) ----> 9 outputs = model(**inputs) 10 11 last_hidden_states = outputs.last_hidden_state 6 frames 
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] [/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input_ids, attention_mask, position_ids, head_mask, inputs_embeds, num_hashes, past_buckets_states, use_cache, output_hidden_states, output_attentions, return_dict) 2102 position_ids=position_ids, 2103 inputs_embeds=inputs_embeds, -> 2104 start_idx_pos_encodings=start_idx_pos_encodings, 2105 ) 2106 [/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] [/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input_ids, position_ids, inputs_embeds, start_idx_pos_encodings) 253 254 if inputs_embeds is None: --> 255 inputs_embeds = self.word_embeddings(input_ids) 256 257 if position_ids.shape[-1] > self.max_position_embeddings: [/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] [/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input) 158 return F.embedding( 159 input, self.weight, self.padding_idx, self.max_norm, --> 160 self.norm_type, self.scale_grad_by_freq, self.sparse) 161 162 def extra_repr(self) -> str: [/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2181 # remove once script supports set_grad_enabled 2182 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2183 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2184 2185 
IndexError: index out of range in self ```
06-21-2022 10:35:47
06-21-2022 10:35:47
If I add the EOS token as the padding token, it works. In any case, the tokenizer tells me to add a padding token. I do believe the documentation says that one should exist, however.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
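A minimal sketch of the workaround described in the comment above: reuse the existing EOS token as the padding token so that no brand-new `[PAD]` id is introduced without a matching embedding row. The small `max_length` is only to keep the example cheap, and the alternative noted at the end (resizing the embeddings) is an assumption rather than something tested here.

```python
from transformers import ReformerTokenizer, ReformerModel

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment")

# Reuse an existing special token for padding, so no new embedding row is required.
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer(
    ["Hello, my dog is cute", "Hello, my dog is cute"],
    return_tensors="pt",
    padding="max_length",
    max_length=128,  # kept small for the sketch; the original report padded to 524288
)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)

# Alternative (untested here): keep the new [PAD] token from add_special_tokens and call
# model.resize_token_embeddings(len(tokenizer)) so its id gets an embedding row.
```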
transformers
17,792
closed
__init__() got an unexpected keyword argument '_name_or_path'
### System Info ```shell sentence_transformers version 2.2.0 Windows python version: 3.9.12 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am trying the following code: 1. It first gave me the error that there is no file "C:\\Users\\ra*****\\Downloads\\bert-base-nli-mean-tokens\\1_Pooling\\config.json", so I created the folder 1_Pooling and kept the downloaded JSON there, but then it gave me the error 2. __init__() got an unexpected keyword argument '_name_or_path'. NOTE: I downloaded all the files from https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens/tree/main from sentence_transformers import SentenceTransformer embedder = SentenceTransformer(r'C:\Users\raj****\Downloads\bert-base-nli-mean-tokens') corpus = ['A man is eating food.', 'A man is eating a piece of bread.', 'The girl is carrying a baby.', 'A man is riding a horse.', 'A woman is playing violin.', 'Two men pushed carts through the woods.', 'A man is riding a white horse on an enclosed ground.', 'A monkey is playing drums.', 'A cheetah is running behind its prey.'] corpus_embeddings = embedder.encode(corpus) sentence_transformers version: 2.2.0 ### Expected behavior ```shell We should be able to load the SentenceTransformer model from a local directory. ```
06-21-2022 10:33:55
06-21-2022 10:33:55
Hi @rajnishrajput12 It might be better to ask on the [community tab](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens/discussions) of the model on the Hub. Otherwise, on [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Close as this is a question for the library `sentence_transformer`
transformers
17,791
closed
Add link to Albumentations notebook
# What does this PR do? This PR adds a link to the image classification with Albumentations notebook (present in our Notebooks repository).
06-21-2022 10:04:11
06-21-2022 10:04:11
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,790
closed
OPT-350m cannot be loaded from local files generated using the save_pretrained method
### System Info ```shell - `transformers` version: 4.19.3 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.7 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ``` ### Who can help? @younesbelkada @patrickvonplaten @Lys ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Load opt-350m model from the hub with `AutoModelForCausalLM.from_pretrained('facebook/opt-350m')` 2. Save the model using `model.save_pretrained('save_dir/')` 3. Try loading back the model with `AutoModelForCausalLM.from_pretrained('save_dir/')` Full code: ```python from transformers import AutoModelForCausalLM opt350m = AutoModelForCausalLM.from_pretrained('facebook/opt-350m') opt350m.save_pretrained("local_save_dir/") loaded_opt = AutoModelForCausalLM.from_pretrained('local_save_dir/') ``` ### Expected behavior ```shell A RuntimeError will be raised when loading the model from save_dir/ Logs: Traceback (most recent call last): File "/Users/gregoireretourne/opt/miniconda3/envs/health/lib/python3.9/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 5, in <module> File "lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "lib/python3.9/site-packages/transformers/modeling_utils.py", line 2059, in from_pretrained model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model( File "lib/python3.9/site-packages/transformers/modeling_utils.py", line 2251, in _load_pretrained_model raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}") RuntimeError: Error(s) in loading state_dict for OPTForCausalLM: size mismatch for lm_head.weight: copying a param with shape torch.Size([50272, 512]) from checkpoint, the shape in current model is torch.Size([50272, 1024]). ```
06-21-2022 09:56:23
06-21-2022 09:56:23
From what I understand, this is related to a known issue that is unique to opt-350m amongst the OPT models, because the `hidden_size` (1024) is different from the `word_embed_proj_dim` (512). When I load the OPT model from the hub and do `opt350m.lm_head.weight.shape` I get `torch.Size([50272, 512])`. When I manually load the weight files saved by the `save_pretrained` method and go to the lm_head weight, it is also `torch.Size([50272, 512])`. But for some reason, since the lm_head is built as a `Linear(in_features=1024, out_features=50272, bias=False)`, there is a problem loading the weights into the model. I have tried to reproduce this with other models (namely `opt-125m` and `opt-1.3b`), but it works well with them.<|||||>Hmm, sorry, I did not see it was fixed in the last release. Duplicate of #17389. Closing<|||||>Just to clarify, I just ran the command with Transformers >= 4.20.1 and it works as expected. Are there still any problems?<|||||>Yes @patrickvonplaten, now it works as expected, I just had to bump my transformers version
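A short sketch of how to inspect the dimension split behind this report and to check the save/reload round trip on a fixed release; it assumes a `transformers` version that exposes `word_embed_proj_dim` on `OPTConfig` (4.20.1 or later, per the comments above).

```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("facebook/opt-350m")
# opt-350m is the only OPT checkpoint where these two values differ (512 vs. 1024),
# which is what the old reload bug tripped over.
print(config.hidden_size, config.word_embed_proj_dim)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
print(model.lm_head.weight.shape)  # tied to the embedding projection dimension

# The round trip that used to fail; works on transformers >= 4.20.1.
model.save_pretrained("local_save_dir")
reloaded = AutoModelForCausalLM.from_pretrained("local_save_dir")
```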
transformers
17,789
closed
assertEqual of non-frozen parameters in test_resume_training_with_frozen_params
### System Info ```shell - `transformers` version: 4.20.0 - Platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0a0+gita4c10ee (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no - Specific hardware: Habana HPU ``` ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behaviour: 1. Setup an AWS DL1 instance 2. Clone [optimum-habana](https://github.com/huggingface/optimum-habana) 3. Install the package with `pip install optimum-habana/[tests]` 4. Uncomment `test_resume_training_with_frozen_params` in `optimum-habana/tests/test_trainer.py` 5. Run `pytest tests/test_trainer.py -k "frozen"` ### Expected behavior `test_resume_training_with_frozen_params` in `tests/trainer/test_trainer.py` should not assert if `b` and `b1` are equal [here](https://github.com/huggingface/transformers/blob/eb16be415a74328e5ab62e050330a43054f6bd11/tests/trainer/test_trainer.py#L1429). While `a` is frozen, `b` is not so why should `b` and `b1` be equal? I understand I use a very specific hardware and the test passes on a GPU, but I actually think it should not pass on the latter and would like to understand why it does :)
06-21-2022 09:52:58
06-21-2022 09:52:58
Please use the forums for questions like this, as we keep issues for bugs and feature requests only. b and b1 should be equal because they are the results of the same training: one run started from scratch and one resumed from an intermediate checkpoint. If that test fails, you have a reproducibility problem on HPUs.<|||||>Sure, sorry for the inconvenience! Yep, I thought `checkpoint-5` came from a completely different run. Closing this, as the issue does not come from the test definition.
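A toy, self-contained sketch of the property the reply above describes (not the actual Trainer test): with fixed seeds, training from scratch and training resumed from an intermediate checkpoint must end on the same weights. The tiny linear model, data, and step counts are made up purely for illustration.

```python
import torch
from torch import nn


def make_model():
    torch.manual_seed(0)
    model = nn.Linear(4, 1)
    model.bias.requires_grad_(False)  # the "frozen" parameter, like `a` in the real test
    return model


def train(model, steps, start=0):
    torch.manual_seed(1)
    data = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(steps)]
    opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)
    for x, y in data[start:]:
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    return model


# Run 1: train from scratch for the full 10 steps.
full = train(make_model(), steps=10)

# Run 2: train 5 steps, "save" a checkpoint, reload it, and resume for the remaining 5.
half = train(make_model(), steps=5)
state = half.state_dict()  # stand-in for saving checkpoint-5 to disk
resumed = make_model()
resumed.load_state_dict(state)
resumed = train(resumed, steps=10, start=5)

# The resumed run must reproduce the from-scratch run exactly (b == b1 in the real test).
torch.testing.assert_close(full.weight, resumed.weight)
print("resumed training matches training from scratch")
```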
transformers
17,788
closed
Fix artifact path for cuda extension test in push CI
# What does this PR do? Fix artifact path for cuda extension test in push CI ### More details In #17335, `working-directory` was updated (a fix) for the (single-gpu) CUDA extension test, but the artifact path was not updated, which leads to a strange Slack report. ```bash - name: Run all non-slow selected tests on GPU working-directory: /workspace/transformers ... ```
06-21-2022 09:28:48
06-21-2022 09:28:48
transformers
17,787
closed
Add MVP model
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add MVP models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-21-2022 09:23:11
06-21-2022 09:23:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @StevenTang1998, Cool to see a new model here! Do you need help with the failing CI? <|||||>Thanks for your concern. And I really met two issues. 1. I want to add two tokens in the tokenizer. I tried two ways, but they both didn't pass the test. - add them to the `additional_special_tokens` during tokenizer init. - use `tokenizer.unique_no_split_tokens` to add them. 2. Another issue is my model didn't pass the test `MvpModelTest.test_beam_sample_generate`, I am a little confused why my model passed other generation tests, while failed in this one. And I didn't understand the motivation of this test, so I don't have some ideas to fix it. Could you offer some instructions about these issues? Thanks very much!<|||||>Hi, @patrickvonplaten. I have fixed the tokenizer issue. The model issue is ``` E AssertionError: Lists differ: [[2, 84, 28, 28], [2, 51, 35, 35], [2, 51, 28, 28], [2, 51, 28, 51]] != [[2, 0, 0, 12], [2, 0, 29, 29], [2, 0, 0, 12], [2, 0, 0, 12]] E E First differing element 0: E [2, 84, 28, 28] E [2, 0, 0, 12] E E - [[2, 84, 28, 28], [2, 51, 35, 35], [2, 51, 28, 28], [2, 51, 28, 51]] E + [[2, 0, 0, 12], [2, 0, 29, 29], [2, 0, 0, 12], [2, 0, 0, 12]] ``` - As for the model issue, I found the reason: My model inherits from BART, and I set `forced_bos_token_id` explicitly. During the generation test (for example in [`_beam_sample_generate`](https://github.com/huggingface/transformers/blob/main/tests/generation/test_generation_utils.py#L423)), the test [`generate`](https://github.com/huggingface/transformers/blob/main/tests/generation/test_generation_utils.py#L440) option will reprocesss the `logits_processor` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py#L1253). So the `logits_processor` has `ForcedBOSTokenLogitsProcessor`. Whereas, the test [`beam_sample`](https://github.com/huggingface/transformers/blob/main/tests/generation/test_generation_utils.py#L474) option only adds `InfNanRemoveLogitsProcessor` [here](https://github.com/huggingface/transformers/blob/main/tests/generation/test_generation_utils.py#L470). According to the error result, the first test option generates the `bos_token` (0) in the second position, where the second test option samples the second token. - Moreover, after I upload files, the failing CI contains errors not related to us. Could you offer some help to solve them? <|||||>Hi @patrickvonplaten, remove specifying `forced_bos_token_id` in the my Config can solve the issue. However, I still met some issues not related to my model. Could you offer some help?<|||||>Hey @StevenTang1998, Yes indeed the failing tests are not your fault -> could you try to do the same as explained here: https://github.com/huggingface/transformers/pull/17784#issuecomment-1162446039<|||||>Hi @patrickvonplaten, thanks for your help! Now I passed all the test, so what is the next step?<|||||>Thanks for your comments, I will update it soon.<|||||>> 3.) I'd be slighly in favor of renaming all classes to MVPTokenizer and MVPModel because MVP is an acronym the actual name of the paper (but ok for me to leave here as well what do you think @sgugger ?) Actually disagree rather strongly here ;-) BERT is an acronym and we still use Bert for the model classes. IMO it was a mistake to use all capital GPT, so would prefer keeping Mvp as is!<|||||>Hi @patrickvonplaten, thanks very much for your comments. 
- We have uploaded our paper [here](https://github.com/RUCAIBox/MVP/blob/main/paper.pdf); it will be published on arXiv at Mon, 27 Jun 2022 00:00:00 GMT. We will update the paper URL once it is announced. - We have added `# Copied from ...` statements everywhere applicable. - Our model can be fine-tuned for sequence classification and question answering (we conducted experiments in our paper), so we keep those heads. - According to the comment from @sgugger, maybe our model name does not need to be changed? - We have resolved the merge conflicts. <|||||>Hi @patrickvonplaten, we have updated the arXiv paper link.<|||||>@sgugger Thanks for your comments! We will fix them following your advice.<|||||>@patrickvonplaten @sgugger Thanks for your valuable advice! I have made the changes, please review them at your convenience.<|||||>Can you just explain why there can't be any Copied from statements for `MvpAttention`, `MvpEncoderLayer` and `MvpDecoderLayer`? The first two don't have any answer on my comment and the last one has a cryptic "same". Thanks :-)<|||||>Hi @sgugger, I'm sorry, maybe I didn't press the Comment button yesterday. We didn't add `Copied from` because we add prompts in these three modules. We are a little unclear about the `Copied from` mechanism: should we add it when the code is exactly the same, or also when it is almost the same?<|||||>Thanks for your explanation! That was the last thing standing, so merging this new model. Thanks a lot for your contribution!<|||||>Thanks a lot for your patient comments and guidance!
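For reference, a generic sketch of the first tokenizer approach mentioned earlier in this thread (registering extra tokens as `additional_special_tokens`); the BART checkpoint and the token strings are placeholders, not the actual MVP setup.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder checkpoint and token names, purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<extra_tok_1>", "<extra_tok_2>"]}
)
if num_added > 0:
    # Newly added ids need embedding rows, otherwise indexing errors follow.
    model.resize_token_embeddings(len(tokenizer))

print(tokenizer.additional_special_tokens)
print(tokenizer.convert_tokens_to_ids(["<extra_tok_1>", "<extra_tok_2>"]))
```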
transformers
17,786
closed
add doctests for DETR
# What does this PR do? Enable doctests for DETR @patrickvonplaten
06-21-2022 08:56:52
06-21-2022 08:56:52
_The documentation is not available anymore as the PR was closed or merged._<|||||>Good to merge for me!
transformers
17,785
closed
Add final_layer_norm to OPT model
# What does this PR do? Fixes #17653 , #17545 OPT models have a final_layer_norm: https://github.com/facebookresearch/metaseq/blob/e0c4f6b0e4c523906ad8d561f727e3f2ac3a8e73/metaseq/models/transformer.py#L466-L477 So we update HF models + conversion script to take in account that missing layer norm. Test on OPT-125m (`restored.pt` file from `patrickvonplaten/opt_metaseq_125m`): ``` >>> model_path="fixed_opt_125m" >>> prompt="Hello my name is" >>> log_probs_with_ppl(model_path, prompt) Input torch.Size([1, 5]) Logits torch.Size([1, 5, 50272]) torch.return_types.max( values=tensor([[0.2398, 0.2326, 0.3332, 0.9363, 0.0097]], grad_fn=<MaxBackward0>), indices=tensor([[ 100, 6, 766, 16, 1236]])) argmax probility: [[0.23982257 0.23258895 0.33315504 0.9362957 0.00967377]] argmax log probability: [[-1.4278558 -1.4584825 -1.0991473 -0.06582398 -4.6383367 ]] argmax tokens: I, name is j cross entropy loss: 4.051314830780029 ppl: 57.47297286987305 ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review?
06-20-2022 22:51:08
06-20-2022 22:51:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you very much for the fix! I think that we'll have to change the generation tests a bit for the other models as well <|||||>Great find @thomasw21 - thanks a lot for fixing it! Think the checkpoints were then also incorrectly loaded inside the metaseq codebase - could you maybe double check that the following script gives identical results between fairseq and transformers: https://huggingface.co/patrickvonplaten/opt_metaseq_125m -> The logits should match there (maybe an incorrect configuration in the metaseq model?) Also could you please update the slow model tests?<|||||>@thomasw21 I can update the tests and check the outputs if you want <|||||>@patrickvonplaten from what I understood logits comparison equality test were only done in 350m? @younesbelkada I actually manually converted `restored.pt` from https://huggingface.co/patrickvonplaten/opt_metaseq_125m using the updated conversion script. @ArthurZucker if you have the bandwidth, I'd appreciate it! Thanks!<|||||>@patrickvonplaten Yep I've looked at the changes with your comment, feel free to merge those : D<|||||>When releasing the patch can we merge at the same time #17437 ? The problem of NaNs for batched generation still persists with this fix, but is resolved with #17437 <|||||>BTW @patrickvonplaten do you have the expected values for the slow test? <|||||>> BTW @patrickvonplaten do you have the expected values for the slow test? Corrected the tests as well now<|||||>Good job @thomasw21 !
transformers
17,784
closed
Flax t5 Encoder
# What does this PR do? I noticed that there hasn't been a Flax implementation of T5EncoderModel, so I'm taking a stab at adding it. I didn't find an issue related to this. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? In terms of the tests, I wrote one under `tests/models/t5/test_modeling_flax_t5.py`, but it seems I'm having trouble running it on my machine so far, although I can import `FlaxT5EncoderModel` externally and use it successfully. I'm working on that, and just want to send the PR first to know if there's more I should add. ## Who can review? t5: @patrickvonplaten, @patil-suraj
06-20-2022 21:51:53
06-20-2022 21:51:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @crystina-z, The PR looks very nice already - think we're close to merging it! Great job so far :-) A FlaxT5Encoder model class is very useful (also to build Diffusion Pipelines like Imagen cc @borisdayma @patil-suraj ) Some tests are currently failing because the branch of the PR is not up to date with Transformers' main branch I think. Could you try merging the `main` branch into your PR, e.g.: ``` git pull upstream main ``` (if you called the remote `upstream`) <|||||>Let me know if you need help with anything, otherwise I think we can do a final review once the Circle CI is green :-)<|||||>Hi @patrickvonplaten , thanks for comment! Lemme try merge it with main first then. There are indeed some tests failure I'm not sure about the reason now, like `jax.errors.ConcretizationTypeError` that happens when running `tests/models/t5/test_modeling_flax_t5.py` in my local environment. might need some help later if merging with main not solves it and I can't still find the reason later :P<|||||>Hi @patil-suraj, @patrickvonplaten. I do have two questions regarding to the failing CI tests - 1. under the `run_tests_tf`, it shows `FAILED tests/models/mobilebert/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_resize_token_embeddings ` [here](https://app.circleci.com/pipelines/github/huggingface/transformers/42780/workflows/5d5e1e0f-0561-4bd2-a3ac-7dbc520a8d73/jobs/492835?invite=true#step-111-4402) without specific failing reason. however, when running the test locally, the test seems passed, tho with some warning cases. Wonder if you have any idea why it's happening? ``` $ pytest tests/models/mobilebert/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_resize_token_embeddings =============================================================== test session starts =============================================================== platform linux -- Python 3.7.13, pytest-7.1.2, pluggy-1.0.0 rootdir: /scratch/czhang/src/task-mrtydi/mrtydi-ood/transformers, configfile: setup.cfg plugins: mock-3.6.1, typeguard-2.12.1, xdist-2.5.0, hypothesis-6.47.3, dash-2.5.1, timeout-2.1.0, forked-1.4.0 collected 1 item tests/models/mobilebert/test_modeling_tf_mobilebert.py . [100%] ================================================================ warnings summary ================================================================= ../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/flatbuffers/compat.py:19 /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp ../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:36 /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST, ../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:37 /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 
'bilinear': pil_image.BILINEAR, ../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:38 /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC, ../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:39 /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. 'hamming': pil_image.HAMMING, ../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:40 /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. 'box': pil_image.BOX, ../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:41 /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. 'lanczos': pil_image.LANCZOS, src/transformers/modeling_tf_utils.py:575 /scratch/czhang/src/task-mrtydi/mrtydi-ood/transformers/src/transformers/modeling_tf_utils.py:575: DeprecationWarning: invalid escape sequence \d bit_search = re.search("[^\d](\d+)$", dtype.name) -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ==================================================== 1 passed, 8 warnings in 265.15s (0:04:25) ==================================================== ``` 2. under the `check_repository_consistency`, it says public documentation is needed for `FlaxMT5EncoderModel ` and `FlaxT5EncoderModel` [here](https://github.com/huggingface/transformers/runs/7014080791?check_suite_focus=true#step:9:50). While in the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs#generating-the-documentation), it says we only need to build documentations locally for inspection but not to commit them - > You only need to generate the documentation to inspect it locally (if you're planning changes and want to check how they look like before committing for instance). You don't have to commit the built documentation. I'm a bit confused about how should I deal with the documentation files. and if we'll need to update the documentation, do we just copy the contents under `~/tmp/test-build` to `docs/source/`? Thanks so much in advance!<|||||>Hey @crystina-z > 1. under the run_tests_tf, it shows FAILED You can ignore the `run_tests_tf` since it's unrelated, rebasing should fix this. > I'm a bit confused about how should I deal with the documentation files. and if we'll need to update the documentation, do we just copy the contents under ~/tmp/test-build to docs/source/? This means we need to add `FlaxT5EncoderModel ` and `FlaxMT5EncoderModel` in the [t5.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/t5.mdx) and [mt5.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/mt5.mdx) files respectively. <|||||>Hi @patrickvonplaten @patil-suraj All tests passed! 
Also updated all the comments above. Let me know if there's anything else to change!<|||||>That's great! Reviewing now :-)<|||||>Nice! Thank you all for reviewing as well! Just wondering when I could expect this PR to be merged? @patrickvonplaten @sanchit-gandhi <|||||>Thanks for being patient @crystina-z. Let's wait for @patil-suraj to give his final review, then we can merge!<|||||>Awesome, sounds good, thanks!<|||||>Good to merge I think - @patil-suraj feel free to leave comments if you don't like something, but overall it's good to merge :-) Great job @crystina-z !!!
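Once this PR is in a release, usage should look roughly like the sketch below; the `t5-small` checkpoint is a placeholder and the output attribute follows the usual Flax model conventions, so treat it as an assumption rather than tested documentation.

```python
from transformers import AutoTokenizer, FlaxT5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5EncoderModel.from_pretrained("t5-small")

inputs = tokenizer("Studies have shown that owning a dog is good for you", return_tensors="np")
outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, d_model)
```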
transformers
17,783
closed
Add missing type hints for QDQBertModel
# What does this PR do? Adding missing type hints for QDQBertModel, as referenced in [this issue](https://github.com/huggingface/transformers/issues/16059#issuecomment-1160830014). ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Rocketknight1 EDIT: Anyone feel free to review too 😄
06-20-2022 21:47:04
06-20-2022 21:47:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey, this looks really good! Would you be willing to add type hints to the other model classes in the same file too? (`QDQBertLMHeadModel` and the classes that start with `QDQBertFor...`)<|||||>@Rocketknight1 I've pushed a change that also adds type hints for `QDQBertLMHeadModel` and the classes that start with `QDQBertFor...`. I've also removed `config` which was unused parameter, but left the other classes untouched, as those seem a bit tricky to type hint for now<|||||>@Rocketknight1 I am getting the following error for tests ``` E ImportError: cannot import name 'login' from 'huggingface_hub' (/home/circleci/.local/lib/python3.7/site-packages/huggingface_hub/__init__.py) ``` Is there a dependency change for huggingface_hub perhaps? Also, I have removed the type hints for the config objects passed as `python utils/check_copies.py` fails due to line above `class QDQBertEmbeddings` ``` # Copied from transformers.models.bert.modeling_bert.BertEmbeddings with Bert -> QDQBert ```<|||||>@willtai I suspect something like that is the problem, yes. Can you try: 1) Pulling upstream commits from `transformers` to your repository's `main` branch (you can do this in the Github interface) 2) Pull those changes from Github to your local machine's `main` branch 3) Rebase your PR branch onto `main` locally 4) Force push (`push -f`) your PR branch to Github This should resolve those issues if they're caused by a recent code change <|||||>This is perfect, thank you! The rebase looks clean and all tests are passing, so I'm happy to merge it now.
transformers
17,782
closed
TF element-wise equals requires tf.equals() instead of ==
### System Info ```shell Running in a Google Colab with GPU backend. ``` ### Who can help? @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running the 'fill-mask' pipeline: ``` import tensorflow.compat.v1 as tf import tensorflow_datasets import transformers from transformers import pipeline, set_seed # Load tokenizer and model from presaved locations: tokenizer = transformers.GPT2Tokenizer.from_pretrained('/path/to/pretrained/GPT2/') tokenizer.add_special_tokens({'pad_token': '[PAD]'}) model = transformers.TFGPT2LMHeadModel.from_pretrained('/path/to/pretrained/GPT2/') # Create pipeline with this model and tokenizer: unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer) print(unmasker("Hello I'm a [MASK] model")) ``` This produces the following error: `{{function_node __wrapped__Where_device_/job:localhost/replica:0/task:0/device:GPU:0}} WhereOp: Unhandled input dimensions: 0 [Op:Where] ` My best guess is that this is because of line 56 of fill_mask.py: `masked_index = tf.where(input_ids == self.tokenizer.mask_token_id).numpy()` TF requires `tf.equals(tensor1, tensor2)` instead of `==`. This bug may occur elsewhere as well -- I haven't checked exhaustively. ### Expected behavior ```shell A dictionary object with the keys `sequence`, `score`, and `token` is printed. ```
06-20-2022 15:25:01
06-20-2022 15:25:01
Hi @ekayen πŸ‘‹ `self.tokenizer.mask_token_id` (on [L56 in fill_mask.py](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/fill_mask.py#L56)) should be a single integer, and `tf.where(tensor == integer)` works well in TF1 and TF2 (see the examples below) ```python import tensorflow as tf a = tf.range(10) b = tf.where(a == 5) print(b) ``` ```python import tensorflow.compat.v1 as tf a = tf.range(10) b = tf.where(a == 5) print(b) ``` From the error message, we can read "... Unhandled input dimensions: 0". Can you confirm that your tokenizer has `tokenizer.mask_token_id` set? That would be my biggest suspicion :) P.S.: 1 - `transformers` does not support TF1 behavior -- it is possible that things break, and we won't be able to provide help in that situation :( I'd highly recommend updating your project to TF2, if you have the resources to do so. 2 - Our bandwidth as maintainers of a large project is very limited. When opening an issue, if you can provide an example that can run on any machine (and that does not depend on local files), the odds of getting a useful response are much higher πŸ™Œ <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,781
closed
KeyError: 'src_texts' in train_distil_marian_enro.sh
### System Info ```shell 0% 0/6 [00:00<?, ?it/s][INFO|trainer_utils.py:686] 2022-06-20 14:53:01,661 >> The following columns in the training set don't have a corresponding argument in `MarianMTModel.forward` and have been ignored: id, src_texts, tgt_texts. If id, src_texts, tgt_texts are not expected by `MarianMTModel.forward`, you can safely ignore this message. Traceback (most recent call last): File "transformers/examples/legacy/seq2seq/finetune_trainer.py", line 375, in <module> main() File "transformers/examples/legacy/seq2seq/finetune_trainer.py", line 313, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1413, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1625, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 530, in __next__ data = self._next_data() File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 570, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch return self.collate_fn(data) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer_utils.py", line 696, in __call__ return self.data_collator(features) File "/content/transformers/examples/legacy/seq2seq/utils.py", line 298, in __call__ batch = self._encode(batch) File "/content/transformers/examples/legacy/seq2seq/utils.py", line 334, in _encode [x["src_texts"] for x in batch], File "/content/transformers/examples/legacy/seq2seq/utils.py", line 334, in <listcomp> [x["src_texts"] for x in batch], KeyError: 'src_texts' 0% 0/6 [00:00<?, ?it/s] ``` ``` ### Who can help? Model Marian: @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ./examples/legacy/seq2seq/train_distil_marian_enro.sh ### Expected behavior ```shell fine-tune the marian model ```
06-20-2022 14:59:23
06-20-2022 14:59:23
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Same issue<|||||>pip install transformers==4.15.0
transformers
17,780
closed
Use 5e-5 For BigBird PT/Flax equivalence tests
# What does this PR do? Use `5e-5` For BigBird PT/Flax equivalence tests to avoid flaky test failure. Also change the name `check_outputs` to `check_pt_flax_outputs` (similar to `check_pt_tf_outputs`) and update its logic similar to `check_pt_tf_outputs`.
06-20-2022 14:42:42
06-20-2022 14:42:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,779
closed
Add DPT Flax
# What does this PR do? I tried to implement DPT (Dense Prediction with Transformers) in Flax during my free time! 🚀 By the way, it is the first segmentation and depth estimation model implemented in Flax in the library! Nits/TODOs: - [x] Figure out how to properly call `BatchNorm` and `Dropout` inside a `Sequential` - [x] Deal correctly with `Sequential` layers - [x] Pass the equivalency tests - [ ] Write documentation - For now the docs are just copy/pasted Questions: - Why is the loss not implemented in `modeling_dpt.py`? I can probably help on that since I have already implemented the loss for a university project: https://github.com/antocad/FocusOnDepth/blob/master/FOD/Loss.py cc @NielsRogge @sanchit-gandhi @patil-suraj
06-20-2022 09:59:21
06-20-2022 09:59:21
All the keys match now, but the equivalency test does not pass at `1e-5`, only at `1e-4`<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17779). All of your documentation changes will be reflected on that endpoint.<|||||>Would be great to also incorporate the updates of #17731 <|||||>The Flax model finally predicts the correct depths for the cats (left is Flax and right is PyTorch)! ![Screenshot 2022-06-25 at 20 12 08](https://user-images.githubusercontent.com/49240599/175785580-bd9b3b25-cc90-4ad0-a262-fbf335e1351f.png) It appears that the transposed conv does not give the same result as PyTorch's implementation, which uses a gradient-based operation. I fixed it by creating a custom function based on this PR: https://github.com/google/jax/pull/5772. That PR does not seem likely to be merged soon, so we can probably go with this workaround until it lands in JAX. <|||||>As we discussed, it seems that setting `align_corners` to `False` for both models would not require lowering the tolerance in one of the cases, right?
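For readers unfamiliar with the `align_corners` point above, a small PyTorch-only sketch showing that the flag really changes the interpolation output, which is why both implementations need to agree on it (or expose it in the config); the tensor shape is arbitrary.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 3, 8, 8)

up_true = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
up_false = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)

# The two upsampling conventions disagree, most visibly near the borders of the feature map.
print((up_true - up_false).abs().max())
```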
transformers
17,778
closed
Dataset Format for training RAG on custom Dataset
### System Info ```shell - `transformers` version: 4.19.2 - Platform: Linux-4.14.219-164.354.amzn2.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.7.9 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.10.1+cu102 (True) - Tensorflow version (GPU?): 2.3.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce error - No format mentioned for data preparation for training RAG on custom dataset. [link to script](https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/finetune_rag.py) [broken link to format in which data needs to be prepared](https://github.com/huggingface/transformers/tree/main/examples/research_projects/rag#:~:text=Our%20finetuning%20logic%20is%20based%20on%20scripts%20from%20examples/seq2seq.%20We%20accept%20training%20data%20in%20the%20same%20format%20as%20specified%20there%20%2D%20we%20expect%20a%20directory%20consisting%20of%206%20text%20files%3A) ### Expected behavior ```shell https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/finetune_rag.py This is a script to fine tune rag on a custom dataset, however the link mentioned on github (format in which data needs to be prepared if training on custom dataset) is broken. Please let me know in what format do I have to prepare my training data if I want to train RAG on a custom dataset https://github.com/huggingface/transformers/tree/main/examples/research_projects/rag#:~:text=Our%20finetuning%20logic%20is%20based%20on%20scripts%20from%20examples/seq2seq.%20We%20accept%20training%20data%20in%20the%20same%20format%20as%20specified%20there%20%2D%20we%20expect%20a%20directory%20consisting%20of%206%20text%20files%3A ```
06-20-2022 05:35:34
06-20-2022 05:35:34
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
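For context on the data layout the question above asks about: the legacy `examples/seq2seq` convention that `finetune_rag.py` reuses expects a directory of six line-aligned text files (`train.source`, `train.target`, `val.source`, `val.target`, `test.source`, `test.target`), one example per line. The snippet below is a minimal sketch of building such a directory; the toy question/answer pairs and the directory name are made up for illustration.

```python
from pathlib import Path

# Minimal sketch of the expected layout: <split>.source / <split>.target,
# one example per line, source and target aligned by line number.
data_dir = Path("rag_finetune_data")
data_dir.mkdir(exist_ok=True)

splits = {
    "train": [("who wrote hamlet?", "william shakespeare")],
    "val": [("what is the capital of france?", "paris")],
    "test": [("which planet is the largest?", "jupiter")],
}
for split, pairs in splits.items():
    (data_dir / f"{split}.source").write_text("\n".join(q for q, _ in pairs) + "\n")
    (data_dir / f"{split}.target").write_text("\n".join(a for _, a in pairs) + "\n")
```

The resulting directory is then what gets passed to the fine-tuning script as its data directory argument (under the legacy convention the readme points to).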
transformers
17,777
closed
For model long-t5-tglobal-x, fix 'float' object cannot be interpreted as an integer
On line 180, `torch.tensor(-1.0, dtype=global_block_ids.dtype)` gives the error __TypeError: 'float' object cannot be interpreted as an integer__. This is because the dtype here is `int64`. For `dtype=int64`, this needs to simply be `-1`. This impacts the `long-t5-tglobal-x` model. It does not impact the `long-t5-local-x` version, which does not appear to call this line in the code. The torch version where I see this is 1.11.0+cu113. I'm not certain whether older or non-GPU versions of torch allowed this, but 1.11.0+cu113 does not. Note that torch does not complain when casting an int to a float, so it should be safe to change this to `-1` even if there are occasions where `global_block_ids.dtype` is a float. # What does this PR do? Fixes # (no issue # created). There is a simple error in the code where torch fails when trying to create a constant int64 tensor using `-1.0` instead of `-1`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? This model is new. I would suggest someone from the original upload team review this. Here are the first 3 in the file history: @stancld, @PhungVanDuy, @sgugger
06-19-2022 18:49:38
06-19-2022 18:49:38
_The documentation is not available anymore as the PR was closed or merged._<|||||>@patil-suraj could you maybe take a look here? :-) Also cc @stancld in case you're interested and have an idea of what the problem could be<|||||>Interesting. Looks like this is a change in Python, not torch. Ubuntu 22.04 uses Python 3.10.4, and this is fully broken for that version.<|||||>Great, looks good to me then as well!
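A minimal reproduction of the dtype point discussed in this PR; as noted in the comments, whether the float literal actually fails depends on the Python/torch combination, so the failing call is only shown commented out.

```python
import torch

# On the affected setup (Python 3.10, torch 1.11.0+cu113) the original line fails:
#   torch.tensor(-1.0, dtype=torch.int64)
#   -> TypeError: 'float' object cannot be interpreted as an integer

# The integer literal works everywhere, and ints also cast cleanly to float dtypes,
# so `-1` is the safe constant even if the target dtype ever happens to be a float:
print(torch.tensor(-1, dtype=torch.int64))    # tensor(-1)
print(torch.tensor(-1, dtype=torch.float32))  # tensor(-1.)
```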
transformers
17,776
closed
Nezha Pytorch implementation
# What does this PR do? This PR adds a pytorch implementation of the NEZHA model to transformers. [NEZHA](https://arxiv.org/abs/1909.00204) was introduced by Huawei Noah's Ark Lab in late 2019 and it is widely used in the Chinese NLP community. This implementation is based on the official pytorch implementation of NEZHA and the current BERT pytorch implementation . The model checkpoints are also from the [official implementation](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-TensorFlow). Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Since the model is quite similar to bert, maybe @LysandreJik?
06-19-2022 15:01:48
06-19-2022 15:01:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>ready for review! I'll upload the rest of the pre-trained models later today<|||||>addressed all the comments from @sgugger and uploaded the two remaining models. Ready for a final round of review.<|||||>Thanks again for your contribution!
transformers
17,775
closed
Translation troubleshooting 17459
# What does this PR do? <!-- --> <!-- Remove if not applicable --> Fixes # (issue) Translation in Italian of Troubleshooting 17459 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @mfumanelli
06-19-2022 10:51:13
06-19-2022 10:51:13
@mfumanelli Hi, I don't know why it contains commits from a previous pull request.<|||||>I will remake it because I made a mistake with the branch. @mfumanelli
transformers
17,774
closed
RHO loss
### Feature request https://github.com/oatml/rho-loss Abstract > Training on web-scale data can take months. But most computation and time is wasted on redundant and noisy points that are already learnt or not learnable. To accelerate training, we introduce Reducible Holdout Loss Selection (RHO-LOSS), a simple but principled technique which selects approximately those points for training that most reduce the model's generalization loss. As a result, RHO-LOSS mitigates the weaknesses of existing data selection methods: techniques from the optimization literature typically select 'hard' (e.g. high loss) points, but such points are often noisy (not learnable) or less task-relevant. Conversely, curriculum learning prioritizes 'easy' points, but such points need not be trained on once learned. In contrast, RHO-LOSS selects points that are learnable, worth learning, and not yet learnt. RHO-LOSS trains in far fewer steps than prior art, improves accuracy, and speeds up training on a wide range of datasets, hyperparameters, and architectures (MLPs, CNNs, and BERT). On the large web-scraped image dataset Clothing-1M, RHO-LOSS trains in 18x fewer steps and reaches 2% higher final accuracy than uniform data shuffling. ### Motivation It's always nice to speed up training :) ### Your contribution I am not sure if I am able to add this to transformers by myself, but I would be happy to give it a try with advice on how to design the API
06-19-2022 08:40:29
06-19-2022 08:40:29
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
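A rough sketch of the selection rule described in the abstract above, under the assumption that per-example "irreducible" holdout losses were precomputed once with a smaller model trained on held-out data. All names here are illustrative (a HF-style classification model is assumed), not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def rho_loss_select(model, batch, irreducible_losses, keep_frac=0.1):
    """Keep the fraction of a large candidate batch with the highest reducible holdout loss."""
    input_ids, attention_mask, labels = batch
    with torch.no_grad():
        logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
        train_losses = F.cross_entropy(logits, labels, reduction="none")
    # reducible holdout loss: current training loss minus the precomputed irreducible loss
    rho = train_losses - irreducible_losses
    k = max(1, int(keep_frac * rho.numel()))
    keep = torch.topk(rho, k).indices  # points that are learnable, worth learning, not yet learnt
    return input_ids[keep], attention_mask[keep], labels[keep]
```

Training would then run the usual forward/backward pass only on the selected subset of each candidate batch.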
transformers
17,773
closed
Converting a tensor to a Python boolean might cause the trace to be incorrect when converting gpt2 to onnx format
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Darwin-21.5.0-x86_64-i386-64bit - Python version: 3.7.9 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @patil-suraj @patrickvonplaten @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to convert transformers GPT2LMHeadModel to onnx format using `torch.onnx.export` I got the following warning: ``` /Users/Project/.venv/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py:797: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if batch_size <= 0: ``` I tried to print `batch_size` here, and I got `tensor(3)`(which should be `3` here?) ### Expected behavior ```shell No warnings when converting ```
06-19-2022 07:43:53
06-19-2022 07:43:53
@lewtun for ONNX here @HaoboGu before we can answer the issue could you please add a complete reproducible code snippet here?<|||||>@patrickvonplaten Sure ```python from transformers import GPT2Config from transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel import torch def convert_gpt2_model_to_onnx() -> str: """ convert pytorch model to onnx format """ # Load model and set the model to eval mode model: GPT2LMHeadModel = GPT2LMHeadModel.from_pretrained('sshleifer/tiny-gpt2') model.eval() # batch_size, input_ids_length and past_sequence_length are dynamic axes # We have to initialize a random input(the value doesn't matter) for the model, because the converting requires execution of the model # See: https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html batch_size = 3 input_ids_length = 2 past_sequence_length = 1 config: GPT2Config = model.config num_attention_heads = config.n_head hidden_size = config.n_embd num_layer = config.n_layer vocab_size = config.vocab_size config.is_decoder = True # past(`past_key_values` in model) is a list, its length is num_layer. # each element in the list is a tuple(key, value), and key/value's shape is past_shape, # aka [batch_size, n_heads, past_sequence_length, embd_size_each_head] past_shape = [batch_size, num_attention_heads, past_sequence_length, int(hidden_size/num_attention_heads)] past = [(torch.rand(past_shape, dtype=torch.float32, device='cpu'), torch.rand(past_shape, dtype=torch.float32, device='cpu')) for _ in range(num_layer)] # input_ids is a [batch_length, input_ids_length] tensor input_ids = torch.randint( low=0, high=vocab_size - 1, size=(batch_size, input_ids_length), dtype=torch.long, device='cpu', ) # attention_mask is a 0/1 tensor of [batch_size, past_sequence_length + input_ids_length] attention_mask = torch.ones( [batch_size, past_sequence_length + input_ids_length]).to(torch.long) # token_type_ids is not needed in our case, its size is [batch_size, input_ids_length] token_type_ids = torch.zeros([batch_size, input_ids_length]).to(torch.long) # position_ids, size is [batch_size, input_ids_length] position_ids = attention_mask.long().cumsum(-1) - 1 position_ids.masked_fill_(position_ids < 0, 0) position_ids = position_ids[:, past_sequence_length:].to(torch.long) # Run the model and get output from the model output = model(input_ids, past_key_values=past, attention_mask=attention_mask, position_ids=position_ids, return_dict=True, use_cache=True) # Set output names output_names = ['logits'] for i in range(num_layer): output_names.append("present_key" + str(i)) output_names.append("present_value" + str(i)) # Set input_names input_names = ['input_ids'] for i in range(num_layer): input_names.append("past_key" + str(i)) input_names.append("past_value" + str(i)) input_names += ['attention_mask', 'position_ids'] # Set dynamic axes dynamic_axes = {} dynamic_axes['input_ids'] = {0: 'batch_size', 1: 'input_ids_length'} dynamic_axes['attention_mask'] = {0: 'batch_size', 1: 'total_length'} dynamic_axes['position_ids'] = {0: 'batch_size', 1: 'input_ids_length'} dynamic_axes['logits'] = {0: 'batch_size', 1: 'input_ids_length'} for i in range(num_layer): dynamic_axes['past_key' + str(i)] = {0: 'batch_size', 2: 'past_sequence_length'} dynamic_axes['past_value' + str(i)] = {0: 'batch_size', 2: 'past_sequence_length'} dynamic_axes['present_key' + str(i)] = {0: 'batch_size', 2: 'total_length'} dynamic_axes['present_value' + str(i)] = {0: 'batch_size', 2: 'total_length'} # The first input is required, and other inputs can be 
passed to torch.onnx.export using dict inputs = (input_ids, { 'attention_mask': attention_mask, 'position_ids': position_ids, 'past_key_values': past, }) # Do export using torch.onnx.export exported_model = "converted_model.onnx" torch.onnx.export( model, args=inputs, f=exported_model, export_params=True, verbose=False, input_names=input_names, output_names=output_names, dynamic_axes=dynamic_axes, opset_version=11, ) return exported_model convert_gpt2_model_to_onnx() ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten any ideas?<|||||>Hi @HaoboGu thanks for sharing the code snippet! Just so I understand a bit better - is there a specific reason why you're trying to avoid the warnings? <|||||>@lewtun I just worry about the warning may lead to unpredictable results when I use the model.<|||||>Hey @HaoboGu, I don't think the warning would lead to unpredictable results (or can you clarify a bit more) :-) Can we maybe just leave it? <|||||>@patrickvonplaten yeah, no problem if it's expected
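To illustrate why the tracer emits that warning (and why it is usually harmless when the condition cannot change between valid inputs, as with the `if batch_size <= 0:` check discussed above), here is a tiny standalone example; it is not the GPT-2 code itself.

```python
import torch

def f(x):
    # Converting a tensor to a Python bool inside `if` is what triggers the TracerWarning:
    if x.sum() > 0:
        return x * 2
    return x - 1

traced = torch.jit.trace(f, torch.ones(2))   # warning: the branch is baked in as a constant
print(traced(-torch.ones(2)))                # tensor([-2., -2.]) -- still takes the "* 2" branch
```

The exported graph only "remembers" the branch taken for the example input; for a sanity check that can never take the other branch on valid inputs, the baked-in choice does not change the model's outputs.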
transformers
17,772
closed
[WIP] Adding Omnivore Model to HF
This PR adds the `Omnivore` model to HF. The model uses a single `SwinTransformer` backbone adapted for 3D and classifies `image`, `video` and `RGBD scene` inputs with the same backbone weights but a different head for each modality. **TODO and Problems**: - [ ] Solve the docstring test for the final classification model. Currently we only have a single `id2label` and `label2id`, but here we need one pair per modality (image, video and rgbd). I have implemented this as `<input_type>_id2label` and `<input_type>_label2id`, where `input_type` is one of image, video and rgbd. This breaks the docstring test. - [ ] The above also creates another problem: during final prediction (say, for images) `model.config.image_id2label[pred_id]` fails when `pred_id` is an integer and needs to be converted to a string, because when pretrained weights are loaded the config does not convert these mappings into `<int, string>` pairs. - [ ] I'm unsure how to write the feature extractor for this model, in particular how to load videos (I couldn't find something like PIL for video, and several additional transformations are used) and the rgbd part. @NielsRogge Can you please do this part πŸ™. Sorry for the trouble. - [ ] Finally, I would like a review of the naming of the final classification model. It is not strictly an image classification model, so I have named it OmnivoreForVisionClassification (it supports three vision modalities). We might need to create a separate task for it, since even the pipeline test breaks if it is kept under the image classification task. Apart from these points, @NielsRogge please review and suggest any other changes. πŸ™‚ @sayakpaul, you can start on the TF model from here; I believe that once the queries and TODOs above are resolved, adding the TF model should be more straightforward. The weights uploaded to my hub are correct for SwinT only; after the final changes I will port the rest of them.
06-19-2022 05:54:41
06-19-2022 05:54:41
@AnugunjNaman thanks for working on this. Couple of pointers from me: * I will work on a separate PR for the TF port for a cleaner separation. * > The uploaded weights at my hub are correct fot SwinT only. After final changes I will port rest of them. Shouldn't this be added to the Facebook organization by one of the HF team members? * > OmnivoreForVisionClassification I think it'd be fair to have a clear separation here for RGB images, RGB-D images, and videos even if the backbone is the same. We could likely add detailed comments / documentation to let the users know that the same backbone is being used but for API consistency we've developed separate classes. WDYT @NielsRogge? <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17772). All of your documentation changes will be reflected on that endpoint.<|||||>> Shouldn't this be added to the Facebook organization by one of the HF team members? This will be done once everything works. I wanted to test so I uploaded a smaller model on my hub. > * I will work on a separate PR for the TF port for a cleaner separation. I agree it would be good to have different PR. > I think it'd be fair to have a clear separation here for RGB images, RGB-D images, and videos even if the backbone is the same. We could likely add detailed comments / documentation to let the users know that the same backbone is being used but for API consistency we've developed separate classes. WDYT @NielsRogge? If we do this not sure how the training part goes, but yeah implementation will be smooth it that case :) <|||||>> If we do this not sure how the training part goes, but yeah implementation will be smooth it that case :) Good point. In that case, it might make sense to expose the class `OmnivoreForVisionClassification`. There's a trade-off here between API consistency and confusion. Let's see what Niels has to say. <|||||>Some first comments: * I like the name `OmnivoreForVisionClassification`. * Pinging @sgugger and @LysandreJik regarding the id2label question. So for context, Omnivore is a single model that is trained on 3 modalities at the same time (images, video and single-view 3D images). The model just takes tensors of shape `(batch_size, time, num_channels, height, width)` as input and returns logits of shape `(batch_size, num_labels)`. However, the model has 3 different classification heads for the 3 types of data, and one needs to indicate which modality is provided to the model during a forward pass in order for it to know which head to use. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@AnugunjNaman @NielsRogge how is the PR going? I am developing a video classification fine-tuning framework, would love to use this model if it gets merged into main!<|||||>I’m not sure when I will be able to finish it. My current job offer was rescinded so I’m looking for a new one. Probably when that gets sorted out so maybe a month or so. Sorry mate!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,771
closed
Added OPT to models exportable with ONNX
# What does this PR do? ```python # !python setup.py install ``` ```python # !pip install -e ".[dev]" ``` ```python # pip install onnxruntime ``` ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float32) # the fast tokenizer currently does not work correctly tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False) prompt = "Hello, I'm am conscious and" input_ids = tokenizer(prompt, return_tensors="pt").input_ids generated_ids = model.generate(input_ids) tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ``` ["Hello, I'm am conscious and I'm a bit of a noob. I'm looking"] ```python ``` ```python !python -m transformers.onnx --model=facebook/opt-350m onnx/opt-350m/ ``` Using framework PyTorch: 1.11.0+cu102 Overriding 1 configuration item(s) - use_cache -> False /root/transformers/src/transformers/models/opt/modeling_opt.py:513: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if input_shape[-1] > 1: /root/transformers/src/transformers/models/opt/modeling_opt.py:64: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. mask = torch.full((tgt_len, tgt_len), torch.tensor(float("-inf"))) /root/transformers/src/transformers/models/opt/modeling_opt.py:203: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /root/transformers/src/transformers/models/opt/modeling_opt.py:210: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /root/transformers/src/transformers/models/opt/modeling_opt.py:242: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): Validating ONNX model... 
-[βœ“] ONNX model output names match reference model ({'last_hidden_state'}) - Validating ONNX Model output "last_hidden_state": -[βœ“] (2, 8, 512) matches (2, 8, 512) -[x] values not close enough (atol: 1e-05) Traceback (most recent call last): File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/root/transformers/src/transformers/onnx/__main__.py", line 107, in <module> main() File "/root/transformers/src/transformers/onnx/__main__.py", line 100, in main validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol) File "/root/transformers/src/transformers/onnx/convert.py", line 441, in validate_model_outputs "Outputs values doesn't match between reference model and ONNX exported model: " ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 4.57763671875e-05 ```python from transformers.models.opt import OPTConfig, OPTOnnxConfig config = OPTConfig() onnx_config = OPTOnnxConfig(config) output_keys = list(onnx_config.outputs.keys()) print(output_keys) ``` ['last_hidden_state'] ```python from onnxruntime import InferenceSession ``` ```python import onnxruntime as ort from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") ort_session = ort.InferenceSession("onnx/opt-350m/model.onnx") inputs = tokenizer("Using OPT in ONNX!", return_tensors="np") outputs = ort_session.run(["last_hidden_state"], dict(inputs)) print(outputs) ``` [array([[[ -1.3539658 , -0.25787818, -0.3093884 , ..., -1.311745 , 0.26136506, -1.4270447 ], [ -0.51148593, -5.1948047 , 3.1015701 , ..., 1.9010596 , -2.0694203 , 0.96382034], [ -1.4861462 , -4.3613157 , -2.8032331 , ..., -0.65176994, -6.0503354 , -0.08128738], ..., [ -2.290329 , -9.395232 , 3.9363523 , ..., -0.5923378 , -3.7993686 , 0.13608676], [ -4.7750826 , -12.562761 , 1.9932727 , ..., -4.361832 , -2.3446696 , 1.2666583 ], [ -3.7153127 , -6.4608436 , -3.683312 , ..., -2.824885 , -0.75467056, -1.9532645 ]]], dtype=float32)] ![image](https://user-images.githubusercontent.com/6279035/174467137-4c0f151e-16b3-436b-b412-2a39b7f361af.png) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? 
Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-19-2022 05:16:20
06-19-2022 05:16:20
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17771). All of your documentation changes will be reflected on that endpoint.<|||||>Great work! It works for most use cases, but I did discover that `causal-lm-with-past` isn't working. `python -m transformers.onnx --model=facebook/opt-350m --feature=causal-lm-with-past onnx/opt-350m/` yields ``` 2022-07-04 07:44:09.810891: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected Using framework PyTorch: 1.11.0+cu113 Overriding 1 configuration item(s) - use_cache -> True Traceback (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 107, in <module> main() File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 94, in main args.output, File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 335, in export return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device) File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 142, in export_pytorch model_inputs = config.generate_dummy_inputs(preprocessor, framework=TensorType.PYTORCH) File "/usr/local/lib/python3.7/dist-packages/transformers/models/opt/configuration_opt.py", line 213, in generate_dummy_inputs self.num_attention_heads, File "/usr/local/lib/python3.7/dist-packages/transformers/models/opt/configuration_opt.py", line 184, in num_attention_heads return self._config.n_head File "/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py", line 253, in __getattribute__ return super().__getattribute__(key) AttributeError: 'OPTConfig' object has no attribute 'n_head' ``` That's because the naming is a bit different for the OPT config, but it can be fixed simply by replacing: ``` @property def num_layers(self) -> int: return self._config.n_layer @property def num_attention_heads(self) -> int: return self._config.n_head ``` with ``` @property def num_layers(self) -> int: return self._config.num_hidden_layers @property def num_attention_heads(self) -> int: return self._config.num_attention_heads ``` in the configuration_opt.py file. But when I tried that, I ran into another problem: ``` python -m transformers.onnx --model=facebook/opt-350m --feature=causal-lm-with-past onnx/opt-350m/ 2022-06-17 20:59:09.712912: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected Error using standard tokenizer settings, testing specifying use_fast=False Using framework PyTorch: 1.11.0+cu113 Overriding 1 configuration item(s) - use_cache -> True /usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:513: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if input_shape[-1] > 1: /usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:64: TracerWarning: torch.tensor results are registered as constants in the trace. 
You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. mask = torch.full((tgt_len, tgt_len), torch.tensor(float("-inf"))) /usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:69: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if past_key_values_length > 0: /usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:203: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:210: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:242: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): Traceback (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 112, in <module> main() File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 99, in main args.output, File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 335, in export return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device) File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 198, in export_pytorch opset_version=opset, File "/usr/local/lib/python3.7/dist-packages/torch/onnx/__init__.py", line 309, in export export_modules_as_functions) File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 122, in export custom_opsets=custom_opsets, export_modules_as_functions=export_modules_as_functions) File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 724, in _export dynamic_axes=dynamic_axes) File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 507, in _model_to_graph module=module) File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 230, in _optimize_graph torch._C._jit_pass_onnx_set_dynamic_input_shape(graph, dynamic_axes, input_names) RuntimeError: Dynamic shape axis should be no more than the shape dimension for past_sequence + sequence ``` And that's where I got stuck, and I don't really know how to solve it... 
Another problem, that hopefully will be fixed soon, is that the fast tokenizer doesn't works, and local models tries to use the fast tokenizer when exporting to onnx: ``` from transformers import AutoTokenizer, AutoModelForCausalLM model_name="facebook/opt-125m" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) pt_model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer.save_pretrained("local-pt-checkpoint") pt_model.save_pretrained("local-pt-checkpoint") ``` then ` python -m transformers.onnx --model=local-pt-checkpoint --preprocessor=tokenizer onnx/opt-350m/ ` Results in: ``` 2022-07-04 08:02:21.613420: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected Traceback (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 107, in <module> main() File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 64, in main preprocessor = AutoTokenizer.from_pretrained(args.model) File "/usr/local/lib/python3.7/dist-packages/transformers/models/auto/tokenization_auto.py", line 580, in from_pretrained return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py", line 1810, in from_pretrained **kwargs, File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py", line 1948, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/tokenization_gpt2_fast.py", line 152, in __init__ "Currenty GPT2's fast tokenizer does NOT support adding a BOS token." ValueError: Currenty GPT2's fast tokenizer does NOT support adding a BOS token.Instead you should use GPT2's slow tokenizer class `GPT2Tokenizer` as follows: `GPT2Tokenizer.from_pretrained('local-pt-checkpoint')` or `AutoTokenizer.from_pretrained('local-pt-checkpoint', use_fast=False)` This issue will be fixed soon, see: https://github.com/huggingface/tokenizers/pull/1005. so that the fast tokenizer works correctly. ``` Just to try it out, I added this to the src/transformers/onnx__main__.py file (row 63): ``` elif args.preprocessor == "tokenizer": try: preprocessor = AutoTokenizer.from_pretrained(args.model) except: logger.info(f"Error using standard tokenizer settings, testing specifying use_fast=False") preprocessor = AutoTokenizer.from_pretrained(args.model, use_fast=False) ``` Which fixed the issue, but it's not something we want there and, I just wanted to see if there was any other issues as well. <|||||>@lewtun @rushic24 Seems like the problem has to do with OPT handling past_key_values differently overall, I can't even make past key values work with the normal pytorch model. I made a notebook to demonstrate, that passing past_key_values works for gpt2, but when doing the exact same thing with opt, it raises an error. 
Here is the notebook: https://colab.research.google.com/drive/14A-hm-aFzW64ZIxghJDVJaU6a4ZLGCBi?usp=sharing Here is the error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) [<ipython-input-17-d5f8e80cb463>](https://localhost:8080/#) in <module>() 9 print() 10 print(gpt_inputs["input_ids"]) ---> 11 gpt_outputs = model(gpt_inputs.input_ids,return_dict=True,past_key_values=gpt_outputs.past_key_values) 12 13 print(gpt_outputs.logits.shape) 4 frames [/usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py](https://localhost:8080/#) in _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length) 527 expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) 528 combined_attention_mask = ( --> 529 expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask 530 ) 531 RuntimeError: The size of tensor a (5) must match the size of tensor b (10) at non-singleton dimension 3 ``` If anyone has any ideas on how to fix it, that would be appreciated. I think I will be able to look into this more deeply next week.<|||||>I had a deeper look into the past_key_values error, but am really struggling to understand code in the modeling_opt.py file. The function that raises the error looks like this: ``` # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length): # create causal mask # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] combined_attention_mask = None if input_shape[-1] > 1: combined_attention_mask = _make_causal_mask( input_shape, inputs_embeds.dtype, past_key_values_length=past_key_values_length ).to(inputs_embeds.device) if attention_mask is not None: # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) combined_attention_mask = ( expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask ) return combined_attention_mask ``` An what causes the error is this line: ``` combined_attention_mask = ( expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask ) ``` Since expanded_attn_mask and combined_attention_mask are different shapes, and can't be added together. 
I tried printing both out before the error is rasied, and expanded_attn_mask looks like this: ``` tensor([[[[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]]]]) ``` and combined_attention_mask looks like this: ``` tensor([[[[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, -3.4028e+38, -3.4028e+38, -3.4028e+38, -3.4028e+38], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, -3.4028e+38, -3.4028e+38, -3.4028e+38], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, -3.4028e+38, -3.4028e+38], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, -3.4028e+38], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00]]]]) ``` I got stuck here, not really sure what's even going on...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Gently pinging @younesbelkada who worked on the OPT port and may be able to shed some insight on why the generations don't work with past key-value pairs<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @ViktorThink @lewtun sorry for the delay, taking a look right now! Pinging also @ArthurZucker here<|||||>Hey all! Thanks for your awesome work! Regarding the `past_key_value`, given that the integration tests worked properly, I am not sure why it doesn't work but will have a look to see what is wrong. <|||||>Hey @rushic24 @ViktorThink I believe several fixes have landed for the OPT models, so would you like to revisit this PR now? In particular, can you test if the `causal-lm-with-past` feature is now working as expected? If you want a fast way to test this, you can use `optimum` as follows (pointing to a local `model.onnx` file instead of a Hub checkpoint): https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForCausalLM.forward.example<|||||>@lewtun I did a very simple test with past key values in this colab: https://colab.research.google.com/drive/1UO7uioZZs2Gu6nroQRgPilSpYQhNLiSZ?usp=sharing I didn't try to export the model, but since past doesn't seem work in pytorch format, exporting it to Onnx shouldn't be possible. It works for GPT models, but not for OPT. The link you sent leads to a 404 error.<|||||>> @lewtun I did a very simple test with past key values in this colab: https://colab.research.google.com/drive/1UO7uioZZs2Gu6nroQRgPilSpYQhNLiSZ?usp=sharing > > I didn't try to export the model, but since past doesn't seem work in pytorch format, exporting it to Onnx shouldn't be possible. > > It works for GPT models, but not for OPT. The link you sent leads to a 404 error. Thanks for sharing a reproducible example @ViktorThink - this indeed looks like a bug! 
Pinging @younesbelkada to have a look since he implemented this model :)<|||||>I'll have a look when I can! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey, there doesn't seem to be a bug with OPT's `past_key_value` scheme as the following works : ```python import torch from transformers import AutoTokenizer, OPTModel, set_seed tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b") model = OPTModel.from_pretrained("facebook/opt-1.3b") inputs = tokenizer("No I'm not missing the ", return_tensors="pt") input_ids = inputs.input_ids attention_mask = inputs.attention_mask with torch.no_grad(): model.config.use_cache = False set_seed(0) output = model(input_ids, attention_mask = attention_mask, use_cache =False) print(output.last_hidden_state[:,-1,:]) model.config.use_cache = True output_1 = model(input_ids[:,:-1], use_cache = True, attention_mask = attention_mask[:,:-1]) pkv = output_1.past_key_values output_2 = model(input_ids[:,-1:], past_key_values = pkv , attention_mask = attention_mask, use_cache = True) print(output_2.last_hidden_state[:,-1,:]) torch.testing.assert_allclose(output.logits[:,-1,], output_2.logits[:,-1,:], rtol = 1e-4, atol = 1e-4) ``` This is the expected format as inside the `generate` function, we are passing only the last inputs, while the full attention mask was already created. The only issue I see here is consistency : the behaviours are different for `gpt2` and `opt`. πŸ˜… Moreover, the default attention mask created when the `past_key_values` are given also seem wrong : ```python output_2 = model(input_ids[:,-1:], past_key_values = pkv , use_cache = True) ... ValueError: Attention mask should be of size (1, 1, 0, 7), but is torch.Size([1, 1, 1, 1]) ``` Should probably work, as the length of the previous sequence is given. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,770
closed
Flax implementation of DPT
# What does this PR do? I tried to implement DPT (Dense Prediction Transformer) in Flax during my free time! πŸš€ By the way, it is the first segmentation and depth estimation model implemented in Flax! Nits/TODOs: - [x] Figure out how to properly call `BatchNorm` and `Dropout` inside a `Sequential` - [ ] Pass the equivalency tests - [ ] Write documentation - For now the docs are just copy/pasted Questions: - Why is the loss not implemented in `modeling_dpt.py`? I can probably help with that, since I have already implemented the loss for a university project: https://github.com/antocad/FocusOnDepth/blob/master/FOD/Loss.py cc @NielsRogge @sanchit-gandhi @patil-suraj
06-18-2022 20:39:38
06-18-2022 20:39:38
_The documentation is not available anymore as the PR was closed or merged._
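Regarding the first TODO in this PR (calling `BatchNorm` and `Dropout` inside a `Sequential`): a common workaround in Flax is to skip `nn.Sequential` and call the layers explicitly, threading the mode flags through `__call__`, since `Sequential` cannot forward `deterministic`/`use_running_average`. A minimal sketch with arbitrary layer sizes, not the DPT code:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class ConvBnDrop(nn.Module):
    features: int = 32

    @nn.compact
    def __call__(self, x, deterministic: bool = True):
        x = nn.Conv(self.features, kernel_size=(3, 3))(x)
        # BatchNorm/Dropout need mode flags, so they are called explicitly here
        x = nn.BatchNorm(use_running_average=deterministic)(x)
        x = nn.Dropout(rate=0.1)(x, deterministic=deterministic)
        return nn.relu(x)

module = ConvBnDrop()
x = jnp.ones((1, 8, 8, 3))
variables = module.init(jax.random.PRNGKey(0), x)
y = module.apply(variables, x, deterministic=True)
print(y.shape)  # (1, 8, 8, 32)
```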
transformers
17,769
closed
Improve error message Union not allowed
I am working with a lot of custom Dataclasses inside the `HfArgumentParser`, and while my Python code was technically correct (using `Union`), I did get the brief error message _Only `Union[X, NoneType]` (i.e., `Optional[X]`) is allowed for `Union`_. From the error message it was not clear what triggered it, and it took me a while to figure out why I could not use Union. My assumption now is that we cannot use Union due to limitations of `argparse` (I am not sure yet how I can allow for floats and ints though). This PR simply clarifies the error message a bit and gives the field name of the offending item. Minimal test case: ```python from typing import Union from dataclasses import dataclass, field from transformers import HfArgumentParser @dataclass class OtherArguments: validation_size: Union[float, int] = field( default=0.2 # might be a float for percentage of training set or int for absolute split ) if __name__ == '__main__': dataclass_tester = OtherArguments(0.5) parser = HfArgumentParser((OtherArguments, )) oargs = parser.parse_args_into_dataclasses() ``` EDIT: my current solution to allow for floats and ints is below, but the PR is still useful in itself I think. ```python from argparse import ArgumentTypeError from typing import Union from dataclasses import dataclass, field from transformers import HfArgumentParser def float_or_int(arg: str): # I am aware that this is very naive (e.g. scientific notations), but it works for my purposes. # Other suggestions welcome though likely_float = "." in arg try: arg = float(arg) except ValueError: raise ArgumentTypeError(f"{arg} is not a float-able input") if not likely_float: arg = int(arg) return arg @dataclass class OtherArguments: validation_size: float_or_int = field( default=0.2, metadata={"help": "If a validation set is not present in your dataset, it will be created automatically from" " the training set. You can set the ratio train/valid here (float) or an exact number of" " samples that you wish to include in the validation set (int)."} ) if __name__ == '__main__': parser = HfArgumentParser((OtherArguments, )) oargs = parser.parse_args_into_dataclasses()[0] print(oargs.validation_size) print(type(oargs.validation_size)) ``` @sgugger
06-18-2022 12:13:56
06-18-2022 12:13:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,768
closed
Translation italian: multilingual.mdx
# What does this PR do? * added multilingual.mdx * updated _toctree.yml See issue: [#17459](https://github.com/huggingface/transformers/issues/17459) ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @omarespejel @sgugger
06-18-2022 07:28:26
06-18-2022 07:28:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @nickprock! I just had a chance to have a look at your PR, can I ask you if you could fix a couple of things? - "Tuttavia non tutti gli usi dei modelli multilingua sono diversi" this sentence is a bit difficult to understand written like this, I would try something like: "Non tutti gli utilizzi dei modelli multilingue sono perΓ² diversi" - I see that sometimes you use 'multilingua' and sometimes 'multilingue' maybe I would standardise it with 'multilingue', what do you think? - "non le utilizzano" -> "non li utilizzano" - "del input_ids" -> "dell'input_ids" - "perchΓ¨" -> "perchΓ©" - "Questo tensorre dovrebbe" -> "Questo tensore dovrebbe" - "identificare ul linguaggio" -> "identificare il linguaggio" - I would translate also these parts: (Many-to-many multilingual machine translation, 50 languages), (Many-to-one multilingual machine translation, 50 languages), ... - "Applica il tokenizer sul testo" -> "Applica il tokenizer al testo" - "MBart fforza" -> "MBart forza" - "nel target language" -> "nella lingua target" or "nella lingua obiettivo" Thanks!! πŸŽ‰<|||||>Thanks @mfumanelli I will try ti fix it tomorrow.<|||||>Hi @mfumanelli, how would you translate "Masked Language Modeling"? I would leave it in English, "Modello di linguaggio mascherato" doesn't sounds good for me. Thanks<|||||>Yes @nickprock, maybe you can write only the first time you mention it "Modello di linguaggio mascherato (Masked Language Model, in inglese)", and from then on call it by its English name. What do you think?<|||||>I agree<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I fixed and I'm waiting for the review<|||||>cc @omarespejel
transformers
17,767
closed
Snacky Brain Bites for HF Transformers
### Feature request Nugget Republic has these very cool nuggets based on the HF course. I would like to submit a PR to integrate these nuggets into either the Readme or into the official documentation if allowed. Can anyone point me to the right place or person? Here are a few examples of said nuggets (4 / 11 nuggets): https://app.flexudy.com/story?mId=W7YKXTS0&dId=FMCSLFZH https://app.flexudy.com/story?mId=W7YKXTS0&dId=X1PL9FZG https://app.flexudy.com/story?mId=W7YKXTS0&dId=AFG8XX5S https://app.flexudy.com/story?mId=W7YKXTS0&dId=UAYHMCSL ### Motivation Simple. To make HF Transformers even more accessible to the general public. ### Your contribution Sure thing!
06-18-2022 05:18:27
06-18-2022 05:18:27
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,766
closed
How to checkpoint TFAutoModelForSequenceClassification every k batches
I am following the Huggingface Tensorflow Notebook to [train a BERT model on MNLI](https://github.com/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb) (the task itself is irrelevant). I am interested in checkpointing models locally every k batches, so that I end with 20 or 30 intermediate checkpoints over the course of training. The notebook uses model.fit() to train the model, but my understanding is that using a [callback](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint) to save intermediate checkpoints saves a SavedModel file. I am currently not saving intermediate checkpoints, only the final model, and use model.save_pretrained(ckpt). However, I now want the model to be checkpointed as a part of training. If I try loading a checkpoint directory in the SavedModel format using `model = tf.keras.models.load_model()`, the expected input format is not compatible with how I feed in inputs for `model = TFAutoModelForSequenceClassification.from_pretrained()`. I get the following message (32 is batch size, I think 109 is the input token length for the longest example in the batch). <img width="686" alt="Screen Shot 2022-06-17 at 7 20 14 PM" src="https://user-images.githubusercontent.com/60128552/174418985-4331fca5-4b1b-4486-90f2-4f98b98a35d1.png"> My question is how to checkpoint a `TFAutoModelForSequenceClassification` model with a specified frequency (every k batches). Either I have to change how I am feeding the input data into `model = tf.keras.models.load_model()` or change how I am saving the intermediate checkpoints, but I am having trouble figuring out what to do and how to proceed. Note: I am not interested in pushing models to the hub, only saving models locally.
06-18-2022 02:27:03
06-18-2022 02:27:03
Hey @preethiseshadri518 πŸ‘‹ We have an outstanding set of issues related to the `SavedModel` format, resulting in the errors you see. There are ways to work around it, by manually specifying portions of the graph at save time -- you can find them if you search in closed issues here :) A much simpler workaround, if it suits your problem, is to store/load the weights with [`save_weights`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#save_weights) and [`load_weights`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#load_weights), which is what our API uses. Let us know if it sorted your problem!<|||||>Hi Joao! Yes, this solves the issue. If I use `save_weights` and save them in the h5 format, then I can use `TFAutoModelForSequenceClassification.from_pretrained()`. Thanks for your comment!
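Building on the reply above, one way to checkpoint every k batches without going through the `SavedModel` format is a small custom callback that calls `save_pretrained` (or `save_weights`) on the wrapped model. The directory layout and the value of `k` below are only examples.

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

class SaveEveryKBatches(tf.keras.callbacks.Callback):
    def __init__(self, output_dir, k=1000):
        super().__init__()
        self.output_dir = output_dir
        self.k = k
        self.seen = 0

    def on_train_batch_end(self, batch, logs=None):
        self.seen += 1
        if self.seen % self.k == 0:
            # self.model is the TF*ForSequenceClassification instance, so the HF saving
            # methods are available; the checkpoint reloads with from_pretrained
            self.model.save_pretrained(f"{self.output_dir}/step-{self.seen}")

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)
# model.compile(optimizer="adam")
# model.fit(tf_train_dataset, epochs=2, callbacks=[SaveEveryKBatches("checkpoints", k=1000)])
```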
transformers
17,765
closed
Enable torchdynamo with torch_tensorrt(fx path)
# What does this PR do? Adding support for TorchDynamo with torch_tensorrt (fx2trt module). Detailed context is available at #17724. This diff adds an extra inference backend on top of #17308. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## To reproduce and set up the environment ``` # install torch-nightly conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch-nightly # install functorch (and reinstall after `git pull` later if need to sync up) git clone https://github.com/pytorch/functorch cd functorch rm -rf build pip install -e .[aot] cd .. git clone https://github.com/pytorch/torchdynamo cd torchdynamo pip install -r requirements.txt python setup.py develop # install TensorRT pip install nvidia-pyindex pip install nvidia-tensorrt==8.2.4.2 # install torch_tensorrt (fx path) cd .. git clone https://github.com/pytorch/TensorRT.git cd TensorRT/py python setup.py install --fx-only ``` cc HF @stas00 cc Meta @yinghai @Chillee cc NV @ncomly-nvidia @narendasan
06-18-2022 00:36:11
06-18-2022 00:36:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @stas00, just a friendly ping. I updated the installation part and it will be easy to repro if needed.<|||||>So I followed your instructions except I used the .deb package installer. (Oh, and please link to https://docs.nvidia.com/deeplearning/tensorrt/archives/index.html so that the user will know how to install tensorrt.) Why do I get: ``` Traceback (most recent call last): File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/optimizations/backends.py", line 45, in inner return fn(model, **kwargs) File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/optimizations/backends.py", line 313, in fx2trt from torch_tensorrt.fx.fx2trt import InputTensorSpec ModuleNotFoundError: No module named 'torch_tensorrt' ``` Is it the module from `tensorrt-8.2.5.1-cp38-none-linux_x86_64.whl`? Oh, I see it failed to build: ``` /hf/00pytorch/TensorRT/py [pytorch/TensorRT|master]> python setup.py install --fx-only Could not find bazel in PATH ``` Installed `bazel` and it still fails: ``` pip install bazel Collecting bazel Downloading bazel-0.0.0.20200723.tar.gz (1.4 kB) Building wheels for collected packages: bazel Building wheel for bazel (setup.py) ... done Created wheel for bazel: filename=bazel-0.0.0.20200723-py3-none-any.whl size=1708 sha256=518429e9ce158eb7e4ffc2cefa782eb7935d39d317d67801c5ae9b7346af0500 Stored in directory: /home/stas/.cache/pip/wheels/9b/80/e4/8d16b3eeeda264ac8105dd7fa29a124431113b2f1f5dd703bc Successfully built bazel Installing collected packages: bazel Successfully installed bazel-0.0.0.20200723 (py38-pt112) /hf/00pytorch/TensorRT/py [pytorch/TensorRT|master]> python setup.py install --fx-only Could not find bazel in PATH ``` So it's not a python package that it wants but a system-wide `bazel`? There is no apt package - probably need to add a new apt repo? This doc appears to be really outdated: https://docs.bazel.build/versions/main/install-ubuntu.html In any case this obviously requires explicit instructions. I will wait for your instructions before proceeding. <|||||>Thanks for your time and efforts @stas00 ! 1. Yes, TRT does seem to give new users some trouble the first time they try to install it. I just found a way to install the Python version of TRT so you do not need to download the TRT tarball and unzip everything (this Python installation will install all the dependent libs, like the TensorRT lib and the cuDNN lib). I added these instructions to our doc as a PR. https://github.com/pytorch/TensorRT/pull/1145/ ``` $ pip3 install nvidia-pyindex $ pip3 install nvidia-tensorrt==8.2.4.2 ``` 2. I have a PR to **disable** the bazel check https://github.com/pytorch/TensorRT/pull/1147. (merged) But the bazel installation issue is a bit weird. I am on CentOS and a conda environment. Here is the command: `conda install -c conda-forge bazel`. It looks like your bazel installation location is not added to $PATH, but `which bazel` can help check. Now, with my diff [1147](https://github.com/pytorch/TensorRT/pull/1147) (merged), we should not need bazel. Below are the complete instructions for installing TRT, PyTorch, and torch_tensorrt.fx, which I just verified work. 
``` $ conda create --name python_env python=3.8 $ conda activate python_env # Recommend to install PyTorch 1.12 and later $ conda install pytorch torchvision torchtext cudatoolkit=11.3 -c pytorch-nightly # Install TensorRT python package $ pip3 install nvidia-pyindex $ pip3 install nvidia-tensorrt==8.2.4.2 $ git clone https://github.com/pytorch/TensorRT.git $ cd TensorRT/py && python setup.py install --fx-only && cd .. # check torch_tensorrt.fx is installed $ python -c "import torch_tensorrt.fx" ``` Hope it solves your problem. <|||||>`conda install -c conda-forge bazel` did the trick, The same with pip was giving nothing with `which bazel` - not a PATH issue, but a package issue I think, but probably related --------------- ``` $ pip3 install nvidia-pyindex $ pip3 install nvidia-tensorrt==8.2.4.2 ``` That did the trick. The tests have run successfully. so let's update the OP with the above 2 fixes.<|||||>ah, one more user-facing documentation nit - if you want users to use your magic code you will want to provide some enticement. A small benchmark table that shows what these features do usually goes a long way to get a user excited to try them. So this is something else to consider. It's not a show stopper, but as you can see if the docs aren't added right away they never get added, so it's best to do it in one go. It's still a recommendation and I'm fine merging it as is, it's just not going to be used much w/o enticing docs. <|||||>> ah, one more user-facing documentation nit - if you want users to use your magic code you will want to provide some enticement. A small benchmark table that shows what these features do usually goes a long way to get a user excited to try them. So this is something else to consider. It's not a show stopper, but as you can see if the docs aren't added right away they never get added, so it's best to do it in one go. It's still a recommendation and I'm fine merging it as is, it's just not going to be used much w/o enticing docs. I will try to add the doc there. But it is better to have @anijain2305 to include the AOT part.:-)<|||||>> I will try to add the doc there. But it is better to have @anijain2305 to include the AOT part.:-) Yeah, I was hoping that you'd only need to add the incremental part relevant for this PR.<|||||>re: CI - yes and it's complicated basically the live CI that you see reporting in this PR runs only CPU tests since CircleCI doesn't have gpus. then we have another set of CI workflows that runs on our machine via github actions and that's where we test all the complex/slow cases. And yes, I completely forgot that part of this PR we need to setup our CI to install all these packages as well so that these tests will be run. So once we polished this let's not forget that part. We will have to run all those instructions on our pt-nightly docker image - but actually there is a problem with this idea - how will the docker builder be able to download tensorRT packages if they require an NVIDIA user account?<|||||>re: CI Actually, circleCI has gpu resource to use(V100, T4, P4). I just added to our project :-) https://github.com/pytorch/TensorRT/pull/1137 These 2 commands are our saver ``` $ pip3 install nvidia-pyindex $ pip3 install nvidia-tensorrt==8.2.4.2 ``` Do you think we need to have @require_torchtensorrt.fx ? So it will help us to check if torch_tensorrt.fx is installed in the test?<|||||> > Actually, circleCI has gpu resource to use(V100, T4, P4). 
I just added to our project :-) [pytorch/TensorRT#1137](https://github.com/pytorch/TensorRT/pull/1137) That's great to know - thank you very much - I will pass this info on > These 2 commands are our saver > > ``` > $ pip3 install nvidia-pyindex > $ pip3 install nvidia-tensorrt==8.2.4.2 > ``` Ah, right! so no need for nvidia user account! super - let's use that in the instructions then. > Do you think we need to have @require_torchtensorrt.fx ? So it will help us to check if torch_tensorrt.fx is installed in the test? Absolutely, yes!<|||||>@stas00 , just wondering if the circleci is flaky? Some tests errors are not related. For ex. run_example_torch, check_code_quanlity<|||||>It appears that the CI is very broken at the moment, I asked and will know more tomorrow morning. Thank you for the heads up, @frank-wei - it doesn't look like any of the failures are related to your work. Especially since the live CI won't run any of your tests.<|||||>ok, so for the quality one - please rebase this PR on main. Thank you. The other issue I don't have an answer for yet. **update: I rebased - let's see with the update.**<|||||>ok, so to fix `check_code_quality` you need to run `make style` and push after rebasing most of the CI failures are now coming from this PR: ``` ==================================== ERRORS ==================================== ______________ ERROR collecting tests/deepspeed/test_deepspeed.py ______________ ImportError while importing test module '/home/circleci/transformers/tests/deepspeed/test_deepspeed.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/local/lib/python3.7/importlib/__init__.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/deepspeed/test_deepspeed.py:26: in <module> from tests.trainer.test_trainer import TrainerIntegrationCommon # noqa tests/trainer/test_trainer.py:586: in <module> class TrainerIntegrationTest(TestCasePlus, TrainerIntegrationCommon): tests/trainer/test_trainer.py:1803: in TrainerIntegrationTest @require_torch_tensorrt_fx src/transformers/testing_utils.py:499: in require_torch_tensorrt_fx return unittest.skipUnless(is_torch_tensorrt_fx_available(), "test requires Torch-TensorRT FX")(test_case) src/transformers/utils/import_utils.py:421: in is_torch_tensorrt_fx_available return importlib.util.find_spec("torch_tensorrt.fx") is not None /usr/local/lib/python3.7/importlib/util.py:94: in find_spec parent = __import__(parent_name, fromlist=['__path__']) E ModuleNotFoundError: No module named 'torch_tensorrt' ``` Let me know if you need help with sorting it out.<|||||>Thanks @stas00 , I fixed the import check. <|||||>Is there a simple way to support dynamic shape on fx2trt on torch Dynamo? If not yet, may be you want to specify it in the doc? If yes you may want to say how we provide the Tensorrt "profiles" ? In rapid experiments I did on HF + dynamo + fx2trt, even by increasing the dynamo cache, when I pushed plenty of different input sizes, at some point it raised plenty of OOM exceptions and stopped working. May be trt profiles would have worked.<|||||>> Is there a simple way to support dynamic shape on fx2trt on torch Dynamo? If not yet, may be you want to specify it in the doc? If yes you may want to say how we provide the Tensorrt "profiles" ? > > In rapid experiments I did on HF + dynamo + fx2trt, even by increasing the dynamo cache, when I pushed plenty of different input sizes, at some point it raised plenty of OOM exceptions and stopped working. 
May be trt profiles would have worked. It is not supported yet for dynamic shape in my implementation. I plan to support dynamic batch size in the next step(probably set it to default). <|||||>One of the tests is failing for me: ``` $ CUDA_VISIBLE_DEVICES=0 RUN_SLOW=1 pyt tests/trainer/test_trainer.py -k test_torchdynamo_memory -sv # AOT Autograd recomputaion and nvfuser recomputation optimization # aggressively fuses the operations and reduce the memory footprint. > self.assertGreater(orig_peak_mem, peak_mem * 2) E AssertionError: 100664832 not greater than 201330688 ``` let me know what details you need - this is on A100. oh, it actually crashed before that: ``` ========== TorchDynamo Stack Trace ========== Traceback (most recent call last): File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/convert_frame.py", line 295, in _convert_frame_assert code = transform_code_object(frame.f_code, transform) File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/bytecode_transformation.py", line 338, in transform_code_object transformations(instructions, code_options) File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/convert_frame.py", line 261, in transform tracer = InstructionTranslator( File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/symbolic_convert.py", line 1220, in __init__ self.symbolic_locals = collections.OrderedDict( File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/symbolic_convert.py", line 1221, in <genexpr> (k, VariableBuilder(self, LocalSource(k))(f_locals[k])) File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/variables/builder.py", line 104, in __call__ return self._wrap(value).clone(**self.options()) File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/variables/builder.py", line 130, in _wrap return self.wrap_tensor(value) File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/variables/builder.py", line 327, in wrap_tensor tensor_variable = TensorVariable.create( File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/variables/tensor.py", line 121, in create cls.wrap_to_fake_tensor, fake_mode=tx.fake_mode File "/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/symbolic_convert.py", line 1136, in fake_mode return self._fake_mode AttributeError: 'InstructionTranslator' object has no attribute '_fake_mode' ``` This is not great, shouldn't the test have failed here and not in a misleading later place of comparison?<|||||>The failure may due to the torchdynamo outdated? Could you install the newest torchdynamo? Here are the command to install it: ``` git clone https://github.com/pytorch/functorch cd functorch rm -rf build pip install -e .[aot] cd .. 
git clone https://github.com/pytorch/torchdynamo cd torchdynamo pip install -r requirements.txt python setup.py develop ``` It looks good from my testing: ``` (mypy38-fx-only) [[email protected] /data/users/wwei6/Work/transformers] CUDA_VISIBLE_DEVICES=6 pytest tests/trainer/test_trainer.py -k test_torchdynamo_memory -sv ===================================================================================== test session starts ===================================================================================== platform linux -- Python 3.8.13, pytest-7.1.2, pluggy-1.0.0 -- /data/users/wwei6/miniconda3/envs/mypy38-fx-only/bin/python cachedir: .pytest_cache benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000) hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/data/users/wwei6/Work/transformers/.hypothesis/examples') rootdir: /data/users/wwei6/Work/transformers, configfile: setup.cfg plugins: benchmark-3.4.1, hydra-core-1.1.2, hypothesis-6.49.1 collected 70 items / 69 deselected / 1 selected tests/trainer/test_trainer.py::TrainerIntegrationTest::test_torchdynamo_memory PyTorch: setting up devices The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-). PASSED ====================================================================================== warnings summary ======================================================================================= ../../miniconda3/envs/mypy38-fx-only/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:4 /data/users/wwei6/miniconda3/envs/mypy38-fx-only/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:4: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. if not hasattr(tensorboard, "__version__") or LooseVersion( ../../miniconda3/envs/mypy38-fx-only/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:6 /data/users/wwei6/miniconda3/envs/mypy38-fx-only/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:6: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. ) < LooseVersion("1.15"): tests/trainer/test_trainer.py::TrainerIntegrationTest::test_torchdynamo_memory /data/users/wwei6/miniconda3/envs/mypy38-fx-only/lib/python3.8/site-packages/torch/nn/utils/_stateless.py:5: DeprecationWarning: The `torch.nn.utils._stateless` code is deprecated now that it is publicly available. Please use `torch.nn.utils.stateless instead. warnings.warn("The `torch.nn.utils._stateless` code is deprecated now that " -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ======================================================================== 1 passed, 69 deselected, 3 warnings in 7.51s ========================================================================= ```<|||||>I suspected that was the case, but what I was trying to say is that the test should have failed on the torchdynamo error and not the mismatch in values, i.e. something is trapping the real error and the user could be not the wiser that their torchdynamo is broken - e.g. when there are a lot of logs. It needs to assert on the actual error. 
Does it make sense?<|||||>> I suspected that was the case, but what I was trying to say is that the test should have failed on the torchdynamo error and not the mismatch in values, i.e. something is trapping the real error and the user could be not the wiser that their torchdynamo is broken - e.g. when there are a lot of logs. > > It needs to assert on the actual error. Does it make sense? hm.. that is something outside my expertise, as it relates to torchdynamo. If it is torch_tensorrt related, I'd love to help. For the CI test error, it seems that test is flaky? I did not find any useful information. Could you help guide/triage that? Thanks. <|||||>> Hi @frank-wei, I had to rebuild the whole environment against pt-nightly and now everything works. > > I think it'd be good to save the instructions in the OP somewhere so that it's easier for the user and us to be able to rebuild the environment. > > Would you like to maintain a section or a file on your side that contains the instructions in the OP and we could point to it? > > Other than that, I will just ask Sylvain to have a quick review and we can merge this. > > Thank you for your patience. Thanks @stas00, do you think I can add 3 pointers for the installation of torchdynamo, functorch, and torch_tensorrt in docs/source/en/perf_train_gpu_one.mdx ? Torchdynamo: https://github.com/pytorch/torchdynamo#requirements-and-setup Functorch: https://github.com/pytorch/functorch#install Torch-TensorRT(FX): https://github.com/pytorch/TensorRT/blob/master/docsrc/tutorials/getting_started_with_fx_path.rst#installation<|||||>I think that works, @frank-wei <|||||>> I think that works, @frank-wei Cool. Update finished.<|||||>@stas00 @sgugger please check the change. The failed test seems flaky and not related.<|||||>@stas00 Are you good with this last iteration (as long as all tests pass)?<|||||>Let me run the tests.<|||||>All tests pass. Good to merge once the CI is green. I created a new task https://github.com/huggingface/transformers/issues/18127 to handle the CI requirements.
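As a side note on the `@require_torch_tensorrt_fx` decorator discussed above: the traceback earlier in this thread already shows the shape of the helpers, and a condensed sketch of the availability check plus skip decorator looks like the code below. This is my paraphrase, not the exact code that was merged; the parent-package guard is one way to implement the "fixed the import check" step, since `find_spec("torch_tensorrt.fx")` raises when `torch_tensorrt` itself is missing.

```python
import importlib.util
import unittest

def is_torch_tensorrt_fx_available():
    # Guard on the parent package first: find_spec("torch_tensorrt.fx") raises
    # ModuleNotFoundError when torch_tensorrt is not installed at all.
    if importlib.util.find_spec("torch_tensorrt") is None:
        return False
    return importlib.util.find_spec("torch_tensorrt.fx") is not None

def require_torch_tensorrt_fx(test_case):
    """Skip the decorated test unless the Torch-TensorRT FX path is importable."""
    return unittest.skipUnless(is_torch_tensorrt_fx_available(), "test requires Torch-TensorRT FX")(test_case)
```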
transformers
17,764
closed
Fix cache for GPT-Neo-X
# What does this PR do? As pointed out on #17745, there is a problem with the logic in the cache for GPT-Neo-X. Can confirm I can use it for generation after this PR, but not before. Fixes #17745
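A quick way to exercise the repaired cache path is simply to generate with `use_cache=True`. This is a generic sketch, not code from the PR; any GPT-NeoX checkpoint works, and the 20B model is named only because it is the canonical public checkpoint.

```python
from transformers import AutoTokenizer, GPTNeoXForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

inputs = tokenizer("GPT-NeoX is", return_tensors="pt")
# use_cache=True routes generation through the past-key-values logic fixed here;
# before this PR, cached generation did not work.
outputs = model.generate(**inputs, max_new_tokens=20, use_cache=True)
print(tokenizer.decode(outputs[0]))
```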
06-17-2022 18:12:44
06-17-2022 18:12:44
Thanks a lot! If it's not too much work, could you maybe try running this test: https://github.com/huggingface/transformers/blob/522a9ece4baeb5abfec8953ef76469a530e987d5/tests/models/gpt_neox/test_modeling_gpt_neox.py#L144 I think right now it's not run (seems like the test function was removed)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I can also do it otherwise :-)<|||||>Added tests that were removed and had a corresponding `check` function. Thanks for flagging this!
transformers
17,763
closed
Add type hints Yoso Pytorch
# What does this PR do? Add missing Type Hints for Yoso pytorch #16059 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Rocketknight1
06-17-2022 17:28:55
06-17-2022 17:28:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hello @Rocketknight1, I finished with Yoso PyTorch. I don't know why this pull request contains commits from a previous pull request <|||||>I updated the file after running make fixup and now it passes all checks<|||||>@F02934 Thanks for this! Don't panic about the other files being changed - if you want, you should be able to fix that by pulling the latest version of main, then rebasing your PR branch onto main and finally force-pushing. I don't think it should cause any problems if you don't, though, except for cosmetic ones in the Github interface. Also, your type hints look good, but would you be willing to annotate the other model classes in the file too (the ones starting with `YosoFor...`)?<|||||>@Rocketknight1 thank you! I will just leave it as it is because I'm afraid to mess up. I will finish Yoso tomorrow!<|||||>Hi @Rocketknight1, I checked the `YosoFor` classes but they were already done. On [this Colab notebook](https://colab.research.google.com/drive/1EvZTslb50yfRqIcXjCZFrbod4HrPdA0G?usp=sharing) you shared I checked the missing type hints for Yoso PyTorch, and only "YosoForQuestionAnswering" and "YosoForTokenClassification" were missing, which I added in this PR. So I think everything is done. But correct me if I'm wrong!<|||||>Sorry, you're completely right! The type hints for Yoso are ready to go. I investigated the extra Italian documentation added by this PR, though - I think the problem there is that your PR branch was created as a branch of an existing PR branch, which was probably working on translation fixups. As a result, it sort of carries changes from both branches! The simplest way to fix this would be to close this PR, make a new branch starting from `main` this time, and then just copy the changes in `modeling_yoso.py` to that branch, and finally open a PR from that new branch. Is that okay? I'll try to review it quickly if you do, since I've already checked your type hints, lol<|||||>@Rocketknight1 alright. I will do it right now!
transformers
17,762
closed
feat: pipeline registry for supporting custom pipelines
### Feature request I propose a simple registry abstraction to allow users to dynamically register custom pipelines with transformers. ```python audio_classification_tmpl = { "impl": AudioClassificationPipeline, "tf": (), "pt": (AutoModelForAudioClassification,) if is_torch_available() else (), "default": {"model": {"pt": "superb/wav2vec2-base-superb-ks"}}, "type": "audio", } PipelineRegistry.register_pipeline("audio-classification", audio_classification_tmpl) ``` A pseudo example of the `PipelineRegistry` implementation: ```python class PipelineRegistry: SUPPORTED_TASKS: dict[str, dict[str, Any]] = {} @classmethod def register_pipeline(cls, task: str, task_metadata: dict[str, Any]) -> None: cls.SUPPORTED_TASKS[task] = task_metadata ``` For any custom pipeline, users can simply do ```python from transformers.pipelines import PipelineRegistry my_custom_task_tmpl = { "impl": CustomPipeline, "tf": (), "pt": (AutoModelForAudioClassification,) if is_torch_available() else (), "default": {"model": {"pt": "my_custom_wav2vec"}}, "type": "custom", } PipelineRegistry.register_pipeline("custom-task", my_custom_task_tmpl) ``` ### Motivation Currently, the pipelines abstraction gives users a quick and easy way to run any given task. However, it is very difficult to create and add support for custom pipelines. According to the [docs](https://huggingface.co/docs/transformers/add_new_pipeline#adding-it-to-the-list-of-supported-tasks), if users want to add a new pipeline, they have to come in and modify the `transformers` source code. This is often less than ideal. It would be nice for pipelines to have a "registry" abstraction that lets users register their custom pipeline with transformers without the hassle of editing the source code. ### Your contribution #17905
06-17-2022 17:04:44
06-17-2022 17:04:44
transformers
17,761
closed
[WIP] Flax BLOOM implementation + demo
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> This PR will add a Flax implementation of BLOOM, and also I'd be happy to help contribute a tutorial / showcase of how to fine-tune BLOOM as well as discussed in #17703 :) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. --> linked above - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). --> documentation in progress - [ ] Did you write any new necessary tests? --> will add once code is closer to completion ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten and @sanchit-gandhi @patil-suraj I believe were interested in collaborating. happy to discuss how best to do this.
06-17-2022 16:17:40
06-17-2022 16:17:40
A note on the initial status of this PR: - This first commit contains much of the code and structure of the `modeling_flax_bloom.py` file, copied from the gpt-neo Flax implementation and edited in many places already to better match the PyTorch Bloom implementation. - There are many TODOs I've left in this file that I still need to get to. The code is still not in a runnable/finished state, Next steps: - Finish implementing all methods, in particular the FlaxBloomAttention `__call__` method, until code runs (see other TODOs in file for other things that need tweaking/fixing) - Determine how to deal with alibi tensors and how to deal with Bloom not having any hardcoded max length - Once code is working, start testing whether the implementation is the same as PyTorch - Make sure tensor parallelism is working correctly / accounted for properly (see issue #17653 , this still seems to be an open issue on how best to deal with it, but bigscience/bloom-350m has TP=1 so it can be used for testing at first without worrying about TP) Later on: - Add unit tests once the code is working at least reasonably well! - Make sure all functions are stateless / code works fine with `jit` - I'm relatively new to Flax/Jax so I definitely need to confirm correctness of code on this end<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17761). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for the helpful comments, @sanchit-gandhi ! I'll do another revision through the code fixing these and adding some more things as soon as I have time. I think that the main thing that would need to be discussed soon is how to handle [AliBi](https://arxiv.org/abs/2108.12409) for position information, since it means that there is no specific max length for BLOOM inputs. I'm not too sure yet how to account for this given things like line 176, where the causal mask is made at the max length of the model and then sliced to get the mask for shorter sequences. (One idea I had was selecting a reasonable "starting max length", then if the model gets a longer input sequence the causal mask is extended either permanently or just for that forward pass).<|||||>Would there be any issues in implementing it in the first of the two ways proposed (set to `max_length`, slicing as required)? The problem I envision with the latter of the two approaches is that once the function is jit'd, providing a new input length to the model would result in XLA having to recompile. Each time XLA sees a new input shape, it needs to trace out (compile) the function again. So if we provide a new input shape for each forward pass, XLA will recompile every time (very slow)! The performance benefits of jit'ing a function come when we re-use a function that has already been jit'd, meaning we should try and use fixed input shapes where possible.<|||||>Yeah, the recompilation is definitely something to try to avoid! But the issue is that [the bigscience/bloom config](https://huggingface.co/bigscience/bloom/blob/main/config.json) doesn't have any seq_length attribute ([but bigscience/bloom-1b3 does--4096](https://huggingface.co/bigscience/bloom-1b3/blob/main/config.json)) and we want BLOOM to be able to handle sequences as long as a user wants since AliBi allows generalization to longer sequences. 
We could maybe just choose a reasonable default `max_length`, and then if the user passes a sequence that's too long, permanently double the size of the causal mask--this would allow for fewer recompilations, hopefully. But I think we should keep the possibility open to using the model on very long sequences without problems--I don't know if any other models in Transformers use AliBi embeddings yet so that's a unique benefit of this model.<|||||>Let's go with that to start - we can iterate and find an optimal solution as we progress. There's also the option of asking on one of the JAX/Flax Forums to see if the framework authors have any ideas if we're stuck! You're right, this will be the first JAX/Flax model in Transformers to use AliBi embeddings! Will be very cool having a model with no theoretical `max_len`!<|||||>Actually, I don't see a big problem with computing the `position_ids` for the embeddings on the fly if they depend only on the input length of `input_ids`. In general, whenever the user passes a different input length of `input_ids`, the model will have to be recompiled anyway, so I don't see an issue with generating the position_ids and the causal_mask from the `input_ids` either, no? <|||||>Or am I misunderstanding something here?<|||||>If just generating the causal mask at every forward pass is acceptable and wouldn't incur a speed penalty, then that should work fine! And yes, I don't think that we need to pass position_ids into the model, and we can just compute the alibi embedding within the forward pass (the pytorch implementation does this.) sorry for the delay on this--I'll work on it in the next 2 days.<|||||>> If just generating the causal mask at every forward pass is acceptable and wouldn't incur a speed penalty, then that should work fine! > > And yes, I don't think that we need to pass position_ids into the model, and we can just compute the alibi embedding within the forward pass (the pytorch implementation does this.) > > sorry for the delay on this--I'll work on it in the next 2 days. Great! Yeah, I just talked to @sanchit-gandhi offline - I think what we want to do here is to only recompile when the model has to be recompiled anyway, which translates into doing the following: allow `position_ids` to be passed but default them to `None`. If `None`, they will be computed on the fly depending on the shape of `input_ids` and the values of `attention_mask` (the same would hold true for the causal_mask). Let me know if this doesn't make sense @haileyschoelkopf or if you have any other questions, more than happy to help :-) <|||||>Hey @haileyschoelkopf! This looks good with regards to the fused key-query-value matmuls in https://github.com/huggingface/transformers/pull/17761/commits/faddb8d446bfc8db0c1c77f29d9847a25f70b5a2! Just as a heads-up, for gradient checkpointing, you can follow the PR at https://github.com/huggingface/transformers/pull/17843. Feel free to reach out if there's anything you wish to discuss, very happy to help with any questions!<|||||>Added gradient checkpointing, thanks for the pointer @sanchit-gandhi ! Sorry that I haven't been able to push things forward on this PR faster, ended up being busier the past few weeks than expected... EDIT: saw the other PR. @younesbelkada , FYI, there is gradient checkpointing code on this PR now if you need it.<|||||>Thank you @haileyschoelkopf for jumping on this so quickly and getting the structure for the model in place! 
This PR was completed in https://github.com/huggingface/transformers/pull/18022 Let me know if there's anything else you'd like to have a go at adding in JAX/Flax! Or if you'd like to have a go at porting another model to JAX/Flax I can make some suggestions!<|||||>Thanks so much for all the helpful comments @sanchit-gandhi on this PR and apologies I wasn't able to iterate quicker on it! If I have more time to add another JAX model I'll ping you for sure :) <|||||>Very sorry that we rushed this PR so much @haileyschoelkopf! Very much looking forward to other PRs if you'd like :-)<|||||>Of course, will ping you if so :)
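To make the AliBi discussion above a bit more concrete, here is a rough JAX sketch (mine, not code from this PR) of building the AliBi attention bias on the fly from the attention mask. The slope formula assumes `num_heads` is a power of two, which matches the common AliBi recipe; a real implementation needs the extra interpolation step for other head counts.

```python
import jax.numpy as jnp

def build_alibi_bias(attention_mask: jnp.ndarray, num_heads: int) -> jnp.ndarray:
    """attention_mask: (batch, seq_len) with 1 for real tokens, 0 for padding."""
    # one slope per head: 2^(-8i/n) for i = 1..n (power-of-two n assumed)
    slopes = 2.0 ** (-8.0 * jnp.arange(1, num_heads + 1) / num_heads)
    # position of each token among the non-padding tokens
    positions = (jnp.cumsum(attention_mask, axis=-1) - 1) * attention_mask
    # (batch, num_heads, 1, seq_len) bias to add to the raw attention scores
    return slopes[None, :, None, None] * positions[:, None, None, :]
```

Because the bias only depends on shapes that XLA already traces (batch size and sequence length), computing it inside the forward pass adds no recompilation beyond the retrace that a new input length triggers anyway, which is the point made above about `position_ids` and the causal mask.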
transformers
17,760
closed
Flax sharded
# What does this PR do? Adds support for sharded checkpoints in Flax
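Presumably the user-facing flow mirrors the existing PyTorch sharding API; a hedged sketch of what that would look like (the `max_shard_size` argument, the checkpoint name, and the shard file layout are assumptions to verify against the final diff):

```python
from transformers import FlaxAutoModelForSeq2SeqLM

model = FlaxAutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Cap each shard at ~200MB; this is expected to write several numbered
# .msgpack shard files plus an index JSON next to them.
model.save_pretrained("t5-small-sharded", max_shard_size="200MB")

# Loading should transparently reassemble the shards.
reloaded = FlaxAutoModelForSeq2SeqLM.from_pretrained("t5-small-sharded")
```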
06-17-2022 15:19:24
06-17-2022 15:19:24
_The documentation is not available anymore as the PR was closed or merged._<|||||>The sharding tests are run for every model, which should be avoided<|||||>Will merge `flax` before `tf` as the TF one still needs a few modifications (mostly cleaning up the documentation)
transformers
17,759
closed
BLOOM enhance alibi creation
# What does this PR do? Thanks to @justheuristic's contribution, the alibi tensor is now created and passed along more cleanly during the forward pass. The tests seem to pass, but this still stays an experimental feature. cc @justheuristic This will probably break with accelerate offloading, because the alibi tensor is now initialised only once, at the beginning of the forward pass, on the device of the first hidden states. In the previous version we used to dynamically change alibi's `device`, which was fine for accelerate offloading
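Schematically, the change amounts to something like the sketch below. This is a simplification of the real `BloomModel.forward`, not the PR diff itself; the `build_alibi_tensor` signature is assumed and the block call arguments are trimmed, so check `modeling_bloom.py` for the exact code.

```python
import torch
from transformers import BloomModel
from transformers.models.bloom.modeling_bloom import build_alibi_tensor  # signature assumed

def forward_sketch(model: BloomModel, input_ids: torch.LongTensor, attention_mask: torch.Tensor):
    hidden_states = model.word_embeddings_layernorm(model.word_embeddings(input_ids))

    # Built once, directly on the device of the first hidden states, instead of
    # re-sending the tensor to a (possibly different) device inside every block,
    # which is what the previous version did.
    alibi = build_alibi_tensor(attention_mask, model.config.n_head).to(hidden_states.device)

    for block in model.h:
        hidden_states = block(hidden_states, alibi=alibi, attention_mask=attention_mask)[0]
    return model.ln_f(hidden_states)
```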
06-17-2022 14:48:47
06-17-2022 14:48:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>Great, thanks! I think that you are right :) will merge it as soon as the lights are all green 🟒 <|||||>It looks like a bad rebase happened; moved the PR to #17866
transformers
17,758
closed
BLOOM enhance alibi creation
# What does this PR do? Thanks to @justheuristic's contribution, the alibi tensor is now created and passed along more cleanly during the forward pass. The tests seem to pass, but this still stays an experimental feature. cc @justheuristic This will probably break with accelerate offloading, but I am not sure.
06-17-2022 14:45:10
06-17-2022 14:45:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17758). All of your documentation changes will be reflected on that endpoint.