Dataset columns:
repo: string (1 unique value)
number: int64 (1 to 25.3k)
state: string (2 classes)
title: string (length 1 to 487)
body: string (length 0 to 234k)
created_at: string (length 19)
closed_at: string (length 19)
comments: string (length 0 to 293k)
transformers
17,957
closed
ERROR: "Missing XLA Configuration" while running the script
Hi, I was trying to train a CLIP model on images and text using the clip-Italian repository, which uses the HF scripts for training, and I got the following error related to torch_xla. I am trying to train on a GPU device; the error seems to come from torch_xla looking for a TPU. Please help me train it on GPU instead. ``` comet_ml is installed but `COMET_API_KEY` is not set. Traceback (most recent call last): File "run_hybrid_clip.py", line 832, in <module> main() File "run_hybrid_clip.py", line 472, in main ) = parser.parse_args_into_dataclasses() File "/opt/conda/lib/python3.7/site-packages/transformers/hf_argparser.py", line 214, in parse_args_into_dataclasses obj = dtype(**inputs) File "<string>", line 101, in __init__ File "/opt/conda/lib/python3.7/site-packages/transformers/training_args.py", line 1066, in __post_init__ and (self.device.type != "cuda") File "/opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py", line 829, in wrapper return func(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/transformers/training_args.py", line 1357, in device return self._setup_devices File "/opt/conda/lib/python3.7/site-packages/transformers/utils/generic.py", line 49, in __get__ cached = self.fget(obj) File "/opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py", line 829, in wrapper return func(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/transformers/training_args.py", line 1299, in _setup_devices device = xm.xla_device() File "/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py", line 232, in xla_device devkind=devkind if devkind is not None else None) File "/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py", line 137, in get_xla_supported_devices xla_devices = _DEVICES.value File "/opt/conda/lib/python3.7/site-packages/torch_xla/utils/utils.py", line 32, in value self._value = self._gen_fn() File "/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py", line 19, in <lambda> _DEVICES = xu.LazyProperty(lambda: torch_xla._XLAC._xla_get_devices()) RuntimeError: tensorflow/compiler/xla/xla_client/computation_client.cc:273 : Missing XLA configuration ```
06-30-2022 05:50:33
06-30-2022 05:50:33
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
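A minimal check of the likely cause, based on the traceback above (this is an assumption, not an official fix): `TrainingArguments` selects an XLA device whenever `torch_xla` is importable, so on a GPU-only machine a common workaround is simply to remove the package (e.g. `pip uninstall torch-xla`).

```python
import importlib.util

# If this prints True, the transformers device setup will try to build an XLA
# device and fail with "Missing XLA configuration" on a machine without a TPU.
print(importlib.util.find_spec("torch_xla") is not None)
```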
transformers
17,956
closed
`dlopen: cannot load any more object with static TLS` after installing sentencepiece
### System Info - `transformers` version: 4.19.2 (tried 4.20 / 4.21dev) - Platform: Linux-3.10.0_3-0-0-12-x86_64-with-centos-6.3-Final - Python version: 3.7.11 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Y - Using distributed or parallel set-up in script?: N ### Who can help? @SaulLu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer model_name_or_path="./t5-v1_1-base" # `path to t5-v1_1-base` tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, cache_dir=None, use_fast=True) ``` ### Expected behavior Expected error: ```bash File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 573, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1791, in from_pretrained **kwargs, File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1929, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 141, in __init__ **kwargs, File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 119, in __init__ "Couldn't instantiate the backend tokenizer from one of: \n" ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one. ``` Then I try to install sentencepiece 0.1.96 via `pip install sentencepiece` ```bash Installing collected packages: sentencepiece Successfully installed sentencepiece-0.1.96 ``` But the OSError occurs. ```bash Traceback (most recent call last): File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/transformers/utils/import_utils.py", line 872, in _get_module return importlib.import_module("." 
+ module_name, self.__name__) File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 24, in <module> import torch File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/__init__.py", line 189, in <module> _load_global_deps() File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/__init__.py", line 142, in _load_global_deps ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL) File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/ctypes/__init__.py", line 364, in __init__ self._handle = _dlopen(self._name, mode) OSError: dlopen: cannot load any more object with static TLS The above exception was the direct cause of the following exception: Traceback (most recent call last): File "t5_mlm/run_t5_mlm.py", line 35, in <module> from transformers import ( File "<frozen importlib._bootstrap>", line 1032, in _handle_fromlist File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/transformers/utils/import_utils.py", line 863, in __getattr__ value = getattr(module, name) File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/transformers/utils/import_utils.py", line 862, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/cyk/anaconda3/envs/pt1.7/lib/python3.7/site-packages/transformers/utils/import_utils.py", line 876, in _get_module ) from e RuntimeError: Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback): dlopen: cannot load any more object with static TLS ```
06-30-2022 03:12:08
06-30-2022 03:12:08
Hi @cyk1337, `"./t5-v1_1-base"` looks like a local path, could you share its content with us so that we can reproduce the error please?<|||||>> Hi @cyk1337, > > `"./t5-v1_1-base"` looks like a local path, could you share its content with us so that we can reproduce the error please? Hi @SaulLu , please refer to [https://huggingface.co/google/t5-v1_1-base/tree/main](https://huggingface.co/google/t5-v1_1-base/tree/main) for tokenizer files.<|||||>Thanks, unfortunately I didn't succeed in reproducing your error. I see in your stack trace the mention of `t5_mlm/run_t5_mlm.py`, are you running this code? If yes, can you try to just run the snippet you shared with me? :smile: <|||||>I have rerun the provided snippet separately and found that it works. The whole script seems not to work due to some dependency conflicts. I adjusted the import order and that temporarily resolved it. I suspect it results from conflicts between common dependencies that the libraries require. Thank you for your help; I will reopen this if it reoccurs.🤝
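A sketch of the import-order workaround the reporter describes (an assumption, not an official fix): on older glibc the number of static-TLS slots available to `dlopen` is limited, so loading torch's shared libraries before the sentencepiece extension can avoid exhausting them.

```python
import torch  # noqa: F401  imported first so its libraries claim the static-TLS slots
from transformers import AutoTokenizer

# Same snippet as in the report, run after the early torch import.
tokenizer = AutoTokenizer.from_pretrained("./t5-v1_1-base", use_fast=True)
```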
transformers
17,955
closed
tune save checkpoint throwing error due to float32
### System Info - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.13.0-27-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no, although Ray tune runs in parallel ### Who can help? @richardliaw @amogkam ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I run a hyperparameter search with ray[tune] which consists of these parts: ```python hp_space = { "num_train_epochs": tune.choice([1, 2, 3, 4]), } scheduler = PB2( metric="eval_f1", mode="max", hyperparam_bounds={ "weight_decay": [0.0, 0.3], # default (in transformers): 0. "learning_rate": [1e-4, 1e-5], "gradient_accumulation_steps": [4, 8], "adam_epsilon": [1e-07, 1e-9], # default: 1e-8 "adam_beta1": [0.85, 0.9999], # default: 0.9 "adam_beta2": [0.95, 0.9999], # default: 0.999 }, ) resources_per_trial = {"cpu": min(4, (os.cpu_count() - 1) // device_count), "gpu": 1} best_params = trainer.hyperparameter_search(hp_space=lambda _: hpspace, backend="ray", n_trials=8, resources_per_trial=resources_per_trial, keep_checkpoints_num=1, scheduler=scheduler, compute_objective=compute_objective) ``` However, after the processes have been running for a long time (async 4x V100), I get the following error trace: ``` ray::ImplicitFunc.train()ESC[39m (pid=3354338, ip=157.193.228.18, repr=_objective) File "/home/bram/.local/share/virtualenvs/transformers-finetuner-rzmJjOSV/lib/python3.8/site-packages/ray/tune/trainable.py", line 360, in train result = self.step() File "/home/bram/.local/share/virtualenvs/transformers-finetuner-rzmJjOSV/lib/python3.8/site-packages/ray/tune/function_runner.py", line 404, in step self._report_thread_runner_error(block=True) File "/home/bram/.local/share/virtualenvs/transformers-finetuner-rzmJjOSV/lib/python3.8/site-packages/ray/tune/function_runner.py", line 574, in _report_thread_runner_error raise e File "/home/bram/.local/share/virtualenvs/transformers-finetuner-rzmJjOSV/lib/python3.8/site-packages/ray/tune/function_runner.py", line 277, in run self._entrypoint() File "/home/bram/.local/share/virtualenvs/transformers-finetuner-rzmJjOSV/lib/python3.8/site-packages/ray/tune/function_runner.py", line 349, in entrypoint return self._trainable_func( File "/home/bram/.local/share/virtualenvs/transformers-finetuner-rzmJjOSV/lib/python3.8/site-packages/ray/tune/function_runner.py", line 645, in _trainable_func output = fn() File "/home/bram/Python/projects/transformers-finetuner/transformers/src/transformers/integrations.py", line 288, in dynamic_modules_import_trainable return trainable(*args, **kwargs) File "/home/bram/.local/share/virtualenvs/transformers-finetuner-rzmJjOSV/lib/python3.8/site-packages/ray/tune/utils/trainable.py", line 410, in inner trainable(config, **fn_kwargs) File "/home/bram/Python/projects/transformers-finetuner/transformers/src/transformers/integrations.py", line 189, in _objective local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) File "/home/bram/Python/projects/transformers-finetuner/transformers/src/transformers/trainer.py", line 1410, in train return inner_training_loop( File 
"/home/bram/Python/projects/transformers-finetuner/transformers/src/transformers/trainer.py", line 1729, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/home/bram/Python/projects/transformers-finetuner/transformers/src/transformers/trainer.py", line 1914, in _maybe_log_save_evaluate self._report_to_hp_search(trial, epoch, metrics) File "/home/bram/Python/projects/transformers-finetuner/transformers/src/transformers/trainer.py", line 1153, in _report_to_hp_search self._tune_save_checkpoint() File "/home/bram/Python/projects/transformers-finetuner/transformers/src/transformers/trainer.py", line 1165, in _tune_save_checkpoint self.state.save_to_json(os.path.join(output_dir, TRAINER_STATE_NAME)) File "/home/bram/Python/projects/transformers-finetuner/transformers/src/transformers/trainer_callback.py", line 97, in save_to_json json_string = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True) + "\n" File "/home/bram/.pyenv/versions/3.8.10/lib/python3.8/json/__init__.py", line 234, in dumps return cls( File "/home/bram/.pyenv/versions/3.8.10/lib/python3.8/json/encoder.py", line 201, in encode chunks = list(chunks) File "/home/bram/.pyenv/versions/3.8.10/lib/python3.8/json/encoder.py", line 431, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/home/bram/.pyenv/versions/3.8.10/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/home/bram/.pyenv/versions/3.8.10/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/home/bram/.pyenv/versions/3.8.10/lib/python3.8/json/encoder.py", line 438, in _iterencode o = _default(o) File "/home/bram/.pyenv/versions/3.8.10/lib/python3.8/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type float32 is not JSON serializable ``` This occurs at around step 1840/2500. I do not know if it is relevant, but I am also running in `fp16`. If I had to guess, I'd think that during the duping of the [TrainerState](https://github.com/huggingface/transformers/blob/692e61e91a0b83f5b847902ed619b7c74c0a5dda/src/transformers/trainer_callback.py#L97), one of the [trial_params](https://github.com/huggingface/transformers/blob/692e61e91a0b83f5b847902ed619b7c74c0a5dda/src/transformers/trainer_callback.py#L89) was a np/torch float32 rather than a Python primitive, which could not be serialized. It is unclear to me why this would only happen already far into the training, though. Maybe it's a nan, or another kind of overflow of some kind? ### Expected behavior No errors. It would also be nice if the error message could tell us which key is causing this issue, but I am not sure how feasible that is.
06-29-2022 23:57:11
06-29-2022 23:57:11
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Bump<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am getting the same error almost a year later. It seems that no one is using PB2 with transformers...<|||||>> I am getting the same error almost a year later. It seems that no one is using PB2 with transformers... me too
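A minimal sketch of the suspected failure mode (an assumption, not the Trainer's actual code): a NumPy scalar slipping into `trial_params` is enough to break `json.dumps`, and casting to a Python float in a `default` hook makes it serializable again.

```python
import json

import numpy as np

trial_params = {"learning_rate": np.float32(3e-5)}  # hypothetical value from a trial

try:
    json.dumps(trial_params)
except TypeError as err:
    print(err)  # Object of type float32 is not JSON serializable

# Casting unknown scalars to Python floats sidesteps the error.
print(json.dumps(trial_params, default=float))
```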
transformers
17,954
closed
codegen-16B-mono (Salesforce) fails to load tokenizer and model
### System Info - `transformers` version: 4.20.1 - Platform: Linux-4.18.0-193.19.1.el8_2.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.7.1+cu110 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <N/A> - Using distributed or parallel set-up in script?: <N/A> ### Who can help? Per https://huggingface.co/Salesforce/codegen-16B-mono?text=What+is+projection+matrix, I should be able to load the codegen tokenizer and model by doing tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-mono") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-16B-mono") @SaulLu When I do tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-mono"), I got this error: "...huggingface_py3.9/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 576, in from_pretrained raise ValueError( ValueError: Tokenizer class CodeGenTokenizer does not exist or is not currently imported. @LysandreJik When I do model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-16B-mono"), I got this error: "... huggingface_py3.9/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 725, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] File "/Volume0/userhomes/weiz/venvs/huggingface_py3.9/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 432, in __getitem__ raise KeyError(key) KeyError: 'codegen' " Thanks! Wei ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Follow the model card at https://huggingface.co/Salesforce/codegen-16B-mono?text=What+is+projection+matrix from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-mono") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-16B-mono") I then got the Errors: "...huggingface_py3.9/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 576, in from_pretrained raise ValueError( ValueError: Tokenizer class CodeGenTokenizer does not exist or is not currently imported. "... huggingface_py3.9/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 725, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] File "/Volume0/userhomes/weiz/venvs/huggingface_py3.9/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 432, in __getitem__ raise KeyError(key) KeyError: 'codegen' " ### Expected behavior The tokenizer and Model should be loaded successfully.
06-29-2022 20:32:04
06-29-2022 20:32:04
Hi @weidotwisc , You get these errors because CodeGen was only merged onto the master branch of the repo 5 days ago (PR https://github.com/huggingface/transformers/pull/17443) and therefore has not been released yet. :smile: If you want to use it now without waiting for a release, you can install a transformers version on master. For example with pip by running `pip install git+https://github.com/huggingface/transformers.git` <|||||>@SaulLu Thanks for the help! I am now able to load its tokenizer and model and follow through the model card example. Thanks, Wei
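A small smoke test after installing from source (the smaller checkpoint here is only an illustration to keep the download manageable):

```python
# Assumes: pip install git+https://github.com/huggingface/transformers.git
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"  # smaller sibling of codegen-16B-mono
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
print(type(tokenizer).__name__, type(model).__name__)  # e.g. CodeGenTokenizerFast CodeGenForCausalLM
```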
transformers
17,953
closed
Add ONNX support for LayoutLMv3
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds ONNX support for LayoutLMv3. Linked to #16308. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-29-2022 19:00:39
06-29-2022 19:00:39
@NielsRogge The supported tasks are question answering, token classification and sequence classification. Is there any other use case that should be supported? Also, the order of input arguments for the `forward` method of `LayoutLMv3ForSequenceClassification` and `LayoutLMv3ForQuestionAnswering` is different from `LayoutLMv3ForTokenClassification` and `LayoutLMv3Model`. This is taken care of in the ONNX config because I guess modifying it in `modeling_layoutlmv3.py` is not an option since it would break backwards compatibility right?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@lewtun All slow tests passed<|||||>CI fails because of the following error: ```shell Traceback (most recent call last): File "utils/check_repo.py", line 768, in <module> check_repo_quality() File "utils/check_repo.py", line 762, in check_repo_quality check_all_objects_are_documented() File "utils/check_repo.py", line 675, in check_all_objects_are_documented + "\n - ".join(undocumented_objs) Exception: The following objects are in the public init so should be documented: - OptionalDependencyNotAvailable - dummy_scatter_objects - sys ``` It seems to come from the following line in `configuration_layoutlmv3.py`: ```python from ...processing_utils import ProcessorMixin ```<|||||>Wow thanks a lot @sgugger for the clear explanation, it makes complete sense!<|||||>CI and slow tests all passed. It should be ready now @sgugger @lewtun <|||||>Thanks!<|||||>@regisss Thank you for your great work, when convert layoutxlm LayoutLMv2ForRelationExtraction to onnx, we are blocked by relation extraction layer for some reasons, can you try to export LayoutLMv2ForRelationExtraction model to onnx and give us for some help? gret thanks for you!<|||||>@NielsRogge Thanks for your great work, when I convert LayoutLMv2ForRelationExtraction to onnx, I can not export relation extraction layer to onnx, can you help me to solve it? because the deadline is coming for the project, I hope you can help me, Thank you very much.<|||||>> @regisss Thank you for your great work, when convert layoutxlm LayoutLMv2ForRelationExtraction to onnx, we are blocked by relation extraction layer for some reasons, can you try to export LayoutLMv2ForRelationExtraction model to onnx and give us for some help? gret thanks for you! @githublsk Where did you find `LayoutLMv2ForRelationExtraction`? I cannot find it in Transformers<|||||>@regisss it is just in microsoft/unlim,please refer to the link: https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/models/layoutlmv2/modeling_layoutlmv2.py ![image](https://user-images.githubusercontent.com/77612906/176890500-ff03d43f-8140-44c6-aadc-4c746f4627f6.png) it is useful for relation extraction, but when running onnx,some question occur as below: ![image](https://user-images.githubusercontent.com/77612906/176890726-94f76fa4-d928-4657-93d7-ccb5b1c80111.png) the onnx graph is as below: ![image](https://user-images.githubusercontent.com/77612906/176890859-51f745fc-4c81-4e96-8759-e4c7fafb1f91.png) I can not find any reason, which confused me, I meet the deadline for my project, it is so urgent....<|||||>@regisss if you have time, please help us, I am a newer to it, great thanks for you!<|||||>> @regisss if you have time, please help us, I am a newer to it, great thanks for you! @githublsk Open an issue because it is not related to this PR. And provide the command/script you ran with the complete error message please, screenshots are not very helpful. 
<|||||>@regisss Thank you for your great help, I opened an issue at the link below; the deadline is coming up and this is blocking us, so we hope you can help us resolve it, great thanks. https://github.com/huggingface/transformers/issues/17999<|||||>@githublsk how did you solve the ONNX convert error: **Exporting the operator bilinear to ONNX opset version 13 is not supported** super(BiaffineAttention, self).__init__() self.in_features = in_features self.out_features = out_features **self.bilinear = torch.nn.Bilinear(in_features, in_features, out_features, bias=False)**<|||||>@gjj123 replace torch.nn.Bilinear with this one ``` class Bilinear(nn.Module): def __init__(self, in1_features, in2_features, out_features): super(Bilinear, self).__init__() self.weight = torch.nn.Parameter(torch.zeros((in1_features, in2_features, out_features))) self.bias = torch.nn.Parameter(torch.zeros((out_features))) nn.init.xavier_uniform_(self.weight) def forward(self, x, y): t = x @ self.weight.permute(2,0,1) output = (t * y).sum(dim=2).t() return output ```
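A quick sanity check of the drop-in layer above (a sketch, not part of the thread): once the weight is permuted to match `nn.Bilinear`'s `(out, in1, in2)` layout, the two modules produce the same output, which is what makes it safe to swap before the ONNX export.

```python
import torch
import torch.nn as nn


class Bilinear(nn.Module):
    """ONNX-friendly drop-in from the comment above, weight stored as (in1, in2, out)."""

    def __init__(self, in1_features, in2_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.zeros(in1_features, in2_features, out_features))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x, y):
        t = x @ self.weight.permute(2, 0, 1)  # (out, batch, in2)
        return (t * y).sum(dim=2).t()         # (batch, out)


torch.manual_seed(0)
ref = nn.Bilinear(4, 4, 2, bias=False)
custom = Bilinear(4, 4, 2)
custom.weight.data = ref.weight.permute(1, 2, 0).contiguous()  # (out, in1, in2) -> (in1, in2, out)
x, y = torch.randn(3, 4), torch.randn(3, 4)
print(torch.allclose(ref(x, y), custom(x, y), atol=1e-6))  # True
```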
transformers
17,952
closed
Trainer.predict multiple progress bars
### System Info - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu113 (False) - Tensorflow version (GPU?): 2.8.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased") trainer = Trainer(model) trainer.predict([{"input_ids": torch.zeros(20).long()} for _ in range(48)]) # [6/6 00:03] trainer.predict([{"input_ids": torch.zeros(20).long()} for _ in range(96)]) # [6/6 00:12] despite having more examples ``` ### Expected behavior Two progress bars, one of length 6 and the other of length 12.
06-29-2022 18:55:59
06-29-2022 18:55:59
![screenshot demonstrating issue](https://user-images.githubusercontent.com/46641404/176514686-1e47aa34-0e81-4110-b8d3-f886c19b7dfe.png) The cause of the issue is that `Trainer.predict()` calls `on_prediction_step` but not `on_evaluate` for `predict()`, so every prediction run after the first one will reuse the progress bar object because `on_evaluate` is the callback responsible for destroying it.<|||||>A simple fix would be to add an `on_predict` method to the `ProgressCallback`. Alternatively, `Trainer.predict` could just call `on_evaluate` in the end.<|||||>There is no `on_predict` event, but I guess we can reuse `on_evaluate` here. Do you want to make a PR?<|||||>I wrote a draft, but it breaks in Jupyter because `NotebookProgressBar` adds [custom logic](https://github.com/huggingface/transformers/blob/3f936df66287f557c6528912a9a68d7850913b9b/src/transformers/utils/notebook.py#L318) to `on_evaluate`. Creating `on_predict` might be necessary.<|||||>Feel free to create it then!
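A rough sketch of the interim workaround discussed above, before an `on_predict` event exists (this assumes the Trainer internals of that version and is not an official fix): fire `on_evaluate` at the end of `predict()` so `ProgressCallback` closes its bar instead of reusing it on the next call.

```python
from transformers import Trainer


class PredictProgressTrainer(Trainer):
    def predict(self, test_dataset, ignore_keys=None, metric_key_prefix="test"):
        output = super().predict(test_dataset, ignore_keys=ignore_keys,
                                 metric_key_prefix=metric_key_prefix)
        # on_evaluate is the event ProgressCallback uses to destroy its bar.
        self.control = self.callback_handler.on_evaluate(
            self.args, self.state, self.control, output.metrics
        )
        return output
```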
transformers
17,951
closed
Fix number of examples for iterable dataset in distributed training
# What does this PR do? As pointed out in #17913, when training in distributed mode with iterable datasets, the number of examples displayed is wrong. This is because we need to go grab the length of the underlying dataset of the `IterableDatasetShard`, not the length of the `IterableDatasetShard` itself. Fixes #17913
06-29-2022 18:11:34
06-29-2022 18:11:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,950
closed
Fix for prepare_tf_dataset when drop_remainder is not supplied
Super-minor fix for an oversight that causes a crash when `drop_remainder` is left at the default `None`!
06-29-2022 18:00:45
06-29-2022 18:00:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>It's the same behaviour as Keras and `to_tf_dataset()`, so I think people will expect it!
transformers
17,949
closed
PyTorch 1.12.0 for scheduled CI
# What does this PR do? After the Slack discussion, using PyTorch 1.12.0 for the scheduled CI is a better idea, so we can see what to fix. Another reason is that this (messy) block https://github.com/huggingface/transformers/blob/39dad9768e75460d8bf92fc27d407562eaeb6bd0/docker/transformers-pytorch-gpu/Dockerfile#L19-L22 will change torch 1.11 (if specified) back to 1.12, as torchvision and torchaudio are installed separately from torch without specifying versions. It's better to avoid such situations. Regarding the torch/torchvision/torchaudio correspondence, I have a better approach in my past-CI PR.
06-29-2022 17:06:29
06-29-2022 17:06:29
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,948
closed
PyTorch 1.12.0
# What does this PR do? Change to PyTorch 1.12.0 for scheduled CI.
06-29-2022 17:05:20
06-29-2022 17:05:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,947
closed
Consider adding "middle" option for tokenizer truncation_side argument
### Feature request At the moment, thanks to PR https://github.com/huggingface/transformers/pull/14947, the option to truncate the text from the left instead of just from the right has been added. However, for some NLP tasks like summarization of long documents, it might also be advantageous to truncate the middle part of the document instead. For example, if our sequence length is 512 tokens and a document exceeds this length, we might want to keep the first 256 and the last 256 tokens of the document, and truncate everything in between. Therefore this issue is to request implementation of this option. ### Motivation The reason this feature might be helpful is that when dealing in particular with long documents (for example for longformer summarization tasks), depending on the document's domain, the start of the document might set out relevant information, and the end of the document might contain a useful recap of the main points discussed, therefore both can be very relevant and valuable to keep, whereas the text in the middle may not be as important. Therefore adding an option `truncation_side="middle"`, allowing retention of the first 256 and the last 256 tokens, might be very helpful for certain use cases. ### Your contribution I have limited bandwidth right now, but might consider contributing if this can be done as a quick fix and someone from HuggingFace can provide oversight.
06-29-2022 16:59:37
06-29-2022 16:59:37
WDYT @SaulLu @Narsil ?<|||||>Hi @AndreaSottana, Thank you very much for sharing a feature proposal! :hugs: I understand your use case, my feeling is that for the moment I will not push for the addition of this feature. My feeling is that at the moment it is something that can be implemented on-top of transformers and touches a problem where a user may want many different variants depending on their specific use case. Of course, if this is a feature for which there is a lot of demand, I will gladly come back to my opinion! (so please if you are passing by feel free to share what you think :smiley:) In terms of implementation, my opinion is that it is not a very simple addition because it will affect all tokenizers (and some are really particular like those of LayoutLM-like models) whether they are slow or fast. This also means that it would require a new feature in the rust tokenizers library. I'm also very curious to know what you think @Narsil !<|||||>Ok that's fine, thanks a lot for getting back to me @SaulLu Let's see if there is more appetite, if not we can leave it here for now. I can always implement the truncation myself for my specific model and tokenizer, I just thought it may be a helpful feature to have, but as you said we'd need to see how much demand there is. Feel free to close the issue if appropriate<|||||>100% agree with @SaulLu . There might be a use case, but it doesn't seem as a blatant missing feature (and we try to focus on those). Future reader, make yourself heard so that we can revisit our opinion :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>It is needed indeed! :) To add the motivation to this, take a look at the article "How to Fine-Tune BERT for Text Classification?": https://arxiv.org/pdf/1905.05583.pdf They show that using head+tail achieved the best results. I think that the case when the most important content is in the beginning and\or the end is relevant to a lot of fields, including sentiment detection, hate-speech detection and more.
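For readers landing here, a rough sketch of head+tail truncation implemented on top of transformers, as suggested in the thread (the checkpoint and the 50/50 split are illustrative choices, not part of any proposal):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")


def truncate_middle(text, max_length=512):
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    budget = max_length - tokenizer.num_special_tokens_to_add()
    if len(ids) > budget:
        head = budget // 2
        tail = budget - head
        ids = ids[:head] + ids[-tail:]  # keep the start and the end, drop the middle
    # prepare_for_model re-adds special tokens and builds the attention mask
    return tokenizer.prepare_for_model(ids)
```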
transformers
17,946
closed
Decision Transformer Position Embedding Incorrect Implementation
### System Info Not necessary (source code issue) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction From the code for decision transformer: https://github.com/huggingface/transformers/blob/9fe2403bc52e342022ed132561655f84a6b6b7f3/src/transformers/models/decision_transformer/modeling_decision_transformer.py#L821-L822 But the actual implementation did not remove the position embedding from the GPT2 model https://github.com/huggingface/transformers/blob/9fe2403bc52e342022ed132561655f84a6b6b7f3/src/transformers/models/decision_transformer/modeling_decision_transformer.py#L497 https://github.com/huggingface/transformers/blob/9fe2403bc52e342022ed132561655f84a6b6b7f3/src/transformers/models/decision_transformer/modeling_decision_transformer.py#L610-L611 ### Expected behavior The expected implementation should be https://github.com/kzl/decision-transformer/blob/e2d82e68f330c00f763507b3b01d774740bee53f/gym/decision_transformer/models/trajectory_gpt2.py#L680-L681 from the official decision transformer repo
06-29-2022 16:27:04
06-29-2022 16:27:04
cc @edbeeching @simoninithomas <|||||>Thanks for highlighting this. We set the position_ids to all zeros in the forward pass of the Decision Transformer Model: https://github.com/huggingface/transformers/blob/9fe2403bc52e342022ed132561655f84a6b6b7f3/src/transformers/models/decision_transformer/modeling_decision_transformer.py#L935-L942 In addition, the weights of this layer are loaded with zeros. This is equivalent to not using the position embeddings. I just went through the Decision Transformer models on the hub to ensure that the model.encoder.wpe weights are indeed zeros and that is the case. We left position embeddings in the implementation in case researchers wish to experiment with the inclusion of position embeddings. Please let us know if you find any other examples of potential bugs or require further clarification.
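A short way to verify the reply above (the checkpoint name is just one example from the Hub): the GPT-2 backbone still owns a `wpe` table, but its weights are zero and the forward pass feeds zero `position_ids`, so positions contribute nothing.

```python
import torch
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained(
    "edbeeching/decision-transformer-gym-hopper-medium"
)
# All-zero position-embedding weights are equivalent to having no position embedding.
print(torch.count_nonzero(model.encoder.wpe.weight))  # expected: tensor(0)
```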
transformers
17,945
closed
Unable to fine-tune WMT model
### System Info - `transformers` version: 4.20.1 - Platform: Darwin-21.5.0-x86_64-i386-64bit - Python version: 3.7.2 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @stas00 @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi! I'm trying to fine-tune WMT model on my dataset, but running into strange behaviour. The code was taken from official notebook listed on website https://github.com/huggingface/notebooks/blob/main/examples/translation.ipynb Data: https://www.kaggle.com/datasets/nltkdata/wmt15-eval Code to reproduce: ```python import pandas as pd from datasets import Dataset, load_metric import transformers from transformers import AutoTokenizer from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer import numpy as np with open('newstest-2015-100sents.en-ru.ref.ru') as f: en = f.read() with open('newstest-2015-100sents.en-ru.src.en') as f: ru = f.read() en = en.split('\n') ru = ru.split('\n') df_all = pd.DataFrame({'en': en, 'ru': ru}) df = Dataset.from_pandas(df_all) metric = load_metric("sacrebleu") dataset_splitted = df.shuffle(1337).train_test_split(0.1) model_checkpoint = 'facebook/wmt19-en-ru' tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) max_input_length = 128 max_target_length = 128 def preprocess_function(examples): inputs = [ex for ex in examples["en"]] targets = [ex for ex in examples["ru"]] model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True) # Setup the tokenizer for targets with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=max_target_length, truncation=True) model_inputs["labels"] = labels["input_ids"] return model_inputs tokenized_datasets = dataset_splitted.map(preprocess_function, batched=True) model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) batch_size = 16 model_name = model_checkpoint.split("/")[-1] args = Seq2SeqTrainingArguments( "./tmp", evaluation_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, weight_decay=0.01, save_total_limit=3, num_train_epochs=1, predict_with_generate=True ) def postprocess_text(preds, labels): preds = [pred.strip() for pred in preds] labels = [[label.strip()] for label in labels] return preds, labels def compute_metrics(eval_preds): preds, labels = eval_preds if isinstance(preds, tuple): preds = preds[0] decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) # Replace -100 in the labels as we can't decode them. 
labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # Some simple post-processing decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels) result = metric.compute(predictions=decoded_preds, references=decoded_labels) result = {"bleu": result["score"]} prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds] result["gen_len"] = np.mean(prediction_lens) result = {k: round(v, 4) for k, v in result.items()} return result data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) trainer = Seq2SeqTrainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["test"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics ) trainer.train() ``` The traceback I get: ``` --------------------------------------------------------------------------- IndexError Traceback (most recent call last) /var/folders/cv/dmhc689x3gn9vgg44b67yl2c0000gq/T/ipykernel_29677/4032920361.py in <module> ----> 1 trainer.train() ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1411 resume_from_checkpoint=resume_from_checkpoint, 1412 trial=trial, -> 1413 ignore_keys_for_eval=ignore_keys_for_eval, 1414 ) 1415 ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1649 tr_loss_step = self.training_step(model, inputs) 1650 else: -> 1651 tr_loss_step = self.training_step(model, inputs) 1652 1653 if ( ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/transformers/trainer.py in training_step(self, model, inputs) 2343 2344 with self.compute_loss_context_manager(): -> 2345 loss = self.compute_loss(model, inputs) 2346 2347 if self.args.n_gpu > 1: ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs) 2375 else: 2376 labels = None -> 2377 outputs = model(**inputs) 2378 # Save past state if it exists 2379 # TODO: this needs to be fixed and made cleaner later. 
~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1129 or _global_forward_hooks or _global_forward_pre_hooks): -> 1130 return forward_call(*input, **kwargs) 1131 # Do not call functions when jit is used 1132 full_backward_hooks, non_full_backward_hooks = [], [] ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/transformers/models/fsmt/modeling_fsmt.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1175 output_attentions=output_attentions, 1176 output_hidden_states=output_hidden_states, -> 1177 return_dict=return_dict, 1178 ) 1179 lm_logits = outputs[0] ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1129 or _global_forward_hooks or _global_forward_pre_hooks): -> 1130 return forward_call(*input, **kwargs) 1131 # Do not call functions when jit is used 1132 full_backward_hooks, non_full_backward_hooks = [], [] ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/transformers/models/fsmt/modeling_fsmt.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 1079 output_attentions=output_attentions, 1080 output_hidden_states=output_hidden_states, -> 1081 return_dict=return_dict, 1082 ) 1083 ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1129 or _global_forward_hooks or _global_forward_pre_hooks): -> 1130 return forward_call(*input, **kwargs) 1131 # Do not call functions when jit is used 1132 full_backward_hooks, non_full_backward_hooks = [], [] ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/transformers/models/fsmt/modeling_fsmt.py in forward(self, input_ids, encoder_hidden_states, encoder_padding_mask, decoder_padding_mask, decoder_causal_mask, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 722 # assert input_ids.ne(self.padding_idx).any() 723 --> 724 x = self.embed_tokens(input_ids) * self.embed_scale 725 x += positions 726 x = nn.functional.dropout(x, p=self.dropout, training=self.training) ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1129 or _global_forward_hooks or _global_forward_pre_hooks): -> 1130 return forward_call(*input, **kwargs) 1131 # Do not call functions when jit is used 1132 full_backward_hooks, non_full_backward_hooks = [], [] ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input) 158 return F.embedding( 159 input, self.weight, self.padding_idx, 
self.max_norm, --> 160 self.norm_type, self.scale_grad_by_freq, self.sparse) 161 162 def extra_repr(self) -> str: ~/pet_projects/fairseq_experiments/venv/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2197 # remove once script supports set_grad_enabled 2198 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2199 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2200 2201 IndexError: index out of range in self ``` ### Expected behavior Could you please help me figure out what's wrong with the trainer?
06-29-2022 15:57:04
06-29-2022 15:57:04
Hi Tatiana, I am just a user and just came across your issue. I have experienced similar things and finally found out it was due to an incompatibility between the tokenizer's vocabulary and the vocabulary sizes of the encoder & decoder (check in the config). Not sure if it is the same reason, but maybe you could check it, too :)<|||||>Apologies, I wasn't able to attend to many transformers issues recently due to the BLOOM training. I had a quick look and the problem stems from the padding id being `-100` here for some reason, which is an invalid negative index, so `torch.embedding` fails to look it up as its keys are all positive indices. Will try to find some time hopefully in the next few days to dive deeper and resolve this.<|||||>I apologize again for taking so long. Please try with this PR: https://github.com/huggingface/transformers/pull/18592 <|||||>Thanks
transformers
17,944
closed
[Bigscience] Non-causal Decoder Generation
@thomasw21 @Muennighoff As we discussed, here's a quick hack to try out Prefix-LM on BLOOM via swapping out the mask for one that always attends to the first `prefix_length` tokens (as in this figure from the pretraining objectives paper). ![image](https://user-images.githubusercontent.com/65563625/176474220-0332f2f1-9f74-4111-92b9-f1d96cf65b51.png) EDIT: I extended this for cleaner interface + drop-in support for `bigscience/lm-eval-harness` [Minimal script I was using to test outputs using this code](https://gist.github.com/haileyschoelkopf/33b9e41d07b9222e995c6cce155724de)
06-29-2022 15:29:48
06-29-2022 15:29:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17944). All of your documentation changes will be reflected on that endpoint.<|||||>Demonstration still here: https://gist.github.com/haileyschoelkopf/33b9e41d07b9222e995c6cce155724de PR now updated to include `BloomForPrefixLM` and passing in a mask tensor of size `[batch_size, 1, input_length, input_length]` Running the bigscience lm-eval-harness fork with Bloom Prefix LM should be as simple as just swapping the ("bloom", "BloomForCausalLM") entry in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES to ("bloom", "BloomForPrefixLM").<|||||>I proposed some adjustments here: https://github.com/haileyschoelkopf/transformers/pull/1 @haileyschoelkopf @thomasw21 <|||||>You should create a test that shows that whatever invariances your model has, it should be tested in the test: - changing a value in the input part changes all the input logits as well as target. - changing a value in the targer only changes logies before and not after (same as language modeling) - `generate` actually works with this function.<|||||>Thanks for the input. Will make above into tests. <|||||>Thanks for the edits and comments Lintang! I may not get to testing these today because of the holiday, but I will do so tomorrow if not. @lintangsutawika I'm happy to also turn your `tests/models/bloom/test_noncausal_attention_bloom.py` test script into unit tests in huggingface/transformers if you'd like.<|||||>@younesbelkada since you're the author of the architecture in `transformers`<|||||>Update on the status of this: I started looking at Lintang's edits today. Commit [0f692c07788a536fec103d83c8acdd5565cccb05](https://github.com/huggingface/transformers/pull/17944/commits/0f692c07788a536fec103d83c8acdd5565cccb05) from me is the last one I had checked for generation; I will continue editing Lintang's changes as necessary tomorrow. If necessary, happy to discuss further the merits of using `prefix_length` vs. passing `prefix_mask` to the model as a design choice--Lintang switched it back to `prefix_length`, and I think I agree with this because it'd be easier to just feed a prefix length to the model when training it on a batch of concatenated input-target sequences using a prefix LM objective.<|||||>I think we'd be much happier is the `attention_mask` was explicitely passed to the model. Typically one of the thing we're using in MTF right now is packing. I don't think packing exists as a technique under current bloom implementation.<|||||>Is the goal for the Transformers version of Bloom to also support packing? My understanding is that even transformers version of T5 doesn't have packing. Having to input an attention mask means there would be two attention masks to input? The regular attention masks that the PretrainedModel object requires and the NonCausal Attention mask? I figured it would be easier for users to just declare the length of the prefix and let the model build the mask? I suppose a compromise would be to have a NonCausalAttentionMask argument but allow the model to accept prefix length as well of the former is not provided.<|||||>I don't have a strong opinion but setting an attention mask explictely vs having `n` different mechanism to update an attention_mask is annoying. Today we're handling prefix, tomorrow we're handling another weird thing. One use-case I can think of is like `{input} <pad> <pad> <pad> {target} <pad> <pad> <pad>` I think this case can happen when you generate. 
Also @patrickvonplaten if you have any input about this. For context, we're going to train a prefix lm and so naturally we have bidirectional attention in the input, and casaul in the target. The current solution are: - pass a prefix length and update the attention mask - pass an explicit attention mask everytime.<|||||>If there is another attention formula, we could always add another BloomFor<Attention Type>LM? In the case for prefix LM, it seems easier to have a prefix length and modify the attention mask. <|||||>I think there's a way to do both. We can allow `prefix_length` as an extra argument to `BloomForPrefixLM`, which will take care of creating the noncausal attention mask if no mask is given as an argument, and pass this mask to the base `BloomModel` forward and use it for attention computation in the forward pass. (`prefix_length` will never be passed to the `BloomModel` this way) I'll start implementing this if there aren't any objections. <|||||>Nice, I would suggest to do it differently though: - Just have one additional kwarg in all forward funcs called `causal_mask` next to attention_mask - If `causal_mask` is None set it to the `torch.tril` default directly in `BloomPreTrainedModel` (i.e. dont recreate it in every layer) - Have one test checking that if it's set with prefixes it's different than the default like `test_equivalence_prefix_causal_lm` I think this is all we would need & then the user just creates its own causal mask which has prefixes. There's no need for separate models to allow users to pass prefix masks or bidirectional masks with skipping like `[1,0,1]` or any other mechanism one may come up with like @thomasw21 said. What do you think? <|||||>I like the idea of having a single `BloomForCausalLM` that can handle the causal_masks be it causal or non causal. I just think that handling the non-causal mask could be done automatically inside the model especially during generation when the intention is to process the input prompt with non-causal mask. <|||||>What if `causal_mask` has 3 possible input options? - `None`: a causal mask is generated in the model - `torch.Tensor` with shape same as input_ids: a manual causal mask (which can also be non-causal) is used. - `torch.Tensor` with shape `batch size * 1`: is a prefix length matrix, a non-causal attention mask is generated in the model<|||||>Hi all, I just went through the comments of this PR but did not checked the modifications yet. We are currently trying to refactor the modeling code in this PR: https://github.com/huggingface/transformers/pull/17866 - one thing that we are doing is to create the mask only once at the `_prepare_attn_mask` function that we pass to all submodules. It might be easier in your use case. I cannot give a fixed timeline on when the refactoring PR will get merged but if you merge this PR before I will take care of refactoring a bit your code ;) ! <|||||>Thanks @Muennighoff for the comments! They make sense and I’ll work on addressing them, but I agree with @lintangsutawika that we should have some way of automatically creating a PrefixLM non-causal mask. If we’re planning on users using Bloom0++ for generation then I think there should be some way of setting causal_mask to a non-causal mask without manually creating it, whether that’s a PrefixLM class or just a flag passed to the CausalLM forward or similar.<|||||>Addressed points 1 and 2 from @Muennighoff ! I left the BloomForPrefixLM class for now. 
The tests I added (including slow tests locally) pass—it’s unrelated tests that fail currently.<|||||>@thomasw21 @Muennighoff FYI, I'm in the middle of refactoring this PR after pulling from main. So for now, if you could use commit `6538564ee1e3f1689ab71b01866aa7771b82edc7` that's the last one that still works. <|||||>> Nice, I would suggest to do it differently though: > > * Just have one additional kwarg in all forward funcs called `causal_mask` next to attention_mask > * If `causal_mask` is None set it to the `torch.tril` default directly in `BloomPreTrainedModel` (i.e. dont recreate it in every layer) > * Have one test checking that if it's set with prefixes it's different than the default like `test_equivalence_prefix_causal_lm` > > I think this is all we would need & then the user just creates its own causal mask which has prefixes. There's no need for separate models to allow users to pass prefix masks or bidirectional masks with skipping like `[1,0,1]` or any other mechanism one may come up with like @thomasw21 said. What do you think? Very much agree with your idea here @Muennighoff ! Think this is the way to go which will be 100% backwards compatible (we could btw also add this to GPT2 etc...) Following @Muennighoff thoughts' here I think we should make the behavior crystal clear in the docstring: ``` If 'causal_mask' is set to 'None' the model will automatically create the conventional causal (unidirectional) attention mask to prevent past tokens to attend to future tokens. If you would like to overwrite this behavior, *e.g.* to create a Prefix-LM architecture, please pass a tensor different to 'None' ``` Also cc @sgugger @LysandreJik here to hear their input on this (since this functionality might be extended to GPT2)<|||||>I also agree that @Muennighoff options sounds better than adding a new model which is very very similar to the causal LM architecture. It also will make the functionality more accessible (since users often use the auto-classes to load their models).<|||||>Thanks for the feedback! Especially since we probably aren't going with Prefix-LM for Bloom-T0, removing the class and just making this an optional argument sounds like a good idea. I'll ping you all after I get around to making these changes!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
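For reference, a minimal sketch of the prefix (non-causal) mask the thread is discussing, independent of where it ends up being built (the function name and shapes are illustrative only): bidirectional attention over the first `prefix_length` tokens, causal attention over the rest.

```python
import torch


def build_prefix_lm_mask(seq_len: int, prefix_length: int) -> torch.Tensor:
    # Start from the usual lower-triangular causal mask ...
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    # ... then let every position attend to the whole prefix.
    mask[:, :prefix_length] = True
    return mask[None, None, :, :]  # [1, 1, seq_len, seq_len], broadcast over batch/heads


print(build_prefix_lm_mask(seq_len=5, prefix_length=2).int()[0, 0])
```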
transformers
17,943
closed
fix regexes with escape sequence
This PR fixes: ``` src/transformers/dynamic_module_utils.py:81 /workspace/transformers/src/transformers/dynamic_module_utils.py:81: DeprecationWarning: invalid escape sequence \s relative_imports = re.findall("^\s*import\s+\.(\S+)\s*$", content, flags=re.MULTILINE) src/transformers/dynamic_module_utils.py:83 /workspace/transformers/src/transformers/dynamic_module_utils.py:83: DeprecationWarning: invalid escape sequence \s relative_imports += re.findall("^\s*from\s+\.(\S+)\s+import", content, flags=re.MULTILINE) src/transformers/dynamic_module_utils.py:125 /workspace/transformers/src/transformers/dynamic_module_utils.py:125: DeprecationWarning: invalid escape sequence \s imports = re.findall("^\s*import\s+(\S+)\s*$", content, flags=re.MULTILINE) src/transformers/dynamic_module_utils.py:127 /workspace/transformers/src/transformers/dynamic_module_utils.py:127: DeprecationWarning: invalid escape sequence \s imports += re.findall("^\s*from\s+(\S+)\s+import", content, flags=re.MULTILINE) src/transformers/modeling_utils.py:222 /workspace/transformers/src/transformers/modeling_utils.py:222: DeprecationWarning: invalid escape sequence \d bit_search = re.search("[^\d](\d+)$", str(dtype)) ``` @sgugger
06-29-2022 14:59:06
06-29-2022 14:59:06
_The documentation is not available anymore as the PR was closed or merged._
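For reference, those `DeprecationWarning`s come from `\s`/`\S`/`\d` escapes inside regular string literals; the standard fix (presumably what this PR applies) is to switch the patterns to raw strings, e.g.:

```python
import re

content = "from .configuration_utils import PretrainedConfig\nimport .modeling_utils\n"

# Raw strings keep `\s` and `\S` as regex escapes instead of (deprecated) string escapes.
relative_imports = re.findall(r"^\s*import\s+\.(\S+)\s*$", content, flags=re.MULTILINE)
relative_imports += re.findall(r"^\s*from\s+\.(\S+)\s+import", content, flags=re.MULTILINE)
print(relative_imports)  # ['modeling_utils', 'configuration_utils']
```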
transformers
17,942
closed
Use explicit torch version in deepspeed CI
# What does this PR do? Use explicit torch version in DeepSpeed CI docker file, as we do in https://github.com/huggingface/transformers/blob/d49c43e93fedc1ff7d58a3617fc8a3532af054ba/docker/transformers-all-latest-gpu/Dockerfile#L12
06-29-2022 14:04:03
06-29-2022 14:04:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>OK for me to use 1.12. We might see more test failures (for scheduled daily CI). Current CircleCI pins 1.11 to have green CI.
transformers
17,941
closed
Getting only <|endoftext|> token in GPT-NEOX-20B model
### System Info Transformers version: 4.20.1 Python: 3.8 Ubuntu: 18.04 ### Who can help? @patil-suraj ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction - We are creating a streaming app using HF transformers generate(), but during the token decoding we are only getting "<|endoftext|>". When an input prompt is passed, no tokens are generated from the model. The model we are using is GPT-NEOX-20B. - We have tried 3 tokenizers, GPTNeoXTokenizerFast, AutoTokenizer, and GPT2TokenizerFast, but all of them returned the same output. - Below are the generate parameters. ``` output_sequences = model.generate( input_ids=input_ids, max_length=200, temperature=temperature, top_k=k, top_p=p, repetition_penalty=repetition_penalty, do_sample=False, num_return_sequences=num_return_sequences, filename=filename, tokenizer=tokenizer, num_beams=1, use_cache=False ) ``` - Where we are receiving empty tokens is https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py line 1740, where we are trying to decode the next tokens. # finished sentences should have their next token be a padding token ``` if eos_token_id is not None: if pad_token_id is None: raise ValueError("If `eos_token_id` is defined, make sure that `pad_token_id` is defined.") next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - unfinished_sequences) tensor_text = tokenizer.decode(next_tokens, clean_up_tokenization_spaces=True) ``` - A confirmation for the code in file https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neox/modeling_gpt_neox.py at line 146: what logic should be there, `if use_cache` or `if not use_cache`? `present = None if use_cache else (key, value)` ### Expected behavior The above code works well with all the GPT-NEO models, but with GPT-NEOX we are facing the <|endoftext|> token issue. Instead of <|endoftext|> tokens, we need to generate the correct tokens.
06-29-2022 13:50:35
06-29-2022 13:50:35
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
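When debugging reports like this one, a useful first step is to run the stock `generate()` without the custom streaming changes (`filename` and `tokenizer` are not standard `generate()` arguments); a minimal sketch, assuming enough memory for the 20B checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

inputs = tokenizer("GPT-NeoX-20B is a", return_tensors="pt")
# Plain greedy decoding through the unmodified generation loop.
output_ids = model.generate(**inputs, max_length=50, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If this baseline already degenerates to `<|endoftext|>`, the problem is in the model or weights; if it looks fine, the issue is more likely in the modified decoding loop.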
transformers
17,940
closed
fix `bias` keyword argument in TFDebertaEmbeddings
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes an issue (caused by a typo) that occurs when attempting to create TF Deberta models (v1 and v2) where the `embedding_size` and `hidden_size` are different. here is a link to a colab demo that highlights the issue and checks if it was fixed. https://colab.research.google.com/drive/1dScSEBeaBnvgV9504MG0OKUokeDVgBia?usp=sharing ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? No test was needed ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-29-2022 13:49:56
06-29-2022 13:49:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>Can you review and approve this PR ? @sgugger @LysandreJik Thank you
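A minimal reproduction of the configuration described above (mirroring the linked colab; the exact config values are illustrative, and `embedding_size` is assumed to be picked up by the embeddings layer as the PR implies):

```python
import tensorflow as tf
from transformers import DebertaV2Config, TFDebertaV2Model

# embedding_size != hidden_size is the case that used to fail in TFDebertaEmbeddings.
config = DebertaV2Config(
    vocab_size=100,
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    embedding_size=16,
)
model = TFDebertaV2Model(config)
outputs = model(tf.constant([[1, 2, 3, 4]]))
print(outputs.last_hidden_state.shape)  # (1, 4, 32)
```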
transformers
17,939
closed
Fix img seg tests (load checkpoints from `hf-internal-testing`)
# What does this PR do? Tests use `tiny-detr` checkpoints from `hf-internal-testing` org from hub <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-29-2022 13:41:47
06-29-2022 13:41:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger please feel free to merge it 😊 (the merge btn is not available)
transformers
17,938
closed
Add OWL-ViT model for zero-shot object detection
# What does this PR do? - Adds OwlViT model for open-vocabulary object detection. Model takes in one or multiple text queries per image as input. Original repo: https://github.com/google-research/scenic/tree/a41d24676f64a2158bfcd7cb79b0a87673aa875b/scenic/projects/owl_vit Test notebook: https://colab.research.google.com/drive/1IMPWZcnlMy-tdnTDrUcOZU3oiGg-hTem?usp=sharing @sgugger could you review my draft PR, please?
06-29-2022 13:35:08
06-29-2022 13:35:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>Any plan to extend it for TensorFlow version? There seems to be [conversion script](https://github.com/google-research/scenic/tree/a41d24676f64a2158bfcd7cb79b0a87673aa875b/scenic/projects/owl_vit#conversion-to-tensorflow) officially. <|||||>Hi @innat. Yes, @alaradirik is already working on it! The PR is here: https://github.com/huggingface/transformers/pull/18450 You can find out which models are being implemented by searching the open issues and PRs [for example](https://github.com/huggingface/transformers/pulls?q=is%3Apr+is%3Aopen+owlvit)
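For readers landing on this PR, a short usage sketch of the PyTorch model it adds (the checkpoint name and output fields below are assumptions; the linked notebook is the authoritative example):

```python
import requests
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a remote control"]]  # one list of queries per image

inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.logits.shape)      # (batch_size, num_boxes, num_queries) query scores per box
print(outputs.pred_boxes.shape)  # (batch_size, num_boxes, 4) predicted box coordinates
```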
transformers
17,937
closed
Avoid nan during sampling in generate()
# What does this PR do? Fix CI test error ```bash # sample probs = nn.functional.softmax(next_token_scores, dim=-1) > next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1) E RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 ``` in https://github.com/huggingface/transformers/runs/6959698965?check_suite_focus=true The test `test_sample_generate` may still fail at https://github.com/huggingface/transformers/blob/8f400775fc5bc1011a2674dcfd5408d30d69f678/tests/generation/test_generation_utils.py#L711 for some unknown reason. I think it is better to investigate this in another PR.
06-29-2022 13:20:36
06-29-2022 13:20:36
I have some doubts here, as this will make all tokens have equal probability of being sampled. But with all `-inf`, nothing could be sampled, which leads to an error. I feel there is no well-defined expected result in such edge cases.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, that happens only when all scores are `-inf` along the vocab dim. I will close this PR, and we should maybe create a doc with all possible flaky tests :-)
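The edge case is easy to reproduce in isolation: when every score is `-inf`, softmax returns NaNs and `torch.multinomial` rejects them. A tiny sketch (not the CI test itself):

```python
import torch
import torch.nn.functional as F

scores = torch.full((1, 5), float("-inf"))  # every candidate token is banned
probs = F.softmax(scores, dim=-1)
print(probs)  # tensor([[nan, nan, nan, nan, nan]])

try:
    torch.multinomial(probs, num_samples=1)
except RuntimeError as e:
    print(e)  # probability tensor contains either `inf`, `nan` or element < 0
```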
transformers
17,936
closed
Fix all is_torch_tpu_available issues
# What does this PR do? This PR should fix up all `torch_xla` initialization caused on import with the new check for if a TPU is available by following the same structure as [accelerate](https://github.com/huggingface/accelerate/pull/469) Fixes # (issue) Fixes #17752 Fixes #17900 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
06-29-2022 13:05:28
06-29-2022 13:05:28
_The documentation is not available anymore as the PR was closed or merged._
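The accelerate-style check referenced above boils down to: look for the `torch_xla` package first, and only then try to acquire an XLA device. A simplified sketch of the idea (not the exact code merged here):

```python
import importlib.util

def is_torch_tpu_available(check_device: bool = True) -> bool:
    # Cheap check first: bail out without importing torch_xla if it isn't installed.
    if importlib.util.find_spec("torch_xla") is None:
        return False
    if check_device:
        try:
            # Import lazily and verify that an XLA device can actually be acquired.
            import torch_xla.core.xla_model as xm

            xm.xla_device()
            return True
        except RuntimeError:
            # e.g. "Missing XLA configuration" when no TPU is attached.
            return False
    return True
```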
transformers
17,935
open
TF: XLA generation not working properly in some models
This issue is used to track TensorFlow XLA generation issues, arising from #17857. There are three categories of issues, sorted in descending order by severity: ### Key model issues These are heavily-used models, whose quality should be prioritized. - [x] T5 -- The quality of the results decreases with `max_length`. See [here](https://github.com/huggingface/transformers/pull/17857/files#r906702367). - [x] GPT-J -- fails simple generate tests with numerical issues ### Models failing basic tests These models are failing `test_xla_generate_fast` -- a short greedy generation. - [ ] LED - [ ] Speech2Text - [ ] XLNet - [ ] XGLM ### Models failing complex tests These are models failing `test_xla_generate_slow` -- a long beam search generation. - [x] Bart - [x] Blenderbot - [x] Marian - [x] mbart - [x] OPT - [x] Pegasus
06-29-2022 12:00:15
06-29-2022 12:00:15
@gante do you require any help with this issue? Happy to contribute<|||||>Hi @anmolsjoshi 👋 If you are comfortable with debugging XLA, absolutely :) My recommendation would be to pick a model from "Models failing complex tests" (the others might require significant architecture changes), and to start debugging. The number 1 suspect is always the position embeddings, which may not be handling the case where `past` is padded. Let me know if you are up to it, and which model would you like to take! <|||||>Hi @gante, I did have a bit of a poke around. I think the complex tests all fail for the same reason: those models have a setting `max_position_embeddings` that is set to 20 by default during testing and which is too short for the “slow” tests. Here’s a simple fix for those: https://github.com/dsuess/transformers/commit/4a3e27164ae941fcd649b8565d7d92a4552d689f. I’ll give the other ones a shot now<|||||>Hello @gante, may I ask if there is anything that I can contribute? <|||||>Hi JuheonChu 👋 Actually yes! I have a few unchecked models at the top, but I wouldn't recommend spending time there unless you plan to use those architectures -- they are infrequently used. However, two popular models are currently failing their XLA tests with beam search: - Marian - OPT You can see the failing test if you install from `main` (`pip install --upgrade git+https://github.com/huggingface/transformers.git`) and run it e.g. for OPT `NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 py.test -vv tests/models/opt/test_modeling_tf_opt.py::TFOPTModelTest::test_xla_generate_slow` I haven't dived in yet, so I don't know what's the cause for the failure. You'll have to hop into debug mode and see what is breaking :)<|||||>Can @katiele47 and I try working on them? <|||||>@JuheonChu of course!<|||||>> @JuheonChu of course! @gante Are we figuring out the cause of the testing failures based on the clues as follows? ![Error 1](https://user-images.githubusercontent.com/35699839/219711119-9459c1d9-22c4-4673-9b49-2e4815515a96.png) ![Error 2](https://user-images.githubusercontent.com/35699839/219711143-135b5069-62ca-4622-823a-6df3fe572318.png) ![Error 3](https://user-images.githubusercontent.com/35699839/219711156-00580d72-2cc2-4acc-a5a0-7bccf7392097.png) <|||||>@JuheonChu yes. My suggestion would be to attempt to find where the numerical differences start from (between the XLA and the non-XLA version), using a debugger. Please note that you can't print variables with `jit_compile=True`, so you should set it to `False`. From there, the root cause is typically apparent. Be warned, these sort of tasks sometimes are very time-consuming to complete :)<|||||>> @JuheonChu yes. My suggestion would be to attempt to find where the numerical differences start from (between the XLA and the non-XLA version), using a debugger. Please note that you can't print variables with `jit_compile=True`, so you should set it to `False`. From there, the root cause is typically apparent. > > Be warned, these sort of tasks sometimes are very time-consuming to complete :) Thank you very much for your valuable guidance! We will try and keep you updated!<|||||>Hi @gante, I've attempted to reproduce the failed XLA test on the OPT model using your suggested commands. The cause of error I had was somehow different from @JuheonChu's. Would you be able to verify if the following is the expected failing test output? If not, I assume it could be due to my local repo. Thanks! 
<img width="1015" alt="Screen Shot 2023-02-21 at 11 20 44 PM" src="https://user-images.githubusercontent.com/54815905/220521074-6ab355c4-fe0b-42b5-88fa-cbd15de82b8a.png"> <img width="1125" alt="Screen Shot 2023-02-21 at 11 21 24 PM" src="https://user-images.githubusercontent.com/54815905/220521160-946a03db-cf80-4fbc-818c-3be337df3983.png"> <img width="1135" alt="Screen Shot 2023-02-21 at 11 21 43 PM" src="https://user-images.githubusercontent.com/54815905/220521210-981d6099-e093-4b8b-9913-2dafe5bef905.png"> <|||||>@gante working on XLNet
transformers
17,934
closed
Unifying training argument type annotations
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes the incorrect type annotations in the `TrainingArguments` class. Some arguments can handle complex types like `evaluation_strategy: IntervalStrategy`. However, when calling from CLI, they can be initialized using a `str` as well which is not reflected in the type annotations. **Solution**: Fix the type annotations to `Union[ComplexType, str]`. Note that this PR simply ensures consistency between the docstrings and the annotated types. E.g., the docstring for `evaluation_strategy` is already: ```txt evaluation_strategy (`str` or [`~trainer_utils.IntervalStrategy`], *optional*, defaults to `"no"`): The evaluation strategy... ``` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Please have a look @sgugger! <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-29-2022 11:03:23
06-29-2022 11:03:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>I understand that the tests are failing because of: ```txt ValueError: Only `Union[X, NoneType]` (i.e., `Optional[X]`) is allowed for `Union` because the argument parser only supports one type per argument. Problem encountered in field 'evaluation_strategy'. ``` Could this issue of not supporting multiple types be fixed in HF itself? I guess no since it's introduced upstream by `argparse.add_argument` not allowing multiple types? @sgugger Ultimately, this inconsistency between the type annotations and the actually possible types is caused by HF. I think it's quite problematic because it's a systematic inconsistency that makes things appear more complex than they are. If the annotation has to be a single type, then should it not be the simplest type that can actually be used, in this case `str`? In that way, HF would be minimally invasive wrt downstream packages that indeed have stricter type annotation requirements.<|||||>The type is not perfectly exact, but note that: 1. the argument will be converted to that type in the post-init, so while the init of the dataclass accepts both `str` and `IntervalStrategy` (or other enum types), the attribute will always be of the enum type. 2. having the enum as main type allows us to properly fill the `choices` part of the parser for CLI help. So to be able to accept the change in type, we would need some custom code in `HfArgumentParser` to not only stop erroring on those types, but also properly fill the `choices` part. If you're interested in exploring this further, those are the missing steps we would need in order to merge this PR.<|||||>Looking at this from the surface, it seems that this PR is (partially?) covered by the PR that was merged earlier today? https://github.com/huggingface/transformers/pull/17933 There: - "Complex" enum types like IntervalStrategy (all that subclass `ExplicitEnum`) now also subclass `str`. As a consequence, the argparse equivalents also accept any string value - `HfArgumentParser._parse_dataclass_field` (that gives you the Union error) has been updated to allow Unions that include a `str`, because the `str` type is never an issue for argparse (as it's the default)<|||||>We don't have the error anymore, but we are still losing the autofill of "choices" and all the custom logic we had for enums [here](https://github.com/huggingface/transformers/blob/fbc7598babd06a49797db7142016f0029cdc41b2/src/transformers/hf_argparser.py#L105).<|||||>Thanks a lot @BramVanroy, that's a nice coincidence!! @sgugger: Could we move up that logic about the autofill to an `elif` starting at L94?<|||||>I think there should be an if at line 94 that replaces the `field.dtype` by the `field.type.__args__` which is not `str` (like we replace the `field.dtype` that is not None below line 95 for `Optional`), then line 105 and the test for enums will be triggered properly. Basically something like: ```py if type(None) not in field.type.__args__: # filter `str` in Union field.type = field.type.__args__[0] if field.type.__args__[1] == str else field.type.__args__[1] origin_type = getattr(field.type, "__origin__", field.type) elif bool not in field.type.__args__: ``` before and replacing the line ```py if bool not in field.type.__args__: ``` <|||||>Just did that!<|||||>Thanks! Will play a bit with it tomorrow morning to triple-check nothing breaks then it should be good to merge!<|||||>All good in my tests, thanks again for your work on this!
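Putting this PR and #17933 together, the end state discussed above allows a field like the following; the dataclass and field below are illustrative, not the exact TrainingArguments code:

```python
from dataclasses import dataclass, field
from typing import Union

from transformers import HfArgumentParser
from transformers.trainer_utils import IntervalStrategy


@dataclass
class MyArguments:
    # Accepts "no" / "steps" / "epoch" on the CLI; the parser fills `choices` from the enum.
    evaluation_strategy: Union[IntervalStrategy, str] = field(
        default="no", metadata={"help": "The evaluation strategy to use."}
    )


parser = HfArgumentParser(MyArguments)
(args,) = parser.parse_args_into_dataclasses(args=["--evaluation_strategy", "steps"])
print(args.evaluation_strategy)
```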
transformers
17,933
closed
ExplicitEnum subclass str (JSON dump compatible)
I found that when I wanted to write the parsed dataclasses that I get from `HfArgumentParser.parse_args_into_dataclasses()` to JSON, that I would get JSON errors. The reason being that `TypeError: Object of type IntervalStrategy is not JSON serializable`. While this is understandable (Enum members are not serializable), this is not ideal within `transformers`. I checked all items in `transformers` that subclass `ExplicitEnum` and it seems that they are all `str`-only Enums. That would allow us to have them inherit from `str`, too, which solves the JSON issue. JSON can then make use of its `str` class for serialization. Below is a minimal - but full - example to show how this would work: ``` from enum import Enum from json import dump, loads from pathlib import Path class ExplicitEnum(str, Enum): # If you remove `str` you'll get a serialization error """ Enum with more explicit error message for missing values. """ @classmethod def _missing_(cls, value): raise ValueError( f"{value} is not a valid {cls.__name__}, please select one of {list(cls._value2member_map_.keys())}" ) class IntervalStrategy(ExplicitEnum): NO = "no" STEPS = "steps" EPOCH = "epoch" if __name__ == "__main__": strat = IntervalStrategy("no") print(strat) p = Path("strat_dump.json") with p.open("w", encoding="utf-8") as out: dump({"strategy": strat}, out, indent=4, sort_keys=True) loaded = loads(p.read_text(encoding="utf-8")) strat = IntervalStrategy(loaded["strategy"]) print(strat) ``` A consequence is that now these ExplicitEnums will have a Union type, which originally lead to issues when using `HfArgumentParser._parse_dataclass_field`. Therefore, I added an exception to `_parse_dataclass_field` to allow for a Union if one of the types is `str`, assuming that a given string value to the argparser will be resolved correctly, because it is one of the accepted types. ## Who can review? @sgugger
06-29-2022 10:30:58
06-29-2022 10:30:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>The following tests are failing but that seems unrelated: tests/pipelines/test_pipelines_object_detection.py::ObjectDetectionPipelineTests::test_small_model_pt tests/pipelines/test_pipelines_image_segmentation.py::ImageSegmentationPipelineTests::test_small_model_pt <|||||>Yes, I skipped those tests on main for now. Let me play a little bit with this, it seems like a good idea but I want to make sure it doesn't break anything before merging.<|||||>Tested and it all looks good, thanks a lot!
transformers
17,932
closed
Fix LayoutLMv3 documentation
# What does this PR do? - Fixes documentation of LayoutLMv3Model and some other typos. Fixes # (issue) #17833 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge
06-29-2022 09:10:34
06-29-2022 09:10:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>Applied your suggestions!<|||||>Thanks, could you run `make style` and `make quality` from the root of the repo? This ensures the code quality check will pass. <|||||>Applied and all checks are passing!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger I change into two versions. Could you re-open this pull request?
transformers
17,931
closed
Large model loading: add link to existing documentation
The documentation for large model loading is in two different places. This adds a link from one to the other, showing the auto device map.
06-29-2022 08:27:06
06-29-2022 08:27:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hmmm yes, I agree, but this is still linking to `from_pretrained`, just a part of the documentation that contains more information about the `low_cpu_mem_usage`. The part I have removed glosses over it quickly, whereas the part I link to has extensive documentation covering both `low_cpu_mem_usage` and the `device_map` argument to pass to `from_pretrained`. Reading the documentation right now, if we're interested in big models and we open the "Instantiating a big model" page in the toctree, there are no mention of the `device_map`. This is what this PR aims to fix.
transformers
17,930
closed
Fix typo
# What does this PR do? Only a typo, but it can lead to confusion ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
06-29-2022 07:03:02
06-29-2022 07:03:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,929
closed
"zero-shot-image-classification" pipeline with `VisionTextDualEncoderModel` needs manual feature_extractor and tokenizer input
### System Info ```shell transformers: 4.20.1 platform: windows 11, google colab ``` ### Who can help? @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python # works from transformers import pipeline pipe = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" pipe(images=url, candidate_labels=["a photo of one cat", "a photo of two cats"], hypothesis_template="{}") ``` ```python # error from transformers import pipeline pipe2 = pipeline("zero-shot-image-classification", model="Bingsu/vitB32_bert_ko_small_clip") url = "http://images.cocodataset.org/val2017/000000039769.jpg" pipe2(images=url, candidate_labels=["고양이 한 마리", "고양이 두 마리"], hypothesis_template="{}") ``` ```sh TypeError Traceback (most recent call last) [<ipython-input-8-c1bcb0faaf45>](https://localhost:8080/#) in <module>() ----> 1 pipe2(images=url, candidate_labels=["고양이 한 마리", "고양이 두 마리"], hypothesis_template="{}") 3 frames [/usr/local/lib/python3.7/dist-packages/transformers/pipelines/zero_shot_image_classification.py](https://localhost:8080/#) in preprocess(self, image, candidate_labels, hypothesis_template) 90 for i, candidate_label in enumerate(candidate_labels): 91 image = load_image(image) ---> 92 images = self.feature_extractor(images=[image], return_tensors=self.framework) 93 sequence = hypothesis_template.format(candidate_label) 94 inputs = self.tokenizer(sequence, return_tensors=self.framework) TypeError: 'NoneType' object is not callable ``` [Colab](https://colab.research.google.com/drive/1CHrjJ7f7JcyMrEIcK18ieUHvS_1xKJKm?usp=sharing) Currently I'm using it like this: ```python from transformers import AutoModel, AutoProcessor, pipeline model = AutoModel.from_pretrained("Bingsu/vitB32_bert_ko_small_clip") processor = AutoProcessor.from_pretrained("Bingsu/vitB32_bert_ko_small_clip") pipe = pipeline("zero-shot-image-classification", model=model, feature_extractor=processor.feature_extractor, tokenizer=processor.tokenizer) ``` ### Expected behavior work with `pipeline("zero-shot-image-classification", model="Bingsu/vitB32_bert_ko_small_clip")`
06-29-2022 06:29:23
06-29-2022 06:29:23
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>hi sorry for long reply, didn't see this until today: The model `https://huggingface.co/Bingsu/vitB32_bert_ko_small_clip` is a VisionTextDualEncoder, but it's not defined within the `AutoFeatureExtractor` meta class (@NielsRogge ) so the pipeline doesn't know about it and cannot load the `feature_extractor` that's why passing it manually works. Basically the issue lies in transformers when we added this model, it wasn't properly configured. Cheers.<|||||>@NielsRogge I also cannot find a small testing model here: https://huggingface.co/hf-internal-testing For this dual model, is that normal ?<|||||>> Basically the issue lies in transformers when we added this model, it wasn't properly configured. This was an incorrect assumption on my end. Those types of models are more generic, so they don't provide and `AutoFeatureExtractor`/`AutoTokenizer` property, so it's **normal** for them to fail. Will update the pipeline loading to make it work<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Fixed per #18392
transformers
17,928
closed
Fix trainer seq2seq qa.py evaluate log
# What does this PR do? <!-- This PR fixes the following: if eval tries to log eval logs with prediction_loss_only and logging_dir, they will not be logged, so I changed it so that logs will be saved. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @patil-suraj
06-29-2022 04:30:13
06-29-2022 04:30:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,927
closed
fix: eval logs is not saved
# What does this PR do? <!-- This PR fixes the following: if eval tries to log eval logs with prediction_loss_only and logging_dir, they will not be logged, so I changed it so that logs will be saved. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @patil-suraj
06-29-2022 04:07:29
06-29-2022 04:07:29
Sorry, I forgot to ping reviewers @patil-suraj @sgugger<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
17,926
closed
Remove imports and use forward references in ONNX feature
# What does this PR do? Type annotations should not be responsible for imports, so this moves the pretrained models import in the ONNX feature file inside a TYPE_CHECKING block and uses forward references instead.
06-28-2022 19:41:41
06-28-2022 19:41:41
_The documentation is not available anymore as the PR was closed or merged._
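The pattern in question, in generic form (the function and imported names below are illustrative, not the exact ones in the ONNX feature file):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only evaluated by type checkers; never executed at runtime, so the
    # annotation does not force a heavy import when the module is loaded.
    from transformers import PreTrainedModel


def describe(model: "PreTrainedModel") -> str:
    # The quoted annotation is a forward reference, resolved lazily if at all.
    return model.__class__.__name__
```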
transformers
17,925
closed
Fix compatibility with 1.12
# What does this PR do? Fixes the scatter tests by installing torch_scatter wheels for PyTorch 1.12.
06-28-2022 19:17:16
06-28-2022 19:17:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>We'll need a fix for all Wav2Vec2-like models it seems. Opened an issue here: https://github.com/pytorch/pytorch/issues/80569<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Should this be merged or closed @sgugger?<|||||>We're still not supporting PyTorch 1.12, so this shouldn't be closed.<|||||>Good news: we can go for `torch 1.12.1`. But FYI: `https://pytorch-geometric.com/whl/torch-1.12.1+cpu.html` page doesn't exist, so I keep it as `1.12.0`
transformers
17,924
closed
Add ViltForTokenClassification e.g. for Named-Entity-Recognition (NER)
# What does this PR do? Adding ViltForTokenClassification in order to be able to fine-tune ViLT for a token classification task (e.g. as Named-Entity-Recognition). This allows leveraging both image and text for token classification tasks using ViLT. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? @xhluca , @LysandreJik, @NielsRogge, @ydshieh
06-28-2022 13:03:43
06-28-2022 13:03:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @gilad19, the implementation and internals look good to me! I'll defer to @NielsRogge regarding the forward call implementation and documentation.<|||||>Regarding the use case - exactly, apply NER on a piece of text for which you also have a visual information. <|||||>Hi @NielsRogge - a gentle reminder :) <|||||>Feel free to merge when satisfied @NielsRogge
transformers
17,923
closed
skip some gpt_neox tests that require 80G RAM
# What does this PR do? GPT-NeoX requires ~80G RAM to run. Our CI runners have only 60G RAM. Skip a few tests for now. Do you think it's better to use something like ```python @unittest.skipUnless(psutil.virtual_memory().total / 1024 ** 3 > 80, "GPT-NeoX requires 80G RAM for testing") ``` The problem is that `psutil` is not in the requirements.
06-28-2022 12:23:05
06-28-2022 12:23:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>What Sylvain said and I'd ask an even more different question - why are we running the same test on many identical models of different sizes. The purpose of our test suite is not to test models on the hub, it's to test the model's code. So such tests should never be there in the first place. - 99% of the time the tests should be run against tiny random models, most of which reside under https://huggingface.co/hf-internal-testing - these are functional tests. - 1% of tests should be against the smallest non-random model to test the quality of the results. And typically these are `@slow` tests. Of course, the % breakdown is symbolic, the point I was trying to convey is that most tests should be really fast in download and execution. --------------- If there is a need to test models on the hub, there should be another CI that all it does is loading the models and performs some basic test on them. That CI would need to have a ton of CPU and GPU memory and # of GPUs for obvious reasons - e.g. t5-11b and other huge models.<|||||>Hi @stas00 The related tests here are decorated with `@slow` and run in the daily scheduled CI, not push CI. And only one size is tested `GPT_NEOX_PRETRAINED_MODEL_ARCHIVE_LIST[:1]`. For `test_model_from_pretrained`, I think we can use tiny random models in `hf-internal-testing` for `GPTNeoX` if we want to keep the test. However, we always have integration tests (like `GPTNeoXModelIntegrationTest`) which are important to have. Note that on scheduled CI, we use a cache server (FileStore on GCP), so there is no real downloading (e.g. the downloading is very fast, happening between GCP's network). They also have 16 vCPUs and 60G RAM.<|||||>Ah, good point, I missed `[:1]` - why then there is a loop then? ``` for model_name in GPT_NEOX_PRETRAINED_MODEL_ARCHIVE_LIST[:1]: ``` probably should write out explicitly the desired smallest real model then and perhaps it's small enough to fit? <|||||>The main point is that GPT-Neo-X does not come with a smaller pretrained model, there is only the 20B version.<|||||>> Ah, good point, I missed `[:1]` - why then there is a loop then? > > ``` > for model_name in GPT_NEOX_PRETRAINED_MODEL_ARCHIVE_LIST[:1]: > ``` > > probably should write out explicitly the desired smallest real model then and perhaps it's small enough to fit? I think this is from old code. We don't want to maintain `...PRETRAINED_MODEL_ARCHIVE_LIST` anymore, and for some models, we do use the explicit checkpoint name. I will just remove the 2 tests here.<|||||>Removed. Will rebase on main later to see if tests all pass<|||||>I am ready for the merge :-)
transformers
17,922
closed
fixing fsdp autowrap functionality
### What does this PR do? 1. PyTorch has renamed default_auto_wrap_policy to size_based_auto_wrap_policy. This PR updates the same. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
06-28-2022 11:43:09
06-28-2022 11:43:09
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for the fix. I think we should proceed differently to still support the previous nightly builds, and import the old name as the new name then. Hello, I have updated the version requirement to the stable torch version `1.12.0` and this version has the updated name for the function. The support in `1.11.0` couldn't allow for saving model when using FSDP.
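For reference, the backward-compatible alternative suggested in the review (importing the old name as the new one) would look roughly like the sketch below; the PR instead went with requiring torch >= 1.12:

```python
try:
    # PyTorch >= 1.12 uses the new name.
    from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy
except ImportError:
    # Earlier FSDP builds exposed the same policy under the old name.
    from torch.distributed.fsdp.wrap import (
        default_auto_wrap_policy as size_based_auto_wrap_policy,
    )
```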
transformers
17,921
closed
Update notification service
# What does this PR do? ~~**Let me run a dummy test before merge**~~ - Fix failure report blocks (the tables) with too long text (might happen for past CI) - similar to #17630 - A complete version of (long) tables are saved as artifacts - Still send successful report if it is not push CI (we are close to 0 failure now) A dummy run with very long blocks https://github.com/huggingface/transformers/runs/7094507618?check_suite_focus=true
06-28-2022 10:02:09
06-28-2022 10:02:09
_The documentation is not available anymore as the PR was closed or merged._<|||||>To make this PR visible again, @LysandreJik . **More context**: this is mainly for past CI - the summary table(s) could be very long as there are many more test failures. **Update**: I will try to save a complete table as an artifacts, so we have it. <|||||>PR ready again for review.<|||||>Perfect!
transformers
17,920
closed
Fix DisjunctiveConstraint edge case and add ConjunctiveDisjunctiveConstraint
# What does this PR do? - implement [`AC automaton`](https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm) to supersede Trie to fix DisjunctiveConstraint edge case - add ConjunctiveDisjunctiveConstraint to handle the complex combinations between multiple conjunctive and disjunctive constraints - update stronger unit tests <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) #17831 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @patrickvonplaten, @cwkeam <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-28-2022 09:29:41
06-28-2022 09:29:41
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17920). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @boy2000-007man, Thanks for the fix proposal! @cwkeam could you take a look here as well? :-) @boy2000-007man - it'd be really nice if you could add a test that would have failed without your fix, but will now pass. Thanks a lot for working on this!<|||||>Hey @boy2000-007man, Thanks a lot for the PR - I'm a bit worried about adding so much new code to main transformers to catch an edge case and wonder if it's really worth it. The problem is that this function will quickly become unmaintainable (it already sadly is to some extent) - in your opinion is it absolutely necessary to add this edge case? Also could you maybe provide a "real" generation example that shows how the current implementation fails?<|||||>Hi, @patrickvonplaten, the current code implementation is complex to support both the existing `DisjunctiveConstraint` and newly added `ConjunctiveDisjunctiveConstraint` at the same time. I can add a much-simplified version dedicated to back `DisjunctiveConstraint` only, and the new `ConjunctiveDisjunctiveConstraint` is not used by the library default but requires manual import, so won't break any existing works by chance. Finding a failure case is not that straightforward especially without deep understanding of specific model preference, but I can image some constraints like `the small cat/small cats`, `the united states/united kingdom` may be influenced.<|||||>Hey @boy2000-007man, Sorry to reply so late here. Will gently ask @cwkeam in private if he could take a quick look because he's the most familiar with the current code. if there is no answer, I'll come back to it and dive deeper into the code to be able to better answer here. Also cc @gante if you're feeling curios on complex code ;-)<|||||>Hi @boy2000-007man 👋 I was having a look into this PR, and one thing I noticed was that the objective of the PR was not immediately clear -- it says at the top that it fixes an edge case but... what edge case? We can find the answer to that in the code, especially in the docstring of `ConjunctiveDisjunctiveConstraint`. I do think we should fix the edge case, as the documented behavior does not match the actual behavior. However, adding clear examples (as in #15761) will be extremely useful for our future selves 🙏 It will also helps the reviewers seeing the value of the PR :D <|||||>Hi, @gante , Sorry for the late reply. The edge case is mentioned in the associated bug report, https://github.com/huggingface/transformers/issues/17831. Do you mean to mention it again in the docstring?<|||||>> Do you mean to mention it again in the docstring Yes please, but with input strings (as opposed to tokens). It's hard to justify adding so many new lines of code without a clear example of why it matters :) We have very limited maintenance capacity, so sometimes it's preferable to have an incomplete short solution that we can maintain than a complete long solution that will accumulate bugs as we introduce new features.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
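For readers unfamiliar with the feature under discussion, disjunctive constraints are driven through `generate()` roughly as in the sketch below (the model and phrases are illustrative; whether the reported edge case changes the output depends on how the competing phrases tokenize, as noted above):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DisjunctiveConstraint

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# "Generate one of these phrases": the trie/automaton has to track partial matches
# across alternatives, which is where the reported edge case lives.
phrases = ["the small cat", "small cats"]
nested_ids = [tokenizer(p, add_special_tokens=False).input_ids for p in phrases]
constraint = DisjunctiveConstraint(nested_ids)

inputs = tokenizer("summarize: A short story about cats.", return_tensors="pt")
out = model.generate(**inputs, constraints=[constraint], num_beams=4, max_length=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```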
transformers
17,919
closed
Enable Past CI
# What does this PR do? Enable Past CI
06-28-2022 09:13:06
06-28-2022 09:13:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>> merge it ... and monitor failures. I read your comment too quickly. So far the Past CI will be triggered only on pushing to `run-past-ci*` branches. I ran it ~ June 20 however, and I opened #18181 today. I think we can launch past CI monthly or even bimonthly. Please let me know if you have different opinion, @LysandreJik. Thanks.
transformers
17,918
closed
Pin black to 22.3.0 to benefit from a stable --preview flag
Pins black to 22.3.0 in order to benefit from the `--preview` flag continuously. This flag adds reformatting for strings, exceptions, logs, and more. The recent black 22.6.0 release's `--preview` flag isn't compatible with 22.3.0's and results in line changes. This PR pins 22.3.0 as it was deemed the path with the least friction.
06-28-2022 08:27:37
06-28-2022 08:27:37
Merging now as the code quality passes so that as few PRs are impacted as possible.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot!
transformers
17,917
closed
Fix #17893, removed dead code
# What does this PR do? Fixes #17893 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ydshieh
06-28-2022 08:16:45
06-28-2022 08:16:45
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @clefourrier! The code quality error comes from a new release from `black`. Rebasing on `main` should solve the issue as you'll benefit from https://github.com/huggingface/transformers/pull/17918.<|||||>Regarding the test from Lysandre on Slack _There was a new release from black that has a slightly different behavior for the --preview flag that we use in the CI._ _If you see failures in the CI for the code quality test, with a large number of file changes (>500), please mention to the author that they just need to rebase on/merge main in order to benefit from the fix._ <|||||>@LysandreJik @ydshieh Should be good now! :smiley: Ty both, I had missed it on the slack<|||||>@sgugger Done :)
transformers
17,916
closed
[M2M100] update conversion script
# What does this PR do? Update the m2m100 conversion script for newer checkpoints.
06-28-2022 08:14:34
06-28-2022 08:14:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,915
closed
Compute min_resolution in prepare_image_inputs
# What does this PR do? If `feature_extract_tester.min_resolution` is specified, the images have to be at least that large; otherwise we will get an image width and/or height of `0`, which raises an error. An example error is [here](https://github.com/huggingface/transformers/runs/7071766841?check_suite_focus=true): ``` > return self._new(self.im.resize(size, resample, box)) E ValueError: height and width must be > 0 ``` So far, we have the following in `GLPNFeatureExtractionTester` and other testers: ``` min_resolution=30, ... size_divisor=32, ``` Issue spotted by @Rocketknight1, thanks!
06-28-2022 08:10:59
06-28-2022 08:10:59
_The documentation is not available anymore as the PR was closed or merged._
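To make the failure mode in #17915 concrete, here is a small sketch; the helper name `safe_min_resolution` is made up (it is not the actual test utility). Feature extractors that floor the image size to a multiple of `size_divisor` turn a 30-pixel test image into a 0-pixel one when `size_divisor=32`.

```python
# Toy sketch of the #17915 failure, not the real test code: flooring to a
# multiple of size_divisor collapses any side smaller than size_divisor to 0.
from typing import Optional


def safe_min_resolution(min_resolution: int, size_divisor: Optional[int]) -> int:
    """Round min_resolution up to the nearest multiple of size_divisor."""
    if size_divisor is None:
        return min_resolution
    # -(-a // b) is integer ceil division
    return max(size_divisor, -(-min_resolution // size_divisor) * size_divisor)


print(30 // 32 * 32)                # 0  -> leads to "height and width must be > 0"
print(safe_min_resolution(30, 32))  # 32 -> smallest safe test resolution
```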
transformers
17,914
closed
Fix typo in serialization.mdx
# What does this PR do? overriden -> overridden ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-28-2022 07:45:36
06-28-2022 07:45:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17914). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @eltociear! The code quality error comes from a new release from `black`. Rebasing on `main` should solve the issue as you'll benefit from https://github.com/huggingface/transformers/pull/17918. Let us know if we can help!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,913
closed
"num_examples" incorrect when using IterableDataset
When using ```torch.utils.data.IterableDataset```, logging ```num_examples``` (as in https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1518) does not give the actual number of examples, but the number of examples processed on a single process. For example, when I have 2000 training samples and 2 gpus, with an IterableDataset, the current output looks like: ``` [INFO|trainer.py:1519] 2022-06-28 12:51:44,666 >> ***** Running training ***** [INFO|trainer.py:1520] 2022-06-28 12:51:44,666 >> Num examples = 1000 [INFO|trainer.py:1521] 2022-06-28 12:51:44,666 >> Num Epochs = 1 [INFO|trainer.py:1522] 2022-06-28 12:51:44,666 >> Instantaneous batch size per device = 8 [INFO|trainer.py:1523] 2022-06-28 12:51:44,666 >> Total train batch size (w. parallel, distributed & accumulation) = 16 [INFO|trainer.py:1524] 2022-06-28 12:51:44,666 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1525] 2022-06-28 12:51:44,666 >> Total optimization steps = 125 ``` Here's the possible cause I've found: as defined in https://github.com/huggingface/transformers/blob/e02037b3524686b57c5a861ea49ac751f15568af/src/transformers/trainer.py#L1085, ```num_examples``` is equal to ```len(dataloader.dataset)```. However, when ```isinstance(dataset, torch.utils.data.IterableDataset)```, the ```dataloader.dataset``` is an instance of ```IterableDatasetShard```, which "generate[s] samples for one of the processes only", so its ```__len__``` is the length of the dataset on a single process, not of the entire dataset.
06-28-2022 06:54:16
06-28-2022 06:54:16
Thanks for flagging! Could you double-check the PR above fixes your issue?
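A toy illustration of the report in #17913, using a hypothetical `ShardedIterable` rather than the real `IterableDatasetShard` (whose exact `__len__` formula differs); the only point is that a per-process shard reports the per-process count, which is what ends up in the log.

```python
from torch.utils.data import IterableDataset


class ShardedIterable(IterableDataset):
    """Hypothetical stand-in for IterableDatasetShard: each process only sees its slice."""

    def __init__(self, data, num_processes, process_index):
        self.data = data
        self.num_processes = num_processes
        self.process_index = process_index

    def __iter__(self):
        # round-robin sharding: process i yields items i, i + num_processes, ...
        return iter(self.data[self.process_index :: self.num_processes])

    def __len__(self):
        return len(self.data[self.process_index :: self.num_processes])


full_dataset = list(range(2000))
shard = ShardedIterable(full_dataset, num_processes=2, process_index=0)
print(len(shard))                        # 1000 -> what the "Num examples" log line shows
print(len(shard) * shard.num_processes)  # 2000 -> the actual number of training samples
```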
transformers
17,912
closed
Training loss doesn't decrease on TPU while it works fine on GPU
For the summarization task, where I train an encoder-decoder model on GPU, it works fine and the loss gets lower over iterations. But when I change the device to `device = xm.xla_device()` and the optimizer step to `xm.optimizer_step(optimizer, barrier=True)` on a single-core TPU, the training loss remains nearly unchanged!! **Here's the reproducible code:** https://colab.research.google.com/drive/1pC2CF3ipWt0eJrfXdznwAZD3zs0sX1kd?usp=sharing Is it a bug or am I missing something?
06-28-2022 06:03:09
06-28-2022 06:03:09
Maybe @sgugger has an idea or knows someone who does!<|||||>This is more of a question for the PyTorch XLA folks, since you're not using any of our tools for training.<|||||>> This is more of a question for the PyTorch XLA folks, since you're not using any of our tools for training. Thanks I have asked them and here's the solution: https://github.com/pytorch/xla/issues/3675#issuecomment-1171702988<|||||>Ah yes, I missed it but it's indeed a common mistake on TPUs! (FYI: by using Accelerate to power your training loop, your mistake would have been automatically fixed ;-) )<|||||>Nice to know! Thanks for the information @sgugger. Actually, I started with the HF Trainer but facing [this](https://github.com/huggingface/transformers/issues/14989#issue-1091070983) issue I moved to [this](https://github.com/huggingface/transformers/issues/14989#issuecomment-1003349939) solution which used PyTorch loop instead though I am using v4.18.0.
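For reference, a hedged sketch of the usual single-core XLA training pattern. The definitive diagnosis is in the linked pytorch/xla comment; a frequent culprit for a flat loss on TPU, offered here only as an assumption about this particular case, is building the optimizer before moving the model to the XLA device, so it keeps updating stale CPU copies of the parameters.

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(10, 1).to(device)          # move the model to the XLA device FIRST
optimizer = torch.optim.AdamW(model.parameters())  # THEN build the optimizer from on-device params

for _ in range(10):
    # synthetic batch, just to keep the sketch self-contained
    x = torch.randn(8, 10, device=device)
    y = torch.randn(8, 1, device=device)
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    xm.optimizer_step(optimizer, barrier=True)      # steps AND flushes the pending XLA graph
    optimizer.zero_grad()
    print(loss.item())
```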
transformers
17,911
closed
Silero Models License Infringement
### Model description Hi, My name is Alexander, I am writing to you on behalf of Silero, a company maintaining our project [silero-models](https://github.com/snakers4/silero-models). We noticed that our models are rehosted here - https://huggingface.co/spaces/pytorch/silero_tts or here https://huggingface.co/spaces?search=silero. We did not explicitly grant Hugging Face, Inc. any sort of permission to rehost, relicense and profit from our work and models. Moreover it openly disregards our CC BY-NC-SA license. Please immediately remove any of our models from your website and / or any of your resources. Best, Alexander ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
06-28-2022 05:34:56
06-28-2022 05:34:56
closing as duplicate of https://github.com/huggingface/hub-docs/issues/216 let's address the issue there
transformers
17,910
closed
[SegFormer] TensorFlow port
This PR adds the SegFormer model in TensorFlow (probably the first Transformer-based segmentation model in TensorFlow for which we have PT weights available). ## TODOs - [x] Write the foundation components - [x] Write the image classification layer - [x] Write components w.r.t semantic segmentation - [x] Write the semantic segmentation layer - [x] Add code examples after `call()` methods where relevant - [x] Write tests - [x] Modify other related utilities - [x] Create Space to allow users to try out the models (preferably with ONNX to reduce the time?) - [x] Create a Colab Notebook The final two points are unrelated to the merging of this PR. Cc: @deep-diver
06-28-2022 05:25:46
06-28-2022 05:25:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @sayakpaul! The code quality error comes from a new release from `black`. Rebasing on `main` should solve the issue as you'll benefit from https://github.com/huggingface/transformers/pull/17918.<|||||>@Rocketknight1 @sgugger this PR is now ready for your review. Some things to note: * This is the first segmentation model on the TF side that has pre-trained segmentation checkpoints available. Hopefully, it serves as a good foundation for devs contributing TF segmentation models in the future. * @deep-diver and I will work on creating a Space and Colab notebook (for off-the-shelf inference and fine-tuning) to allow users to take advantage of a state-of-the-art segmentation model like this one in TF via `transformers`. ~@NielsRogge even though the error in the CI is coming from `run_tests_pipelines_tf` it seems like the PT test is what is originating the error. Do you mind taking a look?~<|||||>Yes @Rocketknight1's comments are needed [here](https://github.com/huggingface/transformers/pull/17910#discussion_r910724119) as well. <|||||>Yes, I'm sorry! I went deep on a couple of PRs yesterday and today - one for `datasets`, the other for XLA in `transformers`, and haven't had time to review this properly yet. I'll get to it ASAP, though!<|||||>Thanks, @gante. There were no changes except that. Those were reviewed and approved by @Rocketknight1 and @ydshieh.
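Once the PR above is merged, inference should look roughly like the sketch below; the class name `TFSegformerForSemanticSegmentation` and the checkpoint are assumptions based on the PyTorch version, and you may need `from_pt=True` if TF weights are not yet uploaded for a given checkpoint.

```python
import requests
from PIL import Image
from transformers import SegformerFeatureExtractor, TFSegformerForSemanticSegmentation

checkpoint = "nvidia/segformer-b0-finetuned-ade-512-512"
feature_extractor = SegformerFeatureExtractor.from_pretrained(checkpoint)
model = TFSegformerForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="tf")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, num_labels, height / 4, width / 4)
```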
transformers
17,909
closed
Wav2vec model further training [RuntimeError: you can only change requires_grad flags of leaf variables]
### System Info ```shell Latest version ``` ### Who can help? @patrickvonplaten, @anton-l I use the wav2vec model as part of my own PyTorch model. `self.configuration = Wav2Vec2Config() self.wav2vec_feature = Wav2Vec2Model(self.configuration) self.wav2vec_feature = self.wav2vec_feature.train()` However, it raises an error when I call the wav2vec model in my own forward function. The error is: `File "/home/tiger/.local/lib/python3.7/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 440, in forward hidden_states.requires_grad = True` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction `configuration = Wav2Vec2Config() model = Wav2Vec2Model(configuration) model = model.train()` Put this 'model' into a forward function in PyTorch. ### Expected behavior ```shell The wav2vec model inserted in my own model is expected to be trainable. ```
06-28-2022 04:18:39
06-28-2022 04:18:39
Hey @xinghua-qu, Could you please provide a fully reproducible code snippet here? E.g. something along the lines: ```python from transformers import Wav2Vec2Model .... ``` More than happy to look into solving it then - thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten I am facing the same issue! Here is the snippet: `from transformers import Wav2Vec2ForSequenceClassification model = Wav2Vec2ForSequenceClassification.from_pretrained("facebook/wav2vec2-base-960h")` when I freeze the FE `self.model.freeze_feature_extractor()` training is fine otherwise I get: `RuntimeError: you can only change requires_grad flags of leaf variables.`
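A minimal sketch of the workaround mentioned at the end of the thread for #17909; it freezes the convolutional feature encoder rather than fixing the root cause (which appears to be the `requires_grad` assignment on a hidden state that is no longer a leaf tensor when the waveform comes from earlier trainable layers, though that reading is an assumption).

```python
from transformers import Wav2Vec2ForSequenceClassification

model = Wav2Vec2ForSequenceClassification.from_pretrained("facebook/wav2vec2-base-960h")
model.train()

# Workaround reported above: with the feature extractor frozen, the forward pass
# no longer tries to flip requires_grad on the conv features during training.
model.freeze_feature_extractor()
```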
transformers
17,908
closed
In group_texts function, drop last block if smaller than block_size
# What does this PR do? Adds one line to both the English and Spanish versions of the [Language modeling task documentation](https://huggingface.co/docs/transformers/tasks/language_modeling) in the `group_texts` function which drops the last block if the block is smaller than `block_size`. The absence of this line causes this exception to be thrown later when training the model: `ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.` This line is present in the related documentation, [Fine-tuning a masked language model](https://huggingface.co/course/chapter7/3?fw=pt). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes [# 17882](https://github.com/huggingface/transformers/issues/17882) ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @SaulLu, @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-28-2022 00:18:39
06-28-2022 00:18:39
_The documentation is not available anymore as the PR was closed or merged._
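For context, the `group_texts` helper with the added line looks roughly like this (paraphrased from the task guide; variable names may differ slightly in the final docs):

```python
block_size = 128


def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # The line this PR adds: drop the small remainder so every block has exactly
    # block_size tokens and the batches can be turned into tensors.
    total_length = (total_length // block_size) * block_size
    # Split by chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
```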
transformers
17,907
closed
[WIP] Add VQA docs
This PR adds visual question answering to the pipeline tutorial (under a more general multimodal header) and the fine-tune section of the guides. It would also be nice to create a VQA Tasks video similar to the other fine-tune guides, but this is not a super high priority right now :) ## TODO - [ ] Create fine-tune guide for VQA.
06-27-2022 21:41:54
06-27-2022 21:41:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17907). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
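A possible inference example for the multimodal pipeline section; the task name and the `dandelin/vilt-b32-finetuned-vqa` checkpoint are assumptions about what the guide will showcase.

```python
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")
preds = vqa(
    image="http://images.cocodataset.org/val2017/000000039769.jpg",
    question="How many cats are there?",
)
print(preds)  # e.g. [{"score": ..., "answer": "2"}, ...]
```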
transformers
17,906
closed
Fixing a regression with `return_all_scores` introduced in #17606
<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Fixing a regression with `return_all_scores` introduced in #17606 - The legacy test actually tested `return_all_scores=False` (the actual default) instead of `return_all_scores=True` (the actual weird case). This commit adds the correct legacy test and fixes it. <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-27-2022 21:31:32
06-27-2022 21:31:32
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Do you mind just re-reviewing, I pushed the PR a little early yesterday (forgot WIP tag). The real edge case (present in `4.19.4`) is when there's a single incoming item and `return_all_scores=True`, somehow we are always returning `[CLASS_DICT]` but when using `return_all_scores` we're returning `[[CLASS_DICT, CLASS_DICT]]` . I don't fully remember why the second list popped from the very old legacy code (before the pipeline rework) but that's the reason for the weird return type in the beginning. IMO, we should return ALWAYS a list when classifying a single text ( containing only the top element by default `, which is fully backward compatible). Then, we keep the odd LIST of LIST when using `return_all_scores=True` (BC, + add a warning to move to `top_k`.) Then we change the return when using `top_k=None` or `top_k=n` to contain a single LIST of the classes (so more aligned with the return type without any parameters). WDYT about this solution ? Do you think we should be more conservative and keep LIST of LISTS all the time (even when using the new parameter )?. (The API itself will maintain that return legacy type, while I go look if we can update the widget itself) Also, when sending a LIST of str as an input the output will ALWAYS be a list of list of classes in all scenarios. <|||||>PS: Failing tests seem to be linked with new `black` version so I am going to ignore them and rebase later.<|||||>Mmm I see weird changes in `molideng_utils` and `gpt2` now. I think your solution is sensible.<|||||>Wrong rebase on my end.
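To visualize the return-shape convention discussed above, here is a hedged usage sketch; the shapes in the comments follow the proposal in the thread and should be treated as illustrative rather than as the final contract.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")

print(clf("I love this movie"))                          # [{'label': ..., 'score': ...}]      top class only (default)
print(clf("I love this movie", top_k=None))              # [{'label': ...}, {'label': ...}]    all classes, flat list
print(clf("I love this movie", return_all_scores=True))  # [[{'label': ...}, {'label': ...}]]  legacy nested shape, kept for BC
```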
transformers
17,905
closed
feat: add pipeline registry abstraction
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> - added `PipelineRegistry` abstraction for better supports for custom pipeline. - updates `add_new_pipeline.mdx` (english docs) to reflect the api addition - migrate `check_task` and `get_supported_tasks` from `transformers/pipelines/__init__.py` to ` transformers/pipelines/base.py#PipelineRegistry.{check_task,get_supported_tasks}` Address #17762 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik and @sgugger, would be great if you guys can provide feedback. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-27-2022 21:30:51
06-27-2022 21:30:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Note that this is only to customize preprocessing and/or postprocessing as this still relies on existing auto-model classes. Should I include inside the docs to use an `AutoModel` class for better readability?<|||||>@Narsil CI failed due to the recent `black` versioning locked?<|||||>Super nice PR thanks for this addition ! Left a few NITs about the structure. Feel free to ignore if you don't agree with them<|||||>> @Narsil CI failed due to the recent `black` versioning locked? Seems like `black` released a new version and the CI is not locking it, right @sgugger ? https://pypi.org/project/black/<|||||>Yes, you'll need to rebase on main to fix the tests. Failures are due to new releases of black and PyTorch.<|||||>> Yes, you'll need to rebase on main to fix the tests. Failures are due to new releases of black and PyTorch. understood. Address accordingly. cc @LysandreJik @sgugger @Narsil when you guys have time.<|||||>That's very nice, love this approach! Should make it much much simpler to add custom pipelines. I only have one request: please add tests :smile: <|||||>cc @LysandreJik for tests. I'm thinking to add a test for log capture output, but I'm not too familiar with transformers logging structure.<|||||>If you want to test outputs, we have a util for this called `CaptureStd`. You can see an example of use in [this test](https://github.com/huggingface/transformers/blob/9fe2403bc52e342022ed132561655f84a6b6b7f3/tests/utils/test_cli.py#L30).<|||||>> If you want to test outputs, we have a util for this called `CaptureStd`. You can see an example of use in [this test](https://github.com/huggingface/transformers/blob/9fe2403bc52e342022ed132561655f84a6b6b7f3/tests/utils/test_cli.py#L30). Thanks. Will update accordingly.<|||||>tests are finished. lmk if any additional testing is required. cc @LysandreJik <|||||>Failure is flaky, so merging. Thanks again for your contribution!
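A condensed sketch of registering a custom pipeline with the new registry; the `PIPELINE_REGISTRY.register_pipeline` name and keyword arguments follow the `add_new_pipeline.mdx` update this PR makes, so double-check the merged docs for the final API.

```python
from transformers import AutoModelForSequenceClassification, Pipeline, pipeline
from transformers.pipelines import PIPELINE_REGISTRY


class PairClassificationPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        return {}, {}, {}

    def preprocess(self, inputs):
        return self.tokenizer(inputs, return_tensors=self.framework)

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        best = model_outputs.logits.argmax(-1).item()
        return self.model.config.id2label[best]


PIPELINE_REGISTRY.register_pipeline(
    "pair-classification",                       # the task name here is an arbitrary example
    pipeline_class=PairClassificationPipeline,
    pt_model=AutoModelForSequenceClassification,
)
clf = pipeline("pair-classification", model="distilbert-base-uncased-finetuned-sst-2-english")
print(clf("hello world"))
```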
transformers
17,904
closed
Add ONNX support for DETR
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds ONNX support for DETR and for object-detection models in general. Linked to #16308 and discussed [here](https://huggingface.co/facebook/detr-resnet-50/discussions/1). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-27-2022 20:56:46
06-27-2022 20:56:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>All slow tests passed @lewtun <|||||>Pinging @sgugger for approval
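Once a DETR graph has been exported (for example with something along the lines of `python -m transformers.onnx --model=facebook/detr-resnet-50 onnx/`), running it with `onnxruntime` could look like the sketch below; the file name `detr.onnx` and the exact graph input names are assumptions, hence the defensive filtering against the session's declared inputs.

```python
import onnxruntime as ort
import requests
from PIL import Image
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)
inputs = feature_extractor(images=image, return_tensors="np")

session = ort.InferenceSession("detr.onnx")
# only feed the inputs the exported graph actually declares (e.g. pixel_values, maybe pixel_mask)
graph_inputs = {i.name for i in session.get_inputs()}
ort_inputs = {k: v for k, v in inputs.items() if k in graph_inputs}
outputs = session.run(None, ort_inputs)
print([o.shape for o in outputs])  # class logits and predicted boxes, among others
```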
transformers
17,903
closed
Mrbean/codegen onnx
# What does this PR do? Codegen was added with an ONNX config but not with the model added to the features manager so trying to actually export an ONNX config is failing. ```bash 11497 ± RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "codegen" -v ===================================================================================== test session starts ====================================================================================== platform darwin -- Python 3.9.10, pytest-7.1.2, pluggy-1.0.0 -- /Users/marklar/workspace/transformers/venv/bin/python3 cachedir: .pytest_cache hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/Users/marklar/workspace/transformers/.hypothesis/examples') rootdir: /Users/marklar/workspace/transformers, configfile: setup.cfg plugins: xdist-2.5.0, forked-1.4.0, timeout-2.1.0, hypothesis-6.47.0, dash-2.5.0 collected 371 items / 367 deselected / 4 selected tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_codegen_causal_lm PASSED [ 25%] tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_030_codegen_default PASSED [ 50%] tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_codegen_causal_lm PASSED [ 75%] tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_030_codegen_default PASSED [100%] ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patil-suraj @JingyaHuang
06-27-2022 19:53:05
06-27-2022 19:53:05
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,902
closed
Adding support for `device_map` directly in `pipeline(..)` function.
# What does this PR do? Fixes #17663 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @younes @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-27-2022 19:22:31
06-27-2022 19:22:31
_The documentation is not available anymore as the PR was closed or merged._
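A minimal usage sketch of the new argument; `gpt2` is only a small stand-in, since the feature is aimed at checkpoints too large for a single device, and `accelerate` must be installed for `device_map="auto"` to work.

```python
from transformers import pipeline

# device_map is forwarded to from_pretrained, so accelerate shards the weights
# over the available GPUs (and CPU RAM if needed) instead of a single `device=`.
generator = pipeline("text-generation", model="gpt2", device_map="auto")
print(generator("Hello, my name is", max_new_tokens=20)[0]["generated_text"])
```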
transformers
17,901
closed
`bitsandbytes` - `Linear8bitLt` integration into `transformers` models
# What does this PR do? Adding the `bitsandbytes` - `Linear8bitLt` integration for large language models! 🚀 This feature could reduce the memory footprint of large models by a factor of up to 2, without a high loss in precision. Paper and main implementation from: @TimDettmers. # Usage: Anyone with a GPU that supports mixed 8-bit quantization could load a model using `AutoModel.from_pretrained(xxx, load_in_8bit=True, device_map="auto")`, and it works like a charm. Could work on *any* HF model! ## Requirements Needs the latest version of `bitsandbytes` (that is compiled manually) and `accelerate` ## TODOs: - [x] Add custom tests - [x] Discuss potential improvements - [x] Verify that the weights are still in 8bit after the loading (once there are more advances on Tim's side) - [x] Add documentation (Younes first and then Tim) - [x] Add a demo / few lines to explain how to use it - [ ] Add flag that loads directly to 8bit @TimDettmers Resources: - WIP branch of bitsandbytes: https://github.com/TimDettmers/bitsandbytes/tree/cublaslt Many thanks to @justheuristic and @TimDettmers !! 🎉
06-27-2022 18:36:59
06-27-2022 18:36:59
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc-ing also @michaelbenayoun in case you want to have a look as well ;) <|||||>Nice, thanks for working on it @younesbelkada! Also quite interested in the feature. I'd be particularly interested in seeing a bit of documentation so that we may understand better how it works under the hood and how to use the feature to its best. Thanks!<|||||>Hi all! Just to summarise a bit about what is happening and the solution we came up to implement this! In the previous version, we found out 2 major bugs: 1- the function `set_module_tensor_to_device` seems to overwrite the `Int8Params` modules by `nn.Parameter` modules. 2- `init_empty_weights` seems also to replace the `Int8Params` modules by `nn.Parameter` modules. I see two solutions to this 1- Open a PR in `accelerate` to support the correct overwriting into `Int8Params` class as the following: https://github.com/huggingface/accelerate/compare/main...TimDettmers:accelerate:integration_8bit - only 2 functions are modified and should not break backward compatibility but I am not sure 2- Manually redefine the functions `set_module_tensor_to_device` and `init_empty_weights`as two new function `set_module_8bit_tensor_to_device` and `init_8bit_empty_weights` as proposed in this PR. I personally found the option 1 cleaner but the option 2 might be safer for `accelerate` - Let us know what do you think ! cc @LysandreJik @sgugger @TimDettmers<|||||>Thank you very much for your comments! `has_fp16_weights` comes from the class `bnb.Int8Params` that is currently being developed in a WIP branch that should be merged soon on the main branch of `bitsandbytes`. Basically the logic behind it is that if the module contain this attribute then it has to be a `bnb.Int8Params` module. I will refactor the code with your proposed changes and ask for a second batch of review 🚀 <|||||>I think before merging we need: - [x] Memory footprint benchmarking - [x] Infrence speed benchmarking - [x] `lm-eval` benchmarking for large models (it has been done for small models) - [x] Merging the WIP branch of `bitsandbytes` into `main`<|||||>Added another PR to support int8 quantization + `accelerate` on multi-GPU setup here: https://github.com/huggingface/accelerate/pull/539 ! <|||||>Thanks @sgugger for your review ! Fixed the suggestions ;) I think that we are good to go to merge https://github.com/huggingface/accelerate/pull/539 if you don't mind 🙏 I just need to wait the release of `bitsandbytes` to be more stable (facing some issues when installing the library but should be fixed very soon, I am syncing with @TimDettmers). Once this is fixed I think that we should be good to go for merging 🚀 <|||||>Merged the PR in Accelerate! Don't forget to add some documentation and also setup some tests for this so it doesn't get broken by future PRs :-)<|||||>TODOs: - [x] Have a working colab demo for inference - [x] Add more documentation - [x] Implement tests<|||||>Before moving forward, I would like to have a comment from @michaelbenayoun @mfuntowicz and @echarlaix ## About this PR We replace all the `nn.Linear` modules by the `bnb.Linear8bitLt` modules from the recent release of `bitsandbytes` that proposes a new post-training quantization technique for 0 performance degradation on large-scale models (>1b parameters). With that we have managed to fit BLOOM-176B on 3xA100 80GB instead of 6xA100 GB with no performance degradation. 
## About the mixed quantization method in few words In this technique the operations on the outliers are done in `fp16` and the rest of the operations are done in `int8` to achieve 0 performance degradation on large-scale models. ## Usage This does not run on CPU, you will need a GPU that supports 8-bit core tensors operations (T4 and A100) to make it run. Here is a tutorial on Google Colab on how to run the mixed-int8 model: https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4#scrollTo=YJlldexxwnhM <|||||>Can confirm the slow tests that I have designed are passing on my testing machine (2x Tesla T4 15GB). But for now it is not possible to load saved int8 checkpoints because you need to load the quantization statistics that are not saved when doing `model.state_dict()` in `bitsandbytes`. For now I propose to just raise an error message if int8 weights are loaded and tell users that the feature is not supported (as proposed in 1326a42795033410dae6c5a8a07b81f12ee7a41c). No strong opinions but I personally advocate to keep this feature inside `transformers` since the method relies also on `accelerate` + an additional lib (`bitsandbytes`), but I am not the best knowledgable person regarding `optimum` integration that might be a bit different than the `transformers` one. cc @sgugger @mfuntowicz @TimDettmers🙏 <|||||>Thank you for all the work on this PR @younesbelkada, @sgugger, @michaelbenayoun! Regarding the `transformers` vs `optimum` question: From my understanding of the libraries, I think if people want to deploy models or run them with high efficiency `optimum` seems to be the right tool, whereas general purpose "inefficient" access of models is more suitable for `transformers`. As such, I think it's best to keep this feature in `transformers`. I think it fits better into there since it is not meant for fast inference but memory-efficient inference for as many use-cases as possible. 
<|||||>Forgive me for jumping the gun - On Colab(T4, 12G RAM) I tried: ``` !nvidia-smi | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 | | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 | | N/A 58C P8 10W / 70W | 0MiB / 15109MiB | 0% Default | | No running processes found | ``` Then ``` !pip install https://github.com/younesbelkada/transformers/archive/refs/heads/integration-8bit.zip accelerate !pip install -i https://test.pypi.org/simple/ bitsandbytes-cuda112 ``` Loading model with ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono", load_in_8bit=True, device_map="auto") ``` And got this error: ``` RuntimeError Traceback (most recent call last) [<ipython-input-8-40073518cc86>](https://localhost:8080/#) in <module>() ----> 1 model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono", load_in_8bit=True, device_map="auto") 7 frames [/usr/local/lib/python3.7/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 444 elif type(config) in cls._model_mapping.keys(): 445 model_class = _get_model_class(config, cls._model_mapping) --> 446 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) 447 raise ValueError( 448 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" [/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2282 # Dispatch model with hooks on all devices if necessary 2283 if device_map is not None: -> 2284 dispatch_model(model, device_map=device_map, offload_dir=offload_folder) 2285 2286 if output_loading_info: [/usr/local/lib/python3.7/dist-packages/accelerate/big_modeling.py](https://localhost:8080/#) in dispatch_model(model, device_map, main_device, state_dict, offload_dir, offload_buffers, preload_module_classes) 246 offload_buffers=offload_buffers, 247 weights_map=weights_map, --> 248 preload_module_classes=preload_module_classes, 249 ) 250 model.hf_device_map = device_map [/usr/local/lib/python3.7/dist-packages/accelerate/hooks.py](https://localhost:8080/#) in attach_align_device_hook_on_blocks(module, execution_device, offload, weights_map, offload_buffers, module_name, preload_module_classes) 446 place_submodules=True, 447 ) --> 448 add_hook_to_module(module, hook) 449 attach_execution_device_hook(module, execution_device[module_name]) 450 elif module_name in execution_device: [/usr/local/lib/python3.7/dist-packages/accelerate/hooks.py](https://localhost:8080/#) in add_hook_to_module(module, hook) 136 module._old_forward = old_forward 137 --> 138 module = hook.init_hook(module) 139 module._hf_hook = hook 140 [/usr/local/lib/python3.7/dist-packages/accelerate/hooks.py](https://localhost:8080/#) in init_hook(self, module) 219 if not self.offload and self.execution_device is not None: 220 for name, _ in named_module_tensors(module, recurse=self.place_submodules): --> 221 set_module_tensor_to_device(module, name, self.execution_device) 222 elif self.offload: 223 self.original_devices = { [/usr/local/lib/python3.7/dist-packages/accelerate/utils/modeling.py](https://localhost:8080/#) in set_module_tensor_to_device(module, tensor_name, device, value) 128 
module._buffers[tensor_name] = new_value 129 else: --> 130 new_value = nn.Parameter(new_value, requires_grad=old_value.requires_grad) 131 module._parameters[tensor_name] = new_value 132 [/usr/local/lib/python3.7/dist-packages/torch/nn/parameter.py](https://localhost:8080/#) in __new__(cls, data, requires_grad) 40 t = data.detach().requires_grad_(requires_grad) 41 if type(t) is not type(data): ---> 42 raise RuntimeError(f"Creating a Parameter from an instance of type {type(data).__name__} " 43 "requires that detach() returns an instance of the same type, but return " 44 f"type {type(t).__name__} was found instead. To use the type as a " RuntimeError: Creating a Parameter from an instance of type Int8Params requires that detach() returns an instance of the same type, but return type Tensor was found instead. To use the type as a Parameter, please correct the detach() semantics defined by its __torch_dispatch__() implementation. ``` Interestingly on AWS Sagemaker(T4, 16G RAM) - ``` !pip install -i https://test.pypi.org/simple/ bitsandbytes-cuda114 import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-mono") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-mono", load_in_8bit=True, device_map="auto") ``` got me ``` --------------------------------------------------------------------------- NameError Traceback (most recent call last) /tmp/ipykernel_141/3855166932.py in <cell line: 1>() ----> 1 model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-mono", load_in_8bit=True, device_map="auto") ~/.conda/envs/default/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 444 elif type(config) in cls._model_mapping.keys(): 445 model_class = _get_model_class(config, cls._model_mapping) --> 446 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) 447 raise ValueError( 448 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" ~/.conda/envs/default/lib/python3.9/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2177 init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config())] + init_contexts 2178 elif load_in_8bit: -> 2179 init_contexts = [init_empty_weights()] # Force enable init empty weights 2180 logger.info("Detected 8-bit loading: activating 8-bit loading for this model") 2181 elif low_cpu_mem_usage: NameError: name 'init_empty_weights' is not defined ``` I suppose the 2nd case may have something to do with environment setup - but what would trigger the first issue? Thanks,<|||||>Hi @cnbeining ! Thanks for your interest in this feature and happy to see that you are already excited to run it on Codegen! 🚀 Initially your problem is related to `accelerate` that you are installing. Make sure you install the latest version from source using a command like: ``` pip install git+https://github.com/huggingface/accelerate.git@24c28a1adc284db0126b7c17ebef275597ddc6b7 ``` With `24c28a1adc284db0126b7c17ebef275597ddc6b7` being the latest commit hash from accelerate. The most recent release (aka `accelerate` library that you will get from `pip install accelerate`) is not compatible with this PR at the time I wrote this message. Therefore you will need the latest version of it. 
However, when using `load_in_8bit`, `torch_dtype=torch.float16` is internally called. It happens that there might be a small bug in Codegen when using `torch_dtype=torch.float16` that we propose to fix in https://github.com/huggingface/transformers/pull/18467 . if you are interested to reproduce the issue you can run this small snippet: ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono", device_map="auto", torch_dtype=torch.float16) text = "def quicksort(l):" encoded_input = tokenizer(text, return_tensors='pt') output_sequences = model.generate(input_ids=encoded_input['input_ids'], attention_mask=encoded_input['attention_mask']) print(tokenizer.decode(output_sequences[0], skip_special_tokens=True)) ``` Since this might take time to be merged and as I saw that you wanted to run on Google Colab I made a special branch that you can build from Colab and should work (tested it) here: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1IUAn97Zsfiiz7B1-vAAOSGsWE96yAiNM#scrollTo=1mNklAh5trGY). Just run those cells and everything should work. If you follow the same installation instructions as this Colab I think that everything should work smoothly in SageMaker as well but we never know! Let us know if this helps, and happy to help you again if necessary 💪 Also if you face any other issues, I think that it would be better to move this discussion into an issue! 🐛 Thanks Younes<|||||>@LysandreJik I have a question regarding slow tests for this feature! I prefer to build another Docker image for these tests and run them separately because it happens sometimes that the import of `bitsandbytes` fails on some specific configurations. We found an issue that will be fixed on `bitsandbytes` asap but I think that having a separate image and running the tests independently is safer to not affect other tests. Since `bitsandbytes` is [always being imported](https://github.com/younesbelkada/transformers/blob/31fce94e8a3983dfa65222311b340460ccff05f7/src/transformers/modeling_utils.py#L95) if it is available if the docker image installs it all tests will fail at import time. I can also try to come up with a solution where we import this library only if `load_in_8bit` is triggered. What do you think is the best in this case?<|||||>Slow tests are now [all passing on our docker image](https://github.com/huggingface/transformers/actions/runs/2816407110) with the latest fix of `bitsandbytes` I would love to have a potential final round of review! cc @sgugger @LysandreJik <|||||>Thanks for the review! Going to do a last sanity check - testing with Docker and see if the slow tests passes on our Docker and merge once it's green! 🟢 <|||||>GJ! Non blocking comment: How about incorporating (optional) `bnb.nn.StableEmbedding ` as [recommended by authors](https://github.com/facebookresearch/bitsandbytes#using-the-8-bit-optimizers) or added benefit is limited?<|||||>Thanks @cnbeining ! I think that this can be done in a separate PR since we need to release the beta version of this feature probably ASAP! Also I am not sure how the current implementation will handle tied weights if we replace Embedding layers with StableEmbedding. 
So this needs further tests/investigations <|||||>Yeah let's get this rolled out to unleash GPT-J-6B and CodeGen to ordinary folks :-) I will continue with my testing with `StableEmbedding` and will report results as they come by. Again thanks so much for all the effort!<|||||>Great that would be awesome! I would be definitely interested in seeing the results and comparison against the current approach (aka without StableEmbedding) Let's maybe keep the results in this thread even after merging the PR<|||||>Ultimate checks are passing: https://github.com/huggingface/transformers/actions/runs/2830688091 Merging!<|||||>Looks nice, will try it out :)<|||||>You can check the Google Colab: https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4#scrollTo=W8tQtyjp75O_ to see how to run it! We will publish that today with the beta release on Twitter <|||||>Great work! Can big models such us models used in the example colab be fine-tuned just loading it as `int8`? Are you thinking about release a colab for fine-tuning a model not just for inference? Thanks in adavace<|||||>Thanks for the remark @mrm8488 ! Indeed it would be very nice to have a fine-tuning demo on colab After discussing with @TimDettmers it appears that the current implementation would support classic `torch` optimizers. Also I think that @justheuristic has some experience with finetuning int8 models using `Linear8bitLt` modules for prompt tuning ;) so I will let him answer on the feasibility of that! 🚀 <|||||>tl;dr soon :) Right now you can fine-tune with bitsandbytes, but it's gonna take up the same memory as 16bit - but we hope this can soon be fixed. @timdettmers and @dbaranchuk are working on memory-efficient fine-tuning on the bitsandbytes side. After they're done, you'll be able to write trainable adapters, soft-prompts and other parameter-efficient methods. There is also a group in BigScience that works on 8-bit finetuning for very large models (think bloom-176B) in colab, but we are still polishing the code. I'll tag you once it becomes public.<|||||>@younesbelkada, thank you for integrating this awesome feature - may I suggest that all these cool features will remain hidden unless we expose them in the docs where users are likely to search for those and not in the API docs. I propose to add a new section at https://huggingface.co/docs/transformers/main/perf_train_gpu_one so that those searching for performance improvement will find it. Thank you!<|||||>Thanks for the comment ! Sounds really good for me 💪 I was planning to open a PR by the beginning of next week to add the link to blogpost + paper, I will most likely use this PR to propose your refactoring as well <|||||>> Thanks for the comment ! Sounds really good for me 💪 > I was planning to open a PR by the beginning of next week to add the link to blogpost + paper, I will most likely use this PR to propose your refactoring as well Hi @younesbelkada, I want to try bitsandbytes on an 8x A6000 server (CUDA version 11.3) with BLOOM. Unfortunately, the following error throws out. ` RuntimeError: Creating a Parameter from an instance of type Int8Params requires that detach() returns an instance of the same type, but return type Tensor. was found instead. To use the type as a Parameter, please correct the detach() semantics defined by __torch_dispatch__() implementation. ` I use the code downloaded from Colab using the `model.generate()` way for inference instead of the pipeline from HuggingFace. Do you know how to solve the issue? 
I installed `bitsandbytes==0.31.8` from https://pypi.org/project/bitsandbytes/; the latest `transformers` package from the master branch also installed the latest `Accelerate` from pip.<|||||>@pai4451 the code has not been released to PyPI [yet](https://pypi.org/project/transformers/#history) - you probably want to use `pip install git+https://github.com/huggingface/transformers.git` to get the `HEAD` that includes this PR.<|||||>> @pai4451 the code has not been released to PyPI [yet](https://pypi.org/project/transformers/#history) - you probably want to use `pip install git+https://github.com/huggingface/transformers.git` to get the `HEAD` that includes this PR. @cnbeining I didn’t install `transformers` from PyPI, instead I installed from this repo.<|||||>Hi @pai4451 ! Thanks a lot for your message! This error is related to `accelerate`, I have run the colab demo this morning and everything seems to work fine. I do think that you most likely didn't installed the correct version of `accelerate` as it happened to @cnbeining before. Could you please share with us the output of `pip list` ? If you see that `accelerate` version is below `0.11.x` then you should re-install it with `pip install --force accelerate` Let us know if this works!<|||||>> Hi @pai4451 ! > Thanks a lot for your message! This error is related to `accelerate`, I have run the colab demo this morning and everything seems to work fine. I do think that you most likely didn't installed the correct version of `accelerate` as it happened to @cnbeining before. > Could you please share with us the output of `pip list` ? If you see that `accelerate` version is below `0.11.x` then you should re-install it with `pip install --force accelerate` > Let us know if this works! @younesbelkada The error is really just related to `accelerate`. After upgrading `accelerate` to `0.12` the issue is solved. Thanks for providing such a wonderful feature.<|||||>@pai4451 No problem at all! I am very happy that you made it run! Let us know if you face into any other issue. <|||||>Hi @younesbelkada, thank you again for `bitandbytes` integration with `transformers` models. I wonder if it is possible to use a similar way for `DeepSpeed` on int8 quantization with the BLOOM model for inference just as `bitandbytes` to transformers? Or is there any chance of loading the model using `bitandbytes` with DeepSpeed? DeepSpeed has advantages in terms of model loading and inference speed. Do you have any thoughts on how I can achieve that? I appreciate any comments you can provide. Currently, I'm trying on the following [DeepSpeed inference script](https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/main/scripts/inference/bloom-ds-inference.py). But DeepSpeed load the model with `deepspeed.init_inference` instead of the `from_pretrained` method in `transformers`.<|||||>deepspeed-inference+int8 is being worked on, please give us a bit of time. As you discovered the ds-inference script, it'll be shortly updated to support int8.<|||||>> deepspeed-inference+int8 is being worked on, please give us a bit of time. > > As you discovered the ds-inference script, it'll be shortly updated to support int8. Looking forward to try it, and maybe "illegal memory access" issue could be resolved by the way because of GPU memory consumption reduction on each.
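For reference, a minimal sketch of how the `load_in_8bit` flag discussed in this thread is typically used end to end. The checkpoint name is taken from the thread; the exact version requirements for `accelerate` (>= 0.12 per the discussion above) and `bitsandbytes` are assumptions and may need adjusting for your setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumes a CUDA GPU plus recent `accelerate` and `bitsandbytes` installs.
model_name = "Salesforce/codegen-2B-mono"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # let accelerate place the weights
    load_in_8bit=True,   # swap nn.Linear layers for their 8-bit bitsandbytes counterparts
)

text = "def quicksort(l):"
inputs = tokenizer(text, return_tensors="pt").to(0)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```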
transformers
17,900
closed
New Transformers update causes an error with the TPU XLA implementation
Hi, I notice the new release of the Transformer model (4.20) causes an issue with PyTorch XLA implementation and the new error message says "Cannot replicate if number of devices (1) is different from 8" appears. ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `!pip install cloud-tpu-client==0.10 torch==1.11.0 https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-1.11-cp37-cp37m-linux_x86_64.whl` ``` !pip3 install git+https://github.com/huggingface/transformers !git clone https://github.com/huggingface/transformers ``` `!pip3 install -r /content/transformers/examples/pytorch/question-answering/requirements.txt` ``` !python /content/transformers/examples/pytorch/xla_spawn.py --num_cores=8 /content/transformers/examples/pytorch/question-answering/run_qa.py --model_name_or_path sultan/BioM-ELECTRA-Large-Discriminator \ --dataset_name squad_v2 \ --do_train \ --do_eval \ --dataloader_num_workers 4 \ --preprocessing_num_workers 4 \ --version_2_with_negative \ --num_train_epochs 2 \ --learning_rate 5e-5 \ --max_seq_length 384 \ --doc_stride 128 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --logging_steps 500 \ --save_steps 1000 \ --overwrite_output_dir \ --output_dir out ``` To fix the issue we git clone and git+install with 4.19 release: ``` !pip3 install git+https://github.com/huggingface/[email protected] !git clone --depth 1 --branch v4.19.4 https://github.com/huggingface/transformers ``` ### Expected behavior ```shell A new error message says "Cannot replicate if number of devices (1) is different from 8" appears. ```
06-27-2022 17:26:53
06-27-2022 17:26:53
Maybe of interest to @sgugger <|||||>cc @muellerzr It's probably linked to your recent changes for selecting the TPU device.
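For context, a small diagnostic sketch (not part of the original report) that shows how many XLA devices `torch_xla` actually sees; if it prints fewer than 8, `xla_spawn.py --num_cores=8` cannot replicate across 8 cores, which matches the error above. It assumes `torch_xla` is installed on a TPU runtime.

```python
import torch_xla.core.xla_model as xm

devices = xm.get_xla_supported_devices()
print(len(devices), devices)           # expect 8 TPU cores on a v2-8 / v3-8 runtime
print("world size:", xm.xrt_world_size())
```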
transformers
17,899
closed
Move logic into pixelshuffle layer
# What does this PR do? Moves logic relating to PixelShuffle layer into layer class. This is to provide a consistent usage wrt the PyTorch pixel shuffle layer and makes sure all necessary logic is ported if any `#Copied from ` statements are used. Also renamed layer `PixelShuffle` -> `TFSwinPixelShuffle` to reflect naming in the rest of the repo. The following was run to make sure the models are still compatible with current weights: ``` from transformers import AutoFeatureExtractor, TFSwinForImageClassification checkpoint = "microsoft/swin-tiny-patch4-window7-224" # relative_position_index isn't updated during training. In TF set as instance param print("\nTFSwinForImageClassification - from PyTorch checkpoint") tf_model = TFSwinForImageClassification.from_pretrained(checkpoint, from_pt=True) print("\nTFSwinForImageClassification - from TF checkpoint") tf_model = TFSwinForImageClassification.from_pretrained(checkpoint) ``` With the following output. Note: `relative_position_index` isn't updated during training and is set as an instance param in the TF model ``` TFSwinForImageClassification - from PyTorch checkpoint Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFSwinForImageClassification: ['swin.encoder.layers.3.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.4.attention.self.relative_position_index', 'swin.encoder.layers.1.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.2.attention.self.relative_position_index', 'swin.encoder.layers.1.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.3.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.0.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.0.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.3.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.5.attention.self.relative_position_index'] - This IS expected if you are initializing TFSwinForImageClassification from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing TFSwinForImageClassification from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model). All the weights of TFSwinForImageClassification were initialized from the PyTorch model. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFSwinForImageClassification for predictions without further training. TFSwinForImageClassification - from TF checkpoint All model checkpoint layers were used when initializing TFSwinForImageClassification. All the layers of TFSwinForImageClassification were initialized from the model checkpoint at microsoft/swin-tiny-patch4-window7-224. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFSwinForImageClassification for predictions without further training. ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). 
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
06-27-2022 15:01:21
06-27-2022 15:01:21
_The documentation is not available anymore as the PR was closed or merged._
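For context on what the layer in this PR does: a pixel shuffle rearranges channels into space, turning `(batch, H, W, C * r * r)` into `(batch, H * r, W * r, C)`. The sketch below is only a conceptual illustration built on `tf.nn.depth_to_space`; the actual `TFSwinPixelShuffle` layer also has to match PyTorch's `nn.PixelShuffle` channel ordering, which plain `depth_to_space` does not guarantee.

```python
import tensorflow as tf

class PixelShuffleSketch(tf.keras.layers.Layer):
    """Conceptual pixel shuffle: (B, H, W, C*r*r) -> (B, H*r, W*r, C)."""

    def __init__(self, upscale_factor: int, **kwargs):
        super().__init__(**kwargs)
        self.upscale_factor = upscale_factor

    def call(self, x):
        return tf.nn.depth_to_space(x, block_size=self.upscale_factor)

x = tf.random.normal((1, 8, 8, 16))    # 16 channels = 4 output channels * 2 * 2
print(PixelShuffleSketch(2)(x).shape)  # (1, 16, 16, 4)
```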
transformers
17,898
closed
Fix loss computation in TFBertForPreTraining
With thanks to @sreyan88 for writing up a clean bug report and reproducer, and to @ydshieh for locating the problematic code! Our `hf_compute_loss()` function for `TFBertForPreTraining` was incorrect. However, it still appeared to work when the number of masked positions was evenly divisible by the batch size. Other, more commonly-used models like `TFBertForMaskedLM` do not have this issue. The problem was incorrect handling of the reduction for the masked loss, so I took the opportunity to rewrite the function in modern TF. All shapes are now static in the rewritten function as well, which means it should now compile with XLA. Fixes #17883
06-27-2022 13:31:25
06-27-2022 13:31:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I believe it's unique to BERT, because I searched the codebase for any similar lines and couldn't find any. I suspect this is how it stayed undetected for so long - it uses the NSP loss and people generally don't train with that anymore.
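To make the fix described above concrete, here is a hedged sketch of the general pattern the rewrite follows (mask out the `-100` labels instead of reshaping), not necessarily the exact code that was merged: all shapes stay static, and the loss no longer depends on the number of masked positions being divisible by the batch size.

```python
import tensorflow as tf

def masked_lm_loss_sketch(labels, logits):
    """Mean cross-entropy over positions whose label is not -100; all shapes stay static."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE
    )
    # -100 marks ignored positions; clip them to a valid id, then zero out their loss.
    per_token_loss = loss_fn(tf.nn.relu(labels), logits)
    loss_mask = tf.cast(labels != -100, per_token_loss.dtype)
    return tf.reduce_sum(per_token_loss * loss_mask) / tf.reduce_sum(loss_mask)

labels = tf.constant([[5, -100, 7], [-100, -100, 2]])
logits = tf.random.normal((2, 3, 30522))  # (batch, seq_len, vocab_size)
print(masked_lm_loss_sketch(labels, logits))
```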
transformers
17,897
closed
[Issue template] Remove render tags
# What does this PR do? This PR makes sure that people's comments on Github issues regarding "System info" and "Expected behaviour" aren't rendered as shell. This makes them a lot more readable (at least for me).
06-27-2022 13:30:38
06-27-2022 13:30:38
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,896
closed
Deploying a pytorch-pretrained-bert on mobile
### System Info ```shell I use google colab ``` ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Code here https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313 ### Expected behavior ```shell I want to export the model for use in a mobile app (flutter). I am new to this and I just can't figure it out. Have tried many recommendations online, but something seems off. Kindly help ``` ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
06-27-2022 12:55:26
06-27-2022 12:55:26
Hi, Regarding exporting Transformers models, refer to the guide here: https://huggingface.co/docs/transformers/serialization. Also cc'ing @hollance who's an expert on mobile ML.<|||||>Thanks @NielsRogge I will explore it. @hollance, I just sent you an email at [[email protected]]<|||||>It would be helpful to know what mobile platform you're trying to export to, how you've tried to do the export, and what errors you're running into. There are too many unknowns here to give an answer.<|||||>Thanks @hollance for your response, I just saw the message now. I will drop detail of both the code and screenshot shortly <|||||>@hollance below are the codes and errors they threw. What I want to achieve is to use the model on edge device (mobile) developed in flutter. **Output of the training** ![image](https://user-images.githubusercontent.com/23655212/176263865-0e1964a6-460d-47cd-bf13-90cd3866e57e.png) **First Approach I tried** `model_class = OpenAIGPTDoubleHeadsModel model = model_class.from_pretrained('./themodel') Scripted_model = torch.jit.script(model) opt_model = optimize_for_mobile(Scripted_model) opt_model._save_for_lite_interpreter("Mobile_model.ptl")` **Error from Approach 1** ![image](https://user-images.githubusercontent.com/23655212/176264730-60237895-fc57-4607-94c2-f5dbdec9a4c5.png) **Approach 2** `from itertools import chain persona = [["i", "like", "playing", "football", "."], ["i", "am", "from", "NYC", "."]] history = [["hello", "how", "are", "you", "?"], ["i", "am", "fine", "thanks", "."]] reply = ["great", "to", "hear"] bos, eos, speaker1, speaker2 = "<bos>", "<eos>", "<speaker1>", "<speaker2>" def build_inputs(persona, history, reply): sequence = [[bos] + list(chain(*persona))] + history + [reply + [eos]] sequence = [sequence[0]] + [ [speaker2 if (len(sequence)-i) % 2 else speaker1] + s for i, s in enumerate(sequence[1:])] words = list(chain(*sequence)) # word tokens segments = [speaker2 if i % 2 else speaker1 # segment tokens for i, s in enumerate(sequence) for _ in s] position = list(range(len(words))) # position tokens return words, segments, position, sequence words, segments, position, sequence = build_inputs(persona, history, reply) words = tokenizer.convert_tokens_to_ids(words) segments = tokenizer.convert_tokens_to_ids(segments) tokenizer_class, model_class = (OpenAIGPTTokenizer, OpenAIGPTDoubleHeadsModel) tokens_tensor = torch.tensor([words]) segments_tensors = torch.tensor([segments]) model = torch.load('./themodel/pytorch_model.bin') Scripted_model = torch.jit.trace(model,[tokens_tensor, segments_tensors]) opt_model = optimize_for_mobile(Scripted_model) opt_model._save_for_lite_interpreter("Mobile_model.ptl")` **Error of the Second Approach** ![image](https://user-images.githubusercontent.com/23655212/176266493-6d969c7d-bb12-4ba8-b230-4afc818c55ae.png) <|||||>The second approach of tracing the model (rather than scripting it), is what I would prefer. However, you need to load the model using `model = OpenAIGPTDoubleHeadsModel.from_pretrained("themodel", torchscript=True)` instead of `torch.load`.<|||||>**Thank you @hollance, I really appreciate your effort. 
I tried it, and below were the error I got.** **When I changed the model load, I got the error below:** ![image](https://user-images.githubusercontent.com/23655212/176485673-2c69ea09-dc0a-4e4c-8af7-35a60a366cd7.png) **When I removed the torchscript argument, I got the error below:** ![image](https://user-images.githubusercontent.com/23655212/176486848-6b728116-c1d8-40ba-a333-1e7210a6c401.png) ![image](https://user-images.githubusercontent.com/23655212/176487133-9f825229-4ec3-4fdf-b71a-8f0f22e0573e.png) <|||||>That looks like you're not giving it inputs of the correct size. It's a good idea to using your input tensors first in a normal inference call: ``` with torch.no_grad(): outputs = model(inputs, return_dict=False) ``` where `inputs` are the input tensors this model needs. I expect this to also give an error message, so first make sure that works without problems.<|||||>Thank you @hollance for your support, I tried it and that failed. Below is a link to the google colab file perhaps that will explain the scenario better than I can do, I really appreciate your effort to help a newbie in ML. https://drive.google.com/file/d/1jKjmO0gh94i57zuPV0rvMyTmRgiyBBFG/view?usp=sharing<|||||>Sorry but there's just way too much stuff in that notebook for me to make sense of. Could you create a notebook that has the minimum amount of code in it to reproduce the problem?<|||||>Hi @hollance, kindly find below as requested to recreate the problem: **This is link to the minimized version of the colable file** https://drive.google.com/file/d/1CTVA6wD26BMHXlU6JY2VJ3S5TW5dI2Ac/view?usp=sharing **Below is a link to training output (i.e content of "themodel" folder)** https://drive.google.com/file/d/1ppib_rexC6_XsOlUQeOf3goD-v2uhqad/view?usp=sharing **Below is a link to the dataset (in case you may want to take a look)** https://s3.amazonaws.com/datasets.huggingface.co/personachat/personachat_self_original.json Thank you for your support so far I am grateful. I feel really excited and relieved knowing that I am getting the needed help.<|||||>@CowryCode The problem is that the following code doesn't work: ```python with torch.no_grad(): outputs = model(tokens_tensor, segments_tensors) ``` The error message is "Index tensor must have the same number of dimensions as input tensor". The same thing happens when you try to do a `torch.jit.trace`. That's why the JIT trace fails. Now I'm not sure what `segments_tensors` is supposed to be but I think you mean to pass this into the `token_type_ids` argument. However, writing the following doesn't work: ```python with torch.no_grad(): outputs = model(tokens_tensor, token_type_ids=segments_tensors) ``` This is because the OpenAIGPTDoubleHeadsModel from the pytorch-pretrained-bert package expects there to be a `mc_token_ids` argument. Assuming that you actually meant to use OpenAIGPTDoubleHeadsModel from 🤗 Transformers, the above code does work, so I suggest you use that instead. However, there is another argument, `attention_mask`, in between the `input_ids` and `token_type_ids` arguments. When you call `torch.jit.trace`, you have to supply that attention mask argument too. 
The easiest way around this is to create a helper class: ```python from torch import nn class Wrapper(nn.Module): def __init__(self, model): super().__init__() self.model = model def forward(self, input_ids, token_type_ids): return self.model(input_ids, None, token_type_ids, return_dict=False) ``` And then call it like so: ```python wrapper = Wrapper(model) traced_model = torch.jit.trace(wrapper, [tokens_tensor, segments_tensors]) ``` This will trace the model into a TorchScript object. Now you can do whatever you need to in order to load it into PyTorch mobile etc. To verify this traced model gives the same outputs as the original, do this: ```python with torch.no_grad(): traced_outputs = traced_model(tokens_tensor, segments_tensors) ``` Then the following should print a very small number (around 1e-6 or 1e-7): ```python torch.max(torch.abs(outputs[0] - traced_outputs[0])) / torch.max(torch.abs(traced_outputs[0])) ``` P.S. Ideally, you should load the original model as follows, with the `torchscript` argument: ```python model = OpenAIGPTDoubleHeadsModel.from_pretrained("openai-gpt", torchscript=True) ``` <|||||>Thank you @hollance, I will explore the solution you gave.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
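Pulling the working pieces of this thread together, a hedged end-to-end sketch of the suggested export path: load with `torchscript=True`, wrap the model so the trace only has to handle tensors, trace, then optimize and save for the lite interpreter. The `openai-gpt` checkpoint stands in for the fine-tuned `./themodel` folder, and the input sentence is just an example.

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from transformers import OpenAIGPTTokenizer, OpenAIGPTDoubleHeadsModel

class Wrapper(torch.nn.Module):
    """Pins the argument order so torch.jit.trace only sees tensors."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids, token_type_ids):
        return self.model(input_ids, None, token_type_ids, return_dict=False)

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTDoubleHeadsModel.from_pretrained("openai-gpt", torchscript=True).eval()

ids = torch.tensor([tokenizer.encode("hello how are you ?")])
traced = torch.jit.trace(Wrapper(model), (ids, torch.zeros_like(ids)))
optimize_for_mobile(traced)._save_for_lite_interpreter("mobile_model.ptl")
```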
transformers
17,895
closed
Fix tf pytorch test in auto
# What does this PR do? Reopen #16044 by force push: fix some tests in `TFPTAutoModelTest`. This is probably the last fix needed to make `models_rembert` the only (intended) test failure in the CI report.
06-27-2022 12:09:10
06-27-2022 12:09:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,894
closed
Trainer in `run_image_classification.py` removes necessary `"image"` column for evaluation
### System Info ```shell - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.15.0-39-generic-x86_64-with-glibc2.35 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu113 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Who can help? @sgugger @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Reproduce: * Run `python examples/pytorch/image-classification/run_image_classification.py --model_name_or_path nateraw/vit-base-beans --dataset_name beans --output_dir ./beans_outputs/ --do_eval` Error: ``` [INFO|trainer.py:661] 2022-06-27 13:15:34,641 >> The following columns in the evaluation set don't have a corresponding argument in `ViTForImageClassification.forward` and have been ignored: image. If image are not expected by `ViTForImageClassification.forward`, you can safely ignore this message. [INFO|trainer.py:2753] 2022-06-27 13:15:34,642 >> ***** Running Evaluation ***** [INFO|trainer.py:2755] 2022-06-27 13:15:34,642 >> Num examples = 133 [INFO|trainer.py:2758] 2022-06-27 13:15:34,642 >> Batch size = 8 Traceback (most recent call last): File "/home/fxmarty/hf_internship/transformers/examples/pytorch/image-classification/run_image_classification.py", line 388, in <module> main() File "/home/fxmarty/hf_internship/transformers/examples/pytorch/image-classification/run_image_classification.py", line 370, in main metrics = trainer.evaluate() File "/home/fxmarty/hf_internship/transformers/src/transformers/trainer.py", line 2621, in evaluate output = eval_loop( File "/home/fxmarty/hf_internship/transformers/src/transformers/trainer.py", line 2788, in evaluation_loop for step, inputs in enumerate(dataloader): File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 530, in __next__ data = self._next_data() File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 570, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2154, in __getitem__ return self._getitem( File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2139, in _getitem formatted_output = format_table( File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 532, in format_table return formatter(pa_table, query_type=query_type) File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 281, in __call__ return self.format_row(pa_table) File 
"/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 387, in format_row formatted_batch = self.format_batch(pa_table) File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 418, in format_batch return self.transform(batch) File "/home/fxmarty/hf_internship/transformers/examples/pytorch/image-classification/run_image_classification.py", line 318, in val_transforms example_batch["pixel_values"] = [_val_transforms(pil_img.convert("RGB")) for pil_img in example_batch["image"]] KeyError: 'image' ``` This error is expected because the trainer remove unused columns ( https://github.com/huggingface/transformers/blob/ee0d001de71f0da892f86caa3cf2387020ec9696/src/transformers/trainer.py#L652-L676 ). However, the evaluation dataset (and I reckon training as well) uses `set_transforms`, that requires keeping the `"image"` column: https://github.com/huggingface/transformers/blob/ee0d001de71f0da892f86caa3cf2387020ec9696/examples/pytorch/image-classification/run_image_classification.py#L338 and https://github.com/huggingface/transformers/blob/ee0d001de71f0da892f86caa3cf2387020ec9696/examples/pytorch/image-classification/run_image_classification.py#L315-L318 ### Expected behavior No error. We should somehow be able to tell to the trainer that the `"image"` column is necessary. An alternative is to load all images in memory before calling the trainer so that we have the `pixel_values` column from the very start, but this is costly.
06-27-2022 11:26:53
06-27-2022 11:26:53
Hi, If `set_transform` (or `with_transform`) is used, you have to specify `--remove_unused_columns False` to the script. Otherwise, the error you show above occurs. Cc @nateraw<|||||>@NielsRogge Thank you for your quick reply, I missed it!<|||||>To quote @nateraw from his [blog post](https://huggingface.co/blog/fine-tune-vit): > What I'm trying to say is that you'll have a bad time if you forget to set remove_unused_columns=False. 😂
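Putting the advice above in one place, a hedged sketch of how `set_transform` and `remove_unused_columns=False` fit together (the checkpoint and dataset names are the ones from the report; the real script wires this up with more options).

```python
from datasets import load_dataset
from transformers import AutoFeatureExtractor, TrainingArguments

dataset = load_dataset("beans", split="validation")
feature_extractor = AutoFeatureExtractor.from_pretrained("nateraw/vit-base-beans")

def val_transforms(example_batch):
    # Needs the raw "image" column at runtime, which the Trainer would otherwise drop.
    images = [img.convert("RGB") for img in example_batch["image"]]
    example_batch["pixel_values"] = feature_extractor(images, return_tensors="pt")["pixel_values"]
    return example_batch

dataset.set_transform(val_transforms)

training_args = TrainingArguments(
    output_dir="./beans_outputs",
    remove_unused_columns=False,  # same as passing --remove_unused_columns False to the script
)
```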
transformers
17,893
closed
Ambiguous positional embedding management in LongformerEmbeddings
### System Info ```shell Current main version of the transformer lib (4.20.1?) ``` ### Who can help? @ydshieh ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi! In the code of the LongformerEmbeddings ([here](https://github.com/huggingface/transformers/blob/401fcca6c561d61db6ce25d9b1cebb75325a034f/src/transformers/models/longformer/modeling_longformer.py#L459)), there is unreachable code for the `position_ids` in `forward`. ```python def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None): if position_ids is None: if input_ids is not None: # Create the position ids from the input token ids. Any padded tokens remain padded. position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx).to(input_ids.device) else: position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds) ... # Here, code is unreachable # as both create_position_ids_from_input_ids and create_position_ids_from_inputs_embeds always return stg if position_ids is None: position_ids = self.position_ids[:, :seq_length] ``` So, what is the actual expected behavior of this layer for the positional embeddings ids? Is it supposed to use self.position_ids or not? (If yes, then this is indeed a bug, if not, then some code could be removed for clarity)
06-27-2022 09:31:15
06-27-2022 09:31:15
Hi @clefourrier Looking at PR #7352, `LongformerEmbeddings` was originally using (or being) `RobertaEmbeddings` at that time, and both had the line `if position_ids is None:`. Looking at the current `RobertaEmbeddings`, there is no such line anymore. So I think we can remove it from `LongformerEmbeddings` too without any doubt (actually, it is already very obvious, but I just wanted to find more evidence 😄 ) Would you like to open a PR?
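For context, the padding-aware helper discussed above looks roughly like this (simplified from the RoBERTa/Longformer code): it always returns a tensor, which is why the later `if position_ids is None:` branch can never run.

```python
import torch

def create_position_ids_from_input_ids(input_ids, padding_idx):
    # Real tokens count up from padding_idx + 1; padding positions keep padding_idx.
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = torch.cumsum(mask, dim=1) * mask
    return incremental_indices.long() + padding_idx

input_ids = torch.tensor([[0, 31414, 232, 2, 1, 1]])  # 1 is the pad id in RoBERTa-style vocabularies
print(create_position_ids_from_input_ids(input_ids, padding_idx=1))
# tensor([[2, 3, 4, 5, 1, 1]])
```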
transformers
17,892
closed
Fix job links in Slack report
# What does this PR do? The current `notification_service.py` doesn't take `artifact_path['gpu']` (`single` or `multi`) into account when storing the `job_link` information, which leads to wrong pages (sometimes) when we click the `GitHub Action Job` button on Slack. This PR fixes this issue.
06-27-2022 07:19:07
06-27-2022 07:19:07
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,891
closed
Remove DT_DOUBLE from the T5 graph
# What does this PR do? This PR removes DT_DOUBLE aka tf.float64 from T5 TF graph. It comes from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_ops.py#L1510 , two operands are int32 so TF casts them to float64 (`_TRUEDIV_TABLE[dtypes.int32] = dtypes.float64`). Some accelerators do not support doubles so it's important to avoid them wherever possible. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten, @LysandreJik
06-27-2022 07:01:02
06-27-2022 07:01:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @szutenberg 👋 Have you confirmed that the slow tests pass after this change? (you can run the slow tests with `NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 py.test -vv tests/models/t5/test_modeling_tf_t5.py`) It looks good to me if it passes the tests :)<|||||>@gante - my change passes tests ``` (venv28) msz@G4:~/transformers$ NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 pytest -vv tests/models/t5/test_modeling_tf_t5.py ============================================================================================================================= test session starts ============================================================================================================================= platform linux -- Python 3.8.5, pytest-7.1.2, pluggy-1.0.0 -- /home/msz/venv28/bin/python cachedir: .pytest_cache rootdir: /home/msz/transformers, configfile: setup.cfg plugins: typeguard-2.13.3 collected 86 items tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_attention_outputs <- tests/test_modeling_tf_common.py PASSED [ 1%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_compile_tf_model <- tests/test_modeling_tf_common.py PASSED [ 2%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_config PASSED [ 3%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_dataset_conversion <- tests/test_modeling_tf_common.py PASSED [ 4%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_determinism <- tests/test_modeling_tf_common.py PASSED [ 5%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_forward_signature <- tests/test_modeling_tf_common.py PASSED [ 6%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_generate_with_headmasking PASSED [ 8%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_headmasking <- tests/test_modeling_tf_common.py PASSED [ 9%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_hidden_states_output <- tests/test_modeling_tf_common.py PASSED [ 10%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_initialization <- tests/test_modeling_tf_common.py PASSED [ 11%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_inputs_embeds <- tests/test_modeling_tf_common.py PASSED [ 12%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_int64_inputs <- tests/test_modeling_tf_common.py PASSED [ 13%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_keras_fit <- tests/test_modeling_tf_common.py PASSED [ 15%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_keras_save_load SKIPPED (The inputs of the Main Layer are different.) 
[ 16%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_keyword_and_dict_args <- tests/test_modeling_tf_common.py PASSED [ 17%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 18%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 19%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_lm_head_model_random_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 20%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_lm_head_model_random_no_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 22%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_tf_common.py PASSED [ 23%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_loss_computation <- tests/test_modeling_tf_common.py PASSED [ 24%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_model_common_attributes PASSED [ 25%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_model_from_pretrained PASSED [ 26%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_model_main_input_name <- tests/test_modeling_tf_common.py PASSED [ 27%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_model_outputs_equivalence <- tests/test_modeling_tf_common.py PASSED [ 29%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_numpy_arrays_inputs <- tests/test_modeling_tf_common.py PASSED [ 30%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_onnx_compliancy <- tests/test_modeling_tf_common.py PASSED [ 31%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_onnx_runtime_optimize <- tests/test_modeling_tf_common.py SKIPPED (test requires tf2onnx) [ 32%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_tf_common.py SKIPPED (test is PT+TF test) [ 33%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_resize_embeddings PASSED [ 34%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_resize_token_embeddings <- tests/test_modeling_tf_common.py PASSED [ 36%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_save_load <- tests/test_modeling_tf_common.py PASSED [ 37%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_save_load_config <- tests/test_modeling_tf_common.py PASSED [ 38%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_saved_model_creation PASSED [ 39%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_t5_decoder_model_past PASSED [ 40%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_t5_decoder_model_past_large_inputs PASSED [ 41%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_t5_decoder_model_past_with_attn_mask PASSED [ 43%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_t5_model PASSED [ 44%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_t5_model_v1_1 PASSED [ 45%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_t5_model_xla_generate_fast PASSED [ 46%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_with_lm_head PASSED [ 47%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_attention_outputs <- tests/test_modeling_tf_common.py PASSED [ 48%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_compile_tf_model <- 
tests/test_modeling_tf_common.py PASSED [ 50%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_config PASSED [ 51%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_dataset_conversion <- tests/test_modeling_tf_common.py PASSED [ 52%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_determinism <- tests/test_modeling_tf_common.py PASSED [ 53%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_forward_signature <- tests/test_modeling_tf_common.py PASSED [ 54%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_generate_with_headmasking <- tests/test_modeling_tf_common.py PASSED [ 55%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_headmasking <- tests/test_modeling_tf_common.py PASSED [ 56%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_hidden_states_output <- tests/test_modeling_tf_common.py PASSED [ 58%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_initialization <- tests/test_modeling_tf_common.py PASSED [ 59%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_inputs_embeds <- tests/test_modeling_tf_common.py PASSED [ 60%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_int64_inputs <- tests/test_modeling_tf_common.py PASSED [ 61%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_keras_fit <- tests/test_modeling_tf_common.py PASSED [ 62%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_keras_save_load <- tests/test_modeling_tf_common.py PASSED [ 63%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_keyword_and_dict_args <- tests/test_modeling_tf_common.py PASSED [ 65%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 66%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 67%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_lm_head_model_random_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 68%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_lm_head_model_random_no_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 69%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_tf_common.py PASSED [ 70%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_loss_computation <- tests/test_modeling_tf_common.py PASSED [ 72%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_model PASSED [ 73%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_model_common_attributes <- tests/test_modeling_tf_common.py PASSED [ 74%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_model_main_input_name <- tests/test_modeling_tf_common.py PASSED [ 75%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_model_outputs_equivalence <- tests/test_modeling_tf_common.py PASSED [ 76%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_numpy_arrays_inputs <- tests/test_modeling_tf_common.py PASSED [ 77%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_onnx_compliancy <- tests/test_modeling_tf_common.py PASSED [ 79%] 
tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_onnx_runtime_optimize <- tests/test_modeling_tf_common.py SKIPPED (test requires tf2onnx) [ 80%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_tf_common.py SKIPPED (test is PT+TF test) [ 81%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_resize_token_embeddings <- tests/test_modeling_tf_common.py PASSED [ 82%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_save_load <- tests/test_modeling_tf_common.py PASSED [ 83%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_save_load_config <- tests/test_modeling_tf_common.py PASSED [ 84%] tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_train_pipeline_custom_model PASSED [ 86%] tests/models/t5/test_modeling_tf_t5.py::TFT5GenerationIntegrationTests::test_beam_search_generate PASSED [ 87%] tests/models/t5/test_modeling_tf_t5.py::TFT5GenerationIntegrationTests::test_greedy_generate PASSED [ 88%] tests/models/t5/test_modeling_tf_t5.py::TFT5GenerationIntegrationTests::test_greedy_xla_generate_simple PASSED [ 89%] tests/models/t5/test_modeling_tf_t5.py::TFT5GenerationIntegrationTests::test_sample_generate PASSED [ 90%] tests/models/t5/test_modeling_tf_t5.py::TFT5GenerationIntegrationTests::test_sample_xla_generate_simple PASSED [ 91%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_small_byt5_integration_test PASSED [ 93%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_small_integration_test PASSED [ 94%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_small_v1_1_integration_test PASSED [ 95%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_summarization PASSED [ 96%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_translation_en_to_de PASSED [ 97%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_translation_en_to_fr PASSED [ 98%] tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_translation_en_to_ro PASSED [100%] ============================================================================================================================== warnings summary =============================================================================================================================== ../venv28/lib/python3.8/site-packages/flatbuffers/compat.py:19 /home/msz/venv28/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp ../venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:23 /home/msz/venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:23: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. 'nearest': pil_image.NEAREST, ../venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:24 /home/msz/venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:24: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. 
'bilinear': pil_image.BILINEAR, ../venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:25 /home/msz/venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:25: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 'bicubic': pil_image.BICUBIC, ../venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:28 /home/msz/venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:28: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. if hasattr(pil_image, 'HAMMING'): ../venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:29 /home/msz/venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:29: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. _PIL_INTERPOLATION_METHODS['hamming'] = pil_image.HAMMING ../venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:30 /home/msz/venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:30: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. if hasattr(pil_image, 'BOX'): ../venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:31 /home/msz/venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:31: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. _PIL_INTERPOLATION_METHODS['box'] = pil_image.BOX ../venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:33 /home/msz/venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:33: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. if hasattr(pil_image, 'LANCZOS'): ../venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:34 /home/msz/venv28/lib/python3.8/site-packages/keras_preprocessing/image/utils.py:34: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. _PIL_INTERPOLATION_METHODS['lanczos'] = pil_image.LANCZOS tests/models/t5/test_modeling_tf_t5.py: 1116 warnings /home/msz/venv28/lib/python3.8/site-packages/datasets/formatting/formatting.py:197: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape)) tests/models/t5/test_modeling_tf_t5.py::TFT5ModelTest::test_resize_embeddings tests/models/t5/test_modeling_tf_t5.py::TFT5GenerationIntegrationTests::test_beam_search_generate tests/models/t5/test_modeling_tf_t5.py::TFT5GenerationIntegrationTests::test_greedy_generate tests/models/t5/test_modeling_tf_t5.py::TFT5GenerationIntegrationTests::test_greedy_xla_generate_simple tests/models/t5/test_modeling_tf_t5.py::TFT5GenerationIntegrationTests::test_sample_generate tests/models/t5/test_modeling_tf_t5.py::TFT5GenerationIntegrationTests::test_sample_xla_generate_simple tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_small_integration_test /home/msz/transformers/src/transformers/models/t5/tokenization_t5.py:164: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5. For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`. - Be aware that you SHOULD NOT rely on t5-small automatically truncating your input to 512 when padding/encoding. - If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding. - To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value. warnings.warn( tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_summarization tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_translation_en_to_de tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_translation_en_to_fr tests/models/t5/test_modeling_tf_t5.py::TFT5ModelIntegrationTests::test_translation_en_to_ro /home/msz/transformers/src/transformers/models/t5/tokenization_t5.py:164: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5. For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`. - Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding. - If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding. - To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value. 
warnings.warn( -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ========================================================================================================== 81 passed, 5 skipped, 1137 warnings in 680.12s (0:11:20) =========================================================================================================== (venv28) msz@G4:~/transformers$ git log -n 2 commit 004812a999675881da07e5ffb253b80c95883941 (HEAD -> remove_float64, origin/remove_float64) Author: Michal Szutenberg <[email protected]> Date: Mon Jun 27 08:52:46 2022 +0200 Remove DT_DOUBLE from the T5 graph commit cc5c061e346365252458126abb699b87cda5dcc0 (origin/master, origin/HEAD, master) Author: Joao Gante <[email protected]> Date: Sat Jun 25 16:17:11 2022 +0100 CLI: handle multimodal inputs (#17839) (venv28) msz@G4:~/transformers$ pip list | grep transformers transformers 4.21.0.dev0 WARNING: You are using pip version 21.1; however, version 22.1.2 is available. You should consider upgrading via the '/home/msz/venv28/bin/python -m pip install --upgrade pip' command. ```<|||||>@szutenberg awesome! Thank you for double-checking the tests -- merging :)
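The upcast described in the PR body is easy to see in isolation; a tiny illustrative snippet (not taken from the PR itself):

```python
import tensorflow as tf

a, b = tf.constant(6), tf.constant(4)  # both int32
print((a / b).dtype)                   # float64: true division of int32 tensors upcasts to DT_DOUBLE
print((tf.cast(a, tf.float32) / tf.cast(b, tf.float32)).dtype)  # float32: cast before dividing
```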
transformers
17,890
closed
Ignore `test_multi_gpu_data_parallel_forward` for `LayoutLMV2`
# What does this PR do? Ignore test_multi_gpu_data_parallel_forward for LayoutLMV2. The reason to skip is the same as in #17864. (The usage of `add_module`)
06-27-2022 06:21:49
06-27-2022 06:21:49
_The documentation is not available anymore as the PR was closed or merged._<|||||>Ok, so it's always advised to avoid using `add_module`?<|||||>@NielsRogge Not really. The test `test_multi_gpu_data_parallel_forward` uses `nn.DataParallel`, but PyTorch recommends using `DistributedDataParallel`, see [here](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html). However, I don't know if `add_module` works well with `DistributedDataParallel`. It would be good to avoid `add_module` until we decide to remove all `nn.DataParallel`. But in the cases where you really need `add_module`, don't hesitate.<|||||>@NielsRogge I guess I need to add a more meaningful commit message, so you don't have to double-check when clicking the merge button :-)
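For reference, skipping such a test in a model's test file usually looks like the hedged sketch below (a stand-in class, not the exact diff from this PR):

```python
import unittest

class LayoutLMv2ModelTest(unittest.TestCase):  # stand-in for the real test class
    @unittest.skip(reason="LayoutLMv2 uses `add_module`, which does not play well with `nn.DataParallel`")
    def test_multi_gpu_data_parallel_forward(self):
        pass
```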
transformers
17,889
closed
Fix `test_number_of_steps_in_training_with_ipex`
# What does this PR do? Fix `test_number_of_steps_in_training_with_ipex`. ## Details This test uses `no_cuda=True`, which will change `n_gpu` to `0` (see `_setup_devices`), and `train_batch_size` will be `8` (with the default training args). However, this line https://github.com/huggingface/transformers/blob/93f48da2740ab69fd14e6bbb38d53c87b4809eda/tests/trainer/test_trainer.py#L590 is computed (earlier) with GPUs, and therefore the (total) batch size is `16` when 2 GPUs are available. This causes the following error. #### Current test error ```bash tests/trainer/test_trainer.py::TrainerIntegrationTest::test_number_of_steps_in_training_with_ipex (line 652) AssertionError: 24 != 12.0 ```
06-27-2022 06:19:18
06-27-2022 06:19:18
_The documentation is not available anymore as the PR was closed or merged._
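The mismatch described in the PR body, in plain numbers, using illustrative values chosen to reproduce the reported `24 != 12.0` (the actual fixture sizes in the test file may differ):

```python
num_samples, num_epochs, per_device_bs = 64, 3, 8

n_gpu_seen_earlier = 2      # reference step count computed while CUDA devices were still visible
n_gpu_with_no_cuda = 0      # no_cuda=True -> max(1, n_gpu) == 1 during the actual run

expected_steps = num_samples * num_epochs / (per_device_bs * max(1, n_gpu_seen_earlier))   # 12.0
actual_steps = num_samples * num_epochs // (per_device_bs * max(1, n_gpu_with_no_cuda))    # 24
print(actual_steps, expected_steps)
```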
transformers
17,888
closed
Update expected values in CodeGen tests
# What does this PR do? Update expected values in the CodeGen test `test_codegen_sample`. The current values work for other GPUs, but for the Nvidia T4 we need the values in this PR. Note that `do_sample` will call `self.sample` (in `generation_utils.py`), which uses `torch.multinomial`, which is not 100% reproducible across different accelerators.
06-27-2022 06:17:39
06-27-2022 06:17:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>Kindly pinging @patil-suraj, as I am eager to get to 0 test failures 🚀 on the CI report
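A tiny hedged illustration of the reproducibility point made in the PR body: the same seed pins `torch.multinomial` on a given device, but the sampling path can differ between accelerator types, hence per-hardware expected values.

```python
import torch

probs = torch.tensor([0.1, 0.2, 0.3, 0.4])

torch.manual_seed(0)
print(torch.multinomial(probs, num_samples=5, replacement=True))
torch.manual_seed(0)
print(torch.multinomial(probs, num_samples=5, replacement=True))  # identical on the same device

# On GPU the draws can still differ between architectures (e.g. a T4 vs. another card),
# which is why the expected strings in the test are hardware-specific.
```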
transformers
17,887
closed
Update expected values in constrained beam search tests
# What does this PR do? Update expected values in the constrained beam search tests. #17814 changed `generation_utils.py`, which gives new expected values in the test (otherwise the test fails, as in the current CI report).
06-27-2022 06:14:44
06-27-2022 06:14:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks the corrections sound better as well :-)
transformers
17,886
closed
Pruning function in T5Attention doesn't affect _relative_position_bucket
### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the head pruning function on a T5 model, then run inference. ### Expected behavior The relative position bias should be pruned for the removed heads too. Here it is: https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L355
06-27-2022 00:36:26
06-27-2022 00:36:26
@hadaev8 It is not clear to me about this. `_relative_position_bucket` is a `staticmethod` without using any model weight in it, and IMO there is no need to do anything when pruning a model. cc @patrickvonplaten <|||||>@ydshieh Relative position bias have shape (dim, heads). For example I have 6 heads and pruned one, would be mismatch, (dim, 5) + (dim, 6) Here this line https://github.com/huggingface/transformers/blob/3ccff0d400ffd1b0c5074e15afb2b1f2af0e7b44/src/transformers/models/t5/modeling_t5.py#L529 I realized all layers use same positional bias, so it should be masked in forward, not pruned.<|||||>After looking the 2 blocks below, I think there is indeed a shape issue when we prune the heads. Would you like to try to make a minimal code snippet that could confirm the issue, @hadaev8? https://github.com/huggingface/transformers/blob/3ccff0d400ffd1b0c5074e15afb2b1f2af0e7b44/src/transformers/models/t5/modeling_t5.py#L432 https://github.com/huggingface/transformers/blob/3ccff0d400ffd1b0c5074e15afb2b1f2af0e7b44/src/transformers/models/t5/modeling_t5.py#L351<|||||>@ydshieh Here it is https://colab.research.google.com/drive/1HYu-yzmmbumbskGZExXlOP0WFmDYdgAp?usp=sharing I fixed rel pos bias, but where is some other error<|||||>Hey @hadaev8, This is quite an edge case and I don't think it'll be to find an easy fix here because usually one only prunes some heads of some layers (not of all layers), where as the same `position_bias` is applied to **all** layers. So pruning some heads of only some layers will necessarily lead to problems here. The solution I see it to dynamically discard the superfluous dimensions of `relative_attention_bias`at every attention layer if the corresponding head has been discarded. @hadaev8 would you be interested in opening a PR for this? I won't have the time to dive deeper here for this sadly in the near future, but more than happy to review! <|||||>@patrickvonplaten My fix looks like this and seems to work, but I'm not satisfied with it, idk if it worth adding to codebase. ``` if self.pruned_heads: mask = torch.ones(position_bias.shape[1]) mask[list(self.pruned_heads)] = 0 position_bias_masked = position_bias[:,mask.bool()] else: position_bias_masked = position_bias scores += position_bias_masked ```<|||||>Hey @hadaev8, That's actually quite a smart fix :-) Think I'd be ok with adding this! Do you want to open a PR for it ? :-)<|||||>@patrickvonplaten Okay, if you think its ok, i will do pr tomorrow.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
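The workaround quoted in this thread, repackaged as a standalone, runnable helper for readability; inside `T5Attention.forward` the returned tensor would be added to `scores` in place of `position_bias`.

```python
import torch

def mask_pruned_position_bias(position_bias: torch.Tensor, pruned_heads: set) -> torch.Tensor:
    """Drop the head slices of `position_bias` (batch, n_heads, q_len, k_len) that were pruned."""
    if not pruned_heads:
        return position_bias
    mask = torch.ones(position_bias.shape[1])
    mask[list(pruned_heads)] = 0
    return position_bias[:, mask.bool()]

bias = torch.randn(1, 6, 4, 4)                      # 6 heads before pruning
print(mask_pruned_position_bias(bias, {2}).shape)   # torch.Size([1, 5, 4, 4])
```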
transformers
17,885
closed
Replicating RoBERTa-base GLUE results
Hello! I had originally posted this on the [forums](https://discuss.huggingface.co/t/replicating-roberta-base-glue-results/19328) but it seems like there's not much foot traffic there, so hoping to get more visibility here. I'm trying to replicate RoBERTa-base GLUE results as reported in the [model card](https://huggingface.co/roberta-base#evaluation-results). The numbers in the model card look like they were copied from the paper. Has anyone made an attempt to actually match these numbers with `run_glue.py`? If so, what configuration was used for the trainer? If I follow the original configs from [fairseq](https://github.com/facebookresearch/fairseq/tree/fcca32258c8e8bcc9f9890bf4714fa2f96b6b3e1/examples/roberta/config/finetuning), I am unable to match the reported numbers for RTE, CoLA, STS-B, and MRPC. Any pointers would be much appreciated, thanks!
06-26-2022 22:44:44
06-26-2022 22:44:44
Another option would be to open a discussion in the community tab in https://huggingface.co/roberta-base/discussions and tag the model authors there<|||||>Thanks for the suggestion, that's a neat feature! I opened a discussion [here](https://huggingface.co/roberta-base/discussions/1) (although, it's not quite clear how to discover the model authors by handle).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,884
closed
Attention gradients for models
### Feature request Hi, I was wondering if there is a way to get the gradients of the attention weights. In PyTorch, we can do this via hooks and it works perfectly for getting embeddings gradients. But, I had an issue doing this for the transformer attention weights. Is there any way we can make this possible? ### Motivation This can help with model interpretability with the scaled attention method. ### Your contribution I can attach my current codebase if you guys would be interested. It works partly but sometimes the gradients are zeroed. I am not sure if this is the correct behavior though.
06-26-2022 21:03:56
06-26-2022 21:03:56
Hi @Rachneet , Model interpretability is indeed interesting and useful! However, I think we currently don't have a plan to integrate a mechanism for getting gradients into `transformers`. There is a library, [Captum](https://captum.ai/), which might be useful in this area though. cc @LysandreJik <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
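As a workaround for the request above, here is a rough sketch of how attention gradients can already be read back in PyTorch without custom hooks: the tensors returned with `output_attentions=True` are non-leaf nodes of the autograd graph, so calling `retain_grad()` on them before `backward()` keeps their gradients. The checkpoint name below is only an assumption for illustration; any PyTorch classification model should work the same way.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
outputs = model(**inputs, output_attentions=True)

# One (batch, heads, seq_len, seq_len) attention tensor per layer.
for attn in outputs.attentions:
    attn.retain_grad()  # non-leaf tensors need this to keep their .grad

# Backprop the score of the predicted class.
outputs.logits[0, outputs.logits[0].argmax()].backward()

attn_grads = [attn.grad for attn in outputs.attentions]
print(attn_grads[0].shape)  # e.g. torch.Size([1, 12, seq_len, seq_len])
```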
transformers
17,883
closed
Exception encountered when calling layer "tf_bert_for_pre_training" (type TFBertForPreTraining)
### System Info ```shell `transformers` version: 4.20.0 - Platform: Linux-5.13.0-48-generic-x86_64-with-glibc2.31 - Python version: 3.7.13 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @Rocketknight1 @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Colab link to reproduce: https://colab.research.google.com/drive/1tusV1pNe7sV2To9y7tep4l2LTPunEd5h?usp=sharing ``` InvalidArgumentError: Exception encountered when calling layer "tf_bert_for_pre_training" (type TFBertForPreTraining). Input to reshape is a tensor with 61 values, but the requested shape requires a multiple of 2 [Op:Reshape] Call arguments received by layer "tf_bert_for_pre_training" (type TFBertForPreTraining): • input_ids={'input_ids': 'tf.Tensor(shape=(2, 512), dtype=int64)', 'token_type_ids': 'tf.Tensor(shape=(2, 512), dtype=int64)', 'attention_mask': 'tf.Tensor(shape=(2, 512), dtype=int64)', 'next_sentence_label': 'tf.Tensor(shape=(2,), dtype=int64)', 'labels': 'tf.Tensor(shape=(2, 512), dtype=int64)'} • attention_mask=None • token_type_ids=None • position_ids=None • head_mask=None • inputs_embeds=None • output_attentions=None • output_hidden_states=None • return_dict=None • labels=None • next_sentence_label=None • training=True ``` ### Expected behavior ```shell The pre-training should start. I suspect the problem occurs when the number of masked tokens is not divisible by the batch size because of the reshape operation here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_tf_bert.py#L146. The solution doesn't look trivial as it would need loss to be reduced before addition, because the current implementation adds loss for each item/sample separately. ```
06-26-2022 18:23:10
06-26-2022 18:23:10
This line seems strange to me https://github.com/huggingface/transformers/blob/401fcca6c561d61db6ce25d9b1cebb75325a034f/src/transformers/models/bert/modeling_tf_bert.py#L146 I don't think it makes sense to reshape `masked_lm_loss ` using `next_sentence_loss`. cc @Rocketknight1 <|||||>@ydshieh This is true. I suspect this is done because 1) During loss calculation no "reduction" is being done 2) Since no reduction is being done the code is trying to add loss sample/instance wise (for example `masked_lm_loss` + `next_sentence_loss` for each sentence in a batch). The fix might be to do reduction but then I see no loss calculation with reduction anywhere in HF.<|||||>Investigating this now - I think this bug is real, but does not occur for most of our models, and might be specific to `BertForPreTraining()` and the next sentence prediction loss. As a workaround for now, you can use a language model that doesn't have a next sentence prediction loss, like `TFBertForMaskedLM` or `TFRobertaForMaskedLM` - the current consensus is that this loss isn't that helpful for training a language model anyway, and models more recent than BERT generally don't use it.<|||||>Hi @Rocketknight1 , Thank You for your reply. Actually, I am planning to make a community notebook for Tensorflow BERT pre-training (it's been a problem to figure out according to the [discussion](https://discuss.huggingface.co/t/pre-train-bert-from-scratch/1245/30)) since BERT still serves as a baseline for a lot of the research and very recent [work](https://arxiv.org/pdf/2203.15827.pdf) also shows variations of NSP to help in pre-training. So I thought this might be a nice feature to have. Thank You for the help!<|||||>Bug post-mortem: The bug is in the line that @ydshieh identified. The code here is quite old and was obviously trying to reshape the masked LM loss before reduction so that a per-sample loss tensor would result. However, the loss vector does not reshape cleanly after masking, because random positions are removed from each sample. I rewrote everything with static shapes to fix the issue, and add XLA compilation as a bonus!<|||||>@Sreyan88 We have now pushed a fix, so you can try installing from `main` and see if this fixes your problem. If it doesn't, please feel free to post the new error and reopen this issue!<|||||>Hi @Rocketknight1 , The code works perfectly fine on colab now! However, in my personal server, it's giving me `nan` loss since the beginning of training. Do you think there is a reason for this? I have the same tf version (2.8.0) on both and the same hf version too. The only difference is GPUs (Tesla T4 on Colab and RTX 3090 on person system). Any clues?<|||||>Hi @Sreyan88, we're in the process of rewriting some loss functions in preparation for our next release, so things are changing quite quickly on `main`. Can you try updating to the most recent commit on your personal server and let me know if you still get the error? Use `pip install --upgrade git+https://github.com/huggingface/transformers.git`<|||||>The problem persists :( . Is there anything more I should do beyond checking that both are on `4.21.0.dev0`?<|||||>It finally worked, I had to force-reinstall. Thank You!<|||||>Hi @Rocketknight1 , Just a question, can `prediction_logits` keyword in [this](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_tf_bert.py#L1247) line be converted to `logits`? a.k.a the `prediction_logits` in `TFBertForPreTrainingOutput` be converted to `logits`. 
This way the model becomes compatible with `pipeline("fill-mask")`, which would be useful since pre-training also includes MLM as a task! P.S. `pipeline("fill-mask")` currently errors out with `TFBertForPreTraining` because it expects `logits`. Thank you! If you think this change would be correct, I can create a PR!<|||||>Hi @Sreyan88, I'm not sure - like I said, `TFBertForPreTraining` is mostly not used anymore because the next sentence loss doesn't seem to be helpful! If you'd like to use a model you trained with `TFBertForPreTraining` with the `fill_mask` pipeline, I suggest loading the checkpoint with `TFBertForMaskedLM.from_pretrained()` - this will give you a model without the next sentence prediction head, which `fill_mask` doesn't use anyway.
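A short sketch of the workaround described just above: reload a checkpoint trained with `TFBertForPreTraining` as `TFBertForMaskedLM` so the fill-mask pipeline can use it. `"path/to/your-checkpoint"` is a placeholder for your own saved model directory.

```python
from transformers import AutoTokenizer, TFBertForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# The NSP head weights in the checkpoint are simply ignored (with a warning).
mlm_model = TFBertForMaskedLM.from_pretrained("path/to/your-checkpoint")

fill_mask = pipeline("fill-mask", model=mlm_model, tokenizer=tokenizer)
print(fill_mask("The capital of France is [MASK]."))
```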
transformers
17,882
closed
Copied "Fine-tuning a masked language model" tutorial, got error on last step - training
### System Info ```shell - `transformers` version: 4.20.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ``` ### Who can help? @sgugger, @SaulLu ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the PyTorch code provided in the [FIne-tuning a masked language model](https://huggingface.co/course/chapter7/3?fw=pt) tutorial or the linked Colab notebook. I copied the code into [this Colab notebook](https://colab.research.google.com/drive/1Wqjg3gDaSmFCww6ZRkixsCYr-QPjwGgf?usp=sharing) and have experienced the error here and when I run the code locally. The last cell trains the model for a handful of iterations before throwing the following exception: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in convert_to_tensors(self, tensor_type, prepend_batch_axis) 706 if not is_tensor(value): --> 707 tensor = as_tensor(value) 708 ValueError: expected sequence of length 128 at dim 1 (got 28) During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) 10 frames [<ipython-input-13-8bea5af68eb3>](https://localhost:8080/#) in <module>() 17 ) 18 ---> 19 trainer.train() [/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1411 resume_from_checkpoint=resume_from_checkpoint, 1412 trial=trial, -> 1413 ignore_keys_for_eval=ignore_keys_for_eval, 1414 ) 1415 [/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1623 1624 step = -1 -> 1625 for step, inputs in enumerate(epoch_iterator): 1626 1627 # Skip past any already trained steps if resuming training [/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in __next__(self) 528 if self._sampler_iter is None: 529 self._reset() --> 530 data = self._next_data() 531 self._num_yielded += 1 532 if self._dataset_kind == _DatasetKind.Iterable and \ [/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in _next_data(self) 568 def _next_data(self): 569 index = self._next_index() # may raise StopIteration --> 570 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 571 if self._pin_memory: 572 data = _utils.pin_memory.pin_memory(data) [/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py](https://localhost:8080/#) in fetch(self, possibly_batched_index) 50 else: 51 data = self.dataset[possibly_batched_index] ---> 52 return self.collate_fn(data) [/usr/local/lib/python3.7/dist-packages/transformers/data/data_collator.py](https://localhost:8080/#) in __call__(self, features, return_tensors) 40 return self.tf_call(features) 41 
elif return_tensors == "pt": ---> 42 return self.torch_call(features) 43 elif return_tensors == "np": 44 return self.numpy_call(features) [/usr/local/lib/python3.7/dist-packages/transformers/data/data_collator.py](https://localhost:8080/#) in torch_call(self, examples) 727 # Handle dict or lists with proper padding and conversion to tensor. 728 if isinstance(examples[0], Mapping): --> 729 batch = self.tokenizer.pad(examples, return_tensors="pt", pad_to_multiple_of=self.pad_to_multiple_of) 730 else: 731 batch = { [/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in pad(self, encoded_inputs, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, verbose) 2892 batch_outputs[key].append(value) 2893 -> 2894 return BatchEncoding(batch_outputs, tensor_type=return_tensors) 2895 2896 def create_token_type_ids_from_sequences( [/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in __init__(self, data, encoding, tensor_type, prepend_batch_axis, n_sequences) 207 self._n_sequences = n_sequences 208 --> 209 self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis) 210 211 @property [/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in convert_to_tensors(self, tensor_type, prepend_batch_axis) 722 ) 723 raise ValueError( --> 724 "Unable to create tensor, you should probably activate truncation and/or padding " 725 "with 'padding=True' 'truncation=True' to have batched tensors with the same length." 726 ) ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. ``` ### Expected behavior ```shell The code fine-tunes `distilroberta-base` on the `eli5` dataset without error. ```
06-26-2022 18:19:19
06-26-2022 18:19:19
Hi @billray0259 , I believe that in your notebook you have modified the `group_texts` function a little, in particular by removing the following line:
```python
# We drop the last chunk if it's smaller than chunk_size
total_length = (total_length // chunk_size) * chunk_size
```
I think the error you are getting is due to the fact that you have kept this last chunk, which will not be at the right size. Reintroducing this line should solve your problem. Keep me informed :relaxed: <|||||>Thank you @SaulLu! That line of code solves my issue! When putting together this issue, I made a different mistake; I linked to a similar but different tutorial. [This is the tutorial I was following](https://huggingface.co/docs/transformers/tasks/language_modeling) It appears this tutorial is missing the line that solves the issue. `group_texts` function from the tutorial:
```python
block_size = 128

def group_texts(examples):
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
```
It is also entirely possible that I have made an error when copying the code. Thank you again for your help, apologies that I didn't realize I was linking to a different page (which happened to contain the solution 😳) <|||||>Ahah, funny! No worries! Would you like to open a PR changing the `group_texts` snippet in the documentation (the page is [here](https://github.com/huggingface/transformers/blob/main/docs/source/en/tasks/language_modeling.mdx)) so that other people testing the guide don't run into the problem you encountered?<|||||>Great idea! I have [submitted a PR](https://github.com/huggingface/transformers/pull/17908) and I'll close this issue.
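For reference, here is the quoted `group_texts` function with the missing line restored (written with `block_size` in place of `chunk_size`); keeping only whole `block_size` chunks guarantees every example has the same length, so the default data collator can stack them into a tensor:

```python
block_size = 128

def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # Drop the last chunk if it's smaller than block_size.
    total_length = (total_length // block_size) * block_size
    # Split by chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
```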
transformers
17,881
closed
Test fix job link in report
1. > [`**_**#**_**`](url) What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-26-2022 16:16:08
06-26-2022 16:16:08
needed<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@rahulpatil6886 This is a temporary branch to test another PR. This is not meant to be a PR itself.
transformers
17,880
closed
KeyError: 'logits'
### System Info ```shell `transformers` version: 4.16.2 - Platform: Linux-5.13.0-48-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyTorch version (GPU?): 1.9.1+cu111 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction bert_name = 'bert-base-cased' bert_ = AutoModel.from_pretrained(bert_name) tokenizer_= AutoTokenizer.from_pretrained(bert_name) classifier = pipeline("zero-shot-classification",model=bert_,tokenizer=tokenizer_) for d in tqdm(data_loader): text=d['text'] true_label = d["label"] for i in range(len(text)): tl=c.index(true_label[i]) Ground_Truth.append(tl) output=classifier(text[i],label) print('output',output) high_score=max(output['scores']) Error::: File "/home/kshankar/Desktop/Project/Zero_Shot_updated/Fine-tuning/BBC_distilbert-base-uncased-finetuned-sst-2-english.py", line 187, in eval_model output=classifier(text[i],label) File "/home/kshankar/miniconda3/lib/python3.9/site-packages/transformers/pipelines/zero_shot_classification.py", line 182, in __call__ return super().__call__(sequences, **kwargs) File "/home/kshankar/miniconda3/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1006, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/home/kshankar/miniconda3/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1030, in run_single outputs = self.postprocess(all_outputs, **postprocess_params) File "/home/kshankar/miniconda3/lib/python3.9/site-packages/transformers/pipelines/zero_shot_classification.py", line 214, in postprocess logits = np.concatenate([output["logits"].numpy() for output in model_outputs]) File "/home/kshankar/miniconda3/lib/python3.9/site-packages/transformers/pipelines/zero_shot_classification.py", line 214, in <listcomp> logits = np.concatenate([output["logits"].numpy() for output in model_outputs]) KeyError: 'logits' ### Expected behavior ```shell logits is assigned before assignment ```
06-25-2022 17:26:26
06-25-2022 17:26:26
Hi, You're loading the pipeline with a `BertModel`, which doesn't include a head on top (like a sequence classification head for instance). Hence, no `logits` are computed. The zero-shot classification pipeline makes use of sequence classifiers fine-tuned on an [NLI task](http://nlpprogress.com/english/natural_language_inference.html) (natural language inference). Hence, you'll need to provide an `xxxForSequenceClassification` model fine-tuned on such a dataset.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hi,
> 
> You're loading the pipeline with a `BertModel`, which doesn't include a head on top (like a sequence classification head for instance). Hence, no `logits` are computed.
> 
> The zero-shot classification pipeline makes use of sequence classifiers fine-tuned on an [NLI task](http://nlpprogress.com/english/natural_language_inference.html) (natural language inference). Hence, you'll need to provide an `xxxForSequenceClassification` model fine-tuned on such a dataset.

Thank you for the response! It's the same even if I use a pre-trained zero-shot classification model from Hugging Face. Example:
bert_name = 'facebook/bart-large-mnli'
model = AutoModel.from_pretrained(bert_name)
tokenizer = AutoTokenizer.from_pretrained(bert_name)
classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)<|||||>You need to replace `AutoModel` with `AutoModelForSequenceClassification` and use a model that supports `AutoModelForSequenceClassification`. Or use directly
```python
pipe = pipeline(model="facebook/bart-large-mnli")
print(pipe("Is this ok?", candidate_labels=["Science", "politics"]))
```
<|||||>> You need to replace `AutoModel` with `AutoModelForSequenceClassification` and use a model that supports `AutoModelForSequenceClassification`.
> 
> Or use directly
> 
> ```python
> pipe = pipeline(model="facebook/bart-large-mnli")
> print(pipe("Is this ok?", candidate_labels=["Science", "politics"]))
> ```

It's working. Thanks a lot.
transformers
17,879
closed
Wav2Vec2ProcessorWithLM degraded performance when transcribing multiple files
### System Info ```shell - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ``` ### Who can help? @patrickvonplaten, @anton-l ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This snippet uses https://github.com/falcaopetri/transformers/commit/4d0d36ef66b1fd52942721096665f8bc9574c2b0 to allow setting a pool in `batch_decode`. Full colab example [here](https://colab.research.google.com/drive/1j4UNdqcafKH8WQUYIr871xc8h2A97B_z?usp=sharing). ```python # based on https://huggingface.co/patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torch from jiwer import wer model_id = "patrickvonplaten/wav2vec2-base-100h-with-lm" ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") model = AutoModelForCTC.from_pretrained(model_id).to("cuda") processor = AutoProcessor.from_pretrained(model_id) def map_to_pred(batch, pool=None): inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt") inputs = {k: v.to("cuda") for k,v in inputs.items()} with torch.no_grad(): logits = model(**inputs).logits transcription = processor.batch_decode(logits.cpu().numpy(), pool=pool).text[0] batch["transcription"] = transcription return batch # Current implementation. Pool with 2 workers will be created for each dataset instance # Running this a second time to reuse cache does not significantly improve runtime (it's still > 15s) result = ds.map(map_to_pred, remove_columns=["audio"]) print(wer(result["text"], result["transcription"])) # 100% 73/73 [00:29<00:00, 3.48ex/s] # 0.057391304347826085 from multiprocessing import get_context # Alternative implementation. User-managed pool is reused for all instances with get_context("fork").Pool(None) as pool: result = ds.map(map_to_pred, remove_columns=["audio"], fn_kwargs={"pool": pool}) print(wer(result["text"], result["transcription"])) # 100% 73/73 [00:04<00:00, 17.12ex/s] # 0.057391304347826085 ``` ### Expected behavior I'd expect that instantiating a `Wav2Vec2ProcessorWithLM` allowed me to apply it to multiple audio instances, and that increasing `batch_decode`'s `num_processes` would bring performance improvements for all calls. Current implementation of `batch_decode` creates a `multiprocessing.Pool` at every call, leading to an overhead when decoding multiple files and when increasing `num_processes`. ----- https://github.com/falcaopetri/transformers/commit/4d0d36ef66b1fd52942721096665f8bc9574c2b0 implements a POC that allows `Wav2Vec2ProcessorWithLM` to reuse the same `multiprocessing.Pool` across multiple `batch_decode` calls. Performance gains can be checked in the previous Colab link. 
Allowing the user to manage their own `Pool` is equivalent to how `pyctcdecode` implements [decode_batch](https://github.com/kensho-technologies/pyctcdecode/blob/33478761427b3faad2652ca5b46b158566d88bab/pyctcdecode/decoder.py#L609), but we could also consider having a `Wav2Vec2ProcessorWithLM`-managed pool. For example, as a user I'd expect out-of-the-box performance gains when using more `num_processes`. We should be aware about docs on [multiprocessing.pool.Pool](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool) though: > Note that it is not correct to rely on the garbage collector to destroy the pool as CPython does not assure that the finalizer of the pool will be called (see [object.__del__()](https://docs.python.org/3/reference/datamodel.html#object.__del__) for more information). > ... > A frequent pattern found in other systems (such as Apache, mod_wsgi, etc) to free resources held by workers is to allow a worker within a pool to complete only a set amount of work before being exiting, being cleaned up and a new process spawned to replace the old one. The maxtasksperchild argument to the [Pool](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool) exposes this ability to the end user.
06-25-2022 15:43:00
06-25-2022 15:43:00
Pool creation might be required to be platform-dependent in the future (https://github.com/huggingface/transformers/pull/17070#issuecomment-1117695494), which means this would be users' responsibility if we go with a user-managed pool scenario.<|||||>Hey @falcaopetri, Thanks for the well-explained issue here! I agree that it would be nicer to let the user pass the pool to the function as an argument. Would you be interested in opening a PR for this? :-)<|||||>Sure, I'd be glad to help. I'll add some tests and docs and create a PR over the next few days.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,878
closed
Add type hints for RoFormer models
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adding type hints for `RoFormer` model (PyTorch). Issue related: #16059. This is my second PR in the 🤗 Transformers repo, please let me know if any change is required. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? _Task requested in [comment](https://github.com/huggingface/transformers/issues/16059#issuecomment-1165783174) for issue #16059._ - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? _Ran `make fixup` before last commit._ ## Who can review? @Rocketknight1 for review or assign reviewer. Thanks! <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-25-2022 15:41:19
06-25-2022 15:41:19
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,877
closed
Fix bug in gpt2's (from-scratch) special scaled weight initialization
I randomly noticed a minor bug in the (from-scratch) weight initialization of gpt2, where the same tensor gets re-initialized over and over many times. I don't believe the significantly more common `from_pretrained` is impacted. The original discussion and explanation is here https://github.com/huggingface/transformers/pull/13573#discussion_r906288955 . The simplest reproduction is
```python
from transformers import GPT2Model, GPT2Config

configuration = GPT2Config()
model = GPT2Model(configuration)
```
Then if you insert `print(id(p), name)` inside the if statement you'll see 4 inits of the same tensor, at each layer of the onion, e.g.:
```
139851709684832 c_proj.weight
139851709684832 attn.c_proj.weight
139851709684832 0.attn.c_proj.weight
139851709684832 h.0.attn.c_proj.weight
```
I verified that the original code triggers the `if` statement 96 times, while this version triggers it 24 times, which is correct for a 12-layer model. I also ran `pytest tests/models/gpt2/test_modeling_gpt2.py` without issues. The code is still not super satisfying (e.g. there is still one layer of overwriting present due to the init in the code block above, it only happens in the right order because of the way `self.apply` iterates depth-first over children, and we're "hard-coding" variable names present in different modules all the way up), but fixing this would be a bit of a bigger refactor. cc potential gpt2 reviewers @patrickvonplaten, @LysandreJik and @sgugger, @siddk from original thread
06-25-2022 01:14:13
06-25-2022 01:14:13
_The documentation is not available anymore as the PR was closed or merged._<|||||>Coming to this PR a bit late (sorry!) and this is a nice fix! I think @karpathy raises a good point about maybe factoring out a special post-init (probably not for GPT-2 since folks are used to this API, but for future models others may want to hack on) that prevent the extra initialization calls. FWIW this code does only execute once at the beginning of a from-scratch training run; I could see this becoming a problem if we tried to naively scale to much larger models. I'll see if we can come up with a better fix for other models. Thanks for the PR @karpathy - super excited to see you contributing to `transformers`!
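To illustrate the "layers of the onion" effect described in the PR above, here is a small sketch; the name-based predicate is a simplified stand-in for the real `_init_weights` check, and the standard `GPT2Model` module layout is assumed. `nn.Module.apply` visits every nesting level, and `named_parameters()` at each level reports the same underlying tensor under a progressively longer name, so a substring match fires once per level.

```python
from collections import Counter
from transformers import GPT2Config, GPT2Model

model = GPT2Model(GPT2Config(n_layer=1))

hits = Counter()

def count_c_proj_matches(module):
    # Simplified stand-in for the name-based check used during init.
    for name, p in module.named_parameters():
        if "c_proj" in name and "weight" in name:
            hits[id(p)] += 1

model.apply(count_c_proj_matches)
# Each c_proj.weight tensor is matched 4 times: at the attention/MLP level,
# the block level, the ModuleList level, and the GPT2Model level.
print(hits)
```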
transformers
17,876
closed
Inference API failing: `"Unknown error in run_once : postprocess() got an unexpected keyword argument 'return_all_scores'`
This model doesn't seem to work on the Hub: https://huggingface.co/guidecare/feelings_and_issues_large?text=I+like+you.+I+love+you Note the Unknown Error in red. It worked fine last week. Further, if I try to use the Accelerated Inference API via ```python import requests API_URL = "https://api-inference.huggingface.co/models/guidecare/feelings_and_issues_large" headers = {"Authorization": "Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"} # i just put my own in here def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() output = query({ "inputs": "I like you. I love you", }) ``` I get this: ``` {'error': 'unknown error', 'warnings': ["Unknown error in run_once : postprocess() got an unexpected keyword argument 'return_all_scores'"]} ``` ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Clearly defined above. ### Expected behavior ```shell No error! ```
06-25-2022 00:05:36
06-25-2022 00:05:36
Hello again. Any ideas here? <|||||>Perhaps this has something to do with the `.bin` file being "unsafe" for some reason? ![image](https://user-images.githubusercontent.com/1874668/176098156-9a5d9413-959c-4e00-bb9d-e10f98f8a9e6.png) <|||||>Looks like the error is different now: ![image](https://user-images.githubusercontent.com/1874668/176098571-c71dbd4f-478b-44a9-8360-a5f76ad0011d.png) <|||||>I am no longer getting an error when running this code:
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/guidecare/feelings_and_issues_large"
headers = {"Authorization": "Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}  # i just put my own in here

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "I like you. I love you",
})
```
<|||||>Closing in favor of https://github.com/huggingface/huggingface_hub/issues/932<|||||>Hi @sergeyf , Thanks for reporting this issue. ` 'warnings': ["Unknown error in run_once : postprocess() got an unexpected keyword argument 'return_all_scores'"]}` was fixed yesterday (it was linked to a small customization in the API with regard to pipelines that had to be updated; by default the API returns all scores while the pipeline returns only the top score). <|||||>We've been using Inference API in production and it was down for multiple days for us. We didn't see any updates on https://status.huggingface.co under Inference API. How can we get notified of downtimes in the future? Is Inference API recommended for production use?<|||||>> How can we get notified of downtimes in the future? Unfortunately this wasn't picked up as downtime since the API was responding correctly. Errors do sometimes happen on models because of configuration issues, and then the API just cannot run them (neither can the `pipeline` object within `transformers`, which is what powers the API). > Is Inference API recommended for production use? Very much so. Breaking like this is definitely not great and I do apologize for this experience. Changes like this in `transformers` are very rare. But we definitely should have caught and fixed it earlier, again apologies here. For production use/issues we also recommend contacting [email protected] (Issues on GitHub do work but it requires some internal routing to make it to the correct person).
transformers
17,875
closed
run_clm with gpt2 and wiki103 throws ValueError: expected sequence of length 1024 at dim 1 (got 1012) during training.
### System Info ```shell transformers 4.20.0.dev0 dev_0 python 3.7.11 h12debd9_0 linux os (docker container). ``` ### Who can help? @patil-suraj, @patrickvonplaten, @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I run the following code using `run_clm.py` `python run_clm.py --train_name testing-gpt2 --output_dir ../data/training_outputs/ --model_type gpt2 --dataset_name wikitext --dataset_config_name wikitext-103-v1 --tokenizer_name gpt2 --preprocessing_num_workers 30 --do_train --overwrite_output_dir --save_steps 100000000 --save_total_limit 2 --num_train_epochs 175 --config_overrides n_layer=3` The error is: ``` {'loss': 7.0678, 'learning_rate': 4.998998477686083e-05, 'epoch': 0.04} 0%| | 1000/2496200 [03:17<138:25:20, 5.01it/s]{'loss': 6.2992, 'learning_rate': 4.997996955372166e-05, 'epoch': 0.07} | 0/1 [00:00<?, ?ba/s] 0%| | 1500/2496200 [04:57<138:30:20, 5.00it/s]{'loss': 6.0153, 'learning_rate': 4.9969954330582484e-05, 'epoch': 0.11} 0%| | 2000/2496200 [06:35<137:22:45, 5.04it/s]{'loss': 5.8504, 'learning_rate': 4.995993910744331e-05, 'epoch': 0.14} | 0/1 [00:00<?, ?ba/s] 0%| | 2500/2496200 [08:14<136:41:10, 5.07it/s]{'loss': 5.6997, 'learning_rate': 4.994992388430414e-05, 'epoch': 0.18} {'loss': 5.5797, 'learning_rate': 4.993990866116497e-05, 'epoch': 0.21} 0%| | 3500/2496200 [11:32<135:57:45, 5.09it/s]{'loss': 5.4796, 'learning_rate': 4.99298934380258e-05, 'epoch': 0.25} {'loss': 5.3864, 'learning_rate': 4.991987821488663e-05, 'epoch': 0.28} 0%|▏ | 4500/2496200 [14:50<137:03:49, 5.05it/s]{'loss': 5.3007, 'learning_rate': 4.990986299174746e-05, 'epoch': 0.32} 0%|▏ | 5000/2496200 [16:29<139:10:06, 4.97it/s]{'loss': 5.2367, 'learning_rate': 4.989984776860829e-05, 'epoch': 0.35} 0%|▏ | 5500/2496200 [18:08<137:22:34, 5.04it/s]{'loss': 5.16, 'learning_rate': 4.988983254546912e-05, 'epoch': 0.39} 0%|▏ | 6000/2496200 [19:47<138:31:23, 4.99it/s]{'loss': 5.109, 'learning_rate': 4.987981732232994e-05, 'epoch': 0.42} {'loss': 5.0511, 'learning_rate': 4.986980209919077e-05, 'epoch': 0.46} 0%|▏ | 7000/2496200 [23:06<138:01:44, 5.01it/s]{'loss': 5.0114, 'learning_rate': 4.98597868760516e-05, 'epoch': 0.49} 0%|▏ | 7500/2496200 [24:45<136:05:20, 5.08it/s]{'loss': 4.957, 'learning_rate': 4.984977165291243e-05, 'epoch': 0.53} 0%|▏ | 8000/2496200 [26:24<137:43:44, 5.02it/s]{'loss': 4.8985, 'learning_rate': 4.983975642977326e-05, 'epoch': 0.56} 0%|▎ | 8500/2496200 [28:04<136:25:23, 5.07it/s]{'loss': 4.8615, 'learning_rate': 4.982974120663408e-05, 'epoch': 0.6} 0%|▎ | 9000/2496200 [29:43<137:32:41, 5.02it/s]{'loss': 4.8158, 'learning_rate': 4.981972598349491e-05, 'epoch': 0.63} 0%|▎ | 9259/2496200 [30:35<137:00:18, 5.04it/s]Traceback (most recent call last): File "run_clm.py", line 649, in <module> main() File "run_clm.py", line 597, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/workspace/transformers/src/transformers/trainer.py", line 1327, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/workspace/transformers/src/transformers/trainer.py", line 1539, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 521, in __next__ data = self._next_data() File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data data = 
self._dataset_fetcher.fetch(index) # may raise StopIteration File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch return self.collate_fn(data) File "/workspace/transformers/src/transformers/data/data_collator.py", line 67, in default_data_collator return torch_default_data_collator(features) File "/workspace/transformers/src/transformers/data/data_collator.py", line 131, in torch_default_data_collator batch[k] = torch.tensor([f[k] for f in features]) ValueError: expected sequence of length 1024 at dim 1 (got 1012) ``` Where every time I am getting this same error. I tried following https://stackoverflow.com/questions/71166789/huggingface-valueerror-expected-sequence-of-length-165-at-dim-1-got-128 by adding `padding='max_length'` to my tokenizer but then I get an error where this tokenizer does not have a padding token. NB. I think this only happens when `--preprocessing_num_workers 30` is not to a power of 2? Using 8, 16 or 128 avoids the problem. ### Expected behavior ```shell Model should keep training and not throw an error at training step: 9259 ```
06-24-2022 21:48:09
06-24-2022 21:48:09
Hi @TrentBrick , I think you can assign a custom padding token to GPT2, or discard the sequences that are too short (i.e. having < 1024 tokens)<|||||>@ydshieh is this common practice for GPT2? And I think the bigger issue is that an error is only thrown if my number of workers is not a power of 2? <|||||>Hi, I don't know if this is a common practice, but it is a reasonable approach. The important thing is to make sure the attention masks for those (meant to be padded) tokens have mask value `0` when doing training. Otherwise, you can always discard the short sequences (if it is a rare case). I don't think the issue is coming from the number of workers, it is more about the sequence length. See the previous discussions https://github.com/huggingface/transformers/issues/12594 https://github.com/huggingface/transformers/issues/2630<|||||>I'm telling you that empirically when I use `--preprocessing_num_workers n` where n is a power of 2 there is no error that gets thrown. It is only when it is not a power of 2 that this problem appears. <|||||>@TrentBrick I still believe it is not about `preprocessing_num_workers` being a power of 2 or not. However, this value might indeed have some effect. This method in `run_clm.py` https://github.com/huggingface/transformers/blob/b424f0b4a301abcbf3c282114159371ee44c3e01/examples/pytorch/language-modeling/run_clm.py#L440 tries to group texts and split them into chunks of `block_size` (1024 here). So there should be no shorter sequence. However, there is a condition https://github.com/huggingface/transformers/blob/b424f0b4a301abcbf3c282114159371ee44c3e01/examples/pytorch/language-modeling/run_clm.py#L446 In some cases, it can happen that a batch of examples contains very few examples and `total_length < block_size`, so the short chunk is not thrown away. This indeed depends on `preprocessing_num_workers`. I think you can set a breakpoint around this place and use try/except to verify the situation. I could talk to my colleague about this though - probably we can improve this condition here.<|||||>Hi @TrentBrick Could you check if the following change works? Thanks. Change https://github.com/huggingface/transformers/blob/b424f0b4a301abcbf3c282114159371ee44c3e01/examples/pytorch/language-modeling/run_clm.py#L446-L448 to
```python
if total_length >= block_size:
    total_length = (total_length // block_size) * block_size
else:
    total_length = 0
# Split by chunks of max_len.
```
<|||||>Hi @TrentBrick, if you get the chance to verify it works, don't hesitate to open a PR if you would like to :-). Otherwise, I will open a PR later. Thank you for finding this issue! As mentioned, it actually depends on the batches of examples received in the preprocessing function, and I didn't try running with 30 processes.<|||||>Close this issue - see [this comment](https://github.com/huggingface/transformers/pull/18304#pullrequestreview-1051141051)
transformers
17,874
closed
Fix TF GPT2 `test_onnx_runtime_optimize`
# What does this PR do? Fix TF GPT2 `test_onnx_runtime_optimize` by skipping 2 test classes. Current error: ``` tests/models/gpt2/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_onnx_runtime_optimize (line 372) onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. In Node, ("tfgpt2_for_sequence_classification_27/GatherV2", GatherV2, "", -1) : ("tfgpt2_for_sequence_classification_27/score/Tensordot:0": tensor(float),"tfgpt2_for_sequence_classification_27/sub:0": tensor(int32),"tfgpt2_for_sequence_classification_27/GatherV2/axis:0": tensor(int32),) -> ("logits": tensor(float),) , Error No Op registered for GatherV2 with domain_version of 10 ```
06-24-2022 20:11:26
06-24-2022 20:11:26
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,873
closed
[WIP] Generate docs
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-24-2022 17:33:41
06-24-2022 17:33:41
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Still having hope of merging this one day :crossed_fingers: <|||||>Ok sadly really not finding the time at the moment :cry: @gante @ArthurZucker @sanchit-gandhi could it maybe be interesting for one of you to take it over? <|||||>Also cc @sgugger just FYI <|||||>You don't need to cc me, I see everything ;-) ![image](https://user-images.githubusercontent.com/35901082/198031971-80e9fc44-0093-4c50-b807-aaa62f0f4117.png) <|||||>I could look into this in a couple of weeks if you want to offload it! Reassuring to know @sgugger has assumed the role of Hugging Face's [Big Brother](https://en.wikipedia.org/wiki/Big_Brother_(Nineteen_Eighty-Four)) 👀<|||||>I think I can take care of it maybe next week 😄 Adding it to my list ! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,872
closed
Fix test_inference_instance_segmentation_head
# What does this PR do? The current `test_inference_instance_segmentation_head` (in `MaskFormerModelIntegrationTest`) fails in CI. The expected slice
```python
[[-1.3738, -1.7725, -1.9365], [-1.5978, -1.9869, -2.1524], [-1.5796, -1.9271, -2.0940]]
```
has precision `4` and the `atol` argument (`TOLERANCE`) is also `1e-4`, which puts the difference right at the boundary. This is **likely** the cause of the test failures. Giving the expected values more precision should fix the issue.
```bash
(Pdb) diff1  # (with original expected values)
0.0001039505
(Pdb) diff2  # (with more precision)
1.4066696e-05
```
(However, I am not able to get the test failure with the original setting, launched manually in a GCP VM.)
06-24-2022 16:03:44
06-24-2022 16:03:44
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,871
closed
[CodeGen] support device_map="auto" for sharded checkpoints
# What does this PR do? This PR adds the `_no_split_modules` attribute in `CodeGenPreTrainedModel` to be able to load the sharded checkpoint with `device_map="auto"` cc @rooa
06-24-2022 15:53:54
06-24-2022 15:53:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,870
closed
Properly get tests deps in test_fetcher
# What does this PR do? With the move of the tests, the test fetcher is now improperly converting relative imports from other tests to the corresponding test files. This PR fixes that problem.
06-24-2022 15:23:34
06-24-2022 15:23:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,869
closed
Fix add new model like frameworks
# What does this PR do? When selecting specific frameworks with `transformers-cli add-new-model-like`, all objects are still added to the main init. This is due to the change in all our inits and the command not being properly adapted. This PR will fix it!
06-24-2022 14:35:52
06-24-2022 14:35:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,868
closed
Calling `generate` on a `T5ForConditionalGeneration` returns `n` tokens but `n-1` scores
### System Info ```shell - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? @patrickvonplaten, @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch if __name__ == '__main__': torch.manual_seed(0) tokenizer = AutoTokenizer.from_pretrained('t5-small') model = AutoModelForSeq2SeqLM.from_pretrained('t5-small') input = tokenizer.encode("I enjoy walking with my cute dog", return_tensors='pt') result = model.generate( input, max_new_tokens=15, do_sample=True, return_dict_in_generate=True, output_scores=True, ) print(len(result["scores"])) for sequence in result["sequences"]: print(len(sequence)) print(tokenizer.decode(sequence)) ``` Output: ``` 15 16 <pad> Ich, liebe es, mes lustig beim laufen ``` ### Expected behavior I would have expected to have up to 15 tokens (as `max_new_tokens=15`) and `len(result["scores"]) == len(result["sequences"][0])`. However, the size of the returned sequence of tokens is always `len(result["scores"]) + 1`. In addition, if `max_new_tokens` is reached we have `len(result["sequences"][0]) == max_new_tokens + 1`. When looking at the decoded sequence, there is always a pad token at the beginning. I don't know if this is necessarily a bug but this behaviour is somewhat confusing, especially when trying to compute the probability of the sequence given scores.
06-24-2022 14:30:50
06-24-2022 14:30:50
Hi, @ClementRomac If you look the [config.json](https://huggingface.co/t5-small/blob/main/config.json) file of the `t5-small` model, you will see it uses `pad_token_id` as `decoder_start_token_id` (both are `0`). The `scores` having length `len(sequence) - 1` is expected. Think it this way, ```python generated sequence = [decoder_start_token_id, token_1, token_2] ``` The scores is/are: - score for generating `token_1` while we have `[decoder_start_token_id]` - score for generating `token_2` while we have `[decoder_start_token_id, token_1]` This is also documented in [generation_utils.py](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py), for example (`SampleEncoderDecoderOutput`) https://github.com/huggingface/transformers/blob/afb71b672679e57449085e4955a321db8e5705b9/src/transformers/generation_utils.py#L172 or (`GreedySearchEncoderDecoderOutput`) https://github.com/huggingface/transformers/blob/afb71b672679e57449085e4955a321db8e5705b9/src/transformers/generation_utils.py#L101 etc.<|||||>Hey @ydshieh, Thanks for your answer, it makes sense! Could we consider documenting it a little bit more somewhere? I don't have any clear idea on where to put it but to be honest this behaviour can appear a bit confusing when looking at the documentation. For instance, in [generation_utils.py](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py), it is mentioned (both for `SampleEncoderDecoderOutput` and `GreedySearchEncoderDecoderOutput`): 1. that `sequence_length` should be up to `max_length` (however we get `max_length +1` in the above example) 2. that `scores` will have size `max_length-1` (however we get `max_length` scores in the above example) https://github.com/huggingface/transformers/blob/1dfa03f12b3748dc7e9c2b5ada40c3401ada23a5/src/transformers/generation_utils.py#L169-L175<|||||>@ClementRomac , I think it is because you use `max_new_tokens=15,` instead of the argument `max_length.` See https://github.com/huggingface/transformers/blob/1dfa03f12b3748dc7e9c2b5ada40c3401ada23a5/src/transformers/generation_utils.py#L925-L929 I think it is quite well documented. It is possible to make it even more explicit to include `max_new_tokens` regarding the output format. @patrickvonplaten Do you think we should add this in `GreedySearchEncoderDecoderOutput` etc ..?<|||||>Always happy to make the generate docs more explicit! Also gently pinging @gante here for feedback :-) <|||||>Note: Some docstrings associated with `scores` have ``` `(max_length-1,)`-shaped tuple of `torch.FloatTensor` ``` while others have ``` `(max_length-input_ids.shape[-1],)`-shaped tuple of `torch.FloatTensor` ``` depending on whether the model is an encoder-decoder or a decoder-only (respectively) ______________________ I see two minor problems with the current docstrings: 1. Generation may stop before we generate `max_length` tokens (or `max_new_tokens` new tokens); 2. We are pushing away from `max_length` towards `max_new_tokens`. As such, it would be nice to improve the docs to address these two issues! Since the previous sentence in the docstring contains `(...) 
at each generation step`, perhaps something like this: ``` Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element per generation step), ``` The complete docstring would be: ``` scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element per generation step), with each tensor of shape `(batch_size, config.vocab_size)`). ``` WDYT?<|||||>@gante Looks good to me, as long as we keep `batch_size*num_return_sequences` instead of `batch_size` wherever it applies.<|||||>Very much agree with @gante here!<|||||>Assigned to me to update the docstring for all three frameworks <|||||>@ClementRomac Hi! I have been trying to calculate the probability of a sequence but am not sure how to do it. As you mentioned calculating the probability, can you please tell me how to do it? I have the scores for each step of the generate method, but not sure how to use them. What I am doing is, given a premise and a hypothesis, I am trying to identify whether they are entailment, contradiction, or, neutral. I am getting the classification correctly, I just don't know how to calculate the probability of the sequence being **entailment** ```python def is_entailment(premise, hypothesis): entailment_premise = premise entailment_hypothesis = hypothesis token_output = tokenizer("mnli premise: " + entailment_premise + " hypothesis: " + entailment_hypothesis, return_tensors="pt", return_length=True) input_ids = token_output.input_ids output = model.generate(input_ids, output_scores=True, return_dict_in_generate=True, max_length=50) entailment_ids = output["sequences"] entailment = tokenizer.decode(entailment_ids[0], skip_special_tokens=True) return entailment ```
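A minimal sketch (added here for illustration; it is not an answer that appeared in the thread) of one way to turn the returned scores into per-token and sequence log-probabilities. It assumes greedy or sampled decoding with `return_dict_in_generate=True, output_scores=True`, reuses the `result` variable from the snippet in the issue description, and deliberately ignores padding emitted after the EOS token:

```python
import torch

# Each result["scores"][i] has shape (batch_size, vocab_size): processed logits at step i.
logprobs = torch.stack(result["scores"], dim=1).log_softmax(dim=-1)  # (batch, steps, vocab)

# Drop the decoder_start_token_id: sequences has one more position than scores.
generated_tokens = result["sequences"][:, 1:]                         # (batch, steps)
token_logprobs = logprobs.gather(-1, generated_tokens.unsqueeze(-1)).squeeze(-1)

# Caveat: for batched generation this also sums pad tokens produced after EOS.
sequence_logprob = token_logprobs.sum(dim=-1)
print(token_logprobs, sequence_logprob)
```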
transformers
17,867
closed
layoutxlm model can not convert to onnx
### Model description I use LayoutXLM to train on my data for a downstream task. When I convert the trained model to ONNX with the Hugging Face layoutlmv2-to-onnx code, the problem below occurs. Can you give me some advice? It seems that concatenating two tensors of different types causes this problem, but I did not modify any code; I only ran the XFUN token-classification script, which confuses me a lot. I hope you can help us. ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation Same setup and error as described above; the ONNX conversion failure is shown in this screenshot: <img width="943" alt="企业微信截图_1656080658142" src="https://user-images.githubusercontent.com/77612906/175556272-af5e91d0-c76e-483f-ad24-95fa7690146e.png">
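For illustration only: the class of error in the screenshot, a Concat of two different element types, can be reproduced and fixed in isolation as below. The module and tensor names are hypothetical and the real failing op inside LayoutXLM may be different; this is not taken from the LayoutXLM code.

```python
import torch

class SpatialConcat(torch.nn.Module):
    # Hypothetical toy module illustrating the failure mode: ONNX's Concat op
    # requires identical element types, so an int64 tensor must be cast before
    # being concatenated with float embeddings.
    def forward(self, embeddings):
        positions = torch.arange(embeddings.shape[1]).unsqueeze(0)  # int64
        return torch.cat([embeddings, positions.to(embeddings.dtype)], dim=1)

model = SpatialConcat()
dummy = torch.randn(1, 4)
torch.onnx.export(model, (dummy,), "spatial_concat.onnx", input_names=["embeddings"])
```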
06-24-2022 14:27:27
06-24-2022 14:27:27
cc @lewtun<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@lewtun friendly ping!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
17,866
closed
Bloom Optimize operations
Moved the original PR: #17759 here to check if the tests pass
06-24-2022 14:24:17
06-24-2022 14:24:17
_The documentation is not available anymore as the PR was closed or merged._<|||||>I won't merge this now since I saw that it broke some slow tests, will investigate that!<|||||>With the two proposed changes, all tests are now passing @younesbelkada :)<|||||>Thanks a lot @NouamaneTazi !! Amazing job 🔥 <|||||>Let's merge this together as some improvements to make the inference faster - [x] create attn mask only once - [x] broadcast alibi only once instead of each time on the attention layer - [x] Remove the contiguous calls and test the model - [ ] Refactor the reshaping (check how it is done in BLOOM Flax) in the attention layer<|||||>Before merging, let's fix the code quality tests...<|||||>All tests should be passing now except for `BloomModelTest::test_simple_generation` It seems the issue with this one comes from the fact the we now use `torch.bmm` instead of `torch.baddbmm` in [this line](https://github.com/younesbelkada/transformers/blob/773d8e780fea41b8a8f77bf2bccbfbeacc91d50d/src/transformers/models/bloom/modeling_bloom.py#L307) And I don't undestand what's happening here: (this only gives different outputs for fp16) ```python b = torch.baddbmm( torch.zeros_like(sliced_alibi, dtype=torch.float16), query_layer.transpose(1, 0), key_layer.transpose(1, 0).transpose(1, 2), beta=1.0, alpha=1.0, ) c = torch.baddbmm( sliced_alibi, query_layer.transpose(1, 0), key_layer.transpose(1, 0).transpose(1, 2), beta=1.0, alpha=1.0, ) - sliced_alibi print(b==c) ``` gives: ``` tensor([[[ True, True, True, True, False, True, True], [ True, True, True, True, False, False, False], [ True, True, True, True, False, True, True], [ True, True, False, False, True, False, False], [ True, True, False, False, True, False, False], [ True, True, True, False, False, True, True], [ True, False, False, False, True, False, True]], [[ True, False, True, True, True, True, False], [ True, True, True, True, True, False, False], [ True, False, True, False, False, True, True], [ True, True, False, True, False, True, False], [ True, True, True, True, True, False, True], [ True, True, False, True, True, True, False], [ True, False, True, True, True, True, True]], [[ True, True, True, False, True, True, True], [ True, True, True, True, False, False, True], [ True, True, True, False, True, False, True], [ True, False, True, True, False, False, True], [ True, False, True, False, True, True, True], [ True, True, True, True, True, True, False], [ True, False, True, False, False, True, True]], [[ True, True, True, False, True, True, True], [ True, True, True, True, False, True, True], [ True, True, True, True, False, True, True], [ True, True, True, False, True, True, False], [ True, True, True, True, False, True, False], [ True, True, True, True, False, True, True], [ True, True, True, True, True, True, False]], [[ True, True, True, True, True, True, True], [ True, True, True, False, True, False, True], [ True, True, True, False, False, True, True], [ True, False, True, True, True, False, True], [ True, True, False, True, False, False, True], [ True, True, True, True, True, False, True], [ True, True, False, True, True, True, False]], [[ True, True, True, True, True, True, True], [ True, True, True, False, True, True, True], [ True, True, True, False, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, False, True, True, True], [ True, True, True, True, True, True, True]], [[ True, False, True, False, False, True, True], [ True, True, True, True, True, 
True, True], [ True, True, True, True, False, True, True], [ True, True, True, True, False, False, True], [ True, True, False, False, False, True, True], [ True, False, True, False, False, False, True], [ True, True, True, False, False, True, True]], [[ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, False, True, True], [ True, True, True, True, True, True, True]], [[ True, True, True, False, False, True, True], [ True, True, True, False, True, True, True], [ True, True, True, True, True, True, True], [ True, False, True, True, False, True, True], [ True, False, True, True, True, True, True], [ True, True, False, True, True, True, True], [ True, False, False, True, False, True, False]], [[ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True]], [[ True, False, True, True, True, False, True], [ True, True, True, True, True, True, True], [ True, False, False, True, True, True, True], [ True, False, False, True, True, True, True], [ True, True, True, True, True, False, True], [ True, True, True, True, False, False, True], [ True, False, True, True, True, True, True]], [[ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, False, True, True, True, True, True], [ True, True, True, True, True, False, True]], [[ True, True, True, True, False, True, True], [ True, True, True, False, True, False, True], [ True, True, True, True, True, True, True], [ True, False, True, True, True, True, True], [ True, True, True, True, False, False, True], [ True, True, True, True, True, False, True], [ True, True, False, False, False, True, False]], [[ True, True, True, False, False, False, False], [ True, True, True, True, True, True, True], [ True, True, False, False, True, True, True], [ True, True, True, True, False, True, True], [ True, True, True, True, True, True, True], [ True, True, False, True, True, True, True], [ True, True, True, True, True, True, False]], [[ True, True, True, True, True, True, True], [ True, False, True, True, False, True, True], [ True, True, True, True, True, True, True], [ True, False, True, True, False, True, True], [ True, True, False, True, False, False, True], [ True, True, True, True, False, True, True], [ True, True, True, True, True, False, False]], [[ True, True, True, True, True, True, True], [ True, True, True, False, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, False, True], [ True, True, True, True, True, True, True]]], device='cuda:0') ``` And btw this is what we get when print `old_matmul_result == new_matmul_result` ``` tensor([[[ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, False, True, True, True, True], [ True, True, True, False, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, 
True, True, True, True, True, True]], [[ True, True, True, False, True, True, True], [ True, False, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, False, True, True, True], [ True, True, True, True, False, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, False, True, True]], [[ True, True, True, True, True, True, True], [ True, False, True, True, True, True, False], [ True, True, True, True, True, False, True], [ True, True, False, True, True, True, False], [ True, True, True, True, False, True, True], [ True, False, False, True, True, True, True], [ True, True, True, True, True, True, True]], [[ True, True, False, True, True, True, False], [ True, True, False, False, True, True, False], [ True, True, True, True, True, True, False], [ True, True, True, True, True, True, True], [ True, True, False, True, True, True, True], [ True, False, True, True, True, True, True], [ True, True, True, True, True, True, True]], [[ True, False, True, True, False, False, True], [ True, True, True, True, True, True, True], [ True, True, True, False, True, True, True], [ True, True, True, True, False, True, True], [ True, True, True, True, False, True, True], [ True, False, True, False, True, True, True], [ True, True, True, False, True, True, False]], [[ True, True, True, False, True, False, True], [ True, True, True, True, False, True, False], [ True, False, True, True, True, False, False], [ True, True, True, True, True, True, False], [ True, True, True, True, True, False, False], [ True, True, True, True, True, True, True], [ True, True, True, False, False, False, True]], [[ True, True, True, True, True, True, True], [ True, True, True, False, True, True, False], [ True, True, False, True, False, False, True], [ True, True, True, True, True, False, True], [ True, False, True, False, False, True, True], [ True, True, True, True, False, True, True], [ True, True, True, True, True, True, True]], [[ True, True, True, False, True, True, True], [ True, True, True, True, False, True, True], [ True, True, True, False, True, True, True], [ True, True, True, True, True, True, False], [ True, True, True, True, False, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True]], [[ True, True, True, True, True, True, False], [ True, True, False, True, False, True, True], [ True, True, True, True, True, True, False], [ True, True, False, True, True, False, True], [ True, False, True, True, True, False, False], [ True, False, True, True, False, True, False], [ True, False, True, True, False, True, True]], [[ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True]], [[ True, True, False, True, True, True, True], [ True, True, True, True, True, True, False], [ True, False, True, True, False, True, False], [ True, True, True, True, True, True, True], [ True, True, True, True, True, False, True], [ True, True, True, True, True, True, True], [ True, True, True, True, False, True, False]], [[ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, True, False, True, True], [ True, True, True, True, True, False, True], [ True, True, True, True, True, True, 
True], [ True, True, True, True, True, True, True]], [[ True, False, True, True, False, True, False], [ True, True, True, True, True, True, True], [ True, True, True, False, False, True, True], [ True, False, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, False, True, True, False, True, True], [ True, True, True, True, True, True, True]], [[ True, True, True, True, True, True, True], [ True, False, True, True, True, False, True], [ True, False, True, True, True, False, True], [ True, True, True, False, True, True, False], [ True, True, True, True, True, True, False], [ True, True, True, True, True, True, True], [ True, False, True, False, True, False, True]], [[ True, True, False, True, True, False, False], [ True, False, True, False, True, True, False], [ True, True, True, True, True, True, True], [ True, False, True, False, False, True, True], [ True, True, True, True, False, True, False], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True]], [[ True, True, True, True, True, False, True], [ True, True, True, False, True, True, False], [ True, True, True, True, True, False, True], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True], [ True, True, True, False, False, False, True], [ True, True, True, True, True, False, True]]], device='cuda:0') ```<|||||>all tests are passing now ! Is it ok if we merge this @stas00 (since you are working on DS inference just to check if this PR does not conflict anything with you work) ?<|||||>I didn't have a chance to read this PR, but let me at least run a quick test with it. **update:** it looks fine for the 350b model - I'm waiting for the 176 to download and will test with it as well. if in a rush please go ahead and merge and if anything emerges we can fix it after.<|||||>There were lots of changes since you got approval, so please wait for a re-review of @patrickvonplaten and me.<|||||>Thanks a lot @sgugger @patrickvonplaten I think that we will do the tests in fp32 instead, we just need to keep in mind that doing batched generation can be flaky for small models (<=350m) as we have identified it with @NouamaneTazi . We will put a comment on the tests explaining what we have found and I think that we should be good to go!<|||||>All tests are passing now (tested on A100) 🎉 <|||||>Now tests pass on both A100 and Titan RTX 🎉 (because we used `fp32`) (Note that the test `BloomModelTest::test_batch_generation_padd` is still failing on Titan RTX in `fp16` whether for this PR or the `main` branch, because of the issue mentioned above)
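The True/False dumps above are a numerical effect rather than a logical bug: in half precision, a fused `baddbmm` and a separate `bmm` plus add can round intermediate results differently. A standalone sketch of that effect (shapes and tensor names here are invented, and a CUDA device is assumed for fp16 batched matmuls):

```python
import torch

assert torch.cuda.is_available(), "fp16 batched matmuls in this sketch assume a CUDA device"
torch.manual_seed(0)
alibi = torch.randn(16, 7, 7, dtype=torch.float16, device="cuda")
query = torch.randn(16, 7, 64, dtype=torch.float16, device="cuda")
key = torch.randn(16, 7, 64, dtype=torch.float16, device="cuda")

fused = torch.baddbmm(alibi, query, key.transpose(1, 2), beta=1.0, alpha=1.0)
unfused = alibi + torch.bmm(query, key.transpose(1, 2))

# Both compute alibi + query @ key^T, but the rounding of intermediates differs
# in fp16, so some elements may not match bit-for-bit.
print((fused == unfused).float().mean().item())
```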
transformers
17,865
closed
[tests/VisionEncoderDecoder] import to_2tuple from test utils
# What does this PR do? Import `to_2tuple` from `testing_utils`, as it has been removed from the `modeling_vit` file.
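For context, `to_2tuple` is a small shape helper used by the ViT tests; a sketch of the behaviour being relied on (an illustration of the idea, not the exact `testing_utils` implementation):

```python
from collections.abc import Iterable

def to_2tuple(x):
    # Promote a scalar (e.g. an image or patch size) to an (x, x) pair,
    # and leave values that are already iterable, such as (224, 224), alone.
    if isinstance(x, Iterable):
        return tuple(x)
    return (x, x)

print(to_2tuple(224))         # (224, 224)
print(to_2tuple((224, 196)))  # (224, 196)
```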
06-24-2022 12:57:30
06-24-2022 12:57:30
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,864
closed
Fix Maskformer test
# What does this PR do? Fix the MaskFormer test `test_multi_gpu_data_parallel_forward`. Ignore the test, as the original workaround would break current checkpoints. ------ Original Attempt I know we probably want to avoid using `nn.DataParallel(model)`, but before doing so I tried my best to fix the tests. After spending some time debugging, I found that using `add_module` instead of `nn.Sequential` causes the problem. I am not sure the change in this PR is what we prefer, though. @NielsRogge Is there any reason to use `add_module`? I don't know if the comment regarding `Provide backwards compatibility ...` is really necessary. https://github.com/huggingface/transformers/blob/d88719581b34f301edcc7772d927d8a3e3a77af6/src/transformers/models/maskformer/modeling_maskformer.py#L1986
06-24-2022 12:56:48
06-24-2022 12:56:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>You are right. Previously (and in the checkpoint on the Hub) the key was `model.pixel_level_module.decoder.fpn.stem.0.weight`; with this PR it becomes `model.pixel_level_module.decoder.fpn.stem.layers.1.weight`, i.e. we get an extra `layers` attribute. I will ignore the test.<|||||>But maybe let's use `self.layers = nn.Sequential(...)` for future models in the first place?<|||||>Yes, the problem here is that the model inherited from `nn.Sequential`, which was a mistake. To fix it, we had to go through this hacky workaround, but it's definitely not the recommended approach!<|||||>Test skipped now.
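The key renaming discussed above can be reproduced in a few lines; a toy sketch (with invented layer sizes) of why wrapping the children in a `self.layers` attribute changes the `state_dict` keys compared with subclassing `nn.Sequential` directly:

```python
import torch.nn as nn

class StemAsSequential(nn.Sequential):
    # Children are registered directly on the module, so keys look like "0.weight".
    def __init__(self):
        super().__init__(nn.Conv2d(3, 8, 3), nn.ReLU())

class StemWithAttribute(nn.Module):
    # The same children behind an attribute gain a "layers." prefix in the keys.
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())

    def forward(self, x):
        return self.layers(x)

print(list(StemAsSequential().state_dict()))   # ['0.weight', '0.bias']
print(list(StemWithAttribute().state_dict()))  # ['layers.0.weight', 'layers.0.bias']
```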
transformers
17,863
closed
[Flax] Fix incomplete batches in example scripts
# What does this PR do? Currently in our Flax examples scripts, we drop the last incomplete batch during training and inference: https://github.com/huggingface/transformers/blob/09178705101b9803e7b9ea7f79a46b4c242dd4bf/examples/flax/summarization/run_summarization_flax.py#L350 We do this for two reasons: 1. Because XLA is not shape polymorphic, forming the last batch with shape different from the preceding batches triggers a recompilation of the `pmap`'d function . 2. If the batch size is not divisible by the number devices, then the last step must be executed on a single device (or a subset of devices), potentially leading to OOMs. During training, dropping the last batch isn't an issue: since we shuffle the data and train for multiple epochs, all of the training data is eventually used and the effects of dropping the last batch amortised. However, during evaluation and prediction, dropping the last batch leads to incorrect results: since we don't account for the examples in the last batch, we do not evaluate over the whole dataset, and thus have partial results. This PR corrects for the incomplete batches in the relevant Flax training examples. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
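A rough sketch of the general idea behind correcting for the incomplete batch (not the code in this PR; the helper name, dict layout, and use of NumPy are assumptions): pad the trailing batch up to the static batch size so the `pmap`'d step keeps its shape, and carry a mask so padded rows are excluded from the metrics.

```python
import numpy as np

def pad_to_full_batch(batch, batch_size, pad_id=0):
    """Hypothetical helper: pad an incomplete final batch to the static batch size
    and return a boolean mask marking which rows are real examples."""
    num_real = next(iter(batch.values())).shape[0]
    if num_real == batch_size:
        return batch, np.ones(batch_size, dtype=bool)
    num_pad = batch_size - num_real
    padded = {
        k: np.concatenate([v, np.full((num_pad, *v.shape[1:]), pad_id, dtype=v.dtype)])
        for k, v in batch.items()
    }
    mask = np.concatenate([np.ones(num_real, dtype=bool), np.zeros(num_pad, dtype=bool)])
    return padded, mask
```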
06-24-2022 11:36:06
06-24-2022 11:36:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>Just waiting to double check that the slow tests pass from [`test_flax_examples.py`](https://github.com/huggingface/transformers/blob/main/examples/flax/test_flax_examples.py) before merging. Working with @patil-suraj to verify this ✅<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@patil-suraj @sanchit-gandhi can we merge this one?<|||||>Just verifying the slow tests from [`test_flax_examples.py`](https://github.com/huggingface/transformers/blob/main/examples/flax/test_flax_examples.py) pass on a <s>v3-8</s> v100 GPU!
transformers
17,862
closed
RegexTokenizer
### Feature request We would like to implement a general RegexTokenizer, which gets a regex as input and tokenizes strings according to this regex. ### Motivation In chemistry, for example, there are line notations like SMILES (http://opensmiles.org/opensmiles.html), which can be used to represent molecules and reactions as strings. In previous work, such as the MolecularTransformer (https://pubs.acs.org/doi/full/10.1021/acscentsci.9b00576, built with OpenNMT) or RXNMapper (https://www.science.org/doi/10.1126/sciadv.abe4166, with huggingface/transformers), we used a regex to split SMILES by atoms/bonds. ``` SMI_REGEX_PATTERN = r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>>?|\*|\$|\%[0-9]{2}|[0-9])" def smi_tokenizer(smi, pattern=SMI_REGEX_PATTERN): """ Tokenize a SMILES molecule or reaction """ import re regex = re.compile(pattern) tokens = [token for token in regex.findall(smi)] assert smi == ''.join(tokens) return ' '.join(tokens) ``` But every time we want to change the transformer model, we have to rewrite the tokenizer and redefine it, to make it work with the model. Would there be a more efficient and general way to do it? We could imagine that also other fields (e.g. proteins) could benefit from a RegexTokenizer. ### Your contribution Happy to help with the PR. The regex for SMILES (chemistry) is ready. We just don't know where to best start.
06-24-2022 08:56:54
06-24-2022 08:56:54
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @pschwllr, I currently write my master thesis dealing with molecules and transformers and came here looking for a SMILES tokenizer to use with Hugging Face transformers. I am neither into molecular biology nor Hugging Face, so proceed with some caution, but maybe this is still useful. If it is, please let me know, especially if you have ideas how to improve it. This code snippet provides a tokenizer that can be used with Hugging Face transformers. It uses a simple Word Level algorithm, which you could easily replace with BPE etc.. ```py from tokenizers import Regex, Tokenizer from tokenizers.models import WordLevel from tokenizers.pre_tokenizers import Split from tokenizers.processors import TemplateProcessing from tokenizers.trainers import WordLevelTrainer from transformers import PreTrainedTokenizerFast SMI_REGEX_PATTERN = r"""(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|-|\+|\\|\/|:|~|@|\?|>>?|\*|\$|\%[0-9]{2}|[0-9])""" BOS_TOKEN = "^" EOS_TOKEN = "&" PAD_TOKEN = " " UNK_TOKEN = "?" MODEL_MAX_LENGTH = 120 smi = "CC(C)(C)c1ccc2occ(CC(=O)Nc3ccccc3F)c2c1" smiles_tokenizer = Tokenizer(WordLevel(unk_token=UNK_TOKEN)) smiles_tokenizer.pre_tokenizer = Split( pattern=Regex(SMI_REGEX_PATTERN), behavior="isolated", invert=False ) smiles_trainer = WordLevelTrainer( special_tokens=[BOS_TOKEN, EOS_TOKEN, PAD_TOKEN, UNK_TOKEN] ) smiles_tokenizer.train_from_iterator(smi, trainer=smiles_trainer) smiles_tokenizer.post_processor = TemplateProcessing( single=BOS_TOKEN + " $A " + EOS_TOKEN, special_tokens=[ (BOS_TOKEN, smiles_tokenizer.token_to_id(BOS_TOKEN)), (EOS_TOKEN, smiles_tokenizer.token_to_id(EOS_TOKEN)), ], ) tokenizer_pretrained = PreTrainedTokenizerFast( tokenizer_object=smiles_tokenizer, model_max_length=MODEL_MAX_LENGTH, padding_side="right", truncation_side="left", bos_token=BOS_TOKEN, eos_token=EOS_TOKEN, pad_token=PAD_TOKEN, unk_token=UNK_TOKEN, ) print(tokenizer_pretrained.encode(smi)) # [0, 5, 5, 6, 5, ..., 4, 8, 1] ```
transformers
17,861
closed
Fix the url mistake
# What does this PR do? Fix the url mistake from `https://huggingface.co/docstransformers/training` to `https://huggingface.co/docs/transformers/training`.
06-24-2022 07:16:01
06-24-2022 07:16:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
17,860
closed
Text generation: Unexpected behavior when input ends with newlines
### System Info ```shell - `transformers` version: 4.15.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.5 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.5.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ``` ### Who can help? @patrickvonplaten, @Narsil ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction gen_tokens = model.generate(input_ids, do_sample=specifiedDoSample, temperature=specifiedTemperature, max_length=calculated_max_length, min_length=calculated_min_length, repetition_penalty=specifiedRepetitionPenalty, bad_words_ids=badWordsTokens) gen_text = tokenizer.batch_decode(gen_tokens[:, input_ids.shape[1]:])[0] # tokenizer.batch_decode(gen_tokens)[0] print(gen_text) As input, use text such as (the end of the text has 2 newlines): ``` This is a line 1 This is a line 2 This is a line 3 ``` Actual behavior: -If the input ends with 1 newline, generating multiple tokens works as expected, but generating just 1 token says the next token should be a newline by itself. -If the input ends with 2 newlines, generate multiple tokens doesn't work as expected, and printing the next top score reveals the next token is some unexpected thing such as another newline or a token beginning with a space. Reason it's a problem: If the prompt had a format like this, there is no way to generate a good result while still specifying newline as one of the bad_words_ids. Say we have some dialogue with multiple people saying things, each separated by 2 newlines. We want the next text to also be separated by 2 newlines, but contain no more newlines (we want it to be a big paragraph). There is no way to generate this correctly. ### Expected behavior Either: If the input ends with 1 newline, then the next token should be a newline followed by a word, such as "\nThis" OR If the input ends with 2 newlines, then the next token should be a word that's not preceded by a space, rather than yet another newline
06-24-2022 05:51:22
06-24-2022 05:51:22
Hey @monsieurpooh, Sorry I can't run: ```py gen_tokens = model.generate(input_ids, do_sample=specifiedDoSample, temperature=specifiedTemperature, max_length=calculated_max_length, min_length=calculated_min_length, repetition_penalty=specifiedRepetitionPenalty, bad_words_ids=badWordsTokens) gen_text = tokenizer.batch_decode(gen_tokens[:, input_ids.shape[1]:])[0] # tokenizer.batch_decode(gen_tokens)[0] print(gen_text) ``` as `model` is not defined. Can you copy-paste a reproducible code snippet please? :-) Thanks a lot!<|||||>Hi Patrick, here is a code snippet https://paste.ee/p/B2Upc And here is the input I am using, but please make sure there's 2 newlines at the end to repro: https://paste.ee/p/ND8cZ<|||||>Hey @monsieurpooh, Could you maybe try to just copy-paste here in this thread a short, minimal code snippet (sorry we have very limited amount of time to look at issues and a 200 code snippet file with lots of commented out code takes too much time). Can you try to condense the problem into ~5-10 lines of code maybe? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Please use the following minimal repro code snippet and observe the behavior described in the previous comments by modifying the input to tokenizer. ``` from transformers import GPTNeoForCausalLM, GPT2Tokenizer model_name = "EleutherAI/gpt-neo-125M" model = GPTNeoForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True, cache_dir='gpt_cache_dir', resume_download=True).half().to("cuda:0") tokenizer = GPT2Tokenizer.from_pretrained(model_name, low_cpu_mem_usage=True, cache_dir='gpt_cache_dir', resume_download=True) input_ids = tokenizer("This is a line 1\n\nThis is a line 2\n\nThis is a line 3\n\n", return_tensors="pt").input_ids.cuda() gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.01, max_length=40, min_length=1, repetition_penalty=1.0) gen_text = "Output: \"" + tokenizer.batch_decode(gen_tokens[:, input_ids.shape[1]:])[0] + "\"" print(gen_text) ```<|||||>Gently pinging @gante here as well
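Part of the behaviour reported here comes from how the GPT-2/GPT-Neo byte-level BPE merges whitespace. A short sketch (exact token ids are left to the printout rather than hard-coded) showing that "\n" and "\n\n" are different single tokens, which is why banning the single-newline token via `bad_words_ids` does not forbid a double newline, and why a prompt ending in "\n\n" conditions the model differently from one ending in "\n":

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")

print(tok("\n").input_ids, tok.convert_ids_to_tokens(tok("\n").input_ids))
print(tok("\n\n").input_ids, tok.convert_ids_to_tokens(tok("\n\n").input_ids))

# After a newline the next word is encoded without the leading-space marker "Ġ",
# so the model samples from a different part of the vocabulary than after " ".
print(tok.convert_ids_to_tokens(tok("\n\nThis").input_ids))
print(tok.convert_ids_to_tokens(tok(" This").input_ids))
```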
transformers
17,859
closed
Bad readme for HF model codeparrot/codeparrot-small
### System Info ```shell Not system-dependent https://huggingface.co/codeparrot/codeparrot-small The README is not updated ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("lvwerra/codeparrot-small") model = AutoModelWithLMHead.from_pretrained("lvwerra/codeparrot-small") ### Expected behavior ```shell The model loading should be successful The model card is "codeparrot/codeparrot-small" but in example usage it's "lvwerra/codeparrot-small". A simple update to the model card README will do ```
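For reference, a corrected usage snippet with the `codeparrot` namespace; note that `AutoModelForCausalLM` is used below instead of the deprecated `AutoModelWithLMHead` shown in the report, which is an editorial substitution rather than part of the original model card:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small")
model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot-small")

inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```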
06-23-2022 22:45:22
06-23-2022 22:45:22
Hi! Thanks for reporting. We now have a [discussion and PR feature](https://huggingface.co/codeparrot/codeparrot-small/discussions) on the hub, meaning that you can directly open an issue on the hub for a particular repository. So feel free to ping them there!<|||||>Thanks for reporting - it is fixed now!<|||||>Closing the issue in that case!
transformers
17,858
closed
Add type hints for gptneox models
# What does this PR do? Adding missing type hints for `GPTNeoXForCausalLM` and `GPTNeoXModel`, as referenced in [this issue](https://github.com/huggingface/transformers/issues/16059#issuecomment-1164898772). ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community, please feel free to review 😄 @Rocketknight1
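As a generic illustration of the annotation style such a PR follows (this is not the exact signature added for GPT-NeoX; the argument list below is abridged and only indicative):

```python
from typing import Optional, Tuple, Union

import torch

from transformers.modeling_outputs import CausalLMOutputWithPast

def forward(
    self,
    input_ids: Optional[torch.LongTensor] = None,
    attention_mask: Optional[torch.FloatTensor] = None,
    inputs_embeds: Optional[torch.FloatTensor] = None,
    labels: Optional[torch.LongTensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, CausalLMOutputWithPast]:
    # Illustrative stub only: shows the Optional[...] argument hints and the
    # Union[...] return annotation style used for causal-LM forward methods.
    ...
```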
06-23-2022 22:07:59
06-23-2022 22:07:59
_The documentation is not available anymore as the PR was closed or merged._