repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 23,079 | closed | Trainer doesn't run `compute_metrics` when a `torch.compile` model is passed. | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.12.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run training with evaluation that has a `compute_metrics` function defined.
When passing the model to `Trainer`, pass a `torch.compile()` wrapped model.
In `Trainer.__init__()` there is the line `default_label_names = find_labels(self.model.__class__)`, but the model class is `torch._dynamo.eval_frame.OptimizedModule`, so no labels are assigned. This has the side effect of `compute_metrics` not being run.
It would be great if the Trainer checked for this case and retrieved the underlying model, or at least threw a warning. I'm guessing lots of people are going to come across this as Torch 2.0 gains traction.
I only realised _after_ chasing down the cause of evaluation not running that I can pass `torch_compile=True`, so this bug no longer affects me.
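Here is a minimal sketch of what I mean (the checkpoint below is just a placeholder, not the model I was actually fine-tuning):
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers.utils import find_labels

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
compiled = torch.compile(model)  # class becomes torch._dynamo.eval_frame.OptimizedModule

print(find_labels(model.__class__))     # ['labels']
print(find_labels(compiled.__class__))  # empty in my case, so compute_metrics is silently skipped

# Workaround: let the Trainer compile the model itself by passing
# TrainingArguments(torch_compile=True) instead of a pre-compiled model.
```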
### Expected behavior
Works the same as passing a non-wrapped model. | 04-30-2023 20:14:45 | 04-30-2023 20:14:45 | Another thing I just discovered: it also doesn't complete `save_pretrained()` correctly. It saves _something_, but there isn't a `config.json` file in there. I'm guessing it's the line in `_save()` starting with `if not isinstance(self.model, PreTrainedModel)...`
Again, I know now I can do `torch_compile`, but I reckon this is going to sting lots of users as they try to pass in a compiled model with the understanding that it "just works" without any code changes.<|||||>You shouldn't pass a `torch.compile`-d model to the Trainer, but let the Trainer do the compilation itself.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,078 | closed | Fix `convnext` __init__ | # What does this PR do
Fix
| 04-30-2023 19:20:33 | 04-30-2023 19:20:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,077 | closed | [i18n-<languageCode>] Translating docs to <languageName> | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go 🔥
--> | 04-30-2023 18:03:56 | 04-30-2023 18:03:56 | |
transformers | 23,076 | closed | Unable to compare versions for numpy>=1.17: need=1.17 found=None. | ### System Info
Ubuntu 18.04.6
transformers version : 4.18.0
pytorch version : 2.0.0
numpy version : 1.24.3
conda env
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import pipeline
text_classifier = pipeline('text-classification', model='distilbert-base-uncased-finetuned-sst-2-english')
text = "This movie is good!"
result = text_classifier(text)
print(result)
**When I run code using transformers, I get the following error:**
Traceback (most recent call last):
File "/home/hyx/hhq/hugging_face/test.py", line 1, in <module>
from transformers import pipeline
File "/home/miniconda3/lib/python3.9/site-packages/transformers/__init__.py", line 26, in <module>
from . import dependency_versions_check
File "/home/miniconda3/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 41, in <module>
require_version_core(deps[pkg])
File "/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py", line 123, in require_version_core
return require_version(requirement, hint)
File "/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py", line 117, in require_version
_compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
File "/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py", line 45, in _compare_versions
raise ValueError(
ValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.
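For what it's worth, this is roughly what the failing check relies on (my assumption based on the traceback); if the lookup below also fails or prints nothing sensible, the numpy package metadata in this environment is broken:
```python
import importlib.metadata

# `found=None` in the error above suggests the installed-version lookup
# is not returning anything for numpy in this environment.
print(importlib.metadata.version("numpy"))
print(importlib.metadata.version("transformers"))
```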
### Expected behavior
I have tried to reinstall numpy and transformers, but it does not work. | 04-30-2023 15:02:27 | 04-30-2023 15:02:27 | Updating transformers to the latest version should fix the problem.
You can run:
`pip install --upgrade transformers`
to update Transformers to the latest version.<|||||>> I have tried conda update transformers, but when it finished there was no error and the version did not change; it is still 4.18.0.
Then I also tried the following command; nothing changed:
```
~$ conda install transformers==4.28.1
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
$ conda list transformers
# packages in environment at /home/miniconda3:
#
# Name Version Build Channel
sentence-transformers 2.2.2 pypi_0 pypi
transformers 4.18.0 pypi_0 pypi
```<|||||>It seems that the conda channel has not been updated, hence it pulls in the old version.
Can you try running:
`conda install -c huggingface transformers`
Conda environments also support installs using pip, so you could also run:
```bash
conda install pip
pip install --upgrade transformers
```<|||||>**I have updated transformers to the latest version**
```
$ conda list transformers
# packages in environment at /home/miniconda3:
#
# Name Version Build Channel
sentence-transformers 2.2.2 pypi_0 pypi
transformers 4.28.1 py_0 huggingface
```
**But the problem is still there**
```
Traceback (most recent call last):
File "/home/hyx/hhq/hugging_face/test.py", line 1, in <module>
from transformers import pipeline
File "/home/miniconda3/lib/python3.9/site-packages/transformers/__init__.py", line 26, in <module>
from . import dependency_versions_check
File "/home/miniconda3/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 41, in <module>
require_version_core(deps[pkg])
File "/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py", line 123, in require_version_core
return require_version(requirement, hint)
File "/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py", line 117, in require_version
_compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
File "/home/miniconda3/lib/python3.9/site-packages/transformers/utils/versions.py", line 45, in _compare_versions
raise ValueError(
ValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.
```<|||||>What version of Numpy are you using? Can you update that as well?
You can use:
`conda update numpy`<|||||>```
$ conda list numpy
# packages in environment at /home/miniconda3:
#
# Name Version Build Channel
numpy 1.24.3 py39h14f4228_0 defaults
numpy-base 1.24.3 py39h31eccc5_0 defaults
numpy-quaternion 2022.4.1 pypi_0 pypi
```
as you can see, numpy is also the latest version <|||||>Same thing ...
```
% ipython
Python 3.9.16 (main, Mar 8 2023, 04:29:44)
Type 'copyright', 'credits' or 'license' for more information
IPython 8.12.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import transformers
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import transformers
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/transformers/__init__.py:26
23 from typing import TYPE_CHECKING
25 # Check the dependencies satisfy the minimal versions required.
---> 26 from . import dependency_versions_check
27 from .utils import (
28 OptionalDependencyNotAvailable,
29 _LazyModule,
(...)
42 logging,
43 )
46 logger = logging.get_logger(__name__) # pylint: disable=invalid-name
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/transformers/dependency_versions_check.py:41
38 if not is_tokenizers_available():
39 continue # not required, check version only if installed
---> 41 require_version_core(deps[pkg])
42 else:
43 raise ValueError(f"can't find {pkg} in {deps.keys()}, check dependency_versions_table.py")
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/transformers/utils/versions.py:123, in require_version_core(requirement)
121 """require_version wrapper which emits a core-specific hint on failure"""
122 hint = "Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main"
--> 123 return require_version(requirement, hint)
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/transformers/utils/versions.py:117, in require_version(requirement, hint)
115 if want_ver is not None:
116 for op, want_ver in wanted.items():
--> 117 _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/transformers/utils/versions.py:45, in _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
43 def _compare_versions(op, got_ver, want_ver, requirement, pkg, hint):
44 if got_ver is None or want_ver is None:
---> 45 raise ValueError(
46 f"Unable to compare versions for {requirement}: need={want_ver} found={got_ver}. This is unusual. Consider"
47 f" reinstalling {pkg}."
48 )
49 if not ops[op](version.parse(got_ver), version.parse(want_ver)):
50 raise ImportError(
51 f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}"
52 )
ValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.
In [2]: quit
(pysr) davidlaxer@bluediamond julia % conda list numpy
# packages in environment at /Users/davidlaxer/anaconda3/envs/pysr:
#
# Name Version Build Channel
numpy 1.24.3 py39he696674_0
numpy-base 1.24.3 py39h9cd3388_0
```<|||||>If there is no way to solve this problem, I will switch to a new environment. I will close this issue in a few days<|||||>```
% ipython
Python 3.9.16 (main, Mar 8 2023, 04:29:44)
Type 'copyright', 'credits' or 'license' for more information
IPython 8.12.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import importlib_metadata
In [2]: import numpy
In [3]: importlib_metadata.version(numpy)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[3], line 1
----> 1 importlib_metadata.version(numpy)
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:832, in version(distribution_name)
825 def version(distribution_name):
826 """Get the version string for the named package.
827
828 :param distribution_name: The name of the distribution package to query.
829 :return: The version string for the package as defined in the package's
830 "Version" metadata key.
831 """
--> 832 return distribution(distribution_name).version
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:805, in distribution(distribution_name)
799 def distribution(distribution_name):
800 """Get the ``Distribution`` instance for the named package.
801
802 :param distribution_name: The name of the distribution package as a string.
803 :return: A ``Distribution`` instance (or subclass thereof).
804 """
--> 805 return Distribution.from_name(distribution_name)
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:381, in Distribution.from_name(cls, name)
379 raise ValueError("A distribution name is required.")
380 try:
--> 381 return next(cls.discover(name=name))
382 except StopIteration:
383 raise PackageNotFoundError(name)
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:400, in <genexpr>(.0)
397 raise ValueError("cannot accept context and kwargs")
398 context = context or DistributionFinder.Context(**kwargs)
399 return itertools.chain.from_iterable(
--> 400 resolver(context) for resolver in cls._discover_resolvers()
401 )
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:731, in MetadataPathFinder.find_distributions(self, context)
722 def find_distributions(self, context=DistributionFinder.Context()):
723 """
724 Find distributions.
725
(...)
729 of directories ``context.path``.
730 """
--> 731 found = self._search_paths(context.name, context.path)
732 return map(PathDistribution, found)
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:737, in MetadataPathFinder._search_paths(cls, name, paths)
734 @classmethod
735 def _search_paths(cls, name, paths):
736 """Find metadata directories in paths heuristically."""
--> 737 prepared = Prepared(name)
738 return itertools.chain.from_iterable(
739 path.search(prepared) for path in map(FastPath, paths)
740 )
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:692, in Prepared.__init__(self, name)
690 if name is None:
691 return
--> 692 self.normalized = self.normalize(name)
693 self.legacy_normalized = self.legacy_normalize(name)
File ~/anaconda3/envs/pysr/lib/python3.9/site-packages/importlib_metadata/__init__.py:700, in Prepared.normalize(name)
695 @staticmethod
696 def normalize(name):
697 """
698 PEP 503 normalization plus dashes as underscores.
699 """
--> 700 return re.sub(r"[-_.]+", "-", name).lower().replace('-', '_')
File ~/anaconda3/envs/pysr/lib/python3.9/re.py:210, in sub(pattern, repl, string, count, flags)
203 def sub(pattern, repl, string, count=0, flags=0):
204 """Return the string obtained by replacing the leftmost
205 non-overlapping occurrences of the pattern in string by the
206 replacement repl. repl can be either a string or a callable;
207 if a string, backslash escapes in it are processed. If it is
208 a callable, it's passed the Match object and must return
209 a replacement string to be used."""
--> 210 return _compile(pattern, flags).sub(repl, string, count)
TypeError: expected string or bytes-like object
```<|||||>I created a new Conda virtual environment and installed the requisite packages ... no issue.
So, something was wrong in the original Conda virtual environment (which I removed). |
transformers | 23,075 | closed | Fix check for backword_pos | # What does this PR do?
This fixes the line to match what I believe its original intention was. @raghavanone ??
original PR here: https://github.com/huggingface/transformers/pull/21237
Fixes # (issue)
## Before submitting
- [] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-30-2023 13:47:15 | 04-30-2023 13:47:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @pacman100 |
transformers | 23,074 | closed | How to use BartEncoder and BartDecoder | ### System Info
I'm a computer vision researcher and I want to use BART as an auto-encoder in a CV task.
Here I have a question: how do I use BartEncoder to encode **A** to **z**, and then decode **z** back to **A**?
I need example code; please help me, thank you very much.
Maybe like one of these two?
one:
```
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
inputs = A
encoder = model.model.encoder
decoder = model.model.decoder
z= encoder(input_ids = inputs["input_ids"])
A= decoder(z)
```
It raises the following error: ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds.
two:
```
from transformers.models.bart.modeling_bart import BartEncoder
from transformers.models.bart.modeling_bart import BartDecoder
inputs = A
z = BartEncoder(input_ids=inputs["input_ids"])
A = BartDecoder(z)
```
So, what should I do? I didn't find a document for this; please help me with this, thank you again.
@ArthurZucker @gante @Narsil
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
as I provided code.
### Expected behavior
Write an example code or document | 04-30-2023 11:40:57 | 04-30-2023 11:40:57 | @LRY0111 Even though BART is a Sequence to Sequence Transformer trained with a Denoising Autoencoder objective, it has been trained to reconstruct the original text. So I don't think you can use BART to represent "A" as "z" using the base model.
You can find more information in the model docs [here](https://huggingface.co/docs/transformers/model_doc/bart).
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.<|||||>@awinml I appreciate your help. I mean that I want to get the latent representation z of a sentence A (by Encoder) and make some changes on z to formulate z' ; finally, reverse this process by Decoder to reconstruct the A'.
So I need to know how to do this A-->z-->A with Bart Encoder and Decoder; I want some example code here. Thank you very much.<|||||>cc @gante <|||||>Hey @LRY0111 👋
You were close to the correct usage, but there is a detail you missed in your solution :) The decoder must be used in an auto-regressive fashion, which we conveniently implemented in our `.generate()` method (see [this blog post](https://huggingface.co/blog/how-to-generate)). See the snippet below for an example.
```py
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
inputs = tokenizer(["This is a test. Hello world"], return_tensors="pt")
encoder = model.model.encoder
# z.last_hidden_state has the encoded output. If you manipulate it, you may need to
# rebuild the `BaseModelOutput` data class, which `.generate()` expects
z = encoder(input_ids=inputs["input_ids"])
A = model.generate(encoder_outputs=z, max_new_tokens=20)
print(tokenizer.decode(A[0], skip_special_tokens=True))
```<|||||>Well noted with thanks. I’ll try the solution you provided. Thank you again.
Best regards,
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,073 | closed | Added type hints for `Graphormer` pytorch version | @Rocketknight1 👋
- I added type hint for `graphormer` pytorch
- checked formatting with black and ruff
if some CI/CD checks do not pass, please comment and I will correct them | 04-30-2023 11:36:41 | 04-30-2023 11:36:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This looks pretty good! Is there a reason to use `Union[torch.Tensor, torch.LongTensor]` instead of just `torch.LongTensor`?<|||||>@Rocketknight1 Hi 👋
- `Union[torch.Tensor, torch.LongTensor]` is used because the file has a lot of `nn.Embedding` instances, which expect either an IntTensor or a LongTensor
- so to avoid any confusion 😕 I used that
- [nn.embedding docs](https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html#torch.nn.Embedding)

*if changes are still required, I would be happy to make them* 🙂
<|||||>Hi @dewasahu2003, I think in most cases we just annotate those types as `LongTensor`! Your version is probably more correct, but for simplicity just `LongTensor` is fine, since that's what people usually use.<|||||>@Rocketknight1 Hi 👋
- if `LongTensor` is preferred, I will make the changes
- that would help keep the code 🤒 bloat-free
<|||||>Yep, I think replacing with LongTensor is slightly better, and does make the code a bit cleaner too.<|||||>Sure <|||||>Done. Thanks for the PR, we really appreciate it! |
transformers | 23,072 | closed | Register a custom tokenizer with AutoTokenizer | ### System Info
(Possible duplicate: #10256)
I have written a custom tokenizer that builds on top of `BertTokenizer` (returns one extra list of ids that will later be embedded in a custom model). I have pushed it to Hub as well. Now, how can I allow others to use it? The code for the tokenizer is uploaded to Hub along with the code for the model (they are in the same file), but since I cannot register the tokenizer with `AutoTokenizer` like I can do for models (`CustomModel.register_for_auto_class("AutoModel")`), others cannot load this tokenizer, and hence use the model.
Is there a workaround for this?
Version: 4.27.4
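For reference, a minimal sketch of the registration pattern I am referring to (the class below is a stand-in for my tokenizer, not the real one):
```python
from transformers import BertTokenizer

class CustomTokenizer(BertTokenizer):
    """Hypothetical tokenizer that returns one extra list of ids."""
    pass

# This is what I already do for the model classes, e.g.
#   CustomModel.register_for_auto_class("AutoModel")
# and what I would like to be able to do for the tokenizer as well:
CustomTokenizer.register_for_auto_class("AutoTokenizer")
```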
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The code for both the tokenizer and model can be found here: https://huggingface.co/mcgill-babylm/bert_ds10M_np512_nh2_nl2_hs128_postags_ungrouped/blob/main/pos_bert.py
I am able to load the model with no problems since I push it after registering it as follows
```
BertForMaskedLMWithPOSEmb.register_for_auto_class("AutoModel")
BertForMaskedLMWithPOSEmb.register_for_auto_class("AutoModelForMaskedLM")
```
### Expected behavior
I should be able to register custom tokenizers with `AutoTokenizer` (which might be a new feature request) or work around it somehow to allow other users to use a custom tokenizer. | 04-29-2023 19:39:00 | 04-29-2023 19:39:00 | You can have a look at the [documentation here](https://huggingface.co/docs/transformers/custom_models) but this is already supported. Just do `CustomTokenizer.register_for_auto_class()` like for the models.<|||||>Duh! I was doing this for the models but didn't make the connection to the tokenizer. Thanks @sgugger!
For someone looking for the complete answer in the future:
```
CustomTokenizer.register_for_auto_class("AutoTokenizer")
``` |
transformers | 23,071 | closed | added type hints for blip_text pytorch model | # What does this PR do?
Added type hints for blip_text pytorch model as tasked in https://github.com/huggingface/transformers/issues/16059
@Rocketknight1 Could you review this? | 04-29-2023 18:44:59 | 04-29-2023 18:44:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,070 | closed | KeyError: 'eval_loss' (LLaMA finetuning) | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.13.3
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes RTX 3090
- Using distributed or parallel set-up in script?: No
I'm running into this issue whenever I use a DatasetDict as the evaluation dataset
```
traceback (most recent call last):
File "/mnt/e/alpaca-lora/finetune.py", line 304, in <module>
fire.Fire(train)
File "/home/coen/.local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/coen/.local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/coen/.local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/mnt/e/alpaca-lora/finetune.py", line 294, in train
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2006, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2291, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2394, in _save_checkpoint
metric_value = metrics[metric_to_check]
KeyError: 'eval_loss'
Traceback (most recent call last):
File "/mnt/e/alpaca-lora/finetune.py", line 304, in <module>
fire.Fire(train)
File "/home/coen/.local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/coen/.local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/coen/.local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/mnt/e/alpaca-lora/finetune.py", line 294, in train
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2006, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2291, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/coen/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2394, in _save_checkpoint
metric_value = metrics[metric_to_check]
KeyError: 'eval_loss'
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Download [Alpaca-Lora](https://github.com/tloen/alpaca-lora) from the repository
2. Modify the code
```
if val_data_path is not None:
train_data = (
# data.select(range(10)).shuffle().map(generate_and_tokenize_prompt)
data.shuffle().map(generate_and_tokenize_prompt)
)
val_data: DatasetDict = load_from_disk(val_data_path)
val_data = (
val_data.map(generate_and_tokenize_prompt)
)
elif val_set_size > 0:
train_val = data.train_test_split(
test_size=val_set_size, shuffle=True, seed=42
)
train_data = (
train_val["train"].shuffle().map(generate_and_tokenize_prompt)
)
val_data: Dataset = (
train_val["test"].shuffle().map(generate_and_tokenize_prompt)
)
else:
train_data = data["train"].shuffle().map(generate_and_tokenize_prompt)
val_data: None = None
if not ddp and torch.cuda.device_count() > 1:
# keeps Trainer from trying its own DataParallelism when more than 1 gpu is available
model.is_parallelizable = True
model.model_parallel = True
# def compute_metrics(eval_preds):
# metric = evaluate.load("glue", "mrpc")
# logits, labels = eval_preds
# predictions = np.argmax(logits, axis=-1)
# return metric.compute(predictions=predictions, references=labels)
trainer = transformers.Trainer(
model=model,
train_dataset=train_data,
eval_dataset=val_data,
args=transformers.TrainingArguments(
per_device_train_batch_size=micro_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
warmup_steps=100,
num_train_epochs=num_epochs,
learning_rate=learning_rate,
fp16=True,
logging_steps=10,
optim="adamw_torch",
evaluation_strategy="steps" if val_set_size > 0 else "no",
save_strategy="steps",
eval_steps=200 if val_set_size > 0 else None,
save_steps=200,
output_dir=output_dir,
save_total_limit=3,
load_best_model_at_end=True if val_set_size > 0 else False,
ddp_find_unused_parameters=False if ddp else None,
group_by_length=group_by_length,
report_to="wandb" if use_wandb else None,
run_name=wandb_run_name if use_wandb else None,
),
data_collator=transformers.DataCollatorForSeq2Seq(
tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True
),
# compute_metrics=compute_metrics
)
model.config.use_cache = False
```
### Expected behavior
Training proceeds as normal, with separate evaluation on each Dataset in the dict.
[EDIT] the error occurs right after having validated every set. I can see that it starts training again.
Am I doing something wrong?
I really don't see anything wrong with the evaluation datasets that I'm using.
They work when it's just one big evaluation Dataset object
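One guess on my part (not confirmed): when `eval_dataset` is a dict, the Trainer seems to prefix each split's metrics with its key (e.g. `eval_<key>_loss`), so the default `metric_for_best_model` of `"loss"` never matches anything and `metrics["eval_loss"]` raises the KeyError. If that is the cause, pointing it at one concrete split should avoid it, roughly like this:
```python
import transformers

args = transformers.TrainingArguments(
    output_dir="output",  # plus the same arguments as in the snippet above
    load_best_model_at_end=True,
    # "validation" stands for whichever key the DatasetDict actually uses
    metric_for_best_model="eval_validation_loss",
    greater_is_better=False,
)
```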
If you need more info, please let me know :) | 04-29-2023 17:34:34 | 04-29-2023 17:34:34 | Not certain, but this may be related to #22885.<|||||>> Not certain, but this may be related to #22885.
Thanks for the reference; however, the proposed workaround (`label_names=["labels"]`) did not work.
<|||||>Please post a reproducer we can execute.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,069 | closed | convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py works for data2vec 1.0 checkpoint but not data2vec 2.0 | ### System Info
- `transformers` version: 4.21.3
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.3 (gpu)
- Jax version: 0.4.1
- JaxLib version: 0.4.1
### Who can help?
@sanchit-gandhi @sgugger
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
- Download [data2vec 1.0 Large (No fine-tuning) .pt](https://dl.fbaipublicfiles.com/fairseq/data2vec/vox_pretrained.pt) from [fairseq/data2vec](https://github.com/facebookresearch/fairseq/tree/main/examples/data2vec)
- Download config.json from [facebook/data2vec-audio-large](https://huggingface.co/facebook/data2vec-audio-large/blob/main/config.json)
- Run [convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py)
- pytorch_model.bin output
```shell
python scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py \
--pytorch_dump_folder_path converted \
--checkpoint_path vox_pretrained.pt \
--config_path config.json \
--not_finetuned
```
<details><summary>Output</summary>
<p>
2023-04-29 15:56:57 | INFO | fairseq.tasks.text_to_speech | Please install tensorboardX: pip install tensorboardX
loading configuration file config.json
Model config Data2VecAudioConfig {
"_name": "data2vec_audio",
"activation_dropout": 0.1,
"adapter_kernel_size": 3,
"adapter_stride": 2,
"add_adapter": false,
"apply_spec_augment": true,
"architectures": [
"Data2VecAudioModel"
],
"attention_dropout": 0.1,
"bos_token_id": 1,
"classifier_proj_size": 256,
"codevector_dim": 768,
"contrastive_logits_temperature": 0.1,
"conv_bias": false,
"conv_dim": [
512,
512,
512,
512,
512,
512,
512
],
"conv_kernel": [
10,
3,
3,
3,
3,
2,
2
],
"conv_pos_kernel_size": 19,
"conv_stride": [
5,
2,
2,
2,
2,
2,
2
],
"ctc_loss_reduction": "sum",
"ctc_zero_infinity": false,
"diversity_loss_weight": 0.1,
"do_stable_layer_norm": true,
"eos_token_id": 2,
"feat_extract_activation": "gelu",
"feat_extract_dropout": 0.0,
"feat_extract_norm": "layer",
"feat_proj_dropout": 0.1,
"feat_quantizer_dropout": 0.0,
"final_dropout": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout": 0.1,
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"layer_norm_eps": 1e-05,
"layerdrop": 0.0,
"mask_feature_length": 10,
"mask_feature_min_masks": 0,
"mask_feature_prob": 0.0,
"mask_time_length": 10,
"mask_time_min_masks": 2,
"mask_time_prob": 0.05,
"model_type": "data2vec-audio",
"num_adapter_layers": 3,
"num_attention_heads": 16,
"num_codevector_groups": 2,
"num_codevectors_per_group": 320,
"num_conv_pos_embedding_groups": 16,
"num_conv_pos_embeddings": 5,
"num_feat_extract_layers": 7,
"num_hidden_layers": 24,
"num_negatives": 100,
"output_hidden_size": 1024,
"pad_token_id": 0,
"proj_codevector_dim": 768,
"tdnn_dilation": [
1,
2,
3,
1,
1
],
"tdnn_dim": [
512,
512,
512,
512,
1500
],
"tdnn_kernel": [
5,
3,
3,
1,
1
],
"torch_dtype": "float32",
"transformers_version": "4.21.3",
"use_weighted_layer_sum": false,
"vocab_size": 32,
"xvector_output_dim": 512
}
2023-04-29 15:58:39 | WARNING | datasets.builder | Reusing dataset librispeech_asr_dummy (/root/.cache/huggingface/datasets/patrickvonplaten___librispeech_asr_dummy/clean/2.1.0/f2c70a4d03ab4410954901bde48c54b85ca1b7f9bf7d616e7e2a72b5ee6ddbfc)
It is strongly recommended to pass the ``sampling_rate`` argument to this function. Failing to do so can result in silent errors that might be hard to debug.
torch.Size([4, 666, 1024]) torch.Size([4, 666, 1024])
max_absolute_diff = 8.707307279109955e-05
Do both models output the same tensors? 🔥
Configuration saved in converted/config.json
Model weights saved in converted/pytorch_model.bin
Feature extractor saved in converted/preprocessor_config.json
</p>
</details>
- Download [data2vec 2.0 Large (No fine-tuning) .pt](https://dl.fbaipublicfiles.com/fairseq/data2vec2/large_vox.pt) from [fairseq/data2vec](https://github.com/facebookresearch/fairseq/tree/main/examples/data2vec)
- Download config.json from [facebook/data2vec-audio-large](https://huggingface.co/facebook/data2vec-audio-large/blob/main/config.json)
- Run [convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py)
- KeyError: 'final_proj.0.weight'
```shell
python scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py \
--pytorch_dump_folder_path converted \
--checkpoint_path large_vox.pt \
--config_path config.json \
--not_finetuned
```
<details><summary>Output</summary>
<p>
2023-04-29 15:59:58 | INFO | fairseq.tasks.text_to_speech | Please install tensorboardX: pip install tensorboardX
loading configuration file config.json
Model config Data2VecAudioConfig {
"_name": "data2vec_audio",
"activation_dropout": 0.1,
"adapter_kernel_size": 3,
"adapter_stride": 2,
"add_adapter": false,
"apply_spec_augment": true,
"architectures": [
"Data2VecAudioModel"
],
"attention_dropout": 0.1,
"bos_token_id": 1,
"classifier_proj_size": 256,
"codevector_dim": 768,
"contrastive_logits_temperature": 0.1,
"conv_bias": false,
"conv_dim": [
512,
512,
512,
512,
512,
512,
512
],
"conv_kernel": [
10,
3,
3,
3,
3,
2,
2
],
"conv_pos_kernel_size": 19,
"conv_stride": [
5,
2,
2,
2,
2,
2,
2
],
"ctc_loss_reduction": "sum",
"ctc_zero_infinity": false,
"diversity_loss_weight": 0.1,
"do_stable_layer_norm": true,
"eos_token_id": 2,
"feat_extract_activation": "gelu",
"feat_extract_dropout": 0.0,
"feat_extract_norm": "layer",
"feat_proj_dropout": 0.1,
"feat_quantizer_dropout": 0.0,
"final_dropout": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout": 0.1,
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"layer_norm_eps": 1e-05,
"layerdrop": 0.0,
"mask_feature_length": 10,
"mask_feature_min_masks": 0,
"mask_feature_prob": 0.0,
"mask_time_length": 10,
"mask_time_min_masks": 2,
"mask_time_prob": 0.05,
"model_type": "data2vec-audio",
"num_adapter_layers": 3,
"num_attention_heads": 16,
"num_codevector_groups": 2,
"num_codevectors_per_group": 320,
"num_conv_pos_embedding_groups": 16,
"num_conv_pos_embeddings": 5,
"num_feat_extract_layers": 7,
"num_hidden_layers": 24,
"num_negatives": 100,
"output_hidden_size": 1024,
"pad_token_id": 0,
"proj_codevector_dim": 768,
"tdnn_dilation": [
1,
2,
3,
1,
1
],
"tdnn_dim": [
512,
512,
512,
512,
1500
],
"tdnn_kernel": [
5,
3,
3,
1,
1
],
"torch_dtype": "float32",
"transformers_version": "4.21.3",
"use_weighted_layer_sum": false,
"vocab_size": 32,
"xvector_output_dim": 512
}
Traceback (most recent call last):
File "/notebooks/scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py", line 287, in <module>
convert_wav2vec2_checkpoint(
File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/notebooks/scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py", line 213, in convert_wav2vec2_checkpoint
state_dict["model"]["final_proj.weight"] = state_dict["model"].pop("final_proj.0.weight")
KeyError: 'final_proj.0.weight'
</p>
</details>
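A quick way to see which keys the 2.0 checkpoint actually contains (run in the same environment as the conversion script; the exact fairseq checkpoint layout is my assumption):
```python
import torch

ckpt = torch.load("large_vox.pt", map_location="cpu")
# The conversion script expects 'final_proj.0.weight' (present in the 1.0 checkpoint);
# listing the projection-related keys shows how the 2.0 layout differs.
print([k for k in ckpt["model"].keys() if "proj" in k])
```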
### Expected behavior
Expected behavior is that model weights are saved in pytorch_model.bin for both data2vec 1.0 and 2.0 checkpoints. | 04-29-2023 16:11:06 | 04-29-2023 16:11:06 | Getting beyond the key error by either commenting [lines 212-213](https://github.com/huggingface/transformers/blob/849367ccf741d8c58aa88ccfe1d52d8636eaf2b7/src/transformers/models/data2vec/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py#L212C50-L213 ) or changing the keys to `modality_encoders.AUDIO.decoder.proj.weight` and `modality_encoders.AUDIO.decoder.proj.bias`, results in a `Could not infer model type from {cfg}` error.
```
Traceback (most recent call last):
File "/notebooks/scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py", line 287, in <module>
convert_wav2vec2_checkpoint(
File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/notebooks/scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py", line 227, in convert_wav2vec2_checkpoint
model = load_data2vec(converted_ckpt)
File "/notebooks/scripts/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py", line 224, in load_data2vec
model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task([path])
File "/notebooks/fairseq/fairseq/checkpoint_utils.py", line 484, in load_model_ensemble_and_task
model = task.build_model(cfg.model, from_checkpoint=True)
File "/notebooks/fairseq/fairseq/tasks/audio_pretraining.py", line 178, in build_model
model = super().build_model(model_cfg, from_checkpoint)
File "/notebooks/fairseq/fairseq/tasks/fairseq_task.py", line 355, in build_model
model = models.build_model(cfg, self, from_checkpoint)
File "/notebooks/fairseq/fairseq/models/__init__.py", line 101, in build_model
f"Could not infer model type from {cfg}. "
KeyError: "'_name'"
```<|||||>It looks like the architecture of data2vec 2.0 is different from 1.0, so supporting this would require changing the modeling code for Data2Vec in Transformers or adding a new Data2Vec2 model. Patching the existing conversion script likely won't be sufficient.<|||||>Please note that the conversion scripts are provided as an indication from the model contributor on how they converted the original checkpoint to the Hugging Face format. They are not maintained and not expected to work on other checkpoints.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Leaving this as closed since the issue requires a new conversion script and new modelling code for data2vec2 (rather than being an issue with the existing data2vec code). Feel free to open a feature request if this is something you'd like to see @alanrice! |
transformers | 23,068 | closed | 🌐 [i18n-KO] Translated `tasks/zero_shot_object_detection.mdx` to Korean | <!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `tasks/zero_shot_object_detection.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 04-29-2023 15:57:31 | 04-29-2023 15:57:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can you solve the conflicts so we can merge this PR?<|||||>> Can you solve the conflicts so we can merge this PR?
toctree file of this branch causes a conflict because it's different from the new version.
As shown in [[docs] Doc TOC updates](https://github.com/huggingface/transformers/pull/23049)
Let me fix this after I update korean toctree first!
<|||||>Closed in favor of #23430 |
transformers | 23,067 | closed | added type hints in graphormer | @Rocketknight1 I added type hints for `Graphormers` for pytorch as described in [issue #16059 ](https://github.com/huggingface/transformers/issues/16059)
| 04-29-2023 15:52:35 | 04-29-2023 15:52:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23067). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,066 | closed | Update setup.py | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-29-2023 14:39:21 | 04-29-2023 14:39:21 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23066). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,065 | closed | 🌐 [i18n-KO] Translated `tasks/zero_shot_image_classification.mdx` to Korean | <!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `tasks/zero_shot_image_classification.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 04-29-2023 14:10:47 | 04-29-2023 14:10:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM! :-) |
transformers | 23,064 | closed | 🌐 [i18n-KO] docs: ko: Translate `multiple_choice.mdx` | <!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `multiple_choice.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
<!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
<!-- Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
<!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> | 04-29-2023 13:46:19 | 04-29-2023 13:46:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd<|||||>@sgugger, @ArthurZucker, @eunseojo
May you please review this PR? |
transformers | 23,063 | closed | Flamingo Implementation | # What does this PR do?
Implementation of Flamingo models (https://arxiv.org/abs/2204.14198). Model weights trained by Open Flamingo team can be downloaded [here](https://huggingface.co/openflamingo/OpenFlamingo-9B). Weight conversion script is included.
Weights conversion can be run via:
```bash
python src/transformers/models/flamingo/converting_flamingo_to_hf.py \
--old_ckpt_path /path/to/open/flamingo/weights \
--new_hf_path /output/path
```
Models can then be loaded via:
``` python
model = transformers.FlamingoForConditionalGeneration.from_pretrained("/output/path")
```
Example:
``` python
import requests
import torch
import transformers
from PIL import Image
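# `model` here is the FlamingoForConditionalGeneration instance loaded via from_pretrained in the previous snippet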
tokenizer = model.text_tokenizer
image_processor = transformers.CLIPImageProcessor()
demo_image_one = Image.open(
requests.get(
"http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
).raw
)
demo_image_two = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028137.jpg", stream=True
).raw
)
query_image = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028352.jpg", stream=True
).raw
)
vision_x = (
image_processor.preprocess(
[demo_image_one, demo_image_two, query_image], return_tensors="pt"
)["pixel_values"]
.unsqueeze(1)
.unsqueeze(0)
)
model.text_tokenizer.padding_side = "left"
lang_x = tokenizer(
["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
return_tensors="pt",
)
generated_text = model.generate(
vision_x=vision_x,
lang_x=lang_x["input_ids"],
attention_mask=lang_x["attention_mask"],
max_new_tokens=20,
num_beams=3,
)
print("Generated text: ", model.text_tokenizer.decode(generated_text[0]))
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 04-29-2023 12:44:09 | 04-29-2023 12:44:09 | Respect! OpenFlamingo needs to be built with Hugging Face Transformers for more efficient training and inference.
We have already adapted it in our [Otter model](https://github.com/Luodian/Otter) (an instruction-tuned model based on Flamingo). We uploaded converted OpenFlamingo-9B weights at [luodian/openflamingo-9b-hf](https://huggingface.co/luodian/openflamingo-9b-hf).
The model could be loaded via
```python
model = transformers.FlamingoForConditionalGeneration.from_pretrained("luodian/openflamingo-9b-hf")
```<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23063). All of your documentation changes will be reflected on that endpoint.<|||||>cc @amyeroberts and @younesbelkada <|||||>Awesome work! Let us know when the PR is ready for review! |
transformers | 23,062 | closed | [docs] broken link in `torchscript.mdx` | ### Description
The broken link `serialization#using-torchscript-in-python` should be `torchscript#using-torchscript-in-python`, at line 201 of `torchscript.mdx`.
### Document / language
`torchscript.mdx` / en, kr
### Suggestion
As is:
```
### Converting a model for AWS Neuron
Convert a model for AWS NEURON using the same code from [Using TorchScript in
Python](serialization#using-torchscript-in-python) to trace a `BertModel`. Import the
`torch.neuron` framework extension to access the components of the Neuron SDK through a
Python API:
```
To be:
```
### Converting a model for AWS Neuron
Convert a model for AWS NEURON using the same code from [Using TorchScript in
Python](torchscript#using-torchscript-in-python) to trace a `BertModel`. Import the
`torch.neuron` framework extension to access the components of the Neuron SDK through a
Python API:
```
Please let me know if I missed something in the guidelines.
Thank you in advance for your attention to it! | 04-29-2023 09:25:23 | 04-29-2023 09:25:23 | Thanks for catching this! Would you like to open a PR with your fix? 🤗<|||||>Hello, @stevhliu !
I opened PR #23060 for translating the document to Korean as well as fixing the issue.
Please let me know if it would be better to open another PR for the fix separately. |
transformers | 23,061 | closed | num_noise_spans should be <= num_items #22246 | Clone of https://github.com/huggingface/transformers/pull/22938 | 04-29-2023 07:44:38 | 04-29-2023 07:44:38 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23061). All of your documentation changes will be reflected on that endpoint. |
transformers | 23,060 | closed | 🌐 [i18n-KO] Translated `torchscript.mdx` to Korean | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Translated the `torchscript.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
Fixes https://github.com/huggingface/transformers/issues/23062
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
@sgugger, @ArthurZucker, @eunseojo
May you please review this PR?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-29-2023 00:51:48 | 04-29-2023 00:51:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Could you review this PR? 😃
@sgugger, @ArthurZucker, @eunseojo |
transformers | 23,059 | closed | GPTNeoXForQuestionAnswering | # What does this PR do?
Adds GPTNeoXForQuestionAnswering.
Includes #23030 and #23057.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
@younesbelkada
| 04-29-2023 00:51:37 | 04-29-2023 00:51:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @younesbelkada as Arthur is on holdiays 👍 <|||||>@younesbelkada @amyeroberts this one is ready for review :-) |
transformers | 23,058 | open | OneFormer processor does not return correctly formatted class_labels tensors | ### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): 2.11.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@amyeroberts
I'm trying to finetune `shi-labs/oneformer_ade20k_swin_tiny` on my own dataset. I've hit two problems, one with the docs and one with the actual library code for image processing.
1. There are no docs on fine-tuning or training the OneFormer model on this page: https://huggingface.co/docs/transformers/model_doc/oneformer . So I relied on investigating this test
https://github.com/huggingface/transformers/blob/main/tests/models/oneformer/test_modeling_oneformer.py#L364
2. The train test doesn't actually use the OneFormer Processor that is used in all of the inference examples in https://huggingface.co/docs/transformers/model_doc/oneformer
I think this is because the OneFormer Processor and the train test produce differently formatted class labels. In the train test, the class_labels are created from scratch here: https://github.com/huggingface/transformers/blob/main/tests/models/oneformer/test_modeling_oneformer.py#L106
When training, it's expected that class_labels is a Tensor shaped like [batch_size, num_classes], where a particular element in a batch would have [1,0,0,0] to represent the 0th class.
But the OneFormer processor returns a list of tensors with values greater than 1: [tensor([0, 3])]
This eventually leads to an error here https://github.com/huggingface/transformers/blob/v4.28.1/src/transformers/models/oneformer/modeling_oneformer.py#L306 where we get index out of bounds but it's a CUDA assert error.
I think this should be rectified by including a training example in the docs and changing the test and OneFormer Processor so that they work when training the model.
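For concreteness, here is a small sketch of the mismatch I mean (`num_classes=4` is just an illustrative value, not taken from my dataset):
```
import torch

# what the processor gives back: one tensor of class indices per image
processor_style = [torch.tensor([0, 3])]

# what test_training builds from scratch: one-hot rows
num_classes = 4
one_hot_style = [
    torch.nn.functional.one_hot(labels, num_classes) for labels in processor_style
]
print(one_hot_style[0])  # tensor([[1, 0, 0, 0], [0, 0, 0, 1]])
```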
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. adapt test_training to create class_labels with OneFormerProcessor instead of from scratch
2. run the adapted test_training
I have a failing example at https://github.com/developmentseed/slickformer, but it takes quite a bit of setup and the dataset isn't public yet.
### Expected behavior
I'd expect OneFormer Processor to return a class_labels entry in the dict that has the same expected output as `class_labels` in `test_training`. To summarize, the class_labels need to be one hot encoded for training, but OneFormerProcessor isn't doing this. | 04-29-2023 00:41:13 | 04-29-2023 00:41:13 | Hi,
I'd recommend taking a look at the MaskFormer/Mask2Former notebooks regarding fine-tuning on custom data: https://github.com/NielsRogge/Transformers-Tutorials/tree/master/MaskFormer. As the API of OneFormer is identical, except that it has one additional `task_inputs` input which you need to prepare as well.<|||||>Hi @NielsRogge @amyeroberts
Actually I was following your MaskFormer and Mask2Former tutorials
and my task was to finetune on Semantic information and have an instance level prediction. Where Mask2Former was performing well.
Still Problems I have mentioned on Closed Issue : https://github.com/huggingface/transformers/issues/21644
My main request was Can you also Write Training Tutorials for OneFormer in MaskFormer, Text input part is creating the problem and the segmentation_maps parameter.
```
779 annotation_classes = label["classes"]
780 annotation_masks = label["masks"]
--> 782 texts = ["a semantic photo"] * self.num_text
783 classes = []
784 masks = []
TypeError: can't multiply sequence by non-int of type 'NoneType'
```
```
preprocessor = OneFormerImageProcessor.from_pretrained(config.MODEL_PATH)
preprocessor.num_text = 2
preprocessor.num_classes = 2
preprocessor.ignore_index=3
preprocessor.do_reduce_labels=False
preprocessor.do_resize=False
preprocessor.do_rescale=False
preprocessor.do_normalize=True
preprocessor.image_mean = config.MEAN
preprocessor.image_std = config.STD
```
after introducing num_text:
```
972 num_class_obj[cls_name] = 0
974 for i, label in enumerate(annotations):
--> 975 task = task_inputs[i]
976 if task == "semantic":
977 classes, masks, texts = self.get_semantic_annotations(label, num_class_obj)
IndexError: list index out of range
```
```
def collate_fn(batch):
inputs = list(zip(*batch))
images = inputs[0]
segmentation_maps = inputs[1]
# this function pads the inputs to the same size,
# and creates a pixel mask
# actually padding isn't required here since we are cropping
batch = preprocessor(
images,
task_inputs=["semantic"],
segmentation_maps=segmentation_maps,
return_tensors="pt",
)
return batch
```<|||||>Any solution? I'm having the same or similar issues when changing the MaskFormerImageProcessor to OneFormerProcessor in the tutorial https://github.com/NielsRogge/Transformers-Tutorials/blob/master/MaskFormer/Fine-tuning/Fine_tuning_MaskFormer_on_a_panoptic_dataset.ipynb |
transformers | 23,057 | closed | GPTNeoForQuestionAnswering | # What does this PR do?
Adds QA support for GPT Neo.
Includes PR #23030.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
@younesbelkada
| 04-29-2023 00:22:27 | 04-29-2023 00:22:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>One more to go (for now).<|||||>@sgugger @younesbelkada - as Arthur is on holdiays 👍 <|||||>@younesbelkada
I merged with main to isolate the GPT Neo specific parts. Now everything seems to work fine.<|||||>rebased - let's see whether it helps<|||||>@younesbelkada merging helped - ready to merge 👍 |
transformers | 23,056 | closed | fix random attention for pytorch's bigbird/pegasus_bigbird | Fixes # (issue)
https://github.com/huggingface/transformers/issues/23055
# What does this PR do?
Add control over usage of random attention of `BigBird` based on current mode (training/eval)
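A minimal sketch of the idea (a toy module, not the actual BigBird code): the random block selection only happens while `self.training` is set, so eval/inference stays deterministic. The deterministic fallback below is purely illustrative.
```python
import torch


class ToyRandomBlockSelector(torch.nn.Module):
    def forward(self, num_blocks: int, num_rand_blocks: int) -> torch.Tensor:
        if self.training:
            # random attention blocks during training
            return torch.randperm(num_blocks)[:num_rand_blocks]
        # deterministic choice during eval (illustrative fallback only)
        return torch.arange(num_rand_blocks)


selector = ToyRandomBlockSelector()
print(selector(10, 3))  # random while in training mode (the default)
selector.eval()
print(selector(10, 3))  # tensor([0, 1, 2]) every time
```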
## Who can review?
@sanchit-gandhi @ydshieh
| 04-28-2023 23:43:58 | 04-28-2023 23:43:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @Bearnardd Thank you for the PR.
I have one question: why `def _bigbird_block_rand_mask_with_head` is not modified for this pytorch BigBird file ..?<|||||>Hi @sanchit-gandhi! I have removed the static method as I think it is the best approach. <|||||>> Hi @Bearnardd Thank you for the PR.
>
> I have one question: why `def _bigbird_block_rand_mask_with_head` is not modified for this pytorch BigBird file ..?
Thanks for the comment! To be honest I am not sure If I understand you correctly, since from what I can see this function is updated. Could you elaborate what exactly is missing?<|||||>> > Hi @Bearnardd Thank you for the PR.
> > I have one question: why `def _bigbird_block_rand_mask_with_head` is not modified for this pytorch BigBird file ..?
>
> Thanks for the comment! To be honest I am not sure If I understand you correctly, since from what I can see this function is updated. Could you elaborate what exactly is missing?
Sorry, my bad. You are right :-)<|||||>cc @sgugger <|||||>I have pushed the changes @sgugger :) |
transformers | 23,055 | closed | Pytorch BigBird random attention | ### Reproduction
`Pytorch->Flax` and `Flax->Pytorch` equivalence tests were failing. At the moment they are skipped by https://github.com/huggingface/transformers/pull/23040
### Expected behavior
While working on https://github.com/huggingface/transformers/pull/21023 I found out that there is a bug in PyTorch's implementation of `BigBird`: random attention is used no matter whether we are in training or eval mode. Correct behaviour is that during inference (eval) we should not introduce any randomness, hence random attention should not be used. | 04-28-2023 23:40:05 | 04-28-2023 23:40:05 | Hi @sanchit-gandhi @ydshieh! I have opened [PR](https://github.com/huggingface/transformers/pull/23056) that fixes failing tests. I am wondering if the changes in the PR are okay (usage of random attention based on current mode) or do we want to have some more control over usage of random attention e.g. add `deterministic` argument for `__call__` of `BigBirdPreTrainedModel`. Secondly I was wondering what is the advantage of marking `_bigbird_block_rand_mask` as a `staticmethod` and then calling it with `self._bigbird_block_rand_mask` and passing it arguments from `self` like `self.max_seqlen` instead of treating it as a regular method. It looks kinda weird to me. Am I missing something?<|||||>Closed via https://github.com/huggingface/transformers/pull/23056. |
transformers | 23,054 | closed | Pipeline(summarization) code example and documentation needs updating | ### System Info
Using Google Colab on Mac OS Ventura 13.2.1
Chrome Version 112.0.5615.137 (Official Build) (x86_64)
Using the install command.
`!pip install transformers`
Which downloads the following:
<img width="1264" alt="Screenshot 2023-04-28 at 5 53 25 PM" src="https://user-images.githubusercontent.com/9907572/235266551-f9c627f9-22db-41c0-89ba-1f9814d72fd5.png">
### Who can help?
@Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
In the documentation for the pipeline summarization [here](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.SummarizationPipeline) the example needs updating. Use the current example below:
```python
# use bart in pytorch
from transformers import pipeline

summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
```
Produces the following output in Google Colab.
```
Using a pipeline without specifying a model name and revision in production is not recommended.
Your max_length is set to 20, but you input_length is only 11. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=5)
[{'summary_text': ' An apple a day, keeps the doctor away from your doctor away, says Dr.'}]
```
The documentation doesn't state what `min_length=` and `max_length=` actually do and the output doesn't tell you either.
1. Is the `max_length` the maximum token length of the output or input?
2. Based on the output from running the code, does the input length affect the output?
Running this code:
```python
# use t5 in tf
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
```
Produces the following output in Google Colab.
```
Your max_length is set to 20, but you input_length is only 13. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=6)
/usr/local/lib/python3.10/dist-packages/transformers/generation/tf_utils.py:745: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
  warnings.warn(
[{'summary_text': 'an apple a day, keeps the doctor away from the doctor .'}]
```
### Expected behavior
1. Show the expected output by using longer text as the input.
2. Provide a clear explanation of what `min_length=` and `max_length=` actually do.
3. Avoid warnings when running example code from documentation or specifying a stable version to use. | 04-28-2023 22:58:13 | 04-28-2023 22:58:13 | 1. I beg to differ. Examples are meant to be simple to read, Having a real long form text just hinders readability imo.
2.
`min_length` and `max_length` are specified here: https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/text_generation#transformers.GenerationMixin.greedy_search.max_length
3. @sgugger What do you think here ? I agree examples shouldn't raise warnings, however I feel odd burning the name of a specific model into this example, since users are likely to not understand where to get that model id from.
```
# Fetch summarization models at https://huggingface.co/models?pipeline_tag=summarization&sort=downloads
summarizer = pipeline(model="philschmid/bart-large-cnn-samsum")
```
Something like that. That probably affects ALL examples within pipelines.<|||||>cc @gante The warning somehow needs to be addressed so that users of the `pipeline` function do not see it.<|||||>Hi @TomBerton 👋
The warnings you described were updated in #23128, which should make the pipeline experience more pleasant and self-documenting 🤗 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,053 | closed | Passing a str Enum to `from_pretrained` gives OSError | ### System Info
Python version 3.8
`transformers==4.28.1`
Ubuntu
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When using a str Enum (as specified [here](https://docs.python.org/3.10/library/enum.html#others) in the python docs) as input to `AutoTokenizer.from_pretrained`, the model name that gets searched is different from the member value of the Enum. Example to repro:
```
from enum import Enum
from transformers import AutoTokenizer
class Tmp(str, Enum):
BERT = 'bert-base-uncased'
t = AutoTokenizer.from_pretrained(Tmp.BERT)
```
Error:
```
Traceback (most recent call last):
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 259, in hf_raise_for_status
response.raise_for_status()
File "/home/ubuntu/test_env/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/Tmp.BERT/resolve/main/tokenizer_config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/test_env/lib/python3.8/site-packages/transformers/utils/hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1195, in hf_hub_download
metadata = get_hf_file_metadata(
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1541, in get_hf_file_metadata
hf_raise_for_status(r)
File "/home/ubuntu/test_env/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 291, in hf_raise_for_status
raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-644c4a27-5bd929b32085d52d1a1b4b30)
Repository Not Found for url: https://huggingface.co/Tmp.BERT/resolve/main/tokenizer_config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/test_env/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 642, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/home/ubuntu/test_env/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 486, in get_tokenizer_config
resolved_config_file = cached_file(
File "/home/ubuntu/test_env/lib/python3.8/site-packages/transformers/utils/hub.py", line 424, in cached_file
raise EnvironmentError(
OSError: Tmp.BERT is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
### Expected behavior
We should see the model being searched for use the string value of the Enum member, instead of a different value (I haven't dug in to see what is being used instead). | 04-28-2023 22:44:47 | 04-28-2023 22:44:47 | I'm not sure why you think this should be supported. `str(Tmp.BERT)` is `'Tmp.BERT'`, which is not a valid identifier to pass to `from_pretrained`. You need to pass `Tmp.BERT.value`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,052 | closed | Generate: prepare assisted generation for release | # What does this PR do?
This PR makes a few final adjustments to assisted generation before its release, including:
1. Merge previously named step 7 [the forward pass after matching with assistant tokens] into step 6 [slicing variables based on the number of matches] -- the variables are already computed in step 3 [selecting the model's next tokens based on the logits], so the code becomes more concise and it helps me explain what's going on more easily. See the (partially bugged) gif below, which is WIP for the blog post.
2. Swaps the order of step 6 [slicing variables] with step 5 [updating the number of candidates for the next iteration] -- makes more sense that the update step for the next iteration is the last one :)
3. Better variable names and improved comments (so the implementation becomes self-documenting)

| 04-28-2023 19:45:53 | 04-28-2023 19:45:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,051 | closed | Fixed default config for `Pix2Struct` model to set `Pix2StructTextModel` to `is_decoder=True` | Previously, the `Pix2StructTextModel` was configured with `is_decoder=False` by default causing the attention mask used for self-attention to be non-causal and causing fine-tuning to fail.
As a fix, this PR adds an `is_decoder=True` default kwarg to the `Pix2StructTextConfig` class in order to correctly configure the text model as a decoder.
Fixes #22903
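A quick way to sanity-check the new default (minimal sketch):
```python
from transformers import Pix2StructTextConfig

config = Pix2StructTextConfig()
print(config.is_decoder)  # True with this change; previously the default was False
```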
@younesbelkada | 04-28-2023 16:59:33 | 04-28-2023 16:59:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Just updated all pix2struct checkpoints that are under `google` org! Thanks again @gbarello-uipath for flagging this<|||||>Shouldn't the matcha and deplot checkpoints be updated as well?<|||||>Good point, will update them now<|||||>Just updated them! Thanks for flagging @RainbowMan1 |
transformers | 23,050 | open | [New model] 🐸TTS advanced Text-to-Speech | ### Model description
🐸TTS is a library for advanced Text-to-Speech generation. It's built on the latest research, was designed to achieve the best trade-off among ease-of-training, speed and quality. 🐸TTS comes with pretrained models, tools for measuring dataset quality and already used in 20+ languages for products and research projects.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
GitHub repo: https://github.com/coqui-ai/TTS
Samples: http://erogol.com/ddc-samples/ | 04-28-2023 13:37:38 | 04-28-2023 13:37:38 | Hi @jozefchutka I would like to work on this issue, I see multiple models under [Implemented Models](https://github.com/coqui-ai/TTS/tree/dev#implemented-models) on your link, do you have any recommendation about which one to start first?<|||||>Hi @susnato , thanks for looking into this. I hope to eventually run TTS in browser via (transformers.js), based on which my recommendation would be to pick a model that would be suitable in terms of performance / size<|||||>Hi @jozefchutka thanks for replying, I was thinking about Speedy-Speech but I didn't see that model inside of `TTS/tts/models` in dev branch, am I looking in wrong branch?<|||||>I have no idea honestly. But I have just discovered github provides very nice code browsing view, including search.

If its nowhere to find, it would be worth to reach out to 🐸TTS team
<|||||>cc @sanchit-gandhi <|||||>Hey @jozefchutka and @susnato - Coqui were previously focused on providing strong open-source TTS checkpoints, however in the last year they pivoted to more end-user services (see https://twitter.com/coqui_ai/status/1638573847296499712). They haven't been open-sourcing these latest models, and as a result their open-source checkpoints have fallen by the wayside a bit compared to the latest TTS research (e.g. VALL-E, Bark, MQTTS). I would say that a year ago it would have been a very exciting addition, but now there are more performant checkpoints that are growing in popularity amongst the open-source community. I would recommend checking out the aforementioned models if you're interested in a TTS model integration! Also see related https://github.com/huggingface/transformers/issues/22487#issuecomment-1496340245<|||||>Hi @sanchit-gandhi thanks for replying! Actually I was going through the same issue and saw your [comment](https://github.com/huggingface/transformers/issues/22487#issuecomment-1496312713) -
>Indeed, a TTS pipeline would be super helpful to run SpeechT5. We're currently planning on waiting till we have 1-2 more TTS models in the library before pushing ahead with a TTS pipeline, in order to verify that the pipeline is generalisable and gives a benefit over loading a single model + processor.
I was hoping to somehow contribute to the TTS pipeline, but now that you said
>They haven't been open-sourcing these latest models, and as a result their open-source checkpoints have fallen by the wayside a bit compared to the latest TTS research (e.g. VALL-E, Bark, MQTTS)
is a TTS pipeline still in the queue or should I focus on others like https://paperswithcode.com/task/text-to-speech-synthesis ?
<|||||>Hi @sanchit-gandhi @susnato thanks for the insights. If there are better alternatives please go for it. <|||||>IMO the TTS pipeline will be worth pursuing once the two ongoing TTS PRs are complete:
* Bark #23375
* FastSpeech2 #23439
=> we'd then have three models on which to base the TTS pipeline!
Right now I think these are probably the most worthwhile TTS models to work on in transformers? There's also MQTTS: https://github.com/b04901014/MQTTS But that hasn't gained much traction. Do you know of any other recent TTS models that are gaining popularity amongst the community that we might have missed?<|||||>The only other bookmark I have is https://github.com/elevenlabs/elevenlabs-python , but that doesnt seem open model, just API? Worth for someone with better understanding in field to research.<|||||>As far as I understand, ElevenLabs is only a paid API @jozefchutka, but definitely a performant low-latency model. Interestingly a new ElevenLabs demo popped-up on the HF Hub: https://huggingface.co/spaces/elevenlabs/tts So potentially they're trying to increase their OS presence?<|||||>My understanding is the same<|||||>Hi @sanchit-gandhi my knowledge about recent TTS models is very limited, but I read about some of them maybe they are worth adding - how about [Tacotron 2](https://arxiv.org/pdf/1712.05884.pdf)(an implementation by NVIDIA [here](https://github.com/NVIDIA/tacotron2)) or [Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling](https://arxiv.org/pdf/2103.14574.pdf) . Also I found some unofficial implementations for [VALL-E](https://arxiv.org/pdf/2301.02111.pdf) - [lifeiteng/vall-e](https://github.com/lifeiteng/vall-e/tree/main) and [enhuiz/vall-e](https://github.com/enhuiz/vall-e) but both are without pretrained weights or [TransformerTTS](https://arxiv.org/pdf/1809.08895.pdf) (an PaddlePaddle implementation [here](https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/paddlespeech/t2s/models/transformer_tts/transformer_tts.py) and weights [here](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/ljspeech/tts1)).
If they are not as interesting then I would like to implement MQTTS. What do you think? |
transformers | 23,049 | closed | [docs] Doc TOC updates | This PR restructures TOC for the documentation. All the previous links remain working (except the two pages that have been removed: migration guide and converting from TF).
Here's the scope of the restructure:
a) TOC is sorted from “beginner” topics to more advanced, making it easier to know where an answer to a question might be
b) Some topics have been renamed to be consistent with the rest of the same section and (in some cases) more descriptive
c) Task Guides are collapsed by default and are now on the same level (currently NLP task guides are hidden, and not aligned with other modalities)
d) “General usage” has been renamed to “Developer Guides”
e) Benchmarks, notebooks, and community resources have been moved under Developer Guides
f) “Converting from TensorFlow checkpoints” and "Migrating from previous packages" pages removed | 04-28-2023 12:56:59 | 04-28-2023 12:56:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,048 | closed | 🌐 [i18n-KO] Translated `tasks/image_classification.mdx` to Korean | <!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `tasks/image_classification.mdx` file of the documentation to Korean.
Thank you in advance for your review 😄
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo | 04-28-2023 12:42:06 | 04-28-2023 12:42:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>May you please review this PR? 😄
@sgugger, @ArthurZucker, @eunseojo |
transformers | 23,047 | closed | FLAVA: module 'torch.distributed.nn.functional' has no attribute 'all_gather_with_backprop' | https://github.com/huggingface/transformers/blob/a0e733283930bdb9ae2b1afdc53ec5f2daefb033/src/transformers/models/flava/modeling_flava.py#L1696
The following error is thrown when running FLAVA with PyTorch 2.0 and `global_backprop_contrastive=True`: `AttributeError: module 'torch.distributed.nn.functional' has no attribute 'all_gather_with_backprop'`. As far as I know, this attribute never existed in PyTorch.
The bug might have to do with the fact that `all_gather` is renamed to `all_gather_with_backprop` in [facebookresearch/multimodal](https://github.com/facebookresearch/multimodal) and this could have been copied over: https://github.com/facebookresearch/multimodal/blob/c6f6e44ec6e0addfdf01695db860a6febeb2d88b/torchmultimodal/utils/distributed.py#L12
A rename to `all_gather` should fix this I think. | 04-28-2023 09:44:54 | 04-28-2023 09:44:54 | cc @younesbelkada and @amyeroberts <|||||>Hi @amariucaitheodor
This sounds like the correct fix in my opinion. I can't find that method in the PT documentation either, and I guess we never flagged this issue because not many users have run the model in distributed mode.
Would you mind opening a PR for that? If you can't, happy to do it! |
transformers | 23,046 | closed | [Doctest] Add new checks | # What does this PR do?
Checks to make sure that the examples in any `doc/en/...` are tested
Checks to make sure that any model or config examples are also nightly tested | 04-28-2023 09:29:30 | 04-28-2023 09:29:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Will add all the files to the IGNORE_DOC_NON_TESTED<|||||>Mostly the intent is to make sure new models that are added to the library are doctested (while currently the reviewer has to make sure it is added, but we usually forget)<|||||>Looking at the new `check_pr_documentation_tests` that only checks files that are modified + in the `documentation_test.txt` , my initial goal of making sure that new models are tested is not attained. This PR will adress that, by adding a check to make sure that new model addition are adding the files to test them. cc @ydshieh <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sure! Closing this. |
transformers | 23,045 | closed | Cuda rng_state_all is used when saving in distributed mode so same should also be used when loading | When saving in distributed mode this snippet uses `torch.cuda.random.get_rng_state_all()`.
https://github.com/huggingface/transformers/blob/a0e733283930bdb9ae2b1afdc53ec5f2daefb033/src/transformers/trainer.py#L2417-L2421
But while loading, `torch.cuda.random.set_rng_state_all()` is not being used for the distributed case, causing issues when resuming training.
https://github.com/huggingface/transformers/blob/a0e733283930bdb9ae2b1afdc53ec5f2daefb033/src/transformers/trainer.py#L2323-L2328
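A sketch of the symmetry being requested (not the actual Trainer code):
```python
import torch

# on save: capture the RNG state of every visible GPU
cuda_rng_states = torch.cuda.random.get_rng_state_all()

# on resume: restore them the same way so CUDA-side randomness (e.g. dropout)
# continues exactly where it left off
torch.cuda.random.set_rng_state_all(cuda_rng_states)
```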
| 04-28-2023 08:22:23 | 04-28-2023 08:22:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,044 | closed | The 3D attention mask in the LongFormer is wrong | ### Feature request
I used a 3D attention mask in the Longformer, but it failed. I find that the following code (line 1626 in `modeling_longformer`) may not support a 3D attention mask:
```python
attention_mask = nn.functional.pad(
    attention_mask, (0, padding_len), value=0
)  # no attention on the padding tokens
```
Please correct me if I am wrong.
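For illustration, a minimal example of the behaviour I mean (toy shapes, not the real model):
```python
import torch
import torch.nn as nn

padding_len = 4
mask_2d = torch.ones(2, 8)     # (batch, seq)
mask_3d = torch.ones(2, 8, 8)  # (batch, seq, seq)

# F.pad with a 2-tuple only pads the last dimension, so the 3D mask is padded
# on the key axis but not on the query axis
print(nn.functional.pad(mask_2d, (0, padding_len), value=0).shape)  # torch.Size([2, 12])
print(nn.functional.pad(mask_3d, (0, padding_len), value=0).shape)  # torch.Size([2, 8, 12])
```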
### Motivation
I want to input a 3D attention mask in the Longformer to control the visible field (i.e. which tokens each token can attend to).
### Your contribution
posting a code snippet example | 04-28-2023 08:20:22 | 04-28-2023 08:20:22 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,043 | closed | extend the test files | # What does this PR do?
Extend the test files in cross test CI job.
In #21023, flax bigbird modeling file is changed and the flax test file skips the pt/flax tests. However, the test fetcher is not designed to take into account the corresponding pytorch test file, and we have test failure in `main` on `nightly` run.
This PR extends the test files for `torch_and_tf` and `torch_and_flax` jobs to avoid such situation.
The effect could be seen in this run
https://app.circleci.com/pipelines/github/huggingface/transformers/63246/workflows/84722f3a-1259-4226-973d-267c74ca9aee/jobs/780372 | 04-28-2023 08:07:31 | 04-28-2023 08:07:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I need to check if the new test files exist before adding them to the list. Will update later <|||||>> I have already given my thoughts a thousand times on how such failures should be fixed, but since I'm being ignored again...
@sgugger I don't mean to ignore your suggestion, but the only discussion I remembered is [this slack discussion](https://huggingface.slack.com/archives/C01NE71C4F7/p1678714056980159?thread_ts=1678480555.678359&cid=C01NE71C4F7), where you mentioned (to my previous messages) with
> Mmmm, maybe there is a post-processing function that could do that yes. (Note that the whole thing for the pipeline will disappear with the new test fecther and the new way pipeline tests are designed).
Therefore I assume the approach in this PR (current version) is fine.
I might forget anything you mentioned earlier somewhere else. In this case, please share the link, and I am happy to take a look your suggestion. Otherwise, feel free to drop what you think the best. Thank you.
> This is not my preferred solution, but if we go for this, I'd like the added tests to be logged somewhere, so we can inspect the results. Otherwise we can't debug if there is a failure of the test fetcher, as the generated config is not exactly readable.
If we keep this approach, I am happy to save files of these modified version.
<|||||>Note that I am not really in favor of duplicating the tests. We have these tests both in PT/TF or PT/Flax test files. It's already kind of duplication. And if we copy each version to another framework again, that is duplication of duplication.<|||||>Let's go with logging the modified test files somehow, so we can inspect the result of the code you add then.<|||||>Looks like it didn't touch any new file though.<|||||>> Looks like it didn't touch any new file though.
Yeah, I am bad at getting the correct path. Now it works (if I triggered the test via a change in a file), see
https://app.circleci.com/pipelines/github/huggingface/transformers/63288/workflows/721ab3d6-b1af-4f2d-b036-9bebbdaef2cc/jobs/780955<|||||>@sgugger Could you check the last commit and see if it is OK? ~~(I will run a test to make sure everything works as expected tomorrow before merge, already tired today with some personal meetings)~~.
See [the new run page](https://app.circleci.com/pipelines/github/huggingface/transformers/63300/workflows/6cedeb02-795b-489f-8894-f5320ec64dd1/jobs/781128) and [here](https://app.circleci.com/pipelines/github/huggingface/transformers/63300/workflows/351a027d-9393-4c7d-a128-61cef5786f30/jobs/781146)
One thing to note is that, I put everything regarding cross tests into a single file as well as in the cross test jobs. So `test_modeling_tf_xxx.py` might be in the `tests_torch_and_flax` job and vice versa in some cases. It doesn't really matter as we correctly install the only necessary libraries in the job. |
transformers | 23,042 | closed | Using `inputs_embeds` for generation gives an incorrect warning | I'm trying to use the `inputs_embeds` parameter to run the LLaMA model. This is part of my code.
```python
# INPUT = ...embedding of a sequence, ensuring that there are no pad tokens
output_sequences = LLaMA.generate(
    inputs_embeds=INPUT.to(device),
pad_token_id=tokenizer.pad_token_id,
# ... generation parameters, top_p top_k etc.
)
```
I keep getting this warning, and the results are complete gibberish. I know this exact model performs well if I pass `input_ids`.
```
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set padding_side='left' when initializing the tokenizer.
```
After a lot of debugging, I found that this issue is because of the transformers library itself. The generate function checks that the last token ID in every batch should not be the pad token ID. If it is, it displays this warning.
https://github.com/huggingface/transformers/blob/a0e733283930bdb9ae2b1afdc53ec5f2daefb033/src/transformers/generation/utils.py#L1308-L1315
The `generate` function is expecting the shape `(Batch, Sequence)` where this logic would work.
```python
inputs_tensor[:, -1] == generation_config.pad_token_id
```
Now the problem is that I am passing `inputs_embeds` not IDs. My shape is `(Batch, Sequence, EmbeddingSize)`, so the above statement would be true if there are any zeros in the embedding of the last token. This is obviously incorrect.
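A minimal, self-contained illustration of why the check misfires on embeddings (variable names here are mine, not the library's):
```python
import torch

pad_token_id = 0
input_ids = torch.tensor([[5, 6, 7]])  # (batch, seq)
inputs_embeds = torch.zeros(1, 3, 4)   # (batch, seq, hidden)

# the same slice-and-compare check that is fine for 2D input_ids...
print((input_ids[:, -1] == pad_token_id).any())      # tensor(False)
# ...fires for 3D inputs_embeds whenever the last token's embedding contains a zero
print((inputs_embeds[:, -1] == pad_token_id).any())  # tensor(True) -> spurious warning
```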
That explains the warning but not the incorrect generation.
### Environment
- `transformers==4.28.0`
- Python 3.10.11 | 04-28-2023 07:24:25 | 04-28-2023 07:24:25 | cc @gante <|||||>Hey @zrthxn 👋 Splitting my reply in two parts, the warning and the generation from input embeds.
Warning: agreed, it should check e.g. whether the input tensor has 3 or more dims (and don't emit the warning it that case). Would you like to open a PR to fix it? :) (I think the same issue is present in TF and FLAX as well)
Generation: I've double-checked generation with input embeddings, and everything seems fine. Have a look at the example below
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
text = "Hello world"
input_ids = tokenizer.encode(text, return_tensors="pt")
# Traditional way of generating text
outputs = model.generate(input_ids)
print("\ngenerate + input_ids:", tokenizer.decode(outputs[0], skip_special_tokens=True))
# From inputs_embeds -- exact same output if you also pass `input_ids`. If you don't
# pass `input_ids`, you will get the same generated content but without the prompt
inputs_embeds = model.model.embed_tokens(input_ids)
outputs = model.generate(input_ids, inputs_embeds=inputs_embeds)
print("\ngenerate + inputs_embeds:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```<|||||>@gante I confirmed once again and found that the `input_embeds` works. The problem was something I was doing with my embeddings. And yes, I'll create a PR for the warning.<|||||>> Hey @zrthxn 👋 Splitting my reply in two parts, the warning and the generation from input embeds.
>
> Warning: agreed, it should check e.g. whether the input tensor has 3 or more dims (and don't emit the warning it that case). Would you like to open a PR to fix it? :) (I think the same issue is present in TF and FLAX as well)
>
> Generation: I've double-checked generation with input embeddings, and everything seems fine. Have a look at the example below
>
> ```python
> from transformers import AutoModelForCausalLM, AutoTokenizer
>
> model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
> tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
>
> text = "Hello world"
> input_ids = tokenizer.encode(text, return_tensors="pt")
>
> # Traditional way of generating text
> outputs = model.generate(input_ids)
> print("\ngenerate + input_ids:", tokenizer.decode(outputs[0], skip_special_tokens=True))
>
> # From inputs_embeds -- exact same output if you also pass `input_ids`. If you don't
> # pass `input_ids`, you will get the same generated content but without the prompt
> inputs_embeds = model.model.embed_tokens(input_ids)
> outputs = model.generate(input_ids, inputs_embeds=inputs_embeds)
> print("\ngenerate + inputs_embeds:", tokenizer.decode(outputs[0], skip_special_tokens=True))
> ```
I've tested out your example @gante and everything works fine. However, when I switch the model to `lmsys/vicuna-13b-v1.3` I'm getting an error. Do you know what the difference is? I'm assuming that both models share the same implementation in `transformers.models.llama.modeling_llama.LlamaForCausalLM`.
My code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
"lmsys/vicuna-13b-v1.3",
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.3")
text = "Hello world"
input_ids = tokenizer.encode(text, return_tensors="pt").to(model.device)
inputs_embeds = model.model.embed_tokens(input_ids)
outputs = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=10)
print(
"\ngenerate + inputs_embeds:",
tokenizer.decode(outputs[0], skip_special_tokens=True),
)
```
Stack trace
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[3], line 5
2 input_ids = tokenizer.encode(text, return_tensors="pt").to(model.device)
4 inputs_embeds = model.model.embed_tokens(input_ids)
----> 5 outputs = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=10)
6 print("\ngenerate + inputs_embeds:", tokenizer.decode(outputs[0], skip_special_tokens=True))
File [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/autograd/grad_mode.py:27](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/autograd/grad_mode.py:27), in _DecoratorContextManager.__call__..decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/generation/utils.py:1522](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/generation/utils.py:1522), in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)
1516 raise ValueError(
1517 "num_return_sequences has to be 1 when doing greedy search, "
1518 f"but is {generation_config.num_return_sequences}."
1519 )
1521 # 11. run greedy search
-> 1522 return self.greedy_search(
1523 input_ids,
1524 logits_processor=logits_processor,
1525 stopping_criteria=stopping_criteria,
1526 pad_token_id=generation_config.pad_token_id,
1527 eos_token_id=generation_config.eos_token_id,
1528 output_scores=generation_config.output_scores,
1529 return_dict_in_generate=generation_config.return_dict_in_generate,
1530 synced_gpus=synced_gpus,
1531 streamer=streamer,
1532 **model_kwargs,
1533 )
1535 elif is_contrastive_search_gen_mode:
1536 if generation_config.num_return_sequences > 1:
File [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/generation/utils.py:2339](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/generation/utils.py:2339), in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)
2336 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
2338 # forward pass to get next token
-> 2339 outputs = self(
2340 **model_inputs,
2341 return_dict=True,
2342 output_attentions=output_attentions,
2343 output_hidden_states=output_hidden_states,
2344 )
2346 if synced_gpus and this_peer_finished:
2347 continue # don't waste resources running the code we don't need
File [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/nn/modules/module.py:1194](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/nn/modules/module.py:1194), in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
File [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/accelerate/hooks.py:165](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/accelerate/hooks.py:165), in add_hook_to_module..new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:688](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:688), in LlamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
685 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
687 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
--> 688 outputs = self.model(
689 input_ids=input_ids,
690 attention_mask=attention_mask,
691 position_ids=position_ids,
692 past_key_values=past_key_values,
693 inputs_embeds=inputs_embeds,
694 use_cache=use_cache,
695 output_attentions=output_attentions,
696 output_hidden_states=output_hidden_states,
697 return_dict=return_dict,
698 )
700 hidden_states = outputs[0]
701 logits = self.lm_head(hidden_states)
File [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/nn/modules/module.py:1194](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/torch/nn/modules/module.py:1194), in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
File [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/accelerate/hooks.py:165](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/accelerate/hooks.py:165), in add_hook_to_module..new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File [~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:528](https://vscode-remote+ssh-002dremote-002bjaskier.vscode-resource.vscode-cdn.net/home/nropiak/git/InstructZero/~/miniconda3/envs/InstructZero/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:528), in LlamaModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
526 position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
527 else:
--> 528 position_ids = position_ids.view(-1, seq_length).long()
530 if inputs_embeds is None:
531 inputs_embeds = self.embed_tokens(input_ids)
RuntimeError: shape '[-1, 3]' is invalid for input of size 4
```
<|||||>@NorbertRop The issue is fixed in #24639 🙌 (see the PR if you're curious about why it was breaking :) )<|||||>@NorbertRop should be fixed if you install from `main` |
transformers | 23,041 | closed | Hugging Face - Time Series Transformer Error | ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_27/4172075850.py in <module>
8 static_real_features=batch1["static_real_features"],
9 future_values=batch1["future_values"],
---> 10 future_time_features=batch1["future_time_features"]
11 )
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/opt/conda/lib/python3.7/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py in forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, future_observed_mask, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict)
1611 output_attentions=output_attentions,
1612 use_cache=use_cache,
-> 1613 return_dict=return_dict,
1614 )
1615
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/opt/conda/lib/python3.7/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py in forward(self, past_values, past_time_features, past_observed_mask, static_categorical_features, static_real_features, future_values, future_time_features, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, output_hidden_states, output_attentions, use_cache, return_dict)
1422 static_real_features=static_real_features,
1423 future_values=future_values,
-> 1424 future_time_features=future_time_features,
1425 )
1426
/opt/conda/lib/python3.7/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py in create_network_inputs(self, past_values, past_time_features, static_categorical_features, static_real_features, past_observed_mask, future_values, future_time_features)
1322 static_feat = torch.cat((static_real_features, static_feat), dim=1)
1323 if static_categorical_features is not None:
-> 1324 embedded_cat = self.embedder(static_categorical_features)
1325 static_feat = torch.cat((embedded_cat, static_feat), dim=1)
1326 expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
1264 return modules[name]
1265 raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1266 type(self).__name__, name))
1267
1268 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
| 04-28-2023 07:10:13 | 04-28-2023 07:10:13 | 
<|||||>cc @kashif <|||||>@modiparv I believe you have configured the model with `num_static_categorical_features=0` and yet you are feeding the model static categorical covariates as the `static_categorical_features` input... perhaps kindly test without it and I can add an extra check there... <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,040 | closed | Skip pt/flax equivalence tests in pytorch `bigbird` test file | # What does this PR do?
#21023 fixed random attention issue in Flax bigbird model, and skipped the pt/flax equivalence tests in flax bigbird test file with
```txt
reason="Current Pytorch implementation has bug with random attention -> it always uses it not matter if we are in eval/train mode"
```
We need to skip the pt/flax equivalence tests in **pytorch** bigbird test file too.
Currently on `main`, the tests fail
https://app.circleci.com/pipelines/github/huggingface/transformers/63217/workflows/5d512271-f535-44be-a2ec-b95024f8f165/jobs/780069 | 04-28-2023 07:04:48 | 04-28-2023 07:04:48 | cc @Bearnardd<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Good catch @ydshieh! I will fix pytorch's big bird implementation today/tomorrow.<|||||>@Bearnardd Thank you 🤗 . Luckily, there is no `TFBigBirdModel` 😆 |
transformers | 23,039 | closed | Fix model parallelism for `BridgeTower` | # What does this PR do?
Make `BridgeTower` work with model parallelism. The test `test_model_parallelism` is still skipped as the tiny version hits edge cases.
With larger values of `hidden_size`, `num_hidden_layers` etc., it works now while failed before. | 04-28-2023 04:58:25 | 04-28-2023 04:58:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,038 | closed | Update trainer_utils.py | This modified version of the function includes a check for whether the output of the function has a learning rate scheduler that needs to be updated based on the current batch size. If so, it updates the `num_batches` attribute of the scheduler to ensure that the learning rate is adjusted correctly.
# What does this PR do?
It can be one solution for the problem where the lr_scheduler is not updated when `auto_find_batch_size` is set to True and the batch size decays (#21521).
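A rough sketch of the idea (the helper name and the exact hook point are assumptions for illustration, not the actual patch; the description above speaks of a `num_batches` attribute, while here the scheduler is simply rebuilt with a new horizon):
```python
import math

def rescale_scheduler_for_batch_size(trainer, new_batch_size):
    # After auto_find_batch_size shrinks the batch size, the number of optimizer steps
    # per epoch grows, so the learning-rate schedule horizon has to be recomputed.
    args = trainer.args
    steps_per_epoch = math.ceil(
        len(trainer.train_dataset) / (new_batch_size * args.gradient_accumulation_steps)
    )
    max_steps = int(steps_per_epoch * args.num_train_epochs)
    trainer.create_scheduler(num_training_steps=max_steps, optimizer=trainer.optimizer)
```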
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [+] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-28-2023 04:46:14 | 04-28-2023 04:46:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23038). All of your documentation changes will be reflected on that endpoint.<|||||>cc @muellerzr <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@muellerzr Could you have a look here?<|||||>@mzamini92 could you rebase so we can double check no tests are breaking with this and we can merge? Thanks!<|||||>> @mzamini92 could you rebase so we can double check no tests are breaking with this and we can merge? Thanks!
@muellerzr Thanks for reaching me. I did it based on Sylvain suggestion. please double check and I will revise if needed. |
transformers | 23,037 | closed | Add LTG-BERT model | # LTG-BERT
This pull request adds the custom LTG-BERT model into the repository. This optimized LM architecture was introduced [in this paper](https://arxiv.org/abs/2303.09859) and is currently also used by a new generation of Norwegian LMs. The architecture features multiple improvements to the standard transformer module and we unfortunately cannot use any existing HF model wrappers.
@ArthurZucker and @younesbelkada | 04-28-2023 02:18:56 | 04-28-2023 02:18:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23037). All of your documentation changes will be reflected on that endpoint.<|||||>Hey! Great that you want to share this model 🔥 Would you be open to put ot on the hub following [this tutorial](https://huggingface.co/docs/transformers/custom_models)! Will be easier as there won't be any CI issues and since it is very similar to an existing model, this makes more sense!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey if you added the model on the hub could you share the links to it? This will help us keep track of the models that we support on the hub |
transformers | 23,036 | closed | [New model] Bark for realistic text-to-speech | ### Model description
As stated in their [README](https://github.com/suno-ai/bark/blob/main/README.md):
> Bark is a transformer-based text-to-audio model created by [Suno](https://suno.ai/). Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints ready for inference.
Some of their demos are quite amazing (albeit slightly creepy), being able to add "uhms" and "ahhs" in the synthesized audio. For example:
```
Hello, my name is Suno. And, uh — and I like pizza. [laughs]
But I also have other interests such as playing tic tac toe.
```
https://user-images.githubusercontent.com/34592747/238155864-cfa98e54-721c-4b9c-b962-688e09db684f.webm
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
GitHub repo: https://github.com/suno-ai/bark
Author: @gkucsko
Demo: https://huggingface.co/spaces/suno/bark
Model weights: Although not very well documented, [here](https://github.com/suno-ai/bark/blob/2c12023eb22868a633b76357b69d657b374736d9/bark/generation.py#L92-L119) is the portion of the code which links to the model weights. @Vaibhavs10 also looks to have uploaded them to the HF Hub [here](https://huggingface.co/reach-vb/bark-small) 🔥
| 04-27-2023 22:07:22 | 04-27-2023 22:07:22 | Hello, if no one else is working on this, I would love to take a look and try to add this model to HuggingFace! <|||||>https://github.com/huggingface/transformers/pull/23375<|||||>see https://github.com/huggingface/transformers/pull/24086 |
transformers | 23,035 | closed | Save the tokenizer and image preprocessor after training a model with the contrastive image-text example | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
When training a model with the contrastive image-text example, only the model is saved (see [here](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/examples/pytorch/contrastive-image-text/run_clip.py#L512)). As a consequence, when using the trained model to only perform inference with the same script, an error will be raised [here](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/examples/pytorch/contrastive-image-text/run_clip.py#L324) and [there](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/examples/pytorch/contrastive-image-text/run_clip.py#L335) because the checkpoint doesn't contain a tokenizer nor a preprocessor.
This PR fixes this issue by saving the tokenizer and the image preprocessor at the end of the training.
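Concretely, the change amounts to something like the following at the end of the training branch of `run_clip.py` (a sketch only; the exact variable names in the script may differ slightly):
```python
# After training, save the preprocessing objects next to the model so the
# checkpoint is self-contained and can be reloaded by the same script.
if training_args.do_train:
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
    trainer.save_model()
    tokenizer.save_pretrained(training_args.output_dir)
    image_processor.save_pretrained(training_args.output_dir)
```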
To reproduce it, after creating a model following [this section](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#create-a-model-from-a-vision-encoder-model-and-a-text-encoder-model), run:
```bash
python examples/pytorch/contrastive-image-text/run_clip.py \
--output_dir ./clip-roberta-finetuned \
--model_name_or_path ./clip-roberta \
--data_dir $PWD/data \
--dataset_name ydshieh/coco_dataset_script \
--dataset_config_name=2017 \
--image_column image_path \
--caption_column caption \
--remove_unused_columns=False \
--do_train \
--per_device_train_batch_size="64" \
--learning_rate="5e-5" \
--overwrite_output_dir
```
And then run:
```bash
python examples/pytorch/contrastive-image-text/run_clip.py \
--output_dir ./clip-roberta-finetuned \
--model_name_or_path ./clip-roberta-finetuned \
--data_dir $PWD/data \
--dataset_name ydshieh/coco_dataset_script \
--dataset_config_name=2017 \
--image_column image_path \
--caption_column caption \
--remove_unused_columns=False \
--do_eval \
--per_device_eval_batch_size="64" \
--overwrite_output_dir
```
which raises the following error:
```
Traceback (most recent call last):
File "run_clip.py", line 540, in <module>
main()
File "run_clip.py", line 325, in main
tokenizer = AutoTokenizer.from_pretrained(
File "/home/ubuntu/workspace/venv/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 718, in from_pretrained
tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
File "/home/ubuntu/workspace/venv/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 610, in __getitem__
raise KeyError(key)
KeyError: <class 'transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig'>
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-27-2023 22:00:14 | 04-27-2023 22:00:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger All tests passed, so I think this one can be merged :slightly_smiling_face: |
transformers | 23,034 | closed | Cannot resume FSDP optimizer state | This line does not save optimizer state correctly when using FSDP.
https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/src/transformers/trainer.py#L2383
It should use FSDP's full_optim_state_dict to collect optimizer states from different processes.
```python
FSDP.full_optim_state_dict(self.model, self.optimizer)
``` | 04-27-2023 20:07:46 | 04-27-2023 20:07:46 | cc @pacman100 <|||||>Hello @qywu, indeed, that seems to be the case, as you already have the fix, it would be great if you could raise the PR with the fixes, Thank you! |
transformers | 23,033 | closed | Trainer defaults to NCCL backend for ddp on windows | ### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following script natively on windows with >1 nvidia GPU. This example uses jsonl input in the same format as the openai api, eg:
```json
{"prompt": "I wish ddp worked with the trainer in windows", "completion": "it does, you are just doing it wrong!"}
{"prompt": "The moon is made of green cheese", "completion": "ok, but I was asking about the huggingface trainer..."}
```
```py
from datasets import load_dataset
from transformers import AutoConfig
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer
from transformers import TrainingArguments
def train(
train_file_path,
eval_file_path=None,
name=None,
n_epochs=5,
model_name="EleutherAI/gpt-neo-125m",
use_scheduler=False,
):
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
model = AutoModelForCausalLM.from_pretrained(model_name)
print(model.config)
if eval_file_path:
train_dataset = load_dataset("json", data_files=train_file_path)
eval_dataset = load_dataset("json", data_files=eval_file_path)
else:
eval_dataset = load_dataset("json", data_files=train_file_path, split="train[:5%]")
train_dataset = load_dataset("json", data_files=train_file_path, split="train[5%:]")
def tokenize_dataset(entry):
inputs = tokenizer(entry["prompt"] + entry["completion"], return_tensors="pt")
return {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
}
n_steps_epoch = len(train_dataset)
train_dataset = train_dataset.map(tokenize_dataset).remove_columns(["prompt", "completion"])
eval_dataset = eval_dataset.map(tokenize_dataset).remove_columns(["prompt", "completion"])
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
training_args = TrainingArguments(
output_dir=name,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
gradient_accumulation_steps=32,
logging_dir=f"{name}/logs",
logging_steps=16,
evaluation_strategy="steps",
save_steps=n_steps_epoch // 2048,
eval_steps=n_steps_epoch // 1024,
save_total_limit=3,
report_to="tensorboard",
tf32=True,
seed=1679815,
)
training_args.set_optimizer(
"adamw_torch_fused",
learning_rate=1e-4,
beta1=.9,
beta2=.95,
epsilon=1e-8,
weight_decay=.1
)
if use_scheduler:
training_args.set_lr_scheduler(
name="linear",
warmup_steps=250,
num_epochs=n_epochs,
)
print(training_args)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
)
trainer.train()
if __name__ == "__main__":
train("path/to/json/dataset", name="no_gloo")
```
### Expected behavior
Outside of the context of the huggingface trainer I am able to use the gloo backend in conjunction with mpi for distributed training with pytorch using the following setup:
```
def setup_ddp(hps):
if hps.ddp:
port = "29500"
rank = MPI.COMM_WORLD.Get_rank()
world_size = MPI.COMM_WORLD.Get_size()
os.environ["RANK"] = str(rank)
os.environ["WORLD_SIZE"] = str(world_size)
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = port
dist.init_process_group("gloo", rank=rank, world_size=world_size)
group = dist.new_group(ranks=devices())
else:
rank, world_size, group = 0, 1, None
return rank, world_size, group
```
When running the provided training script in linux (via WSL2) with >1 GPU, everything executes as one would expect - but WSL2 is significantly slower than native sadly.
I would expect that the trainer would expose the backend that is used as a variable, but `xpu_backend` does not affect the behavior I am experiencing nor is it immediately clear if this is meant to be configurable as-is. NCCL is not supported in windows on pytorch currently (https://github.com/pytorch/pytorch/issues/89688) and so the trainer should not attempt to default to NCCL unless it is installed/supported by the OS (ie windows should always default to a different, functional backend).
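For illustration, this is the kind of configuration I would expect to be able to write on Windows (a hedged sketch; the exact argument name is an assumption here, reportedly `xpu_backend` on current releases and `ddp_backend` later):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    # Force a backend that actually exists on native Windows instead of NCCL.
    xpu_backend="gloo",
)
```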
At the very least I would expect a clean error message that explains that DDP is not supported on windows or whatever the actual state of compatibility is. Instead, the script throws warnings about pytorch not being compiled with NCCL support and fails on the first forward pass with inscrutable cuda errors. | 04-27-2023 19:35:06 | 04-27-2023 19:35:06 | On main `xpu_backend` (soon to be renamed to `ddp_backend`) lets you pick up the backend you want.<|||||>Excellent, thank you! |
transformers | 23,032 | closed | Fix CLAP link across all READMEs | # What does this PR do?
Fixes CLAP link across all READMEs
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). | 04-27-2023 16:37:09 | 04-27-2023 16:37:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm getting
```
Traceback (most recent call last):
File "/Users/ehsan/workspace/transformers/utils/check_task_guides.py", line 58, in <module>
"asr.mdx": transformers_module.models.auto.modeling_auto.MODEL_FOR_CTC_MAPPING_NAMES,
File "/Users/ehsan/workspace/transformers/src/transformers/utils/import_utils.py", line 1150, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module transformers.models.auto has no attribute modeling_auto. Did you mean: 'processing_auto'?
```
not sure what's going on so I added it manually to all the `index.mdx`. It seems not all models research paper links have been synced across docs.<|||||>Thanks again! |
transformers | 23,031 | closed | Add methods to update and verify out_features out_indices | # What does this PR do?
`out_features` and `out_indices` are two parameters which control the behaviour of a backbone. `out_indices` was recently [added as a config argument](https://github.com/huggingface/transformers/pull/22493) for the future addition of timm backbones (otherwise the timm backbone requires loading in, inspecting the feature names, then mapping to equivalent names in transfomers).
It's necessary that `out_features` and `out_indices` are consistent i.e. that they both map the same stage names. Otherwise there is conflicting sources of truth in the config. At the moment, `out_features` and `out_indices` are set and verified [within the config](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/src/transformers/models/swin/configuration_swin.py#L162-L189).
For backwards compatibility, backbone models can be created even if their config only has `out_features` set e.g. here for [SwinBackbone](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/src/transformers/models/swin/modeling_swin.py#L1268).
However, it's possible to modify the config after creation e.g. [like here in the DINAT tests](https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/tests/models/dinat/test_modeling_dinat.py#L178), resulting in a mismatch between `out_features` and `out_indices`.
This PR resolves two issues by creating a single backbone utils module.
1. Ensures the `out_features` or `out_indices` attribute can only be updated using the `set_out_features` and `set_out_indices` methods respectively. These perform argument checks and update the complementary attribute (see the sketch after this list).
2. Removes the repeated `out_features` and `out_indices` getting and verification logic shared between configurations and backbone models.
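A minimal sketch of how the paired setters might keep the two attributes in sync (the `stage_names` attribute and the internal details are illustrative assumptions, not the actual implementation):
```python
class BackboneConfigSketch:
    """Illustrative only: a config exposing synchronized out_features / out_indices."""

    def __init__(self, stage_names):
        self.stage_names = list(stage_names)
        self._out_features = []
        self._out_indices = []

    def set_out_features(self, out_features):
        # Validate against the known stage names, then derive the matching indices.
        unknown = set(out_features) - set(self.stage_names)
        if unknown:
            raise ValueError(f"Unknown stages: {unknown}")
        self._out_features = list(out_features)
        self._out_indices = [self.stage_names.index(name) for name in out_features]

    def set_out_indices(self, out_indices):
        if any(i >= len(self.stage_names) for i in out_indices):
            raise ValueError("out_indices contains an index outside the available stages")
        self._out_indices = list(out_indices)
        self._out_features = [self.stage_names[i] for i in out_indices]
```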
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? | 04-27-2023 15:22:43 | 04-27-2023 15:22:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh @sgugger I've updated with the setter tip - all looks a lot tidier! Let me know if the changes are OK.
In the spirit of not doing things magically, when setting the `out_features` and `out_indices` should I have a `logger.warning_once` notifying the user the other property is also updated? <|||||>Thanks for the iteration. I don't feel strong to `logger.warning_once` when setting one property (as it's mentioned in the docstring), but it's a good thought! Let's see what Sylvain thinks. |
transformers | 23,030 | closed | GPT2ForQuestionAnswering | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
@sgugger | 04-27-2023 15:02:16 | 04-27-2023 15:02:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ArthurZucker @sgugger @younesbelkada The next one is ready for review.
This is a bit funny. The tests are fine, and it runs great on one of my 4x V100 machines. But on another machine, I get this funny error:
/home/jps/anaconda3/envs/scandeval/lib/python3.9/site-packages/transformers/trainer.py:375
/home/jps/anaconda3/envs/scandeval/lib/python3.9/site-packages/torch/nn/modules/module.py:1269
AttributeError: 'GPT2ForQuestionAnswering' object has no attribute 'model_parallel'
Any ideas? I though model_parallel was legacy and not needed. Should I add to be on the safe side? torch version is torch==1.13.1.<|||||>GPT2 is an old model so it might still be checking for `model_parallel`, which in this case has to be added! LMK if this fixes the issues <|||||>@ArthurZucker ready for review!<|||||>cc @younesbelkada since Arthur is on vacation. |
transformers | 23,029 | closed | Update `BridgeTowerModelTester` | # What does this PR do?
Update `BridgeTowerModelTester` to use small values for config.
| 04-27-2023 14:42:50 | 04-27-2023 14:42:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Remark: with lager model (but not too large), we get
```bash
FAILED tests/models/bridgetower/test_modeling_bridgetower.py::BridgeTowerModelTest::test_model_parallelism - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
```
Better to check this separately.
---------------------
Here is the full log
```bash
> new_output = new_model(**inputs_dict_class)
tests/test_modeling_common.py:2616:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1501: in _call_impl
return forward_call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:165: in new_forward
output = old_forward(*args, **kwargs)
src/transformers/models/bridgetower/modeling_bridgetower.py:1423: in forward
image_embeds = self.vision_model.visual.transformer.resblocks[i](image_embeds).type(
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1501: in _call_impl
return forward_call(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = BridgeTowerResidualAttention(
(attn): MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_feature...ar(in_features=2048, out_features=512, bias=True)
)
(ln_2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
)
hidden_state = tensor([[[ 0.5531, 0.0555, -0.0248, ..., 0.2110, -0.0403, 0.0487]],
[[ 0.2963, -0.1709, 0.0074, ..., 0... [[ 0.3324, -0.0536, -0.0069, ..., 0.0911, -0.0565, -0.2751]]],
device='cuda:1', grad_fn=<ViewBackward0>)
attention_mask = None
def forward(self, hidden_state: torch.Tensor, attention_mask: torch.Tensor = None):
residual_state = hidden_state + self.attention(self.ln_1(hidden_state), attention_mask)
hidden_state = self.ln_2(residual_state)
for _, layer in self.mlp.items():
hidden_state = layer(hidden_state)
> hidden_state = residual_state + hidden_state
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
src/transformers/models/bridgetower/modeling_bridgetower.py:237: RuntimeError
================================================================================================== warnings summary ==================================================================================================
../usr/local/lib/python3.8/dist-packages/detectron2/data/transforms/transform.py:46
/usr/local/lib/python3.8/dist-packages/detectron2/data/transforms/transform.py:46: DeprecationWarning: LINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use BILINEAR or Resampling.BILINEAR instead.
def __init__(self, src_rect, output_size, interp=Image.LINEAR, fill=0):
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============================================================================================== short test summary info ===============================================================================================
FAILED tests/models/bridgetower/test_modeling_bridgetower.py::BridgeTowerModelTest::test_model_parallelism - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
``` |
transformers | 23,028 | closed | [MEGA] nit size test | # What does this PR do?
Addresses #23025: the `input_shape` should be tested, not `input_ids`, because the latter might be None. | 04-27-2023 13:39:33 | 04-27-2023 13:39:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,027 | closed | remove tee | # What does this PR do?
Remove the piping with `| tee` when running pytest.
1. No one looks at the artifacts, and `tee` makes the output unreadable, colorless, etc.
2. I checked that even if you remove this, the outputs are still visible because Circle CI uses a custom file system handling for this and not the `output.txt`.
3. Subtests are omitted from the output this it does not include everything | 04-27-2023 13:09:21 | 04-27-2023 13:09:21 | Example test that does not have tee: https://app.circleci.com/pipelines/github/huggingface/transformers/63142/workflows/4754f622-1e12-40bb-be2f-0dcb363a216b/jobs/778648/steps?invite=true#step-111-6582
<|||||>CI outputs without tee:

With tee:
<img width="995" alt="image" src="https://user-images.githubusercontent.com/48595927/234875612-209fd9fe-8961-4f25-abb4-11551402ee73.png">
<|||||>Other files are still available:
[~/transformers/installed.txt](https://output.circle-artifacts.com/output/job/c12dd81b-f182-4b6a-bdaf-051887b78948/artifacts/0/~/transformers/installed.txt)
[~/transformers/reports/tests_onnx/durations.txt](https://output.circle-artifacts.com/output/job/c12dd81b-f182-4b6a-bdaf-051887b78948/artifacts/0/~/transformers/reports/tests_onnx/durations.txt)
[~/transformers/reports/tests_onnx/stats.txt](https://output.circle-artifacts.com/output/job/c12dd81b-f182-4b6a-bdaf-051887b78948/artifacts/0/~/transformers/reports/tests_onnx/stats.txt)
[~/transformers/reports/tests_onnx/summary_short.txt](https://output.circle-artifacts.com/output/job/c12dd81b-f182-4b6a-bdaf-051887b78948/artifacts/0/~/transformers/reports/tests_onnx/summary_short.txt)
[~/transformers/reports/tests_onnx/warnings.txt](https://output.circle-artifacts.com/output/job/c12dd81b-f182-4b6a-bdaf-051887b78948/artifacts/0/~/transformers/reports/tests_onnx/warnings.txt) <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23027). All of your documentation changes will be reflected on that endpoint.<|||||>> No one looks at the artifacts
Speak for yourself ;-) , I only look at artifacts since it's impossible to get the traceback in the output. I do not look at the output artifact however, so that change wouldn't impact how I use the reports. However I'm not alone so make sure @LysandreJik @amyeroberts and @ydshieh all agree before merging this.<|||||>(The full traceback can still be seen!)<|||||>It seems expanding the run test step is still fast, and personally I don't read `test_output.txt` but just the reports given by `--make_reports`, I am OK for this change.
One thing remaining is to remove the upload artifact step if we don't produce it. So far, we get
```bash
Uploading /home/circleci/transformers/tests_output.txt to ~/transformers/tests_output.txt
No artifact files found at /home/circleci/transformers/tests_output.txt
Total size uploaded: 0 B
```<|||||>I use the artefacts all the time :D!
I think it's fine for `test_outputs.txt` to go though, as I rarely look at it and I think all the other info can be found in the other .txt files 👍 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,026 | closed | [i18n-KO] Translated video_classification.mdx to Korean | # What does this PR do?
Translated the video_classification.mdx file of the documentation to Korean.
Thank you in advance for your review.
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. [[lowercased-header]])
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review(initial)?
Team PseudoLab, may you please review this PR? @0525hhgus, @HanNayeoniee, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review(initial)?
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
| 04-27-2023 11:34:27 | 04-27-2023 11:34:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey! Sorry for the long delay. There seems to be 2 suggestions not adresses, should we wait for these? 🤗 <|||||>@ArthurZucker the suggestions that i didn't accept were about same sentences or ealier version of our glossary so you don't need to wait for the other suggestions to be accepted!! Thank you!! |
transformers | 23,025 | closed | MegaModel not usable with chunking if input_embeds are used instead of input_ids | ### System Info
When `inputs_embeds` are used instead of `input_ids`, `input_ids` is `None`.
Therefore this error happens in modeling_mega.py:
-> 1544 if self.config.use_chunking and (input_ids.size(1) > self.config.chunk_size):
1545 print(input_ids.size(1))
1546 if input_ids.size(1) % self.config.chunk_size != 0:
AttributeError: 'NoneType' object has no attribute 'size'
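A minimal sketch of the kind of guard that would avoid this, taking the sequence length from whichever input is actually provided (illustrative only, not the actual patch):
```python
import torch
from typing import Optional

def sequence_length(input_ids: Optional[torch.Tensor], inputs_embeds: Optional[torch.Tensor]) -> int:
    # Derive the length without assuming input_ids is present.
    if input_ids is not None:
        return input_ids.size(1)
    if inputs_embeds is not None:
        return inputs_embeds.size(1)
    raise ValueError("Either input_ids or inputs_embeds must be provided")
```
The chunking check could then compare this length against `config.chunk_size` instead of calling `input_ids.size(1)` directly.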
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers import MegaModel, MegaConfig
class MegaRegressor(nn.Module):
def __init__(self, input_dim=4, hidden_dim=4, num_layers=2, num_heads=4):
super().__init__()
config = MegaConfig(
vocab_size=4,
hidden_size=hidden_dim,
num_attention_heads=num_heads,
intermediate_size=4*hidden_dim,
max_positions=50000,
num_hidden_layers=num_layers,
output_attentions=False,
return_dict=True,
use_chunking=True,
chunk_size = 100
)
self.encoder = MegaModel(config)
self.fc = nn.Linear(config.hidden_size, 1)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, inputs_embeds):
out = self.encoder(inputs_embeds=inputs_embeds) # (batch_size, seq_length, hidden_size)
output = out['last_hidden_state']
output = torch.mean(output, dim=1)
output = self.dropout(output)
logits = self.fc(output).squeeze() # (batch_size, 1)
return logits
model = MegaRegressor().to(device)
#print(data.shape) --> torch.Size([8, 49800, 4])
output = model(data)
### Expected behavior
Model should not raise AttributeError: 'NoneType' object has no attribute 'size' | 04-27-2023 10:52:25 | 04-27-2023 10:52:25 | Good catch, the error is pretty straightforward, we should check with either size. |
transformers | 23,024 | closed | 🚨🚨🚨 [`Blip`] remove labels masking | # What does this PR do?
Addresses https://github.com/huggingface/transformers/pull/23004#issuecomment-1523776082
This PR aims to harmonize the training procedure with most of the recent additions in `transformers`. It should be the user's responsibility to fill the padding tokens of the labels with the correct masking value. This PR addresses the issue that was raised for other architectures such as Luke or Pix2Struct.
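As a hedged example of what that responsibility looks like on the user side (the processor and variable names here are illustrative, not a prescribed API usage):
```python
labels = processor.tokenizer(captions, padding=True, return_tensors="pt").input_ids
# Replace padding positions with -100 so they are ignored by the cross-entropy loss.
labels[labels == processor.tokenizer.pad_token_id] = -100
outputs = model(pixel_values=pixel_values, input_ids=input_ids, labels=labels)
```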
However I realize that even if this patch is applied, we still mask fill the labels with -100 [here](https://github.com/huggingface/transformers/blob/9435cc6670b7b8656b33e8ff28d3bbe9bafbca9d/src/transformers/models/blip/modeling_blip.py#L1133), similarly as in [T5](https://github.com/huggingface/transformers/blob/9435cc6670b7b8656b33e8ff28d3bbe9bafbca9d/src/transformers/models/t5/modeling_t5.py#L868)
I would love your feedback on this, @amyeroberts !
| 04-27-2023 10:01:04 | 04-27-2023 10:01:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,023 | closed | [`Pix2Struct`] Fix pix2struct doctest | # What does this PR do?
Fixes Pix2Struct doctest
Link to failing job: https://github.com/huggingface/transformers/actions/runs/4815336726/jobs/8573921590
With https://github.com/huggingface/transformers/pull/23004 being merged, the label smoothing of the loss function has been removed. Therefore the expected value of the loss function changed, leading to the failing doctest.
cc @ydshieh
| 04-27-2023 09:14:11 | 04-27-2023 09:14:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,022 | closed | Fix the expected error in `test_offline_mode_pipeline_exception` | # What does this PR do?
The expected error becomes `RuntimeError: You cannot infer task automatically within `pipeline` when using \noffline mode\n` since April 18, i.e. with 2 extra `\n`. It's not very clear where this change comes from; it is probably just a formatting artifact of the `subprocess` output.
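For reference, one way the test could be made tolerant to this kind of formatting noise (only an illustration of an alternative, not what this PR does; `captured_output` is an assumed variable name):
```python
import re

def normalize_ws(text: str) -> str:
    # Collapse all whitespace (including injected newlines) before comparing.
    return re.sub(r"\s+", " ", text).strip()

assert "You cannot infer task automatically" in normalize_ws(captured_output)
```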
Strangely, if I run the command in that test like below, there is no extra newline.
```bash
HF_HOME=/mnt/cache TRANSFORMERS_IS_CI=yes TRANSFORMERS_OFFLINE=1 python3 temp.py
```
with temp.py
```python
from transformers import pipeline
mname = "hf-internal-testing/tiny-random-bert"
pipe = pipeline(model=mname)
``` | 04-27-2023 08:55:47 | 04-27-2023 08:55:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This is extremely weird. The error is issued [here](https://github.com/huggingface/transformers/blob/9435cc6670b7b8656b33e8ff28d3bbe9bafbca9d/src/transformers/pipelines/__init__.py#L430) and is only on one line.<|||||>Yeah, but this is the log (see the last line)
-----------------------------
2023-04-26T19:08:23.5081961Z E AssertionError: 'You cannot infer task automatically within `pipeline` when using offline mode' not found in '2023-04-26 19:07:56.056711: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n2023-04-26 19:07:57.005479: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library \'libnvinfer.so.7\'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64\n2023-04-26 19:07:57.005598: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library \'libnvinfer_plugin.so.7\'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64\n2023-04-26 19:07:57.005612: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\n╭───────────────────── Traceback (most recent call last) ──────────────────────╮\n│ <string>:11 in <module> │\n│ /transformers/src/transformers/pipelines/__init__.py:726 in pipeline │\n│ │\n│ 723 │ │ │ │ "Inferring the task automatically requires to check th │\n│ 724 │ │ │ │ f"{model} is not a valid model_id." │\n│ 725 │ │ │ ) │\n│ ❱ 726 │ │ task = get_task(model, use_auth_token) │\n│ 727 │ │\n│ 728 │ # Retrieve the task │\n│ 729 │ if task in custom_tasks: │\n│ │\n│ /transformers/src/transformers/pipelines/__init__.py:430 in get_task │\n│ │\n│ 427 │\n│ 428 def get_task(model: str, use_auth_token: Optional[str] = None) -> str: │\n│ 429 │ if is_offline_mode(): │\n│ ❱ 430 │ │ raise RuntimeError("You cannot infer task automatically within │\n│ 431 │ try: │\n│ 432 │ │ info = model_info(model, token=use_auth_token) │\n│ 433 │ except Exception as e: │\n╰──────────────────────────────────────────────────────────────────────────────╯\nRuntimeError: You cannot infer task automatically within `pipeline` when using \noffline mode\n'
|
transformers | 23,021 | closed | Adding XLA support for greedy sampling | # What does this PR do?
This CR enables greedy sampling in model.generate on XLA devices such as Trainium and TPU. This addresses issues such as https://github.com/huggingface/transformers/issues/18661 and https://github.com/huggingface/transformers/issues/12322.
The implementation is inspired by the corresponding TensorFlow generate function in transformers. The CR uses conditional statements to support greedy sampling, and the user can switch between the GPU implementation and the XLA implementation depending on the state of `is_torch_tpu_available`. The CR also implements kv-cache functionality that is XLA compatible.
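For reviewers, a minimal sketch of the user-facing call this targets (the checkpoint, input and device setup below are assumptions for illustration, not part of the PR):
```python
import torch_xla.core.xla_model as xm
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = xm.xla_device()  # Trainium / TPU device

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small").to(device)

inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt").to(device)
# do_sample=False + num_beams=1 selects the greedy path that this CR makes XLA-friendly
summary_ids = model.generate(**inputs, do_sample=False, num_beams=1, max_new_tokens=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```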
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante Feel free to suggest appropriate tests/refactors for this PR. We have tested generation locally using a trn1.32xlarge instance and matched Rouge scores for T5-small summarization.
| 04-27-2023 08:45:51 | 04-27-2023 08:45:51 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23021). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @aashiqmuhamed! In general the PR looks positive, but before diving deeper, let us (`transformers` team) have a discussion about adding this type of PRs (new HW-oriented optimizations). `generate` is very complex ATM, and we want to make it more manageable -- if we merge all PRs of this kind in the current `generate` state, it will become maintenance hell for everyone.
I will get back to you in this PR within a week.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,020 | closed | added type hints for blip_text model | # What does this PR do?
Added type hints for the ```blip_text``` PyTorch model, as requested in #16059
@Rocketknight1 Could you review this? | 04-27-2023 08:30:15 | 04-27-2023 08:30:15 | Error logs suggest using ```pip install "black[jupyter]"```, but I'm unable to understand what to do after that. @Rocketknight1 could you suggest how to fix the test failures?<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_23020). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @iamarunbrahma, `black` is a code formatting tool. After you've installed it, run `make style` or `make fixup` in the `transformers` directory. This should reformat your file for you and get the tests in the CI to pass.<|||||>@Rocketknight1 getting this error while running ```make fixup```:
```
/bin/sh: line 3: black: command not found
/bin/sh: line 4: ruff: command not found
make: *** [Makefile:10: modified_only_fixup] Error 127
```
and while running ```make style```:
```
make: black: No such file or directory
make: *** [Makefile:68: style] Error 127
```
I have installed both ```black``` and ```ruff```<|||||>@iamarunbrahma looks like `black` wasn't installed after all! The easiest way to get it is to `cd` to the `transformers` source directory you're working on and `pip install .[quality]`. You can also just type `pip install transformers[quality]` anywhere, but this may get slightly older versions of the code quality tools (it's usually fine though).
Once the tools you need are installed, `make fixup` or `make style` should work. |
transformers | 23,019 | closed | Error in get embedding_size. | ### System Info
- transformers version: 4.28.1
- deepspeed 0.8.3
In [examples](https://github.com/huggingface/transformers/tree/main/examples)/[pytorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch)/[language-modeling](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling)/[run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py)
when I set the DeepSpeed config, I get an embedding_size of 0.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
embedding_size = model.get_input_embeddings().weight.shape[0]
if len(tokenizer) > embedding_size:
model.resize_token_embeddings(len(tokenizer))
```
At this point, the value of embedding_size obtained is 0, which may cause an error in the code. This is probably because I am using the DeepSpeed config.
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": "auto"
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
### Expected behavior
The embedding size should not be zero. | 04-27-2023 08:23:52 | 04-27-2023 08:23:52 | cc @stas00 <|||||>With ZeRO-3, outside of the forward/backward logic (where the gathering is done automatically), you need to manually gather the sharded model weights that you need.
Please see:
https://huggingface.co/docs/transformers/main/main_classes/deepspeed#gathering-parameters
And you will find several examples in our code, e.g.:
https://github.com/huggingface/transformers/blob/88399476c3892435395618ed37993176dbb0de73/src/transformers/modeling_utils.py#L1455-L1456 |
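To make that concrete, a minimal sketch of how the `run_clm.py` snippet above could gather the sharded weight first (assuming DeepSpeed ZeRO-3 is active):
```python
import deepspeed

embeddings = model.get_input_embeddings()
# Under ZeRO-3 the embedding weight is sharded across ranks, so gather it before reading its shape
with deepspeed.zero.GatheredParameters(embeddings.weight, modifier_rank=None):
    embedding_size = embeddings.weight.shape[0]

if len(tokenizer) > embedding_size:
    model.resize_token_embeddings(len(tokenizer))
```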
transformers | 23,018 | closed | Parameter at index 195 has been marked as ready twice. | ### System Info
- `transformers` version: 4.28.0
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I retrained Roberta on my own corpus with the MLM task. I set `model.gradient_checkpointing_enable()` to save memory.
```python
model = RobertaModel.from_pretrained(model_name_or_path,config=config)
model.gradient_checkpointing_enable() # Activate gradient checkpointing
model = Model(model,config,tokenizer,args)
```
My model:
```python
class Model(nn.Module):
    def __init__(self, model, config, tokenizer, args):
        super(Model, self).__init__()
        self.encoder = model
        self.config = config
        self.tokenizer = tokenizer
        self.args = args
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size)
        self.lm_head.weight = self.encoder.embeddings.word_embeddings.weight
        self.register_buffer(
            "bias", torch.tril(torch.ones((args.block_size, args.block_size), dtype=torch.uint8)).view(1, args.block_size, args.block_size)
        )

    def forward(self, mlm_ids):
        ...
```
There is an error:
```
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parame
ter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. o
r try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multipl
e reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result
in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple
times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does n
ot change over iterations.
Parameter at index 195 with name encoder.encoder.layer.11.output.LayerNorm.weight has been marked as ready twice. This means that multiple
autograd engine hooks have fired for this particular parameter during this iteration.
```
If I get rid of this line of code: `model.gradient_checkpointing_enable()`, everything works. Why?
### Expected behavior
I want to pre-train with `gradient_checkpointing`. | 04-27-2023 06:21:51 | 04-27-2023 06:21:51 | There is little we can do to help without seeing a full reproducer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Got exact same bug when gradient_checkpointing_enable()
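For anyone hitting this, a sketch of the workaround that the error message itself suggests (assuming the model is wrapped in DDP manually; `local_rank` is a placeholder for your process's GPU index):
```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

ddp_model = DDP(model, device_ids=[local_rank], output_device=local_rank)
# Reentrant checkpointing re-runs part of the forward during backward, so autograd hooks
# can fire more than once per parameter; declaring the graph static lets DDP tolerate this.
ddp_model._set_static_graph()
```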
|
transformers | 23,017 | closed | model generate with different batch size but get different results | ### System Info
I'm using MT5ForConditionalGeneration from transformers to generate summaries, but when I use the arguments below, I get different results across batch sizes when using beam search + do_sample + top_k + top_p. Using only beam search or only do_sample does not cause this phenomenon. Why?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
self.model = MT5ForConditionalGeneration.from_pretrained(model_dir)
results = conditional_generation_summarizer.batch_summarize(
documents=temp,
# max_length=MAX_LENGTH,
# bad_words=bad_words,
max_new_tokens=MAX_LENGTH,
num_beams=6,
# num_beam_groups=3,
# temperature=4.0,
# diversity_penalty=2.0,
# max_new_tokens=30,
no_repeat_ngram_size=2,
do_sample=True,
top_k=20,
# top_k=1,
top_p=0.9,
repetition_penalty=4.0,
length_penalty=20.0,
early_stopping=True,
# num_return_sequences=6
)
result = self.model.generate(
input_ids,
# attention_mask=attention_mask,
decoder_start_token_id=self.tokenizer.cls_token_id,
eos_token_id=self.tokenizer.sep_token_id,
# max_length=max_length,
# early_stopping=True,
# num_beams=num_beams,
**kwargs
)
```
<img width="526" alt="Pasted Graphic 20" src="https://user-images.githubusercontent.com/33918902/234769066-ee29b84f-90ca-46a4-942c-fef7adcdf1ba.png">
<img width="501" alt="Pasted Graphic 21" src="https://user-images.githubusercontent.com/33918902/234769088-348c19e6-d5db-42b9-b7ee-4a4c1b3690a6.png">
### Expected behavior
I think different batch sizes should not affect generation. | 04-27-2023 05:40:43 | 04-27-2023 05:40:43 | cc @gante<|||||>> cc @gante
Thanks for the reply. I found some more details about this strange phenomenon. When doing generation, the results start to differ from the second step onward. The picture below shows the beam_new_tokens results when using `batch_size = 1` and `batch_size = 2`.
<img width="1975" alt="image" src="https://user-images.githubusercontent.com/33918902/235030665-5fde95a5-f22c-436b-afcc-9e9953d95b3e.png">
Debugging line by line, I found that it has something to do with torch.multinomial; after this operator, the results begin to differ.<|||||>Hey @Alwin4Zhang
When you use `do_sample=True`, the results will be different every time you call `generate` :) Read [our blog post on text generation](https://huggingface.co/blog/how-to-generate) and [our docs](https://huggingface.co/docs/transformers/generation_strategies) for further information.<|||||>> Hey @Alwin4Zhang
>
> When you use `do_sample=True`, the results will be different every time you call `generate` :) Read [our blog post on text generation](https://huggingface.co/blog/how-to-generate) and [our docs](https://huggingface.co/docs/transformers/generation_strategies) for further information.
Thanks for the reply. It's weird that the results are the same when I only use `do_sample=True` + `top_k` + `top_p` with different batch sizes (I'm sure the manual_seed is the same), but once I add the `beam_search` arg, the strange things above happen.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@gante Facing the same issue even when `do_sample = False`
[Colab Notebook replicating this issue](https://colab.research.google.com/drive/1et5wYV25Bv8miAx9T8ijJ4trpTV2QPGh?usp=sharing)<|||||>Hey @varadhbhatnagar 👋
To be more specific, batching is not entirely innocuous on the output. The final result depends on the order of operations with FP16 (and other precisions), and batching changes the order of operations, meaning that the model outputs will see tiny fluctuations. There is nothing we can do to remedy this effect other than increasing the precision (e.g. to FP32).
These fluctuations often cause no change in the model output with `do_sample = False` -- unless the two most likely tokens have very similar probabilities. This may happen with some frequency when you're using a model with out-of-distribution inputs, such as using a code model with a non-code input (as seen in your colab) :)
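A small illustration of the effect described above (a sketch; the checkpoint is a placeholder):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Compare logits for the same sentence when it is passed alone vs. inside a batch
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float32).eval()

single = tok(["Hello world"], return_tensors="pt")
batch = tok(["Hello world", "Hello world"], return_tensors="pt")

with torch.no_grad():
    logits_single = model(**single).logits[0]
    logits_batch = model(**batch).logits[0]

# Differences, if any, should be tiny; they grow in lower precisions (fp16/bf16) and on GPU
print(torch.max(torch.abs(logits_single - logits_batch)))
```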
transformers | 23,016 | closed | When download model. Error: DefaultCPUAllocator: can't allocate memory: you tried to allocate | ### System Info
I want to download the GPT4all-J model.
according to this web link: https://huggingface.co/nomic-ai/gpt4all-j
download code:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")
```
it got this error in the end:
```
RuntimeError: [enforce fail at alloc_cpu.cpp:75] err == 0. DefaultCPUAllocator: can't allocate memory: you
tried to allocate 268435456 bytes. Error code 12 (Cannot allocate memory)
```
----
and the memory of my machine is enough for 268435456 bytes.
```
[root@VM-0-5-centos ~]# free
total used free shared buff/cache available
Mem: 19750992 386456 18700296 2024 664240 19070948
Swap: 0 0 0
```
> 19070948 KB > 268435456 bytes
---
@vanpelt @pvl @arfon @xeb
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. create the download python file;
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")
```
2. run step 1;
3. wait for it;
4. got the error.
### Expected behavior
The download succeeds. | 04-27-2023 05:15:11 | 04-27-2023 05:15:11 | What's wrong with the code above?<|||||>This means you do not have enough RAM to load the model, not disk memory. You can try adding `device_map="auto"` to load the model directly onto the GPUs if you have any, or `torch_dtype=torch.float16` to save 2x the memory (inference only).
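A minimal sketch of those two options (assuming `accelerate` is installed for the `device_map="auto"` variant):
```python
import torch
from transformers import AutoModelForCausalLM

# Half precision roughly halves the RAM needed to load the weights (inference only)
model = AutoModelForCausalLM.from_pretrained(
    "nomic-ai/gpt4all-j", revision="v1.2-jazzy", torch_dtype=torch.float16
)

# Or, with `accelerate` installed and a GPU available, place weights automatically
model = AutoModelForCausalLM.from_pretrained(
    "nomic-ai/gpt4all-j", revision="v1.2-jazzy", device_map="auto"
)
```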
transformers | 23,015 | closed | Saving prediction for --do_predict and --predict_with_generate in transormers/examples/pytorch/question-answering /run_seq2seq_qa.py | ### Feature request
The feature for saving predictions for `--do_predict` and `--predict_with_generate` was not functional for `run_seq2seq_qa.py` module.
Missing code file -> `transformers/examples/pytorch/question-answering/trainer_seq2seq_qa.py`
Image of the section of code which should handle this.

### Motivation
Some other modules like run_summarization.py have that feature.
Motivation from `transformers/examples/pytorch/summarization/run_summarization.py`

### Your contribution
Adding a code snippet would help to save the predictions for `--do_predict` and `--predict_with_generate`.
Code changes to be done here -> `transformers/examples/pytorch/question-answering/trainer_seq2seq_qa.py`
```python
# Prediction
if training_args.do_predict:
    logger.info("*** Predict ***")
    results = trainer.predict(predict_dataset, predict_examples)
    metrics = results.metrics
    max_predict_samples = (
        data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset)
    )
    metrics["predict_samples"] = min(max_predict_samples, len(predict_dataset))
    trainer.log_metrics("predict", metrics)
    trainer.save_metrics("predict", metrics)

    # Added code section for saving predictions
    if trainer.is_world_process_zero():
        if training_args.predict_with_generate:
            predictions = results.predictions
            predictions = [pred['prediction_text'] for pred in predictions]
            output_prediction_file = os.path.join(training_args.output_dir, "generated_predictions.txt")
            with open(output_prediction_file, "w") as writer:
                writer.write("\n".join(predictions))
```

| 04-26-2023 19:20:39 | 04-26-2023 19:20:39 | Do you want to open a PR with those changes?<|||||>Yeah Sure I can do that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 23,014 | closed | 🚨🚨🚨 Use default ignore index in Luke | # What does this PR do?
As discussed in #22981, the `ignore_index` for Luke should be the same as for all models in Transformers, even if it does not match the original authors' implementation.
This is breaking but needed to align all models to have the same API.
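For anyone updating training code after this change, the usual pattern is to mark ignored positions with -100 yourself (a sketch; `labels` and `pad_token_id` are placeholders for your own batch):
```python
# Positions carrying -100 are excluded from the cross-entropy loss (the library-wide default)
labels = labels.masked_fill(labels == pad_token_id, -100)
```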
Fixes #22981 | 04-26-2023 17:47:02 | 04-26-2023 17:47:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,013 | closed | Upgrading sentencepiece modeling file (for proto > 4 support). | # What does this PR do?
Upgrades the file.
Taken from `google/sentencepiece` directly.
Should prevent "Downgrade the protobuf package".
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 04-26-2023 16:18:12 | 04-26-2023 16:18:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can you push an empty commit with "[all-test]" in the message? I'd like to see if there is a problem with an older version of protobuf (CI should be on < 4).<|||||>I remember we said we should check if the protobuf version is installed somewhere, as we had some issues. Did it turn out to be ok?<|||||>It turns out the file in https://github.com/google/sentencepiece is actually not even valid protobuf 4.x ...<|||||>So this doesn't work with protobuf 4.x in the end?<|||||>Nope. it passes with protobuf 3.20 in the tests, but not with 4.x.....
I think we should wait for them to upgrade before doing it ourselves (the .proto files are available so we could generate them with a 4.x compiler... but I don't like doing that.) As much as handling 4.x and 3.20 codebases is annoying I don't want to spend much time on this tbh.
I'll close this PR, we can resurrect it later maybe.<|||||>Thanks for having tried!
transformers | 23,012 | closed | 🌐 [i18n-KO] Translated `tasks/question_answering.mdx` to Korean | <!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `tasks/question_answering.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 04-26-2023 15:35:09 | 04-26-2023 15:35:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>May you please review this PR? 😄
@sgugger, @ArthurZucker, @eunseojo |
transformers | 23,011 | closed | Remove a failing ONNX test | # What does this PR do?
Same as in #22660, but for `swin` after the recent PR #22893 | 04-26-2023 15:05:36 | 04-26-2023 15:05:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,010 | closed | Add Trainer support for ReduceLROnPlateau | # What does this PR do?
This PR solves #16503 by adding support for PyTorch's [ReduceLROnPlateau](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html) to `Trainer`.
It does so by adding a new `REDUCE_ON_PLATEAU` field to `SchedulerType` and a new `reduce_lr_on_plateau_args` parameter to `TrainingArguments` that is parsed at initialization to avoid adding 9 new individual arguments. The scheduler re-uses the metric stored in `metric_for_best_model`, and is delayed to run after evaluation since it requires metrics to be populated.
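A rough sketch of what this could look like from the user side (argument names and values here are illustrative and may differ from the final merged API; `model`, `train_ds` and `eval_ds` are placeholders):
```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",              # the scheduler can only step once eval metrics exist
    metric_for_best_model="eval_loss",        # metric monitored by ReduceLROnPlateau
    lr_scheduler_type="reduce_lr_on_plateau",
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```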
I'm not sure whether it is due to the complexity of `Trainer`, my lack of experience (this is my first PR to a large project) or the uniqueness of `ReduceLROnPlateau` compared to other schedulers, but this PR feels a bit hacky, so I welcome any feedback.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Looking at #16503, I believe this is for @sgugger. | 04-26-2023 15:01:29 | 04-26-2023 15:01:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review! I believe this should do it. There isn't much in the way of default arguments, but `ReduceLROnPlateau` is quite different from other schedulers in the first place. |
transformers | 23,009 | closed | whisper identified the wrong language | ### Feature request
When I follow the example of long-form transcription for whisper-large with Korean, the result is in English. But after fine-tuning the whisper-large model with some Korean data, the checkpoint can output Korean. I also tested other model sizes, but all the models output English.
I was confused about this. How can I get Korean output with the original model?
Thank you!
### Motivation
Test whisper in Korean.
### Your contribution
Test whisper in Korean. | 04-26-2023 14:32:51 | 04-26-2023 14:32:51 | Hi there. Questions like this are better suited on the [forums](https://discuss.huggingface.co/) or a discussion on the model page as we keep issues for bugs and feature requests only.<|||||>If you use pipeline, you should add option like
generate_kwargs = {"task":"transcribe", "language":"<|fr|>"}
ref1: https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=dPD20IkEDsbG
ref2: https://github.com/huggingface/transformers/issues/22331
However, I think the default task should be "transcribe", not "translate". I insist it's an error.<|||||>I have solved the problem.
Step 1: Upgrade transformers, unfixed.
Step 2: Add option like "generate_kwargs = {"task":"transcribe", "language":"<|fr|>"}", unfixed.
Step 3: Add a line like "pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language="ko", task="transcribe")", fixed.
However, I still don't understand why the original model output is English but the fine-tuned model output is in Korean.<|||||>Maybe you can check your fine-tuned model's config.json or generation_config.json and double-check the default task type; I think it's null or "transcribe".<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
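Pulling the working fix from this thread together, a minimal sketch (the model name and audio file are placeholders):
```python
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large")
# Force Korean transcription instead of letting the model guess the language
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(
    language="ko", task="transcribe"
)
print(pipe("sample_ko.wav"))
```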
transformers | 23,008 | closed | 🌐 [i18n-KO] Translated `multilingual.mdx` to Korean | <!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `multilingual.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 04-26-2023 13:18:22 | 04-26-2023 13:18:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,007 | closed | Update test_beam_constraints.py | # What does this PR do?
The advantage of using `assertEqual` over `==` is that it provides more informative error messages in case of a failure. For example, if you use `assertEqual(a, b)` and the assertion fails, the error message will include the values of a and b as well as the test name and line number. This makes it easier to identify and fix the problem. Similarly, the advantage of using `assert_` over `==` is that it provides a more informative error message. `assert_` is a method provided by the `unittest.TestCase` class that takes a single argument and asserts that it evaluates to True. If the argument is not True, the test fails and an error message is printed that includes the test name and line number.
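A toy illustration of the difference (not taken from the test file itself):
```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_equal(self):
        a, b = 2 + 2, 5
        # On failure unittest reports "4 != 5" plus the test name and line number,
        # whereas a bare `assert a == b` inside unittest only raises a plain AssertionError
        self.assertEqual(a, b)
```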
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [+] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [+] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [-] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [+] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [+] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-26-2023 13:06:06 | 04-26-2023 13:06:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 23,006 | closed | [`PEFT`] Add HFTracer support for PEFT | # What does this PR do?
For more context, PiPPy is a library for an out-of-the-box Pipeline Parallelism for torch models. PiPPY heavily relies on HF tracer under the hood.
Some interest has grown to support PiPPy for PEFT models: https://github.com/huggingface/peft/issues/194#issuecomment-1496767740 but it appeared that before this PR PEFT models were not supported by the HF tracer for many reasons. Therefore this PR addresses this, by doing precisely:
1- Relaxing the constraints for the model check in the tracing mechanism
2- Define a proper `__iter__` method for `HFProxy` class to properly handle `**kwargs` calls in the forward passes
A proper testing suite will be added in PEFT, as a set of slow tests, since the GH runner uses an environment that is compatible with the main branch of transformers.
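For context, a hypothetical sketch of the kind of call this aims to unblock (whether `symbolic_trace` accepts the PEFT wrapper directly is an assumption here; the model and PEFT config are placeholders, not from the PR):
```python
from transformers import AutoModelForCausalLM
from transformers.utils.fx import symbolic_trace
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
peft_model = get_peft_model(base, LoraConfig(task_type="CAUSAL_LM"))

# Previously the tracer's model-class check rejected PEFT-wrapped models
traced = symbolic_trace(peft_model, input_names=["input_ids", "attention_mask"])
```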
Thanks a lot @michaelbenayoun for digging the issue with me
cc @michaelbenayoun @sgugger @pacman100
| 04-26-2023 12:39:23 | 04-26-2023 12:39:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm not against the auto-formatting changes, but could we have them in a separate PR please? This is polluting the diff here. |
transformers | 23,005 | closed | Pycharm Debug Mode Errors | ### System Info
I am using PyCharm debug mode, and it has no problem with transformers==4.24.0, but with versions after 4.24.0 I get the errors below during debug mode due to transformers. The code works without debug mode and only has a problem during debug mode, due to the transformers library, when the version is above 4.24.0. My environment is Ubuntu 18.04, Torch 1.12.1, CUDA 11.3.
```
/home/miruware/anaconda3/envs/dreamfusion/bin/python /snap/pycharm-professional/331/plugins/python/helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 127.0.0.1 --port 32875 --file /home/miruware/ssd_4tb/diffusion/workspace/stable-dreamfusion/main.py -O --image data/hamburger_rgba.png --workspace results/test --iters 5000
/home/miruware/anaconda3/envs/dreamfusion/lib/python3.9/site-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/pydevd.py:55 in │
│ <module> │
│ │
│ 52 from _pydevd_bundle.pydevd_custom_frames import CustomFramesContainer │
│ 53 from _pydevd_bundle.pydevd_frame_utils import add_exception_to_frame, │
│ 54 from _pydevd_bundle.pydevd_kill_all_pydevd_threads import kill_all_py │
│ ❱ 55 from _pydevd_bundle.pydevd_trace_dispatch import ( │
│ 56 │ trace_dispatch as _trace_dispatch, global_cache_skips, global_cac │
│ 57 from _pydevd_frame_eval.pydevd_frame_eval_main import ( │
│ 58 │ frame_eval_func, dummy_trace_dispatch, show_frame_eval_warning) │
│ │
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/_pydevd_bundle/p │
│ ydevd_trace_dispatch.py:60 in <module> │
│ │
│ 57 elif use_cython is None: │
│ 58 │ # Regular: use fallback if not found and give message to user │
│ 59 │ try: │
│ ❱ 60 │ │ from _pydevd_bundle.pydevd_cython_wrapper import trace_dispatch │
│ 61 │ │ def trace_dispatch(py_db, frame, event, arg): │
│ 62 │ │ │ if _trace_dispatch is None: │
│ 63 │ │ │ │ return None │
│ │
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/_pydevd_bundle/p │
│ ydevd_cython_wrapper.py:4 in <module> │
│ │
│ 1 import sys │
│ 2 │
│ 3 # This version number is always available │
│ ❱ 4 from _pydevd_bundle.pydevd_additional_thread_info_regular import versio │
│ 5 │
│ 6 try: │
│ 7 │ try: │
│ │
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/_pydevd_bundle/p │
│ ydevd_additional_thread_info_regular.py:7 in <module> │
│ │
│ 4 # IFDEF CYTHON │
│ 5 # pydev_log.debug("Using Cython speedups") │
│ 6 # ELSE │
│ ❱ 7 from _pydevd_bundle.pydevd_frame import PyDBFrame │
│ 8 # ENDIF │
│ 9 │
│ 10 version = 37 │
│ │
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/_pydevd_bundle/p │
│ ydevd_frame.py:32 in <module> │
│ │
│ 29 from _pydevd_bundle.pydevd_constants import IS_PY2 │
│ 30 │
│ 31 try: │
│ ❱ 32 │ from _pydevd_bundle.pydevd_signature import send_signature_call_tr │
│ 33 except ImportError: │
│ 34 │ def send_signature_call_trace(*args, **kwargs): │
│ 35 │ │ pass │
│ │
│ /snap/pycharm-professional/331/plugins/python/helpers/pydev/_pydevd_bundle/p │
│ ydevd_signature.py:3 in <module> │
│ │
│ 1 │
│ 2 try: │
│ ❱ 3 │ import trace │
│ 4 except ImportError: │
│ 5 │ pass │
│ 6 else: │
│ │
│ /home/miruware/ssd_4tb/diffusion/workspace/stable-dreamfusion/trace.py:31 in │
│ <module> │
│ │
│ 28 unet.forward = functools.partial(unet.forward, return_dict=False) # se │
│ 29 │
│ 30 # load inputs │
│ ❱ 31 train_latent_model_input = torch.load("train_latent_model_input.pt").to │
│ 32 train_t = torch.load("train_t.pt").to(torch.float16) │
│ 33 train_text_embeddings = torch.load("train_text_embeddings.pt").to(torch │
│ 34 │
│ │
│ /home/miruware/anaconda3/envs/dreamfusion/lib/python3.9/site-packages/torch/ │
│ serialization.py:699 in load │
│ │
│ 696 │ if 'encoding' not in pickle_load_args.keys(): │
│ 697 │ │ pickle_load_args['encoding'] = 'utf-8' │
│ 698 │ │
│ ❱ 699 │ with _open_file_like(f, 'rb') as opened_file: │
│ 700 │ │ if _is_zipfile(opened_file): │
│ 701 │ │ │ # The zipfile reader is going to advance the current file │
│ 702 │ │ │ # If we want to actually tail call to torch.jit.load, we │
│ │
│ /home/miruware/anaconda3/envs/dreamfusion/lib/python3.9/site-packages/torch/ │
│ serialization.py:230 in _open_file_like │
│ │
│ 227 │
│ 228 def _open_file_like(name_or_buffer, mode): │
│ 229 │ if _is_path(name_or_buffer): │
│ ❱ 230 │ │ return _open_file(name_or_buffer, mode) │
│ 231 │ else: │
│ 232 │ │ if 'w' in mode: │
│ 233 │ │ │ return _open_buffer_writer(name_or_buffer) │
│ │
│ /home/miruware/anaconda3/envs/dreamfusion/lib/python3.9/site-packages/torch/ │
│ serialization.py:211 in __init__ │
│ │
│ 208 │
│ 209 class _open_file(_opener): │
│ 210 │ def __init__(self, name, mode): │
│ ❱ 211 │ │ super(_open_file, self).__init__(open(name, mode)) │
│ 212 │ │
│ 213 │ def __exit__(self, *args): │
│ 214 │ │ self.file_like.close() │
╰──────────────────────────────────────────────────────────────────────────────╯
FileNotFoundError: [Errno 2] No such file or directory:
'train_latent_model_input.pt'
Process finished with exit code 1
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Any example using official diffusers(0.15.1) code with transformers(4.28.1) library!
### Expected behavior
No Errors during Pycharm debug modes | 04-26-2023 10:21:02 | 04-26-2023 10:21:02 | There is nothing in that traceback that is linked to the Transformers package. It is all in stable-dreamfusion.<|||||>Thanks, I will recheck it! |
transformers | 23,004 | closed | 🚨🚨🚨 [`Pix2Struct`] Attempts to fix training issues 🚨🚨🚨 | # What does this PR do?
This PR attempts to partially fix: https://github.com/huggingface/transformers/issues/22903 for better user experience when training `Pix2Struct`.
As stated in the aformentioned issue, some users are having hard times to train pix2struct, for many reasons, some of them being:
- Force adding the special tokens when encoding text --> otherwise the model will keep repeating the generated text
- Remove label smoothing to comply with other model architectures design
- Also remove label masking to be consistent with other models. As referred in https://github.com/huggingface/transformers/issues/22903#issuecomment-1518275840 I agree it should be users responsibility to add that masking
With these fixes, the following script:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, AutoProcessor
from torch.optim import AdamW
import torch
torch.manual_seed(42)
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-base", torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained("google/pix2struct-base")
dummy_target = "The model should overfit this sentence"
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)
encoded_image = processor(images=image, return_tensors="pt")
encoded_text = processor(text=dummy_target, return_tensors='pt', max_length=20)
optimizer = AdamW(model.parameters(), lr=1e-4)
model.train()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
flattened_patches=encoded_image.flattened_patches.to(device).to(torch.bfloat16)
attention_mask=encoded_image.attention_mask.to(device)
labels=encoded_text.input_ids.to(device)
for i in range(1000):
    outputs = model(
        flattened_patches=flattened_patches,
        attention_mask=attention_mask,
        labels=labels
    )
    loss = outputs.loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if i % 50 == 0:
        model.eval()
        prediction = model.generate(
            flattened_patches=flattened_patches,
            attention_mask=attention_mask)
        print(f'step: {i} train_loss: {loss.item()} prediction: {processor.batch_decode(prediction)}')
        model.train()
```
Goes from outputting:
```bash
step: 0 train_loss: 8.259493827819824 prediction: ['<pad> <img_src=cropped-img-20180924']
step: 50 train_loss: 1.9695181846618652 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 100 train_loss: 2.071323871612549 prediction: ['<pad> <The model should overfit this sentence should overfit this sentence should overfit this sentence should']
step: 150 train_loss: 2.0366554260253906 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 200 train_loss: 1.8225889205932617 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 250 train_loss: 1.6568734645843506 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 300 train_loss: 1.6770282983779907 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence sentence should overfit this sentence']
step: 350 train_loss: 1.688515067100525 prediction: ['<pad> The model should overfit this sentence sentence overfit this sentence sentence overfit this sentence sentence over']
step: 400 train_loss: 1.6118296384811401 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 450 train_loss: 1.6204414367675781 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence should overfit this sentence should']
step: 500 train_loss: 1.59645676612854 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 550 train_loss: 1.5818239450454712 prediction: ['<pad> The model should overfit this sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence sentence']
step: 600 train_loss: 1.5775129795074463 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 650 train_loss: 1.561257243156433 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 700 train_loss: 1.5319150686264038 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 750 train_loss: 1.646193504333496 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 800 train_loss: 1.533736228942871 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 850 train_loss: 1.6203268766403198 prediction: ['<pad> The model should overfit this sentence should overfit this sentence should overfit this sentence should over']
step: 900 train_loss: 1.5132172107696533 prediction: ['<pad> The model should overfit this sentence sentence should overfit this sentence sentence should overfit this sentence']
step: 950 train_loss: 1.491452693939209 prediction: ['<pad> The model should overfit this sentence The model should overfit this sentence The model should overfit']
```
To:
```bash
step: 0 train_loss: 9.75 prediction: ['<pad> <<img_src=1> <img_src=2> <img_src=']
step: 50 train_loss: 0.125 prediction: ['<pad> <<img_src=1> <img_src=1> <img_src=']
step: 100 train_loss: 0.0089111328125 prediction: ['<pad> The model should overfit this sentence</s>']
...
```
cc @sgugger @amyeroberts @NielsRogge | 04-26-2023 10:08:07 | 04-26-2023 10:08:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>For me it's the same thing as what we discussed with Luke yesterday. It's important to have a consistent API so 100% for:
- leaving label_smoothing out of the loss computation by default (users can compute the loss themselves by not passing the logits or using the Trainer with label_smoothing)
- using -100 as ignore index and not the pad token (this is something I should have caught in the review, and we have already gone to a lot of trouble to harmonize all models to this)<|||||>Thanks for the review!
Let me know if I should also make the changes for BLIP as well as you suggested @amyeroberts <|||||>@younesbelkada Yes please! <|||||>Hmm actually this broke some slow tests, I will need to dig more into that combined with https://github.com/huggingface/transformers/issues/22903#issuecomment-1525771904 , will let you know
<|||||>False alarm, I will just properly document how to use pix2struct for conditional text generation!<|||||>Hi,
I'm having problems when trying to fine-tune the pix2struct model.
I am basing myself on the notebook from https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Pix2Struct and that is why I have generated an issue in that repository where I explain my problem in detail: https://github.com/NielsRogge/Transformers-Tutorials/issues/293
The main issue is that, after several training runs and inference with the resulting models, the inference output is always the same, regardless of the input image.
Do you know what could be happening or how to fix it?
|
transformers | 23,003 | closed | PipelineChunkIterator does not provide the correct length | ### System Info
- `transformers` version: 4.28.1
- PyTorch version (GPU?): 2.0.0+cu117 (True)
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Relatively minor in the scheme of things, but I looked into it a bit to make sure it wasn't an issue with batching.
```python
from transformers import pipeline
pipe = pipeline("token-classification")
pipe(["New York " * 600] * 2, stride=0)
```
Leads to noisy warnings from torch:
```none
/tmp/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py:646: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0x7f084a9bce50> was reported to be 2 (when accessing len(dataloader)), but 3 samples have been fetched.
warnings.warn(warn_msg)
/tmp/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py:646: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0x7f084a9bce50> was reported to be 2 (when accessing len(dataloader)), but 4 samples have been fetched.
warnings.warn(warn_msg)
/tmp/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py:646: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0x7f084a9bce50> was reported to be 2 (when accessing len(dataloader)), but 5 samples have been fetched.
warnings.warn(warn_msg)
/tmp/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py:646: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0x7f084a9bce50> was reported to be 2 (when accessing len(dataloader)), but 6 samples have been fetched.
warnings.warn(warn_msg)
```
### Expected behavior
`PipelineChunkIterator` provides the intended length, no noisy warnings. | 04-26-2023 09:51:15 | 04-26-2023 09:51:15 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @adrianeboyd I am experiencing this same issue. Did you manage to solve this? Or was it just a versioning thingy?<|||||>As far as I know this hasn't changed in any newer releases. I think that the implementation works in practice, but it triggers these warnings from pytorch that are trying to protect you from yourself in case you've written a faulty iterator. The problem is that it's returning as the length the number of texts rather than the number of (strided) subtexts that will be processed in the end. But with the multiple levels of iterators involved I wasn't sure how to fix it for all possible use cases.<|||||>Cool, thanks for the explanation. I had not spent time on it yet but will ignore and disable the warnings for now. |
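For anyone who just wants the noise gone in the meantime, one way to scope the suppression narrowly (a sketch):
```python
import warnings

# Silence only this specific DataLoader length warning until the iterator reports
# the number of strided chunks instead of the number of input texts
warnings.filterwarnings(
    "ignore",
    message="Length of IterableDataset",
    category=UserWarning,
)
```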
transformers | 23,002 | closed | added GPTNeoXForTokenClassification | # What does this PR do?
It adds the class GPTNeoXForTokenClassification, which allows using GPT NeoX models for token classification tasks. The implementation follows the one for other models (such as GPT2 and GPT Neo) closely and simply adds a linear layer after the hidden states.
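A short usage sketch of the new class (the checkpoint name is a placeholder for any GPT-NeoX model):
```python
from transformers import AutoTokenizer, GPTNeoXForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
model = GPTNeoXForTokenClassification.from_pretrained("EleutherAI/pythia-70m", num_labels=5)

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
logits = model(**inputs).logits  # shape: (batch_size, sequence_length, num_labels)
```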
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
@younesbelkada | 04-26-2023 08:32:50 | 04-26-2023 08:32:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Ready for review, @ArthurZucker and @younesbelkada 👍 |
transformers | 23,001 | open | `return_overflowing_tokens` has different behavior between slow tokenizer and fast tokenizer | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.8 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Arthur
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm studying [chapter 6 of the NLP course](https://huggingface.co/learn/nlp-course/en/chapter6/3b), and I found that `return_overflowing_tokens` has different behavior between the slow tokenizer and the fast tokenizer. Is this a feature or a bug?
```python
from transformers import DistilBertTokenizer, DistilBertTokenizerFast
model_checkpoint = "distilbert-base-cased-distilled-squad"
slow_tokenizer = DistilBertTokenizer.from_pretrained(model_checkpoint)
fast_tokenizer = DistilBertTokenizerFast.from_pretrained(model_checkpoint)
```
```python
sentence = "This sentence is not too long but we are going to split it anyway."
inputs = fast_tokenizer(
sentence, truncation=True, return_overflowing_tokens=True, max_length=6, stride=2
)
print(inputs["input_ids"])
```
Then I got the output
```
[[101, 1188, 5650, 1110, 1136, 1315, 1263, 102], [101, 1315, 1263, 1133, 1195, 1132, 1280, 102], [101, 1132, 1280, 1106, 3325, 1122, 4050, 102], [101, 1122, 4050, 119, 102]]
```
but when I replace `fast_tokenizer` with `slow_tokenizer`, I got
```
[101, 1188, 5650, 1110, 1136, 1315, 1263, 102]
```
### Expected behavior
The slow tokenizer should behave the same as the fast tokenizer. | 04-26-2023 07:42:16 | 04-26-2023 07:42:16 | cc @ArthurZucker but I think the overflowing tokens is specifically a feature of our fast tokenizers, so it's completely normal that you don't have it in the slow ones.<|||||>Hey! Thanks for reporting this. No, it seems that the `return_overflowing_tokens` logic is implemented in the base class, so it might be interesting to look at this. I'll have a look when I can; in the meantime, labelling this as a tokenizers bug
<|||||>Okay, it seems that there is a difference in design: the `tokenizers` library returns a batch of overflowing tokens, which takes into account the max length and stride. So it creates a batch from a non-batched sentence, which could (?) be what was originally intended. However, this will fail with an error if `return_tensors=True`.
On the other hand, `transformers` just cuts the input sentence and returns everything that was truncated, without creating this strange behaviour.
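Concretely, continuing the reproduction above, the slow path exposes the truncated ids under a separate key instead of creating extra batch entries (a sketch; the key names come from the base class and are worth double-checking):

```python
slow_inputs = slow_tokenizer(
    sentence, truncation=True, return_overflowing_tokens=True, max_length=6, stride=2
)
print(slow_inputs["input_ids"])           # the single truncated sequence reported above
print(slow_inputs["overflowing_tokens"])  # the ids that were cut off, not re-chunked with stride
```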
I am not really sure what is best honestly, cc @Narsil I think it's fine to just leave it as is? ( I can edit the doc to make sure that the format in slow is different from fast ?)<|||||>Yes I'm not sure we should do something about it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
>
> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
The problem still exists in the latest version |
transformers | 23,000 | closed | Possible bug in BlipForQuestionAnswering loss computation due to redundant right-shift | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.14.0
- Safetensors version: not installed
- PyTorch version (GPU?): 1.10.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In `BlipForQuestionAnswering.forward()`, if the `labels` parameter is provided, then [lines 1218-1222](https://github.com/huggingface/transformers/blob/v4.28.1/src/transformers/models/blip/modeling_blip.py#L1218) set `decoder_input_ids` to the right-shifted version of `labels`, then passes both those variables into `self.text_decoder`, which is an instance of `BlipTextLMHeadModel`:
```python
if labels is not None and decoder_input_ids is None:
# get decoder inputs from shifting lm labels to the right - this is used in training mode
decoder_input_ids = self._shift_right(labels)
# replace possible -100 values in labels by `pad_token_id`
labels = labels.masked_fill(labels == self.decoder_pad_token_id, -100)
answer_output = self.text_decoder(
input_ids=decoder_input_ids,
attention_mask=decoder_attention_mask,
encoder_hidden_states=question_embeds,
encoder_attention_mask=attention_mask,
labels=labels,
return_dict=return_dict,
reduction="mean",
)
```
However, in the code for `BlipTextLMHeadModel.forward()`, it seems like it's [already doing that shift for you](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/models/blip/modeling_blip_text.py#L888):
```python
if labels is not None:
# we are doing next-token prediction; shift prediction scores and input ids by one
shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous()
labels = labels[:, 1:].contiguous().to(shifted_prediction_scores.device)
loss_fct = CrossEntropyLoss(reduction=reduction, label_smoothing=0.1)
lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
if reduction == "none":
lm_loss = lm_loss.view(prediction_scores.size(0), -1).sum(1)
```
Am I just misinterpreting this, or is the shift done twice, i.e., the loss is for next-next token prediction??
EDIT: As another point, the official [Jupyter notebook for BLIP](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) creates an instance of and trains `BlipForConditionalGeneration`, which also uses `BlipTextLMHeadModel` as the decoder. In this case, the `input_ids` and `labels` are the same (not shifted):
```python
for idx, batch in enumerate(train_dataloader):
input_ids = batch.pop("input_ids").to(device)
pixel_values = batch.pop("pixel_values").to(device)
outputs = model(input_ids=input_ids,
pixel_values=pixel_values,
labels=input_ids)
```
Inside `BlipForConditionalGeneration.forward()`, it also doesn't shift the tokens:
```python
outputs = self.text_decoder(
input_ids=input_ids,
attention_mask=attention_mask,
encoder_hidden_states=image_embeds,
labels=labels,
return_dict=return_dict,
reduction="mean",
)
```
EDIT 2: Seems like the original BLIP code similarly only shifts once. In `BLIP_VQA.forward()`, located [here](https://github.com/salesforce/BLIP/blob/main/models/blip_vqa.py#L51), there is no shift:
```python
answer = self.tokenizer(answer, padding='longest', return_tensors="pt").to(image.device)
answer.input_ids[:,0] = self.tokenizer.bos_token_id
answer_targets = answer.input_ids.masked_fill(answer.input_ids == self.tokenizer.pad_token_id, -100)
question_output = self.text_encoder(question.input_ids,
attention_mask = question.attention_mask,
encoder_hidden_states = image_embeds,
encoder_attention_mask = image_atts,
return_dict = True)
question_states = []
question_atts = []
for b, n in enumerate(n):
question_states += [question_output.last_hidden_state[b]]*n
question_atts += [question.attention_mask[b]]*n
question_states = torch.stack(question_states,0)
question_atts = torch.stack(question_atts,0)
answer_output = self.text_decoder(answer.input_ids,
attention_mask = answer.attention_mask,
encoder_hidden_states = question_states,
encoder_attention_mask = question_atts,
labels = answer_targets,
return_dict = True,
reduction = 'none',
)
```
and there is a shift in `self.text_decoder.forward()`, as seen [here](https://github.com/salesforce/BLIP/blob/main/models/med.py#L904):
```python
prediction_scores = self.cls(sequence_output)
if return_logits:
return prediction_scores[:, :-1, :].contiguous()
lm_loss = None
if labels is not None:
# we are doing next-token prediction; shift prediction scores and input ids by one
shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous()
labels = labels[:, 1:].contiguous()
loss_fct = CrossEntropyLoss(reduction=reduction, label_smoothing=0.1)
lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
if reduction=='none':
lm_loss = lm_loss.view(prediction_scores.size(0),-1).sum(1)
```
Only the `text_decoder` itself shifts the text (again, in the forward function).
### Expected behavior
N/A | 04-26-2023 07:29:13 | 04-26-2023 07:29:13 | cc @younesbelkada I think the thread above is correct, either:
* `input_ids` and `labels` are the same but then one needs to shift the `logits` when computing the loss. Examples of models that do this are GPT-2, GPT-J, etc
* you shift the `input_ids` (actually `decoder_input_ids` in case of a decoder) before feeding them to the Transformer and then you don't need to shift the `logits`. Examples of models that do this are T5, BART, etc.
This can probably be confirmed by fine-tuning `BlipForQuestionAnswering` on 10 example image, question and answer triplets and see whether the model is able to overfit them.<|||||>@NielsRogge @younesbelkada I just tried out your suggestion, and sure enough, it could not overfit to them. All answers are of the form ([CLS], a, [some noun], ., [SEP]). With the current implementation, the first shift changes that to ("", [cls], a, [some noun], .). Then, the second shift changes the pairing to inputs = ("", [cls], a, [some noun]) and labels = (a, [some noun], ., [sep]), e.g., next-next token prediction.
The outputs there are:
```
['', 'a', '.', 'cat', '[SEP]']
['', 'a', '.', 'dog', '[SEP]']
['', 'a', '.', 'wolf', '[SEP]']
['', 'a', '.', 'bear', '[SEP]']
```
Due to it learning next-next token prediction, "" is always followed by 'a', which is followed by '.' It never learns what should come after '.' but it just outputs the noun, which is then two tokens away from [SEP].
However, I found a fix. Instead of shifting the decoder's `input_ids` but not the `labels`, shift _both_, but do NOT get rid of the final character (since that's [SEP], which it should learn as the final character). Here's my code:
```python
# Now, train without redundant shift!
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base").to(device)
processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base")
optimizer = transformers.AdamW(model.parameters(), lr=3e-5)
return_dict = model.config.use_return_dict
output_attentions = model.config.output_attentions
output_hidden_states = model.config.output_hidden_states
for i in range(40):
total_loss = 0
for inputs in training_points:
# Copy-pasted code from BlipForQuestionAnswering.forward()
vision_outputs = model.vision_model(
pixel_values=inputs.pixel_values,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
image_embeds = vision_outputs[0]
image_attention_mask = torch.ones(image_embeds.size()[:-1], dtype=torch.long)
question_embeds = model.text_encoder(
input_ids=inputs.input_ids,
encoder_hidden_states=image_embeds,
encoder_attention_mask=image_attention_mask,
return_dict=return_dict,
)
question_embeds = question_embeds[0] if not return_dict else question_embeds.last_hidden_state
# Shift both the labels AND the input_ids. However, do not delete final [SEP] character.
labels = inputs.labels.new_zeros(inputs.labels.shape[0], inputs.labels.shape[1] + 1)
labels[..., 1:] = inputs.labels
labels[..., 0] = model.decoder_start_token_id
output = model.text_decoder(
input_ids=labels,
encoder_hidden_states=question_embeds,
labels=labels,
return_dict=return_dict,
reduction="mean",
)
loss = output.loss.mean() if return_dict else answer_output[0].mean()
total_loss += loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
Then, when decoding:
```python
for inputs in training_points:
outputs = model.generate(input_ids = inputs.input_ids,
pixel_values = inputs.pixel_values)
print(processor.batch_decode(outputs[0]))
```
(where the `input_ids` are tokens for "What animal is this?")
The end result:
```
['', '[CLS]', 'a', 'cat', '.', '[SEP]']
['', '[CLS]', 'a', 'dog', '.', '[SEP]']
['', '[CLS]', 'a', 'wolf', '.', '[SEP]']
['', '[CLS]', 'a', 'bear', '.', '[SEP]']
```
(Sorry I can't link directly to my code -- if it's really necessary/convenient, let me know and I can convert it into a colab nb)<|||||>Hi @verityw
Thanks for flagging this! I made an attempt to fix your issue in https://github.com/huggingface/transformers/pull/23153
I am not sure this fixes 100% of your problem, as I don't have access to your code. Can you try to uninstall `transformers`, install `transformers` from that branch, and let us know if you still face the issue?
```bash
pip install git+https://github.com/younesbelkada/transformers.git@blip-qa-loss-fix
``` |
transformers | 22,999 | closed | Help on Firewalled installation | Hello,
I would like to install Hugging Face libraries in a firewalled environment. I have a Git LFS repository where all my models are stored, and I would like to use them with the transformers library's `.from_pretrained()` feature. I referred to this (https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) to set up the Git repo. Could someone help me use the models stored in my Git repository instead of the Hugging Face Hub?
Thanks | 04-26-2023 06:08:49 | 04-26-2023 06:08:49 | That is not possible. You can only use models from the Hub or locally downloaded.<|||||>Thanks @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,998 | closed | Fix typo in mega.mdx | # What does this PR do?
Fixes a typo in the Mega documentation.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @sgugger, @stevhliu and @MKhalusova | 04-25-2023 21:28:02 | 04-25-2023 21:28:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,997 | closed | Add Missing tokenization test [electra] | # What does this PR do?
Added tokenization test for electra | 04-25-2023 21:24:50 | 04-25-2023 21:24:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for your PR! @ArthurZucker could you review?<|||||>@sgugger Any updates, is there something wrong?<|||||>@sgugger Done!!<|||||>Thanks for contributing 🔥
|
transformers | 22,996 | closed | Make `_test_xla_generate` less flaky | # What does this PR do?
Make `_test_xla_generate` less flaky by relaxing the condition:
- if number of examples < 10: be strict, no difference is allowed
- otherwise, only fail the test if more than 10% of the examples give different outputs between the XLA and non-XLA versions.
Since this test is slow (generation), better not to decorate with `is_flaky`.
For `TFPegasusModelTest::test_xla_generate_slow`: there were more than 10 failures in 70 runs. With this PR, 0 failures show.
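Illustratively, the relaxed comparison boils down to something like this (a sketch, not the exact test code):

```python
def check_xla_outputs(eager_outputs, xla_outputs):
    # Sketch of the relaxed condition described above.
    num_examples = len(eager_outputs)
    num_diff = sum(e != x for e, x in zip(eager_outputs, xla_outputs))
    if num_examples < 10:
        assert num_diff == 0                   # strict for small sets
    else:
        assert num_diff / num_examples <= 0.1  # tolerate up to 10% mismatches
```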
| 04-25-2023 18:59:07 | 04-25-2023 18:59:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,995 | closed | Added tokenizer kwargs for fill mask pipeline | Added tokenizer kwargs for the fill mask pipeline, which makes it possible to truncate/pad/specify the max length etc. for the tokenizer. After this edit the pipeline can be used as follows:
```python
from transformers import pipeline

fill_mask_pipeline = pipeline(
    'fill-mask',
    model=model,
    tokenizer=tokenizer,
    device=0
)

tokenizer_kwargs = {'truncation': True, 'max_length': 2048}
output = fill_mask_pipeline("Text to predict <mask>", **tokenizer_kwargs)
```
| 04-25-2023 18:18:31 | 04-25-2023 18:18:31 | cc @Narsil <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22995). All of your documentation changes will be reflected on that endpoint.<|||||>Can we refactor in order to do :
```python
output = fill_mask_pipeline("Text to predict <mask>", tokenizer_kwargs=tokenizer_kwargs)
```
Instead? Accepting kwargs directly is very hard to maintain down the line because of clashing arguments (for instance `max_length` is one that pops up often enough).
We can also whitelist some parameters like `truncation` or `padding` to make them more convenient, but enabling all the kwargs directly is really not something we want, I think.
Thanks for the contribution though, it's a step in the good direction ! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,994 | open | RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation | ### System Info
transformers 4.28.1
torch 2.0.0
torchaudio 2.0.0
torchvision 0.15.0
huggingface-hub 0.13.4
trl 0.4.2.dev0
### Who can help?
Probably people from accelerate, trainer, and text:
@pacman100, @sgugger, @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Install the TRL package from (https://github.com/lvwerra/trl)
2. Clone the package and go to `trl/examples/summarization/scripts`
3. Setup `accelerate config` like this
```
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_transformer_layer_cls_to_wrap: GPT2Block
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
4. call `accelerate launch reward_summarization.py`
This results in the following error:
```
/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/autograd/__init__.py:200: UserWarning: Error detected in WhereBackward0. Traceback of forward call that caused the error:
File "reward_summarization.py", line 203, in <module>
trainer.train(script_args.resume_from_checkpoint)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 2699, in training_step
loss = self.compute_loss(model, inputs)
File "reward_summarization.py", line 185, in compute_loss
rewards_j = model(input_ids=inputs["input_ids_j"], attention_mask=inputs["attention_mask_j"])[0]
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1156, in forward
output = self._run_ddp_forward(*inputs, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1110, in _run_ddp_forward
return module_to_run(*inputs[0], **kwargs[0]) # type: ignore[index]
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1420, in forward
transformer_outputs = self.transformer(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 899, in forward
outputs = block(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 389, in forward
attn_outputs = self.attn(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 330, in forward
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 201, in _attn
attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value)
(Triggered internally at /opt/conda/conda-bld/pytorch_1678402379298/work/torch/csrc/autograd/python_anomaly_mode.cpp:114.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
File "reward_summarization.py", line 203, in <module>
trainer.train(script_args.resume_from_checkpoint)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/transformers/trainer.py", line 2717, in training_step
loss.backward()
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/home/ubuntu/miniconda3/envs/trl/lib/python3.8/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [CUDABoolType [1, 1, 385, 385]] is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
### Expected behavior
I expect it should run fine, but it ends in that error. Although it is not native Hugging Face code, it seems that the issue is in the GPT-2 code used by the trainer. | 04-25-2023 17:53:37 | 04-25-2023 17:53:37 | I cannot transfer the issue to the `trl` repo but it should be opened there since the bug is in their example.<|||||>@sgugger I have already posted it there, and it seems that the issue is not on the TRL side. <|||||>`torch.autograd.set_detect_anomaly(True)` reports that the root of the issue might be in line 201 in `site-packages/transformers/models/gpt2/modeling_gpt2.py`
<img width="941" alt="image" src="https://user-images.githubusercontent.com/20797260/234368588-cdd90db1-7ddd-4087-a7c5-296fd36d6019.png">
<|||||>Turned out that modifying line 201 as below solves the issue.
`attn_weights = torch.where(causal_mask.clone(), attn_weights.to(attn_weights.dtype).clone(), mask_value)`
Remember that it was:
`attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value)`
@sgugger Do you know if it is a safe modification?
<|||||>This will break the flow of the gradients from the attention weights, so no, it's not a good fix.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Any update on this? I am having the same issue<|||||>I'm experiencing same issue with `WhisperModel`<|||||>Actually according to `torch`, the `clone()` operation is not breaking the flow of the gradient. see [here](https://pytorch.org/docs/stable/generated/torch.clone.html):
> This function is differentiable, so gradients will flow back from the result of this operation to input. To create a tensor without an autograd relationship to input see [detach()](https://pytorch.org/docs/stable/generated/torch.Tensor.detach.html#torch.Tensor.detach).
Apparently, previous torch versions did not check for these, but the gradients were wrong (the source is a lost Stack Overflow thread); there are at least 5 more issues linked to this one: #25130, #22225, #15677, #14179, #24996, #23087. Now whether this was fixed in the latest versions of torch or not is also a question, but all these issues use FSDP.
Every inplace operation seems to be causing this. But we have a lot of these 😓 cc @pacman100 wondering what you would recommend? Should we make everything compatible by removing inplace operations? Seems kind of impracticable.
This wrapper: https://github.com/pytorch/pytorch/blob/main/torch/autograd/graph.py#L508 seems to add `clone()` wherever it's needed. Might be something to do there? A rough sketch of that idea is below.
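If that is the `allow_mutation_on_saved_tensors` context manager, a minimal, untested sketch of using it around a training step would be:

```python
import torch

def training_step(model, batch):
    # Untested sketch: both forward and backward run inside the context manager,
    # which clones tensors saved for backward when they are mutated in place.
    with torch.autograd.graph.allow_mutation_on_saved_tensors():
        loss = model(**batch).loss
        loss.backward()
    return loss.detach()
```

Whether this composes cleanly with FSDP wrapping is exactly the open question.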
We should also PIN the issue to redirect everyone that has the FSDP + inplace operation issue. <|||||>Also, removing all inplace operations might make the memory used a bit higher, so I would love if there was an alternative solution for FSDP.
transformers | 22,993 | closed | Using data collator in `Pipeline` | Hello,
I am in the process of moving a bunch of pre- and post-processing logic to use the [Pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines). In my original code I would use a data collator in my `Trainer` constructor to take care of padding inputs among other things. The `Trainer` then takes care of collating data for both training and evaluation.
I could move the logic within the collator into the processing of the pipeline, but I want to keep the code as similar as possible when using the `Trainer` for training specifically, and when I use the pipeline during inference or evaluation.
What could be the best way to go about this? In the more general case I could just scrap the pipeline and opt for a torch dataloader and run evaluation with that, but I am interested in keeping the pipeline around as I am inheriting some logic for aggregation around. I also think the ability to encapsulate pre- and post-processing in the pipeline is useful. | 04-25-2023 17:39:02 | 04-25-2023 17:39:02 | I'm not too sure where the question is here. Each `Pipeline` has the pre/post-processing logic they need implemented in their `preprocess` and `postprocess` methods. |
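For reference, a minimal sketch of pushing collator-style padding into a pipeline subclass (using `TextClassificationPipeline` purely as an example; the exact settings are assumptions meant to mirror whatever the collator did at training time):

```python
from transformers import TextClassificationPipeline


class PaddedTextClassificationPipeline(TextClassificationPipeline):
    # Hypothetical subclass: apply the same padding/truncation the data collator
    # used during training inside the pipeline's preprocess step.
    def preprocess(self, inputs, **tokenizer_kwargs):
        tokenizer_kwargs.setdefault("padding", "max_length")
        tokenizer_kwargs.setdefault("truncation", True)
        tokenizer_kwargs.setdefault("max_length", 512)
        return self.tokenizer(inputs, return_tensors=self.framework, **tokenizer_kwargs)
```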
transformers | 22,992 | closed | Weird behavior for initial tokens in BERT Base Cased | ### System Info
transformers version: 4.27.4
python version: 3.8.8
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm running a simple MLM task using BERT Base Cased. I'm noticing weird behavior when decoding the first token (after the CLS token) in the output. Here's an example:
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch
model = AutoModelForMaskedLM.from_pretrained('bert-base-cased')
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
inputs = tokenizer(['The laws have done [MASK] harm.'], return_tensors='pt')
with torch.no_grad():
outputs = model(**inputs)
tokenizer.batch_decode(torch.argmax(outputs.logits, dim=-1))
```
This produces the output: `.. laws have done no harm..`. I know the first and last dots correspond to predictions for the CLS and EOS tokens, so they should be ignored, but the second dot is where `The` should be. This happens with a variety of words in many sentences, but it doesn't always happen for the same words. It does seem to be paying attention to this initial word even when it is not produced, since the results differ depending on the initial word, even if it's not decoded from the output. But it looks weird. Is this normal behavior?
When I use the fill-mask pipeline, I get a different result, but I'm assuming that the pipeline just internally uses string replacement for the mask token rather than actually decoding the full output.
```python
from transformers import pipeline
pipe = pipeline('fill-mask', 'bert-base-cased')
pipe('The laws have done [MASK] harm.')[0]['sequence']
```
Produces `The laws have done no harm.`, as expected.
### Expected behavior
I'd expect that given tokens would be retained as is, for the most part. Sentence initial `The` and `I` seem to cause this problem a lot, which is odd, given I'd expect those to be well-attested in the training data. | 04-25-2023 17:00:43 | 04-25-2023 17:00:43 | Hey! Thanks for reporting, I believe that this is somewhat expected, as the `mask-fill` pipeline does not exactly use just `argmax`. There is a bit more process involved in how to obtain the correct output. This is normal! 🤗 <|||||>I understand why the `fill-mask` output differs from the output when using `argmax` now, but is it still expected that it predicts `.` instead of `The` when using `argmax`?<|||||>No I believe that the most important is that it correctly predicts the masked word, which the loss will be computed on. Other tokens are ignored<|||||>Got it, thanks! |
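For reference, a sketch of reading out only the masked position instead of arg-maxing every position (reusing `inputs`, `outputs` and `tokenizer` from the snippet above):

```python
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)
predicted_ids = outputs.logits[mask_positions].argmax(dim=-1)
print(tokenizer.convert_ids_to_tokens(predicted_ids.tolist()))  # should print something like ['no']
```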
transformers | 22,991 | closed | 🌐 [i18n-KO] Translated `model_sharing.mdx` to Korean | <!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `model_sharing.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 04-25-2023 14:45:13 | 04-25-2023 14:45:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> In general, additional proofreading is necessary. It is recommended to carefully review the machine-translations and revise any sections that appear unclear or inaccurate.
Thank you for the advice. I will proceed with more detailed proofreading in machine translation!<|||||>May you please review this PR? 🙂
@sgugger, @ArthurZucker, @eunseojo |
transformers | 22,990 | closed | Fix None value when adding info to auto_map | # What does this PR do?
This should fix the issue encountered in #22983: before testing if `--` is in a value of the auto map, we need to make sure it's not `None`.
Fixes #22983 | 04-25-2023 13:37:13 | 04-25-2023 13:37:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>There are some, it's just that the case where there is only a custom fast tokenizer is not tested.
transformers | 22,989 | closed | fix bug auto loading llamatokenizer | # What does this PR do?
The Hugging Face decapoda-research/llama-7b-hf config sets the tokenizer class name to LLaMATokenizer, while in transformers it is LlamaTokenizer.
Unify the name as LLaMATokenizer so that we can use AutoTokenizer to load the LLaMA tokenizer.
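For anyone hitting this in the meantime, a user-side workaround (a sketch, not part of this PR) is to bypass the auto mapping and load the concrete tokenizer class directly:

```python
from transformers import LlamaTokenizer

# Loading the concrete class skips the `tokenizer_class` lookup that trips up AutoTokenizer.
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
```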
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 04-25-2023 13:33:26 | 04-25-2023 13:33:26 | Lol no, but nice try. Maybe decapoda-research/llama-7b-hf should merge one of the multiple PRs they received that fixes the tokenizer on their side.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22989). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,988 | closed | [`DocTest`] Fix correct checkpoint | # What does this PR do?
Related failing test: https://github.com/huggingface/transformers/actions/runs/4793034296/jobs/8525118203
Sets the correct (and lighter) checkpoint name in the docstring
cc @amyeroberts @ArthurZucker | 04-25-2023 12:47:27 | 04-25-2023 12:47:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,987 | closed | [Doctests] Refactor doctests + add CI | # What does this PR do?
Wow! The fix is crazy simple but I broke my head finding the correct way to include this in the cleanest way.
We are keeping `pytest --doctest-modules` 🥳 Basically it just came down to:
- change the `doctest.DocTestParser()`'s default regex compilation
- rewrite `_pytest.doctest` utilities that are private to use this parser!
TODOS:
- [x] change parser
- [x] add CI Job
- [x] Filter jobs that can't run on CUDA! This is pretty important
- [ ] update doc
- [x] find a way to default `--doctest-glob`, but that's a small nit
- [ ] add test, to test the parser mostly
- [ ] add a check that a file is doctested if it has some docstring | 04-25-2023 11:40:57 | 04-25-2023 11:40:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Local tests run, I now have this strange error:
```python
_____ ERROR collecting src/transformers/models/whisper/modeling_whisper.py _____
import file mismatch:
imported module 'transformers.models.whisper.modeling_whisper' has this __file__ attribute:
/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/models/whisper/modeling_whisper.py
which is not the same as the test file we want to collect:
/home/circleci/transformers/src/transformers/models/whisper/modeling_whisper.py
HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules
```<|||||>@ArthurZucker Regarding the error you mentioned in the above comment, could you provide the command you used to launch?
Also, for this PR to be merged, two things we should check are:
- for a single modeling file, how long it will take to run the doctest against it on CircleCI, and if it will fit in the available memory.
- (we should probably run the doctest against a few existing models)
- There should NOT have multiple modeling files being included in `test_to_run` for doctest.
- This PR currently checks if a file is in `utils/documentation_tests.txt`, but that file doesn't contain all existing modeling files if I remember correctly.<|||||>Regarding
> There should NOT have multiple modeling files being included in test_to_run for doctest.
This PR currently checks if a file is in utils/documentation_tests.txt, but that file doesn't contain all existing modeling files if I remember correctly.
Actually the CI checks doc for all files in the diff that end in .py and .mdx. This is prone to changes! Fully open to recommendations.
For slow test, I skip all codeblocks that includ "cuda" in it, we can refine the filter.
<|||||>CUDA tests are properly skipped! :
```python
(11 durations < 0.005s hidden. Use -vv to show these durations.)
=================================================================================================================================================================== short test summary info ====================================================================================================================================================================
PASSED docs/source/en/testing.mdx::testing.mdx
PASSED docs/source/en/testing.mdx::testing.mdx
PASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForAudioClassification.forward
PASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForAudioClassification.forward
PASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForAudioClassification.forward
PASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForConditionalGeneration.forward
PASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForConditionalGeneration.forward
PASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperForConditionalGeneration.forward
PASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperModel.forward
PASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperModel.forward
PASSED src/transformers/models/whisper/modeling_whisper.py::transformers.models.whisper.modeling_whisper.WhisperModel.forward
PASSED docs/source/en/model_doc/wav2vec2.mdx::wav2vec2.mdx
PASSED docs/source/en/model_doc/wav2vec2.mdx::wav2vec2.mdx
SKIPPED [1] <doctest testing.mdx[0]>:1: Codeblock line xxx in xxxx has cuda involved. Skipping
SKIPPED [1] <doctest wav2vec2.mdx[0]>:1: Codeblock line xxx in xxxx has cuda involved. Skipping
Results (19.65s):
3 passed
2 skipped
(py39) arthur_huggingface_co@arthur-gpu-2:~/transformers$
```<|||||>Warning will be imporved<|||||>Looked this PR and played a bit with it: so far so good 👍
One thing I found:
```
SKIP_CUDA_DOCTEST=1 pytest -v --doctest-modules --doctest-glob="*.mdx" docs/source/en/model_doc/longt5.mdx
```
The doctest is running while I assume it will be skipped as it has `cuda` thing.<|||||>What should be detailed is that only the codeblocks (and not the entire file) should be skipped. This might be why longt5 is not skipped!
I’ll be off for a while, I leave this in your hands! 🤗🤗<|||||>For info: I will take over this PR to try to merge it earlier.<|||||>Convert to draft for now, as more changes to deal with `cuda` is required.<|||||>cc @amy @sgugger
I think the PR is ready (You can see the changes I made [here](https://github.com/huggingface/transformers/pull/22987/files/a5a337d856cdd20c8acac8de04823ae24469f6e1..531fc2673e081655a1fff70a25b9e60c30c0dad8)), but a few points need to be considered before I merge it.
- no model/dataset being cached as we have done for our daily CI (with our own GCP firestore): so in each PR CI run, they will always be re-downloaded.
- timeout will give a status code `124` and the job will be failed (`red`). I am not sure this is really what we want to see on PRs.
- ~probably there is some hacky way to avoid this. Not sure.~
- We haven't checked all current files will pass the doctesting. For example, only a subset of modeling files and doc files are tested in our daiily doctest run.
- I assume we don't want to see surprising failed doctest on PR CI. Any suggestion? Like a list of (temporarily) ignored files, or try to run all the files and fix all the failure (..?) before this PR being merged?<|||||>All very good questions!
> no model/dataset being cached as we have done for our daily CI (with our own GCP firestore): so in each PR CI run, they will always be re-downloaded.
That's okay since the test should only be triggered when someone modifies a guide/docstring. This is not in every PR.
> timeout will give a status code 124 and the job will be failed (red). I am not sure this is really what we want to see on PRs.
Indeed, if you can filter that one out to be green instead, it would be better
> We haven't checked all current files will pass the doctesting.
The files that are tested on the CI should be present in the list of tests for doctests (that list will be remvoed one day when we get to 100% coverage but we're not there yet).<|||||>@sgugger Let me know if [the changes](https://github.com/huggingface/transformers/pull/22987/files/7978f9ac0fa915a1abe074be68d660f44b8c1349..844b46df871c2db717083fe9be9c44b0c78e9866) to address the above comments if fine.
One example run is [here](https://app.circleci.com/pipelines/github/huggingface/transformers/63781/workflows/7430b1eb-3613-41b1-b548-a54366b02dae/jobs/787437)
<img width="290" alt="Screenshot 2023-05-05 120741" src="https://user-images.githubusercontent.com/2521628/236430742-c5104bb4-49af-4bb9-a3db-f32e484af914.png">
|
transformers | 22,986 | closed | Failed to import due to invalid escape sequence '\d' (modeling_utils.py, line 1825) | ### System Info
- `transformers` version: 4.28.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I did not manage to recreate the bug consistently; however, I had it appear on both Windows and Linux and on Python 3.8 and Python 3.10.
I don't understand why it sometimes works and sometimes throws a syntax error. From my understanding of Python, this should fail in all Python environments with all transformers versions above `4.27.0`.
In a Python environment, run
`from transformers import AlbertModel` # or any other model
will **sometimes** lead to an error:
```
module = <module 'transformers' from '.../.venv/lib/python3.10/site-packages/transformers/__init__.py'>
fromlist = ('AlbertModel', 'AlbertTokenizer', 'BertModel', 'BertTokenizer', 'CamembertModel', 'CamembertTokenizer', ...)
import_ = <built-in function __import__>
> ???
<frozen importlib._bootstrap>:1075:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <module 'transformers' from '.../.venv/lib/python3.10/site-packages/transformers/__init__.py'>
name = 'AlbertModel'
def __getattr__(self, name: str) -> Any:
if name in self._objects:
return self._objects[name]
if name in self._modules:
value = self._get_module(name)
elif name in self._class_to_module.keys():
module = self._get_module(self._class_to_module[name])
> value = getattr(module, name)
.venv/lib/python3.10/site-packages/transformers/utils/import_utils.py:1137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <module 'transformers.models.albert' from '.../.venv/lib/python3.10/site-packages/transformers/models/albert/__init__.py'>
name = 'AlbertModel'
def __getattr__(self, name: str) -> Any:
if name in self._objects:
return self._objects[name]
if name in self._modules:
value = self._get_module(name)
elif name in self._class_to_module.keys():
> module = self._get_module(self._class_to_module[name])
.venv/lib/python3.10/site-packages/transformers/utils/import_utils.py:1136:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <module 'transformers.models.albert' from '.../.venv/lib/python3.10/site-packages/transformers/models/albert/__init__.py'>
module_name = 'modeling_albert'
def _get_module(self, module_name: str):
try:
return importlib.import_module("." + module_name, self.__name__)
except Exception as e:
> raise RuntimeError(
f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
f" traceback):\n{e}"
) from e
E RuntimeError: Failed to import transformers.models.albert.modeling_albert because of the following error (look up to see its traceback):
E invalid escape sequence '\d' (modeling_utils.py, line 1825)
.venv/lib/python3.10/site-packages/transformers/utils/import_utils.py:1148: RuntimeError
```
The error occurs because https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L1832 uses an invalid escape sequence and is therefore not valid Python syntax.
The fix would be to use `reg = re.compile(r"(.*?)-\d{5}-of-\d{5}")` instead, i.e. a raw string, so the backslashes are not interpreted as escape sequences.
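As for why it only fails sometimes: on Python 3.8-3.11 an invalid escape sequence in a regular string literal only emits a `DeprecationWarning` at byte-compile time, so it becomes an error only when warnings are escalated to errors (e.g. a pytest `filterwarnings = error` setting) and there is no cached `.pyc`, forcing a recompile. This is my reading of the traceback, so treat it as an assumption; a small sketch of the escalation:

```python
import warnings

warnings.simplefilter("error", DeprecationWarning)
# The r-prefix keeps the backslash in the compiled source, which then contains a
# plain string literal with the invalid '\d' escape.
compile(r"reg = '(.*?)-\d{5}-of-\d{5}'", "demo.py", "exec")
# SyntaxError: invalid escape sequence '\d'
```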
### Expected behavior
I would expect to be able to import transformer models without it sometimes throwing a RuntimeError. | 04-25-2023 11:08:43 | 04-25-2023 11:08:43 | Solved in https://github.com/huggingface/transformers/pull/22936
transformers | 22,985 | closed | [i18n-<languageCode>] Translating docs to <languageName> | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go 🔥
-->
| 04-25-2023 10:46:15 | 04-25-2023 10:46:15 | |
transformers | 22,984 | closed | [`SAM`] Add sam doc | # What does this PR do?
As suggested by @LysandreJik offline, this PR adds a nice docstring for `SamModel` showing users how to leverage Auto API to run SAM
| 04-25-2023 10:24:08 | 04-25-2023 10:24:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Failing tests seems to be unrelated, merging! |
transformers | 22,983 | closed | Using `auto_map` in `tokenizer_config.json` gives `TypeError: argument of type 'NoneType' is not iterable` | ### System Info
certifi==2022.12.7
charset-normalizer==3.1.0
cmake==3.26.3
filelock==3.12.0
fsspec==2023.4.0
huggingface-hub==0.14.0
idna==3.4
Jinja2==3.1.2
lit==16.0.2
MarkupSafe==2.1.2
mpmath==1.3.0
networkx==3.1
numpy==1.24.3
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
packaging==23.1
PyYAML==6.0
regex==2023.3.23
requests==2.28.2
sentencepiece==0.1.98
sympy==1.11.1
tokenizers==0.13.3
torch==2.0.0
tqdm==4.65.0
-e git+https://github.com/huggingface/transformers.git@073baf7f2289dbbf99e29f375e40c3e270ba6e85#egg=transformers
triton==2.0.0
typing-extensions==4.5.0
urllib3==1.26.15
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running the following...
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-10b-chinese", trust_remote_code=True)
```
Gave the error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jovyan/transformers/src/transformers/models/auto/tokenization_auto.py", line 692, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/jovyan/transformers/src/transformers/tokenization_utils_base.py", line 1812, in from_pretrained
return cls._from_pretrained(
File "/home/jovyan/transformers/src/transformers/tokenization_utils_base.py", line 1878, in _from_pretrained
init_kwargs["auto_map"] = add_model_info_to_auto_map(
File "/home/jovyan/transformers/src/transformers/utils/generic.py", line 563, in add_model_info_to_auto_map
auto_map[key] = [f"{repo_id}--{v}" if "--" not in v else v for v in value]
File "/home/jovyan/transformers/src/transformers/utils/generic.py", line 563, in <listcomp>
auto_map[key] = [f"{repo_id}--{v}" if "--" not in v else v for v in value]
TypeError: argument of type 'NoneType' is not iterable
```
### Expected behavior
Load tokenizer without errors.
## Analysis
- I suspect it has to do with `auto_map` in `tokenizer_config.json` [here](https://huggingface.co/THUDM/glm-10b-chinese/blob/main/tokenizer_config.json)
- The tokenizer loads fine with transformers version 4.27.0 | 04-25-2023 08:20:46 | 04-25-2023 08:20:46 | cc @sgugger seems like #22814 added
```python
if "auto_map" in init_kwargs and not _is_local:
# For backward compatibility with odl format.
if isinstance(init_kwargs["auto_map"], (tuple, list)):
init_kwargs["auto_map"] = {"AutoTokenizer": init_kwargs["auto_map"]}
init_kwargs["auto_map"] = add_model_info_to_auto_map(
init_kwargs["auto_map"], pretrained_model_name_or_path
)
```
I can take this on but you are more familiar with the changes
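A sketch of the kind of `None` guard that seems to be needed in `add_model_info_to_auto_map` (the actual merged fix may differ):

```python
def add_model_info_to_auto_map(auto_map, repo_id):
    # Sketch: same structure as the util in the traceback, but skipping None entries.
    for key, value in auto_map.items():
        if isinstance(value, (tuple, list)):
            auto_map[key] = [
                f"{repo_id}--{v}" if v is not None and "--" not in v else v for v in value
            ]
        elif value is not None and "--" not in value:
            auto_map[key] = f"{repo_id}--{value}"
    return auto_map
```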
<|||||>Thanks for flagging! The PR linked above should fix this. |
transformers | 22,982 | closed | fixed small typo in code example | # What does this PR do?
Fixes a small typo in a code example in the single GPU inference docs. Renamed the `text` variable to `prompt` as `prompt` is used in the line below as a parameter for the tokenizer.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @sgugger , @stevhliu and @MKhalusova
(sorry if tagging is too much for just this tiny repo) | 04-25-2023 08:15:38 | 04-25-2023 08:15:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,981 | closed | Cannot train language-modeling using Luke model | ### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.14.0
- Safetensors version: not installed
- PyTorch version (GPU?): 1.12.0a0+bd13bc6 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I want to try fine-tuning the Luke model via run_mlm.py in the examples folder. I use the standard script from examples, then I use the following command to start training:
```bash
pip install git+https://github.com/huggingface/transformers
python /gxtq-ner-ws/run_mlm.py \
--output_dir=/gxtq-ner-ws/luke_large_6_pretrained_v2/ \
--model_type=luke \
--model_name_or_path=studio-ousia/luke-large-lite \
--do_train \
--per_device_train_batch_size 16 \
--num_train_epochs 6 \
--train_file=/gxtq-ner-ws/lm_training_data_v2.txt \
--save_total_limit 1 \
--save_steps 10000 \
```
Then I got the following error:
```
[INFO|trainer.py:1776] 2023-04-25 06:57:12,367 >> Number of trainable parameters = 147,342,943
0%| | 0/5814 [00:00<?, ?it/s]Traceback (most recent call last):
File "./run_language_modeling_v4.py", line 657, in <module>
main()
File "./run_language_modeling_v4.py", line 606, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1930, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2718, in training_step
loss.backward()
File "/opt/conda/lib/python3.8/site-packages/torch/_tensor.py", line 399, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
(... the same `t >= 0 && t < n_classes` assertion failure is repeated for the remaining CUDA threads, up to thread [31,0,0] ...)
0%| | 0/5814 [00:00<?, ?it/s]
```
I also tried to run it in a CPU environment. Here is the error:
```
Traceback (most recent call last):
File "./run_language_modeling_v4.py", line 657, in <module>
main()
File "./run_language_modeling_v4.py", line 606, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1930, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2700, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2732, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1111, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/luke/modeling_luke.py", line 1375, in forward
mlm_loss = self.loss_fn(logits.view(-1, self.config.vocab_size), labels.view(-1))
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1111, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 1163, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 2961, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
IndexError: Target -100 is out of bounds.
```
### Expected behavior
train model as expected. | 04-25-2023 08:08:03 | 04-25-2023 08:08:03 | It looks like the Luke model is not compatible out of the box with those examples since the person who contributed it decided to use -1 as an index in the cross-entropy loss instead of -100 that we use everywhere else.
Might be worth fixing though it's a breaking change @amyeroberts @ArthurZucker what do you think?
In the meantime, a workaround is to replace the -100 used for padding labels in the example by -1 to use it with Luke.<|||||>@sgugger Yes, I'd agree, I think it's better to update to be in line with the rest of the library. <|||||>> In the meantime, a workaround is to replace the -100 used for padding labels in the example by -1 to use it with Luke.
Thanks @sgugger for the information. However, I am new to NLP; could you please tell me where I should make this change to use the workaround? |
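As an editorial sketch of the workaround described above (not part of the original thread): since the Luke masked-LM loss ignores `-1` rather than `-100`, one place to make the change is the data collator, remapping the `-100` padding value that `DataCollatorForLanguageModeling` writes into `labels` before the batch reaches the model. The wrapper class below is illustrative only; it is not a line-for-line patch of `run_mlm.py`.
```python
# Illustrative workaround: remap the standard ignore index (-100) to the -1
# that Luke's masked-LM loss expects, by wrapping the default MLM collator.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-lite")


class LukeMLMCollator(DataCollatorForLanguageModeling):
    def __call__(self, features, return_tensors=None):
        batch = super().__call__(features, return_tensors=return_tensors)
        # DataCollatorForLanguageModeling marks non-masked positions with -100;
        # per the comment above, Luke ignores -1 instead, so remap here.
        batch["labels"][batch["labels"] == -100] = -1
        return batch


data_collator = LukeMLMCollator(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
# Pass `data_collator=data_collator` to the Trainer used by the example script.
```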
transformers | 22,980 | closed | Trainer failing during _save_checkpoint "cannot pickle '_thread.lock' object" with skip_memory_metrics=True | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-5.19.0-40-generic-x86_64-with-glibc2.35
- Python version: 3.9.13
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=len(classes)).to('cuda')
training_args = TrainingArguments(
output_dir="./results",
overwrite_output_dir=True,
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=2,
optim="adamw_torch",
weight_decay=0.01,
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
no_cuda=False,
skip_memory_metrics=True
)
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
)
trainer.train()
```
Produces the following error:
```
TypeError Traceback (most recent call last)
/tmp/ipykernel_54606/4032920361.py in <module>
----> 1 trainer.train()
~/anaconda3/lib/python3.9/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1660 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1661 )
-> 1662 return inner_training_loop(
1663 args=args,
1664 resume_from_checkpoint=resume_from_checkpoint,
~/anaconda3/lib/python3.9/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
2019
2020 self.control = self.callback_handler.on_epoch_end(args, self.state, self.control)
-> 2021 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
2022
2023 if DebugOption.TPU_METRICS_DEBUG in self.args.debug:
~/anaconda3/lib/python3.9/site-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval)
2289
2290 if self.control.should_save:
-> 2291 self._save_checkpoint(model, trial, metrics=metrics)
2292 self.control = self.callback_handler.on_save(self.args, self.state, self.control)
2293
~/anaconda3/lib/python3.9/site-packages/transformers/trainer.py in _save_checkpoint(self, model, trial, metrics)
2405 # Save the Trainer state
2406 if self.args.should_save:
-> 2407 self.state.save_to_json(os.path.join(output_dir, TRAINER_STATE_NAME))
2408
2409 # Save RNG state in non-distributed training
~/anaconda3/lib/python3.9/site-packages/transformers/trainer_callback.py in save_to_json(self, json_path)
95 def save_to_json(self, json_path: str):
96 """Save the content of this instance in JSON format inside `json_path`."""
---> 97 json_string = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True) + "\n"
98 with open(json_path, "w", encoding="utf-8") as f:
99 f.write(json_string)
~/anaconda3/lib/python3.9/dataclasses.py in asdict(obj, dict_factory)
1073 if not _is_dataclass_instance(obj):
1074 raise TypeError("asdict() should be called on dataclass instances")
-> 1075 return _asdict_inner(obj, dict_factory)
1076
1077
~/anaconda3/lib/python3.9/dataclasses.py in _asdict_inner(obj, dict_factory)
1080 result = []
1081 for f in fields(obj):
-> 1082 value = _asdict_inner(getattr(obj, f.name), dict_factory)
1083 result.append((f.name, value))
1084 return dict_factory(result)
~/anaconda3/lib/python3.9/dataclasses.py in _asdict_inner(obj, dict_factory)
1108 # generator (which is not true for namedtuples, handled
1109 # above).
-> 1110 return type(obj)(_asdict_inner(v, dict_factory) for v in obj)
1111 elif isinstance(obj, dict):
1112 return type(obj)((_asdict_inner(k, dict_factory),
~/anaconda3/lib/python3.9/dataclasses.py in <genexpr>(.0)
1108 # generator (which is not true for namedtuples, handled
1109 # above).
-> 1110 return type(obj)(_asdict_inner(v, dict_factory) for v in obj)
1111 elif isinstance(obj, dict):
1112 return type(obj)((_asdict_inner(k, dict_factory),
~/anaconda3/lib/python3.9/dataclasses.py in _asdict_inner(obj, dict_factory)
1110 return type(obj)(_asdict_inner(v, dict_factory) for v in obj)
1111 elif isinstance(obj, dict):
-> 1112 return type(obj)((_asdict_inner(k, dict_factory),
1113 _asdict_inner(v, dict_factory))
1114 for k, v in obj.items())
~/anaconda3/lib/python3.9/dataclasses.py in <genexpr>(.0)
1111 elif isinstance(obj, dict):
1112 return type(obj)((_asdict_inner(k, dict_factory),
-> 1113 _asdict_inner(v, dict_factory))
1114 for k, v in obj.items())
1115 else:
~/anaconda3/lib/python3.9/dataclasses.py in _asdict_inner(obj, dict_factory)
1114 for k, v in obj.items())
1115 else:
-> 1116 return copy.deepcopy(obj)
1117
1118
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
173
174 # If is its own copy, don't memoize.
~/anaconda3/lib/python3.9/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
268 if state is not None:
269 if deep:
--> 270 state = deepcopy(state, memo)
271 if hasattr(y, '__setstate__'):
272 y.__setstate__(state)
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/anaconda3/lib/python3.9/copy.py in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y
232 d[dict] = _deepcopy_dict
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/anaconda3/lib/python3.9/copy.py in _deepcopy_list(x, memo, deepcopy)
203 append = y.append
204 for a in x:
--> 205 append(deepcopy(a, memo))
206 return y
207 d[list] = _deepcopy_list
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
173
174 # If is its own copy, don't memoize.
~/anaconda3/lib/python3.9/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
268 if state is not None:
269 if deep:
--> 270 state = deepcopy(state, memo)
271 if hasattr(y, '__setstate__'):
272 y.__setstate__(state)
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/anaconda3/lib/python3.9/copy.py in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y
232 d[dict] = _deepcopy_dict
~/anaconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
159 reductor = getattr(x, "__reduce_ex__", None)
160 if reductor is not None:
--> 161 rv = reductor(4)
162 else:
163 reductor = getattr(x, "__reduce__", None)
TypeError: cannot pickle '_thread.lock' object
```
### Expected behavior
Training and eval proceed smoothly. I think that Trainer is trying to save the checkpoint and failing then. I'd like to complete training/eval and be able to load from a non-corrupt checkpoint. | 04-25-2023 07:23:44 | 04-25-2023 07:23:44 | I also ran this with `no_cuda=True` and received the same error. <|||||>Your code example doesn't define multiple objects, so I can't really tell what's wrong. Please give us a minimal reproducer we can execute.<|||||>Sorry about that--I've put everything into this repo if that is easier: https://github.com/galenballew/bert-multiclass
I'll also repeat it here too:
```python
# Dependencies
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from torch.utils.data import DataLoader
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification, Trainer, TrainingArguments, AdamW
from tqdm import tqdm
import torch
import tools
use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")
train_texts, train_labels = tools.read_data("train")
val_texts, val_labels = tools.read_data("val")
test_texts, test_labels = tools.read_data("test")
train_texts = train_texts.tolist()
val_texts = val_texts.tolist()
test_texts = test_texts.tolist()
# Create integer class labels instead of strings
classes = tools.labels(train_labels).tolist()
train_labels = tools.relabel(train_labels, classes)
val_labels = tools.relabel(val_labels, classes)
test_labels = tools.relabel(test_labels, classes)
class IntentDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
"""
To support the indexing such that dataset[i] can be used to get the i-th sample
"""
# item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item = {key: val[idx].clone().detach() for key, val in self.encodings.items()}
item['label'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
"""
Returns the size of the dataset.
"""
return len(self.labels)
def compute_metrics(eval_pred):
accuracy = load("accuracy")
precision = load("precision")
f1 = load("f1")
recall = load("recall")
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
accuracy.compute(predictions=predictions, references=labels)
precision.compute(predictions=predictions, references=labels, average="micro")
f1.compute(predictions=predictions, references=labels, average="micro")
recall.compute(predictions=predictions, references=labels, average="micro")
return {"accuracy": accuracy, "precision": precision, "f1": f1, "recall": recall}
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
train_encodings = tokenizer(train_texts, padding=True, truncation=True, return_tensors="pt")
val_encodings = tokenizer(val_texts, padding=True, truncation=True, return_tensors="pt")
test_encodings = tokenizer(test_texts, padding=True, truncation=True, return_tensors="pt")
# Turn the encodings and labels to a dataset object
train_dataset = IntentDataset(train_encodings, train_labels)
val_dataset = IntentDataset(val_encodings, val_labels)
test_dataset = IntentDataset(test_encodings, test_labels)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=len(classes)).to('cuda')
training_args = TrainingArguments(
output_dir="./results",
overwrite_output_dir=True,
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=2,
optim="adamw_torch",
weight_decay=0.01,
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
no_cuda=False,
skip_memory_metrics=True
)
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
)
trainer.train()
```<|||||>Could you also print us `trainer.state`? The error comes from the fact it is not JSON-serializable so it would help to know which object in it is not serializable. Thanks!<|||||>`trainer.state` directly after instantiation:
```
TrainerState(epoch=None, global_step=0, max_steps=0, num_train_epochs=0, total_flos=0, log_history=[], best_metric=None, best_model_checkpoint=None, is_local_process_zero=True, is_world_process_zero=True, is_hyper_param_search=False, trial_name=None, trial_params=None)
```
Added this and am including entire output, not just the state. Either the behavior changed or adding try/except is causing a slightly different output:
```
try:
trainer.train()
except:
print("\n\n")
print("********************")
print("\n\n")
print(trainer.state)
print("\n\n")
print("********************")
print("\n\n")
```
```
Trainer is attempting to log a value of "EvaluationModule(name: "accuracy", module_type: "metric", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: """
Args:
predictions (`list` of `int`): Predicted labels.
references (`list` of `int`): Ground truth labels.
normalize (`boolean`): If set to False, returns the number of correctly classified samples. Otherwise, returns the fraction of correctly classified samples. Defaults to True.
sample_weight (`list` of `float`): Sample weights Defaults to None.
Returns:
accuracy (`float` or `int`): Accuracy score. Minimum possible value is 0. Maximum possible value is 1.0, or the number of examples input, if `normalize` is set to `True`.. A higher score means higher accuracy.
Examples:
Example 1-A simple example
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0])
>>> print(results)
{'accuracy': 0.5}
Example 2-The same as Example 1, except with `normalize` set to `False`.
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], normalize=False)
>>> print(results)
{'accuracy': 3.0}
Example 3-The same as Example 1, except with `sample_weight` set.
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], sample_weight=[0.5, 2, 0.7, 0.5, 9, 0.4])
>>> print(results)
{'accuracy': 0.8778625954198473}
""", stored examples: 0)" of type <class 'evaluate_modules.metrics.evaluate-metric--accuracy.f887c0aab52c2d38e1f8a215681126379eca617f96c447638f751434e8e65b14.accuracy.Accuracy'> for key "eval/accuracy" as a scalar. This invocation of Tensorboard's writer.add_scalar() is incorrect so we dropped this attribute.
Trainer is attempting to log a value of "EvaluationModule(name: "precision", module_type: "metric", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: """
Args:
predictions (`list` of `int`): Predicted class labels.
references (`list` of `int`): Actual class labels.
labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`. If `average` is `None`, it should be the label order. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.
pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.
average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
- 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.
- 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
- 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
- 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.
- 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
sample_weight (`list` of `float`): Sample weights Defaults to None.
zero_division (`int` or `string`): Sets the value to return when there is a zero division. Defaults to 'warn'.
- 0: Returns 0 when there is a zero division.
- 1: Returns 1 when there is a zero division.
- 'warn': Raises warnings and then returns 0 when there is a zero division.
Returns:
precision (`float` or `array` of `float`): Precision score or list of precision scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher values indicate that fewer negative examples were incorrectly labeled as positive, which means that, generally, higher scores are better.
Examples:
Example 1-A simple binary example
>>> precision_metric = evaluate.load("precision")
>>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])
>>> print(results)
{'precision': 0.5}
Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`.
>>> precision_metric = evaluate.load("precision")
>>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)
>>> print(round(results['precision'], 2))
0.67
Example 3-The same simple binary example as in Example 1, but with `sample_weight` included.
>>> precision_metric = evaluate.load("precision")
>>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])
>>> print(results)
{'precision': 0.23529411764705882}
Example 4-A multiclass example, with different values for the `average` input.
>>> predictions = [0, 2, 1, 0, 0, 1]
>>> references = [0, 1, 2, 0, 1, 2]
>>> results = precision_metric.compute(predictions=predictions, references=references, average='macro')
>>> print(results)
{'precision': 0.2222222222222222}
>>> results = precision_metric.compute(predictions=predictions, references=references, average='micro')
>>> print(results)
{'precision': 0.3333333333333333}
>>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted')
>>> print(results)
{'precision': 0.2222222222222222}
>>> results = precision_metric.compute(predictions=predictions, references=references, average=None)
>>> print([round(res, 2) for res in results['precision']])
[0.67, 0.0, 0.0]
""", stored examples: 0)" of type <class 'evaluate_modules.metrics.evaluate-metric--precision.4e7f439a346715f68500ce6f2be82bf3272abd3f20bdafd203a2c4f85b61dd5f.precision.Precision'> for key "eval/precision" as a scalar. This invocation of Tensorboard's writer.add_scalar() is incorrect so we dropped this attribute.
Trainer is attempting to log a value of "EvaluationModule(name: "f1", module_type: "metric", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: """
Args:
predictions (`list` of `int`): Predicted labels.
references (`list` of `int`): Ground truth labels.
labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`, and the order of the labels if `average` is `None`. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.
pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.
average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
- 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.
- 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
- 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
- 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.
- 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
sample_weight (`list` of `float`): Sample weights Defaults to None.
Returns:
f1 (`float` or `array` of `float`): F1 score or list of f1 scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher f1 scores are better.
Examples:
Example 1-A simple binary example
>>> f1_metric = evaluate.load("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])
>>> print(results)
{'f1': 0.5}
Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`.
>>> f1_metric = evaluate.load("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)
>>> print(round(results['f1'], 2))
0.67
Example 3-The same simple binary example as in Example 1, but with `sample_weight` included.
>>> f1_metric = evaluate.load("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])
>>> print(round(results['f1'], 2))
0.35
Example 4-A multiclass example, with different values for the `average` input.
>>> predictions = [0, 2, 1, 0, 0, 1]
>>> references = [0, 1, 2, 0, 1, 2]
>>> results = f1_metric.compute(predictions=predictions, references=references, average="macro")
>>> print(round(results['f1'], 2))
0.27
>>> results = f1_metric.compute(predictions=predictions, references=references, average="micro")
>>> print(round(results['f1'], 2))
0.33
>>> results = f1_metric.compute(predictions=predictions, references=references, average="weighted")
>>> print(round(results['f1'], 2))
0.27
>>> results = f1_metric.compute(predictions=predictions, references=references, average=None)
>>> print(results)
{'f1': array([0.8, 0. , 0. ])}
Example 5-A multi-label example
>>> f1_metric = evaluate.load("f1", "multilabel")
>>> results = f1_metric.compute(predictions=[[0, 1, 1], [1, 1, 0]], references=[[0, 1, 1], [0, 1, 0]], average="macro")
>>> print(round(results['f1'], 2))
0.67
""", stored examples: 0)" of type <class 'evaluate_modules.metrics.evaluate-metric--f1.0ca73f6cf92ef5a268320c697f7b940d1030f8471714bffdb6856c641b818974.f1.F1'> for key "eval/f1" as a scalar. This invocation of Tensorboard's writer.add_scalar() is incorrect so we dropped this attribute.
Trainer is attempting to log a value of "EvaluationModule(name: "recall", module_type: "metric", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: """
Args:
- **predictions** (`list` of `int`): The predicted labels.
- **references** (`list` of `int`): The ground truth labels.
- **labels** (`list` of `int`): The set of labels to include when `average` is not set to `binary`, and their order when average is `None`. Labels present in the data can be excluded in this input, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Defaults to None.
- **pos_label** (`int`): The class label to use as the 'positive class' when calculating the recall. Defaults to `1`.
- **average** (`string`): This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
- `'binary'`: Only report results for the class specified by `pos_label`. This is applicable only if the target labels and predictions are binary.
- `'micro'`: Calculate metrics globally by counting the total true positives, false negatives, and false positives.
- `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
- `'weighted'`: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. Note that it can result in an F-score that is not between precision and recall.
- `'samples'`: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
- **sample_weight** (`list` of `float`): Sample weights Defaults to `None`.
- **zero_division** (): Sets the value to return when there is a zero division. Defaults to .
- `'warn'`: If there is a zero division, the return value is `0`, but warnings are also raised.
- `0`: If there is a zero division, the return value is `0`.
- `1`: If there is a zero division, the return value is `1`.
Returns:
- **recall** (`float`, or `array` of `float`): Either the general recall score, or the recall scores for individual classes, depending on the values input to `labels` and `average`. Minimum possible value is 0. Maximum possible value is 1. A higher recall means that more of the positive examples have been labeled correctly. Therefore, a higher recall is generally considered better.
Examples:
Example 1-A simple example with some errors
>>> recall_metric = evaluate.load('recall')
>>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1])
>>> print(results)
{'recall': 0.6666666666666666}
Example 2-The same example as Example 1, but with `pos_label=0` instead of the default `pos_label=1`.
>>> recall_metric = evaluate.load('recall')
>>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], pos_label=0)
>>> print(results)
{'recall': 0.5}
Example 3-The same example as Example 1, but with `sample_weight` included.
>>> recall_metric = evaluate.load('recall')
>>> sample_weight = [0.9, 0.2, 0.9, 0.3, 0.8]
>>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], sample_weight=sample_weight)
>>> print(results)
{'recall': 0.55}
Example 4-A multiclass example, using different averages.
>>> recall_metric = evaluate.load('recall')
>>> predictions = [0, 2, 1, 0, 0, 1]
>>> references = [0, 1, 2, 0, 1, 2]
>>> results = recall_metric.compute(predictions=predictions, references=references, average='macro')
>>> print(results)
{'recall': 0.3333333333333333}
>>> results = recall_metric.compute(predictions=predictions, references=references, average='micro')
>>> print(results)
{'recall': 0.3333333333333333}
>>> results = recall_metric.compute(predictions=predictions, references=references, average='weighted')
>>> print(results)
{'recall': 0.3333333333333333}
>>> results = recall_metric.compute(predictions=predictions, references=references, average=None)
>>> print(results)
{'recall': array([1., 0., 0.])}
""", stored examples: 0)" of type <class 'evaluate_modules.metrics.evaluate-metric--recall.e40e6e98d18ff3f210f4d0b26fa721bfaa80704b1fdf890fa551cfabf94fc185.recall.Recall'> for key "eval/recall" as a scalar. This invocation of Tensorboard's writer.add_scalar() is incorrect so we dropped this attribute.
Exception ignored in: <function BaseFileLock.__del__ at 0x7fb2db3b1160>
Traceback (most recent call last):
File "/home/master/anaconda3/lib/python3.9/site-packages/datasets/utils/filelock.py", line 328, in __del__
self.release(force=True)
File "/home/master/anaconda3/lib/python3.9/site-packages/datasets/utils/filelock.py", line 304, in release
with self._thread_lock:
AttributeError: 'UnixFileLock' object has no attribute '_thread_lock'
********************
TrainerState(epoch=1.0, global_step=944, max_steps=1888, num_train_epochs=2, total_flos=256413353347800.0, log_history=[{'loss': 0.084, 'learning_rate': 1.4703389830508477e-05, 'epoch': 0.53, 'step': 500}, {'eval_loss': 0.2768215239048004, 'eval_accuracy': EvaluationModule(name: "accuracy", module_type: "metric", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: """
Args:
predictions (`list` of `int`): Predicted labels.
references (`list` of `int`): Ground truth labels.
normalize (`boolean`): If set to False, returns the number of correctly classified samples. Otherwise, returns the fraction of correctly classified samples. Defaults to True.
sample_weight (`list` of `float`): Sample weights Defaults to None.
Returns:
accuracy (`float` or `int`): Accuracy score. Minimum possible value is 0. Maximum possible value is 1.0, or the number of examples input, if `normalize` is set to `True`.. A higher score means higher accuracy.
Examples:
Example 1-A simple example
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0])
>>> print(results)
{'accuracy': 0.5}
Example 2-The same as Example 1, except with `normalize` set to `False`.
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], normalize=False)
>>> print(results)
{'accuracy': 3.0}
Example 3-The same as Example 1, except with `sample_weight` set.
>>> accuracy_metric = evaluate.load("accuracy")
>>> results = accuracy_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0], sample_weight=[0.5, 2, 0.7, 0.5, 9, 0.4])
>>> print(results)
{'accuracy': 0.8778625954198473}
""", stored examples: 0), 'eval_precision': EvaluationModule(name: "precision", module_type: "metric", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: """
Args:
predictions (`list` of `int`): Predicted class labels.
references (`list` of `int`): Actual class labels.
labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`. If `average` is `None`, it should be the label order. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.
pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.
average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
- 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.
- 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
- 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
- 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.
- 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
sample_weight (`list` of `float`): Sample weights Defaults to None.
zero_division (`int` or `string`): Sets the value to return when there is a zero division. Defaults to 'warn'.
- 0: Returns 0 when there is a zero division.
- 1: Returns 1 when there is a zero division.
- 'warn': Raises warnings and then returns 0 when there is a zero division.
Returns:
precision (`float` or `array` of `float`): Precision score or list of precision scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher values indicate that fewer negative examples were incorrectly labeled as positive, which means that, generally, higher scores are better.
Examples:
Example 1-A simple binary example
>>> precision_metric = evaluate.load("precision")
>>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])
>>> print(results)
{'precision': 0.5}
Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`.
>>> precision_metric = evaluate.load("precision")
>>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)
>>> print(round(results['precision'], 2))
0.67
Example 3-The same simple binary example as in Example 1, but with `sample_weight` included.
>>> precision_metric = evaluate.load("precision")
>>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])
>>> print(results)
{'precision': 0.23529411764705882}
Example 4-A multiclass example, with different values for the `average` input.
>>> predictions = [0, 2, 1, 0, 0, 1]
>>> references = [0, 1, 2, 0, 1, 2]
>>> results = precision_metric.compute(predictions=predictions, references=references, average='macro')
>>> print(results)
{'precision': 0.2222222222222222}
>>> results = precision_metric.compute(predictions=predictions, references=references, average='micro')
>>> print(results)
{'precision': 0.3333333333333333}
>>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted')
>>> print(results)
{'precision': 0.2222222222222222}
>>> results = precision_metric.compute(predictions=predictions, references=references, average=None)
>>> print([round(res, 2) for res in results['precision']])
[0.67, 0.0, 0.0]
""", stored examples: 0), 'eval_f1': EvaluationModule(name: "f1", module_type: "metric", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: """
Args:
predictions (`list` of `int`): Predicted labels.
references (`list` of `int`): Ground truth labels.
labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`, and the order of the labels if `average` is `None`. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.
pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.
average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
- 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.
- 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
- 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
- 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.
- 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
sample_weight (`list` of `float`): Sample weights Defaults to None.
Returns:
f1 (`float` or `array` of `float`): F1 score or list of f1 scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher f1 scores are better.
Examples:
Example 1-A simple binary example
>>> f1_metric = evaluate.load("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])
>>> print(results)
{'f1': 0.5}
Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`.
>>> f1_metric = evaluate.load("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)
>>> print(round(results['f1'], 2))
0.67
Example 3-The same simple binary example as in Example 1, but with `sample_weight` included.
>>> f1_metric = evaluate.load("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])
>>> print(round(results['f1'], 2))
0.35
Example 4-A multiclass example, with different values for the `average` input.
>>> predictions = [0, 2, 1, 0, 0, 1]
>>> references = [0, 1, 2, 0, 1, 2]
>>> results = f1_metric.compute(predictions=predictions, references=references, average="macro")
>>> print(round(results['f1'], 2))
0.27
>>> results = f1_metric.compute(predictions=predictions, references=references, average="micro")
>>> print(round(results['f1'], 2))
0.33
>>> results = f1_metric.compute(predictions=predictions, references=references, average="weighted")
>>> print(round(results['f1'], 2))
0.27
>>> results = f1_metric.compute(predictions=predictions, references=references, average=None)
>>> print(results)
{'f1': array([0.8, 0. , 0. ])}
Example 5-A multi-label example
>>> f1_metric = evaluate.load("f1", "multilabel")
>>> results = f1_metric.compute(predictions=[[0, 1, 1], [1, 1, 0]], references=[[0, 1, 1], [0, 1, 0]], average="macro")
>>> print(round(results['f1'], 2))
0.67
""", stored examples: 0), 'eval_recall': EvaluationModule(name: "recall", module_type: "metric", features: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)}, usage: """
Args:
- **predictions** (`list` of `int`): The predicted labels.
- **references** (`list` of `int`): The ground truth labels.
- **labels** (`list` of `int`): The set of labels to include when `average` is not set to `binary`, and their order when average is `None`. Labels present in the data can be excluded in this input, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Defaults to None.
- **pos_label** (`int`): The class label to use as the 'positive class' when calculating the recall. Defaults to `1`.
- **average** (`string`): This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
- `'binary'`: Only report results for the class specified by `pos_label`. This is applicable only if the target labels and predictions are binary.
- `'micro'`: Calculate metrics globally by counting the total true positives, false negatives, and false positives.
- `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
- `'weighted'`: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. Note that it can result in an F-score that is not between precision and recall.
- `'samples'`: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
- **sample_weight** (`list` of `float`): Sample weights Defaults to `None`.
- **zero_division** (): Sets the value to return when there is a zero division. Defaults to .
- `'warn'`: If there is a zero division, the return value is `0`, but warnings are also raised.
- `0`: If there is a zero division, the return value is `0`.
- `1`: If there is a zero division, the return value is `1`.
Returns:
- **recall** (`float`, or `array` of `float`): Either the general recall score, or the recall scores for individual classes, depending on the values input to `labels` and `average`. Minimum possible value is 0. Maximum possible value is 1. A higher recall means that more of the positive examples have been labeled correctly. Therefore, a higher recall is generally considered better.
Examples:
Example 1-A simple example with some errors
>>> recall_metric = evaluate.load('recall')
>>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1])
>>> print(results)
{'recall': 0.6666666666666666}
Example 2-The same example as Example 1, but with `pos_label=0` instead of the default `pos_label=1`.
>>> recall_metric = evaluate.load('recall')
>>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], pos_label=0)
>>> print(results)
{'recall': 0.5}
Example 3-The same example as Example 1, but with `sample_weight` included.
>>> recall_metric = evaluate.load('recall')
>>> sample_weight = [0.9, 0.2, 0.9, 0.3, 0.8]
>>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], sample_weight=sample_weight)
>>> print(results)
{'recall': 0.55}
Example 4-A multiclass example, using different averages.
>>> recall_metric = evaluate.load('recall')
>>> predictions = [0, 2, 1, 0, 0, 1]
>>> references = [0, 1, 2, 0, 1, 2]
>>> results = recall_metric.compute(predictions=predictions, references=references, average='macro')
>>> print(results)
{'recall': 0.3333333333333333}
>>> results = recall_metric.compute(predictions=predictions, references=references, average='micro')
>>> print(results)
{'recall': 0.3333333333333333}
>>> results = recall_metric.compute(predictions=predictions, references=references, average='weighted')
>>> print(results)
{'recall': 0.3333333333333333}
>>> results = recall_metric.compute(predictions=predictions, references=references, average=None)
>>> print(results)
{'recall': array([1., 0., 0.])}
""", stored examples: 0), 'eval_runtime': 4.3362, 'eval_samples_per_second': 714.904, 'eval_steps_per_second': 44.739, 'epoch': 1.0, 'step': 944}], best_metric=0.2768215239048004, best_model_checkpoint='./results/checkpoint-944', is_local_process_zero=True, is_world_process_zero=True, is_hyper_param_search=False, trial_name=None, trial_params=None)
********************
```<|||||>So your metrics are not floats, but one ends up being a whole scikit-learn module, this is why you have the issue. The code you pasted is actually super weird:
```
def compute_metrics(eval_pred):
accuracy = load("accuracy")
precision = load("precision")
f1 = load("f1")
recall = load("recall")
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
accuracy.compute(predictions=predictions, references=labels)
precision.compute(predictions=predictions, references=labels, average="micro")
f1.compute(predictions=predictions, references=labels, average="micro")
recall.compute(predictions=predictions, references=labels, average="micro")
return {"accuracy": accuracy, "precision": precision, "f1": f1, "recall": recall}
```
You compute the results on predictions and labels but don't store it anywhere, instead you return the metric functions (from `evaluate` I guess?) and not the computed values.<|||||>Great catch! I modified `compute_metrics()` to run successfully without any warnings:
```python
def compute_metrics(eval_pred):
accuracy = load("accuracy")
precision = load("precision")
f1 = load("f1")
recall = load("recall")
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
accuracy_ = accuracy.compute(predictions=predictions, references=labels)["accuracy"]
precision_ = precision.compute(predictions=predictions, references=labels, average="micro")["precision"]
f1_ = f1.compute(predictions=predictions, references=labels, average="micro")["f1"]
recall_ = recall.compute(predictions=predictions, references=labels, average="micro")["recall"]
return {"accuracy": accuracy_, "precision": precision_, "f1": f1_, "recall": recall_}
```
However, it doesn't seem like the [results make sense](https://github.com/galenballew/bert-multiclass/blob/main/Screenshot%20from%202023-04-27%2009-35-44.png). That being said, the original issue is definitely no longer an issue. I really appreciate your help--thank you! |
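An editorial note on the linked screenshot: for single-label multiclass evaluation, micro-averaged precision, recall, and F1 all reduce to plain accuracy, so the four identical numbers are expected rather than a sign of a bug. A hedged variant of the fixed function using macro averaging (same `evaluate` metrics as above) would surface per-class differences instead:
```python
# Sketch: same metrics as the corrected compute_metrics above, but macro-averaged,
# so each class contributes equally and the four values are no longer forced to coincide.
import numpy as np
from evaluate import load


def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return {
        "accuracy": load("accuracy").compute(predictions=predictions, references=labels)["accuracy"],
        "precision": load("precision").compute(predictions=predictions, references=labels, average="macro")["precision"],
        "recall": load("recall").compute(predictions=predictions, references=labels, average="macro")["recall"],
        "f1": load("f1").compute(predictions=predictions, references=labels, average="macro")["f1"],
    }
```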