repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 20,769 | closed | Add Swin backbone | # What does this PR do?
This PR adds Swin backbone, to be used with frameworks like OneFormer and UperNet.
Note: #20648 also included this but I'll add Swin backbone as a separate PR to make the other one smaller. | 12-14-2022 16:16:24 | 12-14-2022 16:16:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,768 | closed | Update tests: replace feature extractor tests with image processor | # What does this PR do?
Replaces all feature extractor references with image processor references in the `test_image_processing_xxx.py` files
* `feature_extractor = XxxFeatureExtractor()` -> `image_processor = XxxImageProcessor()`
Requires https://github.com/huggingface/transformers/pull/20785 in order to replace `FeatureExtractionSavingTestMixin`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 12-14-2022 15:33:08 | 12-14-2022 15:33:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,767 | closed | Cache size limit for generation | ### Feature request
Add a `cache_limit` argument for `generate`, limiting the size of the cache (`past_key_values`).
### Motivation
In some contexts one might want to generate long sequences. When doing so, the system can easily run out of memory. Keeping the cache to a maximum size would allow users to have more control and tweak other parameters, such as batch size or number of beams, to generate faster and get the most out of their hardware.
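For illustration, the proposed usage could look like the sketch below (note that `cache_limit` is the suggested new argument and does not exist in `generate` yet):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Once upon a time", return_tensors="pt")

# `cache_limit` (hypothetical) would cap `past_key_values` to the last 512 positions
output_ids = model.generate(**inputs, max_new_tokens=2000, cache_limit=512)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```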
### Your contribution
I implemented it in GPT2 (PyTorch & TF, PR is ready), but I guess this could be implemented more broadly in `generate` so that every model could benefit from it.
It might relate to #17574.
Waiting for your opinion on this, I can probably add it to `generate`. | 12-14-2022 14:38:04 | 12-14-2022 14:38:04 | cc @gante but this might be a bit too niche for us to incorporate in `generate`.<|||||>Hey @Natooz 👋
Thank you for raising this issue! It may be helpful in some situations. However, since it is the first time I'm seeing a request for this, the answer depends on your implementation :) I'm on board if it consists of a few line changes (say, <100 in total for all models). Otherwise, the maintenance costs are too high.
Regardless of your answer and the decision on this issue, it's useful for us to have issues like this, to gauge demand and guide our future work 🤗 <|||||>Hi @gante, thanks for the feedback !
It is because the feature might be very niche (e.g. story or music generation) that I asked whether you would prefer it implemented per model (I was thinking of `prepare_inputs_for_generation`) or in a more DRY way, probably as a method called in each decoding method at each step.
Here is how I implemented it for GPT2 in PyTorch (TF is pretty much the same) in [`prepare_inputs_for_generation`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py#L986); it only takes a few lines:
```Python
# past is retrieved from model_kwargs, and is of shape (L,2,N,NH,T,DH)
# (layer, keys/values, batch, attn_head, seq, dim_head)
# with two first dims as tuple, dim 2:-1 is a fixed length tensor
cache_limit = kwargs.get("cache_limit", None)
# check if using cache and a limit, and that the current cache does exceed it
# dim -2 of cache is always the same across all layers and kv, so checking past[0][0]
if past and cache_limit and past[0][0].shape[-2] > cache_limit:
    # reducing the time / seq dimension (-2)
    past = [[kv[..., -cache_limit:, :] for kv in layer] for layer in past]
    # we need to update the attention_mask as it is incremented in _update_model_kwargs_for_generation
    if attention_mask is not None:
        attention_mask = attention_mask[:, -cache_limit - 1:]
# no need to update position_ids here as kwargs[attention_mask].shape[1]
# is the nb of all past positions / tokens that have been processed so far
```
Now, implementing it in `GenerationMixin` would require that 1) all concerned models use the same cache shape `(L,2,N,NH,T,DH)` (is that the case?), and 2) it does not mess with positional encoding. In GPT2, positions are created on the fly from the shape of the attention_mask, itself [incremented](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L708) at each decoding step.
For this direction, and if we don't want to touch any method overridden by models, I think the best solution would be to store `position_ids` in `model_kwargs`, which would be updated at each step.
I'll give the `GenerationMixin` direction a try and come back! 👨💻<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,766 | closed | Add Universal Segmentation class + mapping | # What does this PR do?
This PR adds the `AutoModelForUniversalSegmentation` class and corresponding mapping. Models that can be added to this mapping include DETR, MaskFormer, Mask2Former and OneFormer.
To do:
- [x] update pipeline
@sgugger for some reason make fixup complains:
```
Traceback (most recent call last):
  File "/Users/nielsrogge/Documents/python_projecten/transformers/utils/check_repo.py", line 827, in <module>
    check_repo_quality()
  File "/Users/nielsrogge/Documents/python_projecten/transformers/utils/check_repo.py", line 816, in check_repo_quality
    check_models_are_in_init()
  File "/Users/nielsrogge/Documents/python_projecten/transformers/utils/check_repo.py", line 369, in check_models_are_in_init
    raise Exception(f"The following models should be in the main init: {','.join(models_not_in_init)}.")
Exception: The following models should be in the main init: MaskFormerForUniversalSegmentation.
``` | 12-14-2022 13:33:12 | 12-14-2022 13:33:12 | I think the error message is pretty clear on what to do. The model would also need to be added to the doc page of maskformer if we go through with this.
I'm not convinced we should however. While adding a new auto-model API could make sense (I'd wait to have more than one model though), renaming the model class a year after the model has been released is not something we should do (the same way we keep `GPTLMHeadModel` for instance). <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I'm unsure why you keep changing more of the `MaskFormerForInstanceSegmentation`. I think I may have been unclear in my previous comments. I am against changing that name in a model that has been around for 10 months now, for the same reason we did not change the name of `GPTLMHeadModel` or `BertLMHeadModel` even if those are not ideal.<|||||>@sgugger I'll keep the old names, but add them to the same (new) mapping. |
transformers | 20,765 | closed | Fix attribute error problem | When the `use_legacy_prediction_loop` parameter is used, `Trainer.predict` will raise an error: `AttributeError: 'PredictionOutput' object has no attribute 'num_samples'`
# What does this PR do?
1. Use `EvalLoopOutput` instead of the original `PredictionOutput` of `self.prediction_loop`; this makes the return format of `self.prediction_loop` the same as that of `self.evaluation_loop`.
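For context, a minimal sketch of how the failure could be triggered (the dataset here is a stand-in; any evaluation dataset reproduces it on versions before this fix):
```python
import torch
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments

class ToyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return {"input_ids": torch.tensor([101, 2023, 102]), "attention_mask": torch.tensor([1, 1, 1])}

model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
args = TrainingArguments(output_dir="/tmp/out", use_legacy_prediction_loop=True)
trainer = Trainer(model=model, args=args)

# before this fix, the line below raised:
#   AttributeError: 'PredictionOutput' object has no attribute 'num_samples'
predictions = trainer.predict(ToyDataset())
```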
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-14-2022 09:43:36 | 12-14-2022 09:43:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,764 | closed | Install torch-tensorrt 1.3.0 | # What does this PR do?
It turns out that the issue mentioned in #20758 could be fixed by installing a newer version of `torch-tensorrt` (1.3.0) (the pre-installed one in the base image was `1.1.0a0`).
Notice that before our CI used torch 1.13.0, we didn't have `torch-tensorrt` installed in the docker image (for the daily CI with the stable release of torch/deepspeed). But I guess we can have it anyway. | 12-14-2022 09:28:38 | 12-14-2022 09:28:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,763 | closed | Fix bug in ChineseCLIPTextPooler | # What does this PR do?
I think the ChineseCLIPTextPooler should be consistent with https://github.com/huggingface/transformers/blob/main/src/transformers/models/chinese_clip/modeling_chinese_clip.py#L1374:
- bias=False
- don't use activation function
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-14-2022 09:16:27 | 12-14-2022 09:16:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, @xiaohu2015! I think the pooling layer inside `ChineseCLIPTextModel` has nothing to do with the projection layer `text_projection` in ` ChineseCLIPModel`. And the model definition should be the one used by the model author which I believe the current one is the correct version.
<|||||>> Hi, @xiaohu2015! I think the pooling layer inside `ChineseCLIPTextModel` has nothing to do with the projection layer `text_projection` in ` ChineseCLIPModel`. And the model definition should be the one used by the model author which I believe the current one is the correct version.
But CLIPTextModel is consistent with CLIPModel<|||||>> But CLIPTextModel is consistent with CLIPModel
This doesn't mean the relation of ChineseCLIPTextModel/ChineseCLIPModel should be the same as that of CLIPTextModel/CLIPModel. It is the author of ChineseCLIPModel who decided to construct the model this way.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am going to close this thread. But don't hesitate to comment if you have any further question, @xiaohu2015 |
transformers | 20,762 | closed | Even more validation. | # What does this PR do?
Addresses this comment:
https://github.com/huggingface/transformers/pull/20729#issuecomment-1350351272
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-14-2022 08:55:51 | 12-14-2022 08:55:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,761 | closed | The reasons for offsetting the position embedding ids by 2 for OPT Model. | ### System Info
transformers 4.20.1
### Who can help?
@sgugger
@stevhliu
@gante
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
- [x] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
The source code in modeling_opt.py (HuggingFace)
```python
class OPTLearnedPositionalEmbedding(nn.Embedding):
    """
    This module learns positional embeddings up to a fixed maximum size.
    """

    def __init__(self, num_embeddings: int, embedding_dim: int):
        # OPT is set up so that if padding_idx is specified then offset the embedding ids by 2
        # and adjust num_embeddings appropriately. Other models don't have this hack
        self.offset = 2
        super().__init__(num_embeddings + self.offset, embedding_dim)

    def forward(self, attention_mask: torch.LongTensor, past_key_values_length: int = 0):
        """`input_ids_shape` is expected to be [bsz x seqlen]."""
        attention_mask = attention_mask.long()
        # create positions depending on attention_mask
        positions = (torch.cumsum(attention_mask, dim=1).type_as(attention_mask) * attention_mask).long() - 1
        # cut positions if `past_key_values_length` is > 0
        positions = positions[:, past_key_values_length:]
        return super().forward(positions + self.offset)
```
The source code in metaseq
```python
class LearnedPositionalEmbedding(nn.Embedding):
    """
    This module learns positional embeddings up to a fixed maximum size.
    Padding ids are ignored by either offsetting based on padding_idx
    or by setting padding_idx to None and ensuring that the appropriate
    position ids are passed to the forward function.
    """

    def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: int):
        super().__init__(num_embeddings, embedding_dim, padding_idx)
        if self.padding_idx is not None:
            self.max_positions = self.num_embeddings - self.padding_idx - 1
        else:
            self.max_positions = self.num_embeddings

    def forward(
        self,
        input: Tensor,
        incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
        positions: Optional[Tensor] = None,
    ):
        """Input is expected to be of size [bsz x seqlen]."""
        assert (positions is None) or (
            self.padding_idx is None
        ), "If positions is pre-computed then padding_idx should not be set."
        # we cannot use incremental state here because we must be aware of
        # padding.
        if positions is None and self.padding_idx is not None:
            positions = utils.make_positions(input, self.padding_idx)
        return F.embedding(
            positions,
            self.weight,
            self.padding_idx,
            self.max_norm,
            self.norm_type,
            self.scale_grad_by_freq,
            self.sparse,
        )
```
### Expected behavior
Why do we need to offset the positional embedding ids here, and why is the offset 2? I haven't found the same setting in Meta's source code. | 12-14-2022 08:24:47 | 12-14-2022 08:24:47 | Hey! That's an interesting question.
The main reason behind this is that the original code uses `nn.Embedding(..., ..., padding_idx)` where the argument is passed to the `torch module`. The `padding_idx` can be found in the tokenizer and is `1`.
Now in transformers we realised that using the `nn.Embedding`'s native `padding_idx` is actually not very good for training. There is a big thread where you can learn more about why here : #10200.
Since we are not doing this, we need to manually update the indexes, so we have the padding index plus the `-1` that is already there. That sums up to a shift of `2`.
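To make the shift concrete, here is a small numeric sketch (illustrative values only) of how OPT builds positions from the attention mask and then adds the offset of 2:
```python
import torch

attention_mask = torch.tensor([[1, 1, 1],
                               [0, 1, 1]])  # second row is left-padded

positions = (torch.cumsum(attention_mask, dim=1).type_as(attention_mask) * attention_mask).long() - 1
# tensor([[ 0,  1,  2],
#         [-1,  0,  1]])

print(positions + 2)
# tensor([[2, 3, 4],
#         [1, 2, 3]])  -> padded slots land on index 1 (the tokenizer's padding_idx), real tokens start at 2
```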
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,760 | closed | Patch for FlanT5-XXL 8bit support | # What does this PR do?
Fixes #20287.
In #20287, 3 patches were proposed here: https://github.com/huggingface/transformers/issues/20287#issuecomment-1342219429
* Patch 3 is already covered by https://github.com/huggingface/transformers/pull/20683
* I found patch 2 is actually unnecessary, because there's already a cast to float16 here: https://github.com/younesbelkada/transformers/blob/68a894a5875bfd958b8254afd3bbb23db9c2e813/src/transformers/models/t5/modeling_t5.py#L258-L260 which also applies in this case as we keep `self.wo` in `float32`.
* This PR contains patch 1, adjusted so it only applies a cast if `hidden_states` actually has a different `dtype` than the `wo` weights (see the sketch below).
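A minimal sketch of that guard (simplified, with hypothetical variable names; the real change lives inside the T5 feed-forward modules):
```python
import torch

def maybe_cast_to_wo_dtype(hidden_states: torch.Tensor, wo_weight: torch.Tensor) -> torch.Tensor:
    # only cast when the dtypes differ and the weight is not an int8 (8-bit quantized) parameter
    if hidden_states.dtype != wo_weight.dtype and wo_weight.dtype != torch.int8:
        hidden_states = hidden_states.to(wo_weight.dtype)
    return hidden_states
```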
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [n/a] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@younesbelkada @sgugger
| 12-14-2022 02:32:18 | 12-14-2022 02:32:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks so much for the fix @larsmennen ! I would personally advocate to focus only on `T5`, and we can add these patches later on if we figure out that the same issue occur for all subsidiary models! Can you revert the changes for longt5/perceiver & switch (ideally also keep the copy mechanism, so maybe add the `# Copied from` statements but use another model as t5 as reference (for e.g. for perceiver `# Copied from transformers.src.models.longt5. ...`) Also don't forget to run the styling changes ;) (`make fixup`) Thanks again!
That makes sense! done |
transformers | 20,759 | closed | Run 'GPT-J' failure due to download dataset fail (' ConnectionError: Couldn't reach http://eaidata.bmk.sh/data/enron_emails.jsonl.zst ' ) | ### System Info
Copy-and-paste the text below in your GitHub issue.
- huggingface_hub version: 0.11.1
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/taosy/.huggingface/token
- Has saved token ?: False
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: N/A
- Jinja2: N/A
- Graphviz: N/A
- Pydot: N/A
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce this issue:
1. git clone https://github.com/huggingface/transformers
2. cd transformers
3. python examples/pytorch/language-modeling/run_clm.py --model_name_or_path EleutherAI/gpt-j-6B --dataset_name the_pile --dataset_config_name enron_emails --do_eval --output_dir /tmp/output --overwrite_output_dir
### Expected behavior
1. This issue looks like it is due to "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst" not being reachable.
2. Is there another way to download the dataset "the_pile"?
3. Is there a way to cache the dataset "the_pile" locally instead of letting Hugging Face download it at runtime? | 12-14-2022 01:32:52 | 12-14-2022 01:32:52 | This is an issue with the dataset, so you should probably open this in the Datasets repo :-)<|||||>> This is an issue with the dataset, so you should probably open this in the Datasets repo :-)
Thanks for your help |
transformers | 20,758 | closed | Uninstall `torch_tensorrt` in `DeepSpeed` CI image for now | # What does this PR do?
Since we updated the CI to use PyTorch 1.13, it uses a base image `nvcr.io/nvidia/pytorch:22.04-py3` which contains `torch_tensorrt`.
This causes the CI to fail at test collection - i.e. the whole test suite fails from the beginning.
This PR uninstalls `torch_tensorrt` for now (previously, it was not installed for the DeepSpeed CI with the stable release of torch/deepspeed), so at least the CI can run (and we can see if any test would fail with PyTorch 1.13).
We will have to work on the `torch_tensorrt` issue though.
#### Current error message
```bash
==================================== ERRORS ====================================
______________ ERROR collecting tests/deepspeed/test_deepspeed.py ______________
ImportError while importing test module '/workspace/transformers/tests/deepspeed/test_deepspeed.py'.
``` | 12-13-2022 18:50:37 | 12-13-2022 18:50:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,757 | closed | Rework automatic code samples in docstrings | # What does this PR do?
This PR reworks the automatic code sample docstrings in two ways:
First, use the auto-classes for the preprocessing. As was decided internally, we want to document the model class used, but use the auto classes for preprocessing so users are not confused when a given model uses the tokenizer/feature extractor/image processor/processor of another.
Second, we don't want to showcase `hf-internal-testing` models in the docstrings. Those are tiny random models and it confuses users more than it helps. However when using the standard checkpoint we get doctest problems, so this PR removes the output/loss from the code example when it shouldn't be tested.
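For illustration, the resulting samples are expected to look roughly like this (a sketch, not the exact generated docstring):
```python
from transformers import AutoTokenizer, BertForSequenceClassification
import torch

# preprocessing goes through the Auto class, while the documented model class stays explicit
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
```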
Two examples are shown with BERT and DeBERTaV2, I can add more models to the PR if it suits everyone. | 12-13-2022 17:28:41 | 12-13-2022 17:28:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,756 | closed | min_new_tokens option in generate() implementation | ### Feature request
Similarly to the `max_new_tokens`, a `min_new_tokens` option would count only the newly generated tokens, ignoring the tokens of the input sequence (prompt) in decoder only models.
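For illustration, the requested option would mirror `max_new_tokens` (a sketch of the proposed usage; `min_new_tokens` was not an existing `generate` argument at the time of writing):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The movie was", return_tensors="pt")

# both limits would count only newly generated tokens, ignoring the prompt length
output_ids = model.generate(**inputs, min_new_tokens=20, max_new_tokens=100)
```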
### Motivation
The option `min_length` of the `generate()` method might be ambiguous for decoder only models. It is not clear if decoder only models consider the length of the input (prompt) for the `min_length` condition or only the newly generated tokens.
In Encoder Decoder (seq2seq) it is clear though.
### Your contribution
Not that I remember. But I could test it. | 12-13-2022 17:22:32 | 12-13-2022 17:22:32 | cc @gante <|||||>Hi @gonced8 👋 Thank you for raising this issue!
This is the same as [this issue](https://github.com/huggingface/transformers/issues/20614) (which is slightly older). I'm closing this issue to avoid duplication of comments/efforts, and [this particular comment](https://github.com/huggingface/transformers/issues/20614#issuecomment-1361225567) might be of your interest :) |
transformers | 20,755 | closed | [CI-Test] Fixes but also skips the mT5 tests | # What does this PR do?
Silences the `MT5` tests. They are irrelevant as the model behind them is T5, which functions well.
I want to delete everything, but I prefer to ask: is there a reason why we have these?
cc @sgugger | 12-13-2022 17:14:44 | 12-13-2022 17:14:44 | My opinion is that we should have test file(s) for each model type we have. MT5 has its own modeling file and model type, so we should keep it.
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>mT5 does not really have its own modeling file since it's just T5. I'm happy if the tests for that model are contained to a slow integration test to not bloat the CI, like what is done for similar models (Camembert for instance).<|||||>I am OK with the decision, but just want to point out that we will lose these models for tiny model creation, which is required for ONNX testing and pipeline testing (in the future). I just want every party involved agree this.
cc @LysandreJik, @Narsil and @lewtun <|||||>Ok with the decision as well! |
transformers | 20,754 | closed | Fixing the pipeline tutorial test. | # What does this PR do?
This is just #20746 with one more fix. The CI would not run after I push a commit to that PR. Sorry @Narsil ! | 12-13-2022 16:49:19 | 12-13-2022 16:49:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,753 | closed | Summarization Pipeline not outputting both text and token_ids | ### System Info
- Python version: 3.9.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
summariser = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
output = summariser("this is a test input", return_text=True, return_tensors=True)
print(output)
```
Returns (ignoring length warning message):
```
[{'summary_token_ids': tensor([ 2, 0, 42, 16, 10, 1296, 8135, 8135, 31, 5, 2730, 9,
42, 1566, 479, 152, 16, 45, 5, 78, 86, 52, 348, 450,
42, 1905, 11, 10, 1296, 422, 30, 5, 2730, 4, 42, 16,
5, 78, 9, 63, 761, 4, 152, 16, 5, 200, 9, 10,
651, 9, 1296, 1237, 30, 5, 7601, 9, 5, 1040, 479, 2])}]
```
### Expected behavior
According to the `SummarizationPipeline` [documentation](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.SummarizationPipeline.__call__) I would expect a list of dictionaries where each dictionary has both `summary_text` and `summary_token_ids` elements. Instead, only one is returned, even though both `return_text=True` and `return_tensors=True` are set in the call to the summariser.
I have done a little digging and think it's coming from the way these arguments are handled. In the `_sanitize_parameters` method [here](https://github.com/huggingface/transformers/blob/30d8919ab13dcc212cb00fbb4d3e969aee6c3fc5/src/transformers/pipelines/text2text_generation.py#L73), the `postprocess_params["return_type"]` gets set to either ` ReturnType.TENSORS` or ` ReturnType.TEXT` (or a different `return_type` specified as an input arg).
The `postprocess` method of the `Text2TextGenerationPipeline` [here](https://github.com/huggingface/transformers/blob/30d8919ab13dcc212cb00fbb4d3e969aee6c3fc5/src/transformers/pipelines/text2text_generation.py#L195) then takes the `return_type` arg, but can only output either the `summary_text` or `summary_token_ids`, and not both.
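A possible direction, shown here as a rough standalone sketch rather than the actual patch, would be to let the postprocessing step fill both keys when both flags are set:
```python
from enum import Enum

class ReturnType(Enum):
    TENSORS = 0
    TEXT = 1
    BOTH = 2  # hypothetical new member covering return_text=True + return_tensors=True

def build_record(return_type, output_ids, decode_fn, prefix="summary"):
    """Build one output dict, keeping both keys when both outputs are requested."""
    record = {}
    if return_type in (ReturnType.TENSORS, ReturnType.BOTH):
        record[f"{prefix}_token_ids"] = output_ids
    if return_type in (ReturnType.TEXT, ReturnType.BOTH):
        record[f"{prefix}_text"] = decode_fn(output_ids)
    return record
```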
I could have a go at raising a PR for this if you'd like.
Many thanks.
| 12-13-2022 16:18:23 | 12-13-2022 16:18:23 | Realised that this has been addressed in another issue, apologies. |
transformers | 20,752 | closed | RuntimeError: false INTERNAL ASSERT FAILED at "../c10/cuda/CUDAGraphsC10Utils.h":73, please report a bug to PyTorch. Unknown CUDA graph CaptureStatus32742 | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is my code sample
```python
# imports needed for this snippet (added for completeness)
import torch
from transformers import (
    DistilBertForSequenceClassification,
    DistilBertTokenizer,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

X_train = list(train['clean_review'])
X_test = list(test['clean_review'])
y_train = list(train['rating'])
y_test = list(test['rating'])

model_name = "distilbert-base-uncased"
tokenizer = DistilBertTokenizer.from_pretrained(model_name)
model = DistilBertForSequenceClassification.from_pretrained(model_name, num_labels=2)

X_train_tokenized = tokenizer(X_train, padding=True, truncation=True, max_length=512)
X_test_tokenized = tokenizer(X_test, padding=True, truncation=True, max_length=512)

class Dataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels=None):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        if self.labels:
            item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.encodings["input_ids"])

train_dataset = Dataset(X_train_tokenized, y_train)
test_dataset = Dataset(X_test_tokenized, y_test)

args = TrainingArguments(
    output_dir="output",
    evaluation_strategy="steps",
    eval_steps=500,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    seed=0,
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)

trainer.train()
```
### Expected behavior
I'm trying to create a classifier to classify drug reviews. There are 2 outcome labels: Positive or Negative. For this, I'm using the transformer ```DistilBert-base-uncased```. However, when the ```Trainer``` is run, it produces the following error:
```RuntimeError: false INTERNAL ASSERT FAILED at "../c10/cuda/CUDAGraphsC10Utils.h":73, please report a bug to PyTorch. Unknown CUDA graph CaptureStatus32742```.
I've searched into this a lot, but I haven't been able in finding a solution. Please help........ | 12-13-2022 15:21:02 | 12-13-2022 15:21:02 | We won't really be able to help you since your reproducer does not contain any way to get the dataset you're using.
We have a whole chapter around debugging the training pipeline in [our online course](https://huggingface.co/course/chapter8/4?fw=pt).<|||||>I had the same error in a similar setting. I installed CUDA 12.0 and now I get this error:
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
What can I do?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,751 | closed | TrOCR base-harge-stage1 Processor issue | Hi everyone,
I am trying to run [trocr large base stage1](https://huggingface.co/microsoft/trocr-large-stage1)
code can be found [here with Google Collab ](https://drive.google.com/file/d/1_yzu2iTWW5AWpkBFv6M590rTPEY6vmbU/view?usp=sharing)
Can anyone tell me what the possible causes of this issue are when loading the processor for feature extraction?
`processor_base_large_stage1 = TrOCRProcessor.from_pretrained('microsoft/trocr-large-stage1')`
the issue :
```
Exception                                 Traceback (most recent call last)
<ipython-input-7-30e2e2fd6c43> in <module>
----> 1 processor_base_large_stage1 = TrOCRProcessor.from_pretrained('microsoft/trocr-large-stage1')

6 frames
/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs)
    109         elif fast_tokenizer_file is not None and not from_slow:
    110             # We have a serialization from tokenizers which let us directly build the backend
--> 111             fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
    112         elif slow_tokenizer is not None:
    113             # We need to convert a slow tokenizer to build the backend

Exception: No such file or directory (os error 2)
```
Thanks for your interest in TrOCR! This is possibly linked to #15283<|||||>Thanks @NielsRogge & @sgugger
In this [#15283](https://github.com/huggingface/transformers/issues/15283) Cannot solve my issue may I miss something else?
But, what about using a different processor like a small stage1 or large handwritten while they are both initialized from Beit & RoBERTa models by updating the above line with
processor_base_large_stage1 = `TrOCRProcessor.from_pretrained('microsoft/trocr-base-stage1')`
no errors here, But does this will affect results when finetuning for a different language?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>The solution which is working for me I save the processor on a [collab ](https://drive.google.com/file/d/1_yzu2iTWW5AWpkBFv6M590rTPEY6vmbU/view?usp=sharing) by following `processor.save_pretrained("./processor") ` and then I am using it in my own environment (remote cluster server)
`processor = TrOCRProcessor.from_pretrained("/processor/") #microsoft/trocr-large-stage1`
Transformers version: 4.27.0 installed with new venv <|||||>I see the issue occurs because that model repo doesn't have fast tokenizer files. One can load the slow (Python-based) tokenizer as follows:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/trocr-large-stage1", use_fast=False)
```<|||||>Update, solved this issue by following the guide here: https://discuss.huggingface.co/t/convert-slow-xlmrobertatokenizer-to-fast-one/20876.
This works now!
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/trocr-large-stage1")
``` |
transformers | 20,750 | closed | Module 'keras.engine.data_adapter' has no attribute 'expand_1d' with non dummy loss | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-4.15.0-200-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.1+cu102 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@Rocketknight1
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the example code with a non dummy loss:
```python
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
from tensorflow.keras.optimizers import Adam
from datasets import load_dataset
import tensorflow as tf
import numpy as np
dataset = load_dataset("glue", "cola")
dataset = dataset["train"] # Just take the training split for now
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = dict(tokenizer(dataset["sentence"], return_tensors="np", padding=True))
labels = np.array(dataset["label"]) # Label is already an array of 0 and 1
# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5), loss='binary_crossentropy')
model.fit(tokenized_data, labels)
```
```python
Traceback (most recent call last):
File "test_mirrored.py", line 22, in <module>
model.fit(tokenized_data, labels)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/tmp/__autograph_generated_file1a59fb96.py", line 15, in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1476, in train_step
data = data_adapter.expand_1d(data)
AttributeError: in user code:
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1249, in train_function *
return step_function(self, iterator)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1233, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1222, in run_step **
outputs = model.train_step(data)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1476, in train_step
data = data_adapter.expand_1d(data)
AttributeError: module 'keras.engine.data_adapter' has no attribute 'expand_1d'
```
### Expected behavior
Training succesfully. | 12-13-2022 13:21:06 | 12-13-2022 13:21:06 | cc @Rocketknight1 and @gante <|||||>Reproduced this issue locally, seems to be an issue with TF 2.11 and doesn't occur in previous versions. Checking it out now!<|||||>@ZJaume @mcclunatic the fix has been merged - please try installing transformers from main with `pip install --upgrade git+https://github.com/huggingface/transformers.git` and see if the issue is resolved. If you encounter any further problems, please reopen this issue and let me know!<|||||>@Rocketknight1 I've just tested it in my notebook and the issue is indeed resolved! Thanks so much for fixing this so quickly!<|||||>came across this issue experiencing the same thing. upgraded from the primary branch worked for me as well 🚀 |
transformers | 20,749 | closed | fix missing () in is_flaky | # What does this PR do?
#20739 uses `is_flaky` without `()` at the end. This makes the actual test (the one being decorated) not run at all.
This PR adds `()`.
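For context, `is_flaky` is a decorator factory: it takes optional arguments and returns the real decorator, so the parentheses matter. A generic illustration of the pattern (not the actual implementation):
```python
import functools

def is_flaky(max_attempts: int = 5):
    """Decorator factory: must be called to obtain the actual decorator."""
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            for _ in range(max_attempts - 1):
                try:
                    return test_func(*args, **kwargs)
                except Exception:
                    continue
            return test_func(*args, **kwargs)
        return wrapper
    return decorator

@is_flaky()   # correct: the test body is wrapped and retried on failure
def test_with_parentheses():
    ...

@is_flaky     # wrong: the function is swallowed as `max_attempts` and its body never runs as a test
def test_without_parentheses():
    ...
```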
| 12-13-2022 12:41:14 | 12-13-2022 12:41:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,748 | closed | The wrong expression in function "squad_convert_example_to_features" | https://github.com/huggingface/transformers/blob/d4bf9ee1ff0e85cb24feec4dd160af39b623d4b9/src/transformers/data/processors/squad.py#L253
The type of `span["input_ids"]` is `List`, and `tokenizer.pad_token_id` equals 0, so `span["input_ids"] == tokenizer.pad_token_id` returns a plain bool result: `False`. This leads to `pad_token_indices = np.where(False)` being an empty array, which may not be the ideal outcome, because the latter expression
https://github.com/huggingface/transformers/blob/d4bf9ee1ff0e85cb24feec4dd160af39b623d4b9/src/transformers/data/processors/squad.py#L258
uses `pad_token_indices` to generate p_mask, which should be a `np.ndarray` with the padding positions set to 1.
I think the right expression may be as follows:
```Python
pad_token_indices = np.where(np.array(span["input_ids"]) == tokenizer.pad_token_id)
```
`np.array(span["input_ids"])` makes a `np.ndarray`, which can be compared with `tokenizer.pad_token_id` and generates a `np.ndarray` result. This makes the expression `p_mask[pad_token_indices] = 1` work as intended.
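A small demonstration of the difference (illustrative values, assuming `pad_token_id == 0`):
```python
import numpy as np

input_ids = [101, 2023, 102, 0, 0]  # plain Python list, padded with 0

print(input_ids == 0)                       # False: a list never equals an int
print(np.where(input_ids == 0))             # (array([], dtype=int64),) -> nothing would be masked
print(np.where(np.array(input_ids) == 0))   # (array([3, 4]),) -> the padding positions
```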
My English is poor, I hope you can understand my words. Thanks! | 12-13-2022 10:27:09 | 12-13-2022 10:27:09 | This is all legacy code that we are not maintaining anymore though.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,747 | open | Add GPT-2-climate | ### Model description
GPT-2 was pretrained on a climate change-related corpus consisting of over 500 thousand abstracts of top climate scientists' articles from trusted sources covering large temporal and spatial scales. The climate-gpt-2 model could further be used for downstream tasks in the climate change domain, including classification, fact-checking, and text generation of climate change-related texts.
paper: https://www.climatechange.ai/papers/neurips2022/27
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
@seashr | 12-13-2022 09:49:29 | 12-13-2022 09:49:29 | Hey @saeedashraf are we expecting to work on integrating this in Hugging Face? If so then I'll be interested in helping out.<|||||>Hi Manish,
Ye ... we would like to fully integrate this.
<|||||>Okay, so the GPT model itself is available in HuggingFace. Do we wish to incorporate this dataset? or just the training objective? |
transformers | 20,746 | closed | Fixing the pipeline tutorial test. | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-13-2022 09:38:46 | 12-13-2022 09:38:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Well it works suddently! I will merge this one. |
transformers | 20,745 | closed | "Loading weights from local directory" | I've been stuck in "Loading weights from local directory" when running the code (https://github.com/huggingface/transformers/blob/main/examples/research_projects/mlm_wwm/run_chinese_ref.py). My cmd for running run_chinese_ref.py is below:


| 12-13-2022 05:49:59 | 12-13-2022 05:49:59 | I download the LTP model from https://huggingface.co/LTP/small/tree/main . And put it in the LTP-small local folder.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,744 | closed | Add config for generating ONNX models for table-transformers | # What does this PR do?
Allows generating ONNX models for table-transformers.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge - This adds a small bit on top of the work you did in https://github.com/huggingface/transformers/pull/19614 to allow generating ONNX models for table transformers (e.g. `python -m transformers.onnx --model="microsoft/table-transformer-structure-recognition" table/`)
Without this addition, I was getting this error:
```
KeyError: "table-transformer is not supported yet.
Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'clip', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'data2vec-vision', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'groupvit', 'ibert', 'imagegpt', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'longformer', 'marian', 'mbart', 'mobilebert', 'mobilenet_v1', 'mobilenet_v2', 'mobilevit', 'mt5', 'm2m-100', 'owlvit', 'perceiver', 'resnet', 'roberta', 'roformer', 'segformer', 'squeezebert', 'swin', 't5', 'vision-encoder-decoder', 'vit', 'whisper', 'xlm', 'xlm-roberta', 'yolos']
are supported. If you want to support table-transformer please propose a PR or open up an issue.
``` | 12-13-2022 05:32:55 | 12-13-2022 05:32:55 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20744). All of your documentation changes will be reflected on that endpoint.<|||||>Note that we don't accept new model exports in Transformers, as we moved that part into the Optimum library. You should open a PR to add support there :-)<|||||>> Note that we don't accept new model exports in Transformers, as we moved that part into the Optimum library. You should open a PR to add support there :-)
Thank you @sgugger, will do! |
transformers | 20,743 | closed | Issue with Tokenizer (fast) splitting `<mask>` into constituent added special tokens despite mask token in vocab and in special tokens map | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- Huggingface_hub version: 0.11.0.rc0
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to Reproduce Behavior:
1.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("./tok", use_fast= True)
tokenizer(tokenizer.mask_token, add_special_tokens=False)
```
Evaluates to `{'input_ids': [11, 10], 'attention_mask': [1, 1]}`
2.
```
tokenizer_slow = AutoTokenizer.from_pretrained("./tok", use_fast= False)
tokenizer_slow(tokenizer_slow.mask_token, add_special_tokens=False)
```
Evaluates to `{'input_ids': [4], 'attention_mask': [1]}` (as expected).
Note that in either case, mask_token is `<mask>` and corresponds to mask_token_id 4.
Note also that the directory `tok` contains merges.txt, special_tokens_map.json, tokenizer_config.json, tokenizer.json, and vocab.json, and that additional_special_tokens and vocab contain `{... "m":11, "s":10 ,...}`, so I believe the Rust tokenizer is considering these special tokens before considering the `<mask>` token.
### Expected behavior
`tokenizer_slow(tokenizer_slow.mask_token, add_special_tokens=False)['input_ids'] == tokenizer(tokenizer.mask_token, add_special_tokens=False)['input_ids'] == [4]` would evaluate to `True`.
| 12-12-2022 21:28:26 | 12-12-2022 21:28:26 | Interesting, this is part of a series of bugs we have with different behaviours between fast and slow. Thanks for posting.<|||||>Thank you for your response @ArthurZucker . I would be happy to provide details about instantiation and behavior if needed.<|||||>Just to be able to reproduce correctly, could you tell me which tokenizer you are using? <|||||>RobertaTokenizerFast <|||||>Could you push your tokenizer to the hub? I can't really reproduce this now<|||||>I also faced the same issue when trained using the ByteLevelBPETokenizer suggested in https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb#scrollTo=IMnymRDLe0hi
<b>Tokenizer training</b>:
```
tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(iterator=LIST_OF_STRINGS, vocab_size=52000, min_frequency=2, special_tokens=[
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>",
])
```
<b>Tokenizer use</b>:
```
tokenizer = RobertaTokenizerFast(vocab_file="<VOCAB_FILE_PATH>",
merges_file="<MERGES_FILE_PATH>",
max_len=512)
```
This tokenizer gives me: ```['<s>', '<', 'mask', '>', '</s>']``` when I use:
```
tokenizer.convert_ids_to_tokens(tokenizer.encode(tokenizer.mask_token))
```
Is there a known fix for this? I am using Python 3.8, transformers 4.24.0 and tokenizers 0.13.1<|||||>I will have a look thanks 😉 <|||||>This will be related to the `tokenizer` library as both reports include `fast`. Not stale! <|||||>Thanks for your patience 🤗
1. In the current state, it is not a problem with the tokenizer itself as:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast= True)
tokenizer(tokenizer.mask_token, add_special_tokens=False)
```
correctly outputs `50264`.
2. Regarding the training of the tokenizer, the notebook works well for me and I cannot reproduce the issue that you are getting. Are you sure that you properly saved the vocabulary and merges with `tokenizer.save_model()` (using the rust tokenizer) ?

<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,742 | closed | Add docs xlm roberta | # What does this PR do?
Fixes [20055](https://github.com/huggingface/transformers/issues/20055)
I have opened a fresh PR to add resources for XLM-RoBERTa.
1. Added a link to classification task fine-tuning using Habana Gaudi blog
2. Fix typos
3. Remove TF and Flax XLMRobertaForCausalLM
4. Update my branch to reflect the latest main branch
Thanks for your help reviewing the changes! @stevhliu | 12-12-2022 21:27:07 | 12-12-2022 21:27:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,741 | closed | T5ForConditionalGeneration with BetterTransformer | ### Feature request
Got this error when trying out BetterTransformer:
"The Better Transformers implementation for the model T5ForConditionalGeneration has not been implemented yet. Please open an issue requesting the addition of this model with its `BetterTransformer`implementation."
### Motivation
I would like to speed up some text2text models with BetterTransformer
### Your contribution
I can at least test the feature once it is ready | 12-12-2022 20:43:23 | 12-12-2022 20:43:23 | I think it needs to be implemented in optimum, so you should move the issue there. cc @younesbelkada <|||||>Moving to optimum |
transformers | 20,740 | closed | Use tf.keras.Input to build TF models instead of actual Tensors | This PR does three things!
### Rework signatures for `serving()`
We use `None` for almost every dimension in our `serving()` signatures, which indicates that the dimension is variable. However, in several cases, the dimension is static **for a given model, but not for that whole model class.**
For example, an image model might have an attribute like `config.num_channels`. Once this value is known, the channels dimension of the input must have this value. Therefore, the serving signature should reflect it too, to ensure the exported functions work correctly!
Right now we use `tf.function` as a decorator on the `serving()` method, but when we do this then the serving signature must be fully static. This PR instead removes the decorator, and creates `self.serving` by calling `tf.function` in the `__init__` when the `config` is available and these values are known. This change should be 100% transparent from the user's perspective, but will fix several issues.
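As a standalone illustration of that pattern (the class, layers and config values below are invented for the example; they are not the actual modeling code):
```python
import tensorflow as tf


class ToyVisionModel(tf.keras.Model):
    def __init__(self, num_channels=3):
        super().__init__()
        self.pool = tf.keras.layers.GlobalAveragePooling2D(data_format="channels_first")
        # The channels dimension is static once the config is known, so it can go
        # into the signature; batch/height/width stay variable (None).
        signature = [tf.TensorSpec((None, num_channels, None, None), tf.float32, name="pixel_values")]
        # Instead of decorating serving() with @tf.function, wrap the eager method
        # here, once the config values are available.
        self.serving = tf.function(self.eager_serving, input_signature=signature)

    def eager_serving(self, pixel_values):
        return {"pooled_output": self.call(pixel_values)}

    def call(self, pixel_values):
        return self.pool(pixel_values)


model = ToyVisionModel()
print(model.serving(tf.zeros((2, 3, 8, 8))))
```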
### Use the serving signature instead of dummy inputs
Now that we have correct serving signatures, we can use them to build our models! This PR changes `from_pretrained` to use the serving signature instead of dummy inputs. Because the serving signature can have `None` dimensions, we cannot actually build real `Tensors` from that shape. However, we can create `tf.keras.Input` placeholders with that shape, and pass these through the model instead.
This is quite a significant change, because using placeholder inputs converts the build process from an eager forward pass into a TF compilation. However, this ensures that the model save signature (used when exporting a `SavedModel`) has correct variable dimensions, which we were only able to do with very hacky calls to `_set_save_spec()` until now. This change surfaced several bugs, but most model classes had no issues with it.
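A standalone sketch of the placeholder-build idea itself (again, not the actual `from_pretrained` code):
```python
import tensorflow as tf

# Keras Input placeholders can carry None (variable) dimensions, unlike real
# tensors, so the traced build keeps those dimensions symbolic.
placeholder = tf.keras.Input(shape=(None,), dtype=tf.int32, name="input_ids")

toy = tf.keras.Sequential(
    [
        tf.keras.layers.Embedding(100, 16),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(2),
    ]
)
# Calling the model on the placeholder creates all of its weights without ever
# running an eager forward pass on dummy data.
_ = toy(placeholder)
print([w.shape for w in toy.weights])
```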
### Clean up building and `name_scope`
Almost all of the issues that resulted from building-by-compilation are caused by our use of `name_scope` or `variable_scope`. I believe the issue is that TF creates variables in a slightly different way in an eager versus a compilation context, and I **think** this should be resolvable by refactoring our uses of `tf.name_scope` or `tf.compat.v1.variable_scope` to `tf.name_scope(use_resource=True)`, as all eager variables are `ResourceVariable` by default.
Also, compilation is slower than an eager forward pass when the inputs are small. However, we had several inefficiencies in `from_pretrained`, including a repeated build step that I don't think was necessary. By removing that, speed should be similar or even better than it was before!
- [x] Move serving signatures to a model `@property` called `serving_signature`.
- [x] Change the `build_with_dummies()` to use that serving signature
- [x] Delete the `serving()` method on all models.
- [x] In the base `init()`, create `self.serving` by compiling `self.eager_serving` with `self.serving_signature`
- [ ] Find models with failing builds and fill in the static dimensions for them.
- [ ] Allow custom signatures in `from_pretrained`
- [x] Check that tests using composite models pass
- [x] ~Deprecate dummies entirely?~ | 12-12-2022 18:34:01 | 12-12-2022 18:34:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20740). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this for now - there's a whole pile of issues with ensuring weight names line up that I haven't been able to fully resolve. I might come back to this, but the effort:reward ratio isn't really favourable right now! |
transformers | 20,739 | closed | Add decorator for flaky Donut tests | # What does this PR do?
Tests occasionally fail for Donut. Adding a decorator to handle this whilst waiting for a resolution to be merged in.
Issue raised here: https://github.com/huggingface/transformers/issues/20738
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 12-12-2022 18:01:44 | 12-12-2022 18:01:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>`is_flaky` has to be used as `is_flaky()`, otherwise the actual tests won't be run, and will always pass. |
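To make the point in the last comment above concrete, here is a sketch of the intended usage (assuming the `is_flaky` helper from `transformers.testing_utils`, which is written as a decorator factory):
```python
import unittest

from transformers.testing_utils import is_flaky


class ExampleTests(unittest.TestCase):
    @is_flaky()  # note the call: the factory returns the real retrying decorator
    def test_occasionally_fails(self):
        self.assertTrue(True)

    # Using a bare `@is_flaky` (no parentheses) is the mistake described above:
    # the test body is never actually executed, so the test always "passes".
```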
transformers | 20,738 | open | Flaky feature extraction tests for Donut | ### System Info
Note: I have observed with my own setup, but most recently this was seen in a CI environment
- `transformers` version: 4.26.0.dev0
- Platform: macOS-13.0.1-arm64-arm-64bit
- Python version: 3.9.15
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.14.0.dev20221118 (False)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
pytest tests/models/donut/test_feature_extraction_donut.py::DonutFeatureExtractionTest
### Expected behavior
Tests don't randomly fail e.g. like in [this CI run](https://app.circleci.com/pipelines/github/huggingface/transformers/53575/workflows/40f5b896-c941-4a9f-8946-26f54bb14505/jobs/644204).
There is an issue when the amount of padding is being calculated - some dimensions end up with negative padding. | 12-12-2022 18:00:22 | 12-12-2022 18:00:22 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This is not yet resolved. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,737 | closed | RWKV4neo | ### Model description
RWKV - Receptance Weighted Key Value
RWKV is a Sequence to Sequence Model that takes the best features of Generative PreTraining (GPT) and Recurrent Neural Networks (RNN) and performs Language Modelling (LM). It is used to generate text in an Auto Regressive (AR) manner.
This is a hybrid model.
It has Transformer-level performance without the quadratic attention mechanism. It borrows ideas from Attention Free Transformers, meaning the attention is linear in complexity, allowing for infinite context through the hidden state in RWKV_RNN.
There are two models for RWKV; they are referred to as modes.
RWKV_RNN: This mode is designed for running inference quickly.
RWKV_GPT: This mode is for training or fine tuning your model quickly.
In the first pass we will be implementing RWKV_RNN, although we can weight-share to have RWKV_GPT generate the initial context for RWKV_RNN.
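To make the RNN-mode idea a little more concrete, here is a toy sketch of the kind of linear-attention recurrence involved (heavily simplified; the real RWKV time-mixing has extra decay and bonus terms that are not reproduced here):
```python
import numpy as np

T, D = 5, 4
k = np.random.randn(T, D)  # per-token keys
v = np.random.randn(T, D)  # per-token values
w = 0.5                    # a single assumed decay value

num = np.zeros(D)  # running weighted sum of values
den = np.zeros(D)  # running sum of weights
for t in range(T):
    num = np.exp(-w) * num + np.exp(k[t]) * v[t]
    den = np.exp(-w) * den + np.exp(k[t])
    out_t = num / den  # RNN-mode output for step t, from a fixed-size state
# GPT mode computes the same out_t for every position at once from the full
# (k, v) sequence, which is what makes parallel training convenient.
```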
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
- [ ] Scaffolding
- [ ] API Discussion
### Provide useful links for the implementation
More from the Research and Development Repository: https://github.com/BlinkDL/RWKV-LM | 12-12-2022 17:07:58 | 12-12-2022 17:07:58 | @ArthurZucker
@younesbelkada
https://github.com/huggingface/transformers/issues/17230#issuecomment-1346779059<|||||>This is super cool !! 🔥
Really looking forward to it
Also as discussed with @leondz , [we'll probably also include a blogpost](https://github.com/huggingface/transformers/issues/17230#issuecomment-1338060393) explaining this new architecture!
Happy to help for the implementation and the blogpost! <|||||>Ok I am gonna just set this up on my linux machine m1 setup isn't ready I spent 2 hours on this gonna try again tomorrow sorry D: <|||||>> Ok I am gonna just set this up on my linux machine m1 setup isn't ready I spent 2 hours on this gonna try again tomorrow sorry D:
scratch that, it also didn't work, mainly because of my limited harddisk space. Gonna retry mac...<|||||>```
Building wheels for collected packages: transformers, onnx
Building editable for transformers (pyproject.toml) ... done
Created wheel for transformers: filename=transformers-4.26.0.dev0-0.editable-py3-none-any.whl size=31899 sha256=d4b123bfb17b8f11ab811da95d253a066a98638c492d76f4d00afa509dfc6d71
Stored in directory: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-ephem-wheel-cache-w5ucgfda/wheels/52/2c/02/9b0e2ee52910e61c69011870086c52ab4eaaa554c34005f48f
Building wheel for onnx (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [76 lines of output]
You have not agreed to the Xcode license agreements, please run 'sudo xcodebuild -license' from within a Terminal window to review and agree to the Xcode license agreements.
running bdist_wheel
running build
running build_py
running create_version
running cmake_build
Using cmake args: ['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f']
-- The C compiler identification is unknown
-- The CXX compiler identification is unknown
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - failed
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc - broken
CMake Error at /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/cmake/data/share/cmake-3.25/Modules/CMakeTestCCompiler.cmake:70 (message):
The C compiler
"/usr/bin/cc"
is not able to compile a simple test program.
It fails with the following output:
Change Dir: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeScratch/TryCompile-cd2Kpc
Run Build Command(s):/usr/bin/make -f Makefile cmTC_a6349/fast &&
You have not agreed to the Xcode license agreements, please run 'sudo xcodebuild -license' from within a Terminal window to review and agree to the Xcode license agreements.
CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
CMakeLists.txt:17 (project)
-- Configuring incomplete, errors occurred!
See also "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log".
See also "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py", line 332, in <module>
setuptools.setup(
File "/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 325, in run
self.run_command("build")
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py", line 223, in run
self.run_command("cmake_build")
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py", line 209, in run
subprocess.check_call(cmake_args)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for onnx
Running setup.py clean for onnx
Successfully built transformers
Failed to build onnx
Installing collected packages: onnx, tf2onnx, nbformat, matplotlib, markdown, jaxlib, jax, huggingface-hub, gql, google-auth, GitPython, Flask, docker, cryptography, cliff, botocore, arrow, alembic, aiohttp, accelerate, transformers, timm, s3transfer, pyOpenSSL, optuna, librosa, kubernetes, jinja2-time, ipython, google-auth-oauthlib, dash, csvw, chex, APScheduler, tensorboard, optax, hf-doc-builder, datasets, dash-bootstrap-components, cookiecutter, clldutils, boto3, tensorflow-macos, sigopt, segments, flax, evaluate, codecarbon, phonemizer
Attempting uninstall: onnx
Found existing installation: onnx 1.13.0
Uninstalling onnx-1.13.0:
Successfully uninstalled onnx-1.13.0
Running setup.py install for onnx ... error
error: subprocess-exited-with-error
× Running setup.py install for onnx did not run successfully.
│ exit code: 1
╰─> [78 lines of output]
You have not agreed to the Xcode license agreements, please run 'sudo xcodebuild -license' from within a Terminal window to review and agree to the Xcode license agreements.
running install
running build
running build_py
running create_version
running cmake_build
Using cmake args: ['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f']
-- The C compiler identification is unknown
-- The CXX compiler identification is unknown
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - failed
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc - broken
CMake Error at /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/cmake/data/share/cmake-3.25/Modules/CMakeTestCCompiler.cmake:70 (message):
The C compiler
"/usr/bin/cc"
is not able to compile a simple test program.
It fails with the following output:
Change Dir: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeScratch/TryCompile-WGiugk
Run Build Command(s):/usr/bin/make -f Makefile cmTC_3370b/fast &&
You have not agreed to the Xcode license agreements, please run 'sudo xcodebuild -license' from within a Terminal window to review and agree to the Xcode license agreements.
CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
CMakeLists.txt:17 (project)
-- Configuring incomplete, errors occurred!
See also "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log".
See also "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py", line 332, in <module>
setuptools.setup(
File "/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/install.py", line 546, in run
self.run_command('build')
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py", line 223, in run
self.run_command("cmake_build")
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py", line 209, in run
subprocess.check_call(cmake_args)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
Rolling back uninstall of onnx
Moving to /Users/michaelchung/Code/transformers/.env/bin/backend-test-tools
from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-rs3gpves/backend-test-tools
Moving to /Users/michaelchung/Code/transformers/.env/bin/check-model
from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-rs3gpves/check-model
Moving to /Users/michaelchung/Code/transformers/.env/bin/check-node
from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-rs3gpves/check-node
Moving to /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/onnx-1.13.0.dist-info/
from /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/~nnx-1.13.0.dist-info
Moving to /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/onnx/
from /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/~nnx
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> onnx
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
```
Fails at building wheels for transformers and onnx
I installed onnx manually using pip3
```
absl-py==1.3.0
aiosignal==1.3.1
appdirs==1.4.4
appnope==0.1.3
asttokens==2.2.1
astunparse==1.6.3
async-timeout==4.0.2
attrs==22.1.0
audioread==3.0.0
autopage==0.5.1
Babel==2.11.0
backcall==0.2.0
backoff==1.11.1
beautifulsoup4==4.11.1
binaryornot==0.4.4
black==22.3.0
cachetools==5.2.0
certifi==2022.12.7
cffi==1.15.1
chardet==5.1.0
charset-normalizer==2.1.1
click==8.1.3
cmaes==0.9.0
cmake==3.25.0
cmd2==2.4.2
colorama==0.4.6
coloredlogs==15.0.1
colorlog==6.7.0
commonmark==0.9.1
contourpy==1.0.6
cycler==0.11.0
dash-core-components==2.0.0
dash-html-components==2.0.0
dash-table==5.0.0
decorator==5.1.1
dill==0.3.4
distlib==0.3.6
dlinfo==1.2.1
dm-tree==0.1.7
exceptiongroup==1.0.4
execnet==1.9.0
executing==1.2.0
faiss-cpu==1.7.3
fastjsonschema==2.16.2
filelock==3.8.2
fire==0.5.0
flake8==6.0.0
flatbuffers==2.0.7
fonttools==4.38.0
frozenlist==1.3.3
fsspec==2022.11.0
fugashi==1.1.2a6
gast==0.4.0
gitdb==4.0.10
google-pasta==0.2.0
graphql-core==3.2.3
grpcio==1.51.1
h5py==3.7.0
humanfriendly==10.0
hypothesis==6.61.0
idna==3.4
importlib-metadata==4.13.0
iniconfig==1.1.1
ipadic==1.0.0
isodate==0.6.1
isort==5.11.2
itsdangerous==2.1.2
jedi==0.18.2
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.2.0
jsonschema==4.17.3
jupyter_core==5.1.0
kenlm==0.1
keras==2.11.0
keras-nlp==0.3.1
kiwisolver==1.4.4
language-tags==1.1.0
libclang==14.0.6
llvmlite==0.39.1
lxml==4.9.2
Mako==1.2.4
MarkupSafe==2.1.1
matplotlib-inline==0.1.6
mccabe==0.7.0
mpmath==1.2.1
msgpack==1.0.4
multidict==6.0.3
multiprocess==0.70.12.2
mypy-extensions==0.4.3
nltk==3.8
numba==0.56.4
numpy==1.23.5
oauthlib==3.2.2
onnx==1.13.0
onnxconverter-common==1.13.0
onnxruntime==1.13.1
onnxruntime-tools==1.7.0
opt-einsum==3.3.0
packaging==22.0
pandas==1.5.2
parameterized==0.8.1
parso==0.8.3
pathspec==0.10.3
pbr==5.11.0
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.3.0
Pint==0.16.1
plac==1.3.5
platformdirs==2.6.0
plotly==5.11.0
pluggy==1.0.0
pooch==1.6.0
portalocker==2.0.0
poyo==0.5.0
prettytable==3.5.0
prompt-toolkit==3.0.36
protobuf==3.19.6
psutil==5.9.4
ptyprocess==0.7.0
pure-eval==0.2.2
py-cpuinfo==9.0.0
py3nvml==0.2.7
pyarrow==10.0.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycodestyle==2.10.0
pycparser==2.21
pyctcdecode==0.4.0
pyflakes==3.0.1
Pygments==2.13.0
pygtrie==2.5.0
pyknp==0.6.1
pylatexenc==2.10
pynvml==11.4.1
pyparsing==3.0.9
pyperclip==1.8.2
pypng==0.20220715.0
pyrsistent==0.19.2
pytest==7.2.0
pytest-timeout==2.1.0
pytest-xdist==3.1.0
python-dateutil==2.8.2
python-slugify==7.0.0
pytz==2022.6
pytz-deprecation-shim==0.1.0.post0
PyYAML==5.4.1
ray==2.2.0
rdflib==6.2.0
regex==2022.10.31
requests==2.28.1
requests-oauthlib==1.3.1
requests-toolbelt==0.10.1
resampy==0.4.2
responses==0.18.0
rfc3986==1.5.0
rich==11.2.0
rjieba==0.1.11
rouge-score==0.1.2
rsa==4.9
sacrebleu==1.5.1
sacremoses==0.0.53
safetensors==0.2.6
scikit-learn==1.2.0
scipy==1.8.1
sentencepiece==0.1.97
six==1.16.0
smmap==5.0.0
sortedcontainers==2.4.0
soundfile==0.11.0
soupsieve==2.3.2.post1
SQLAlchemy==1.4.45
stack-data==0.6.2
stevedore==4.1.1
SudachiDict-core==20221021
SudachiPy==0.6.6
sympy==1.11.1
tabulate==0.9.0
tenacity==8.1.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorboardX==2.5.1
tensorflow-estimator==2.11.0
tensorflow-metal==0.7.0
tensorstore==0.1.28
termcolor==2.1.1
text-unidecode==1.3
threadpoolctl==3.1.0
timeout-decorator==0.5.0
tokenizers==0.13.2
tomli==2.0.1
toolz==0.12.0
torch==1.13.0
torchaudio==0.13.0
torchvision==0.14.0
tqdm==4.64.1
traitlets==5.7.1
typing_extensions==4.4.0
tzdata==2022.7
tzlocal==4.2
unidic==1.1.0
unidic-lite==1.0.8
uritemplate==4.1.1
urllib3==1.26.13
virtualenv==20.17.1
wasabi==0.10.1
wcwidth==0.2.5
websocket-client==1.4.2
Werkzeug==2.2.2
wrapt==1.14.1
xmltodict==0.13.0
xxhash==3.1.0
yarl==1.8.2
zipp==3.11.0
```
I am on 3.9.11
when running
conda install -c conda-forge onnxruntime
it says all the components were installed
<|||||>hmmm I accepted the xcode license agreement.
It's built transformers I think it fails to build onnx, but whats odd is it uninstalls onnx. Maybe I can do with out it and remove it out of setup.py?
```
Building wheels for collected packages: transformers, onnx
Building editable for transformers (pyproject.toml) ... done
Created wheel for transformers: filename=transformers-4.26.0.dev0-0.editable-py3-none-any.whl size=31899 sha256=e63397c8685e05971af9d4767e872200eb8393ed081e37582f8d24760daaca08
Stored in directory: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-ephem-wheel-cache-w75hif9g/wheels/52/2c/02/9b0e2ee52910e61c69011870086c52ab4eaaa554c34005f48f
Building wheel for onnx (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [67 lines of output]
fatal: not a git repository (or any of the parent directories): .git
running bdist_wheel
running build
running build_py
running create_version
running cmake_build
Using cmake args: ['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928']
-- The C compiler identification is AppleClang 14.0.0.14000029
-- The CXX compiler identification is AppleClang 14.0.0.14000029
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PythonInterp: /Users/michaelchung/Code/transformers/.env/bin/python3 (found version "3.9.11")
-- Found PythonLibs: /opt/homebrew/Frameworks/Python.framework/Versions/3.11/lib/libpython3.11.dylib (found version "3.9.11")
Generated: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/onnx/onnx-ml.proto
CMake Error at CMakeLists.txt:299 (message):
Protobuf compiler not found
Call Stack (most recent call first):
CMakeLists.txt:330 (relative_protobuf_generate_cpp)
-- Configuring incomplete, errors occurred!
See also "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log".
See also "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py", line 332, in <module>
setuptools.setup(
File "/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 325, in run
self.run_command("build")
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py", line 223, in run
self.run_command("cmake_build")
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py", line 209, in run
subprocess.check_call(cmake_args)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for onnx
Running setup.py clean for onnx
Successfully built transformers
Failed to build onnx
Installing collected packages: onnx, tf2onnx, nbformat, matplotlib, markdown, jaxlib, jax, huggingface-hub, gql, google-auth, GitPython, Flask, docker, cryptography, cliff, botocore, arrow, alembic, aiohttp, accelerate, transformers, timm, s3transfer, pyOpenSSL, optuna, librosa, kubernetes, jinja2-time, ipython, google-auth-oauthlib, dash, csvw, chex, APScheduler, tensorboard, optax, hf-doc-builder, datasets, dash-bootstrap-components, cookiecutter, clldutils, boto3, tensorflow-macos, sigopt, segments, flax, evaluate, codecarbon, phonemizer
Attempting uninstall: onnx
Found existing installation: onnx 1.13.0
Uninstalling onnx-1.13.0:
Successfully uninstalled onnx-1.13.0
Running setup.py install for onnx ... error
error: subprocess-exited-with-error
× Running setup.py install for onnx did not run successfully.
│ exit code: 1
╰─> [55 lines of output]
fatal: not a git repository (or any of the parent directories): .git
running install
running build
running build_py
running create_version
running cmake_build
Using cmake args: ['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928']
Generated: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/onnx/onnx-ml.proto
CMake Error at CMakeLists.txt:299 (message):
Protobuf compiler not found
Call Stack (most recent call first):
CMakeLists.txt:330 (relative_protobuf_generate_cpp)
-- Configuring incomplete, errors occurred!
See also "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log".
See also "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py", line 332, in <module>
setuptools.setup(
File "/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/install.py", line 546, in run
self.run_command('build')
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py", line 223, in run
self.run_command("cmake_build")
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py", line 209, in run
subprocess.check_call(cmake_args)
File "/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
Rolling back uninstall of onnx
Moving to /Users/michaelchung/Code/transformers/.env/bin/backend-test-tools
from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-1ylzipwt/backend-test-tools
Moving to /Users/michaelchung/Code/transformers/.env/bin/check-model
from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-1ylzipwt/check-model
Moving to /Users/michaelchung/Code/transformers/.env/bin/check-node
from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-1ylzipwt/check-node
Moving to /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/onnx-1.13.0.dist-info/
from /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/~nnx-1.13.0.dist-info
Moving to /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/onnx/
from /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/~nnx
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> onnx
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
```<|||||>Alright dev environment installed, the key here was not to use conda but miniforge<|||||>Ok draft created <|||||>Do you know how to convert .pth model to config.json/pytorch_model.bin for RWKV4neo?<|||||>I have a conversion script + draft that has consistent logit ordering with the official implementation here:
conversion script: https://github.com/tensorpro/transformers/blob/rwkv_draft/src/transformers/models/rwkv4_neo/convert_rwkv_original_pytorch_checkpoint_to_pytorch.py
model in torch: https://github.com/tensorpro/transformers/blob/rwkv_draft/src/transformers/models/rwkv4_neo/modeling_rwkv4_neo.py
I can clean it up and turn it into a PR if that would help?<|||||>Sure, there is also #22797 that should be in a pretty good state! I'm about to review it but feel free to add your touch to it if you feel like it! <|||||>Oh cool, that one looks awesome!<|||||>> I have a conversion script + draft that has consistent logit ordering with the official implementation here:
>
> conversion script: https://github.com/tensorpro/transformers/blob/main/src/transformers/models/rwkv4_neo/convert_rwkv_original_pytorch_checkpoint_to_pytorch.py model in torch: https://github.com/tensorpro/transformers/blob/main/src/transformers/models/rwkv4_neo/modeling_rwkv4_neo.py
>
> I can clean it up and turn it into a PR if that would help?
@tensorpro I could really use your scripts but I get 404 when I try to access those links. :/<|||||>Ah sorry, the links broke when I changed the branch I was working in. I edited the comment to point to the right branch
That said, you may want to use the code in #22797 since it will be closer to the official HF version and already supports CUDA accelerated WKV. |
transformers | 20,736 | closed | rename `layoutlm_job` to `exotic_models_job` | # What does this PR do?
This job now contains tests for `nat` and `dinat` models. Rename the job to make it more clear. | 12-12-2022 17:03:03 | 12-12-2022 17:03:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,735 | closed | Fix AdamWeightDecay for TF 2.11 | Fixes #20724 | 12-12-2022 16:56:36 | 12-12-2022 16:56:36 | @gante take a quick look at this one too, please! Our `AdamWeightDecay` doesn't work with the new Optimizer base class in TF 2.11, so I moved it onto the legacy class for now. When I have more time I should clean this up properly, and possibly just use [the official AdamW](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental/AdamW) instead.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>The same issue occurred with the Summarization tutorial. [https://huggingface.co/course/chapter7/5?fw=tf#models-for-text-summarization](url)<|||||>Thanks @el-profesor-04 - can you confirm that the issue goes away with you install the latest version of `transformers` from main with `pip install --upgrade git+https://github.com/huggingface/transformers.git`? |
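For the `AdamWeightDecay` fix above, a sketch of the kind of compatibility shim involved (the exact change in the library is not shown here; TF 2.11 keeps the old optimizer implementation under `tf.keras.optimizers.legacy`):
```python
import tensorflow as tf

# TF >= 2.11 exposes the old-style optimizer base classes under `legacy`,
# while older releases may only have the top-level ones.
if hasattr(tf.keras.optimizers, "legacy"):
    AdamBase = tf.keras.optimizers.legacy.Adam
else:
    AdamBase = tf.keras.optimizers.Adam

opt = AdamBase(learning_rate=1e-3)
```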
transformers | 20,734 | open | `AddedToken`'s arguments are ignored when passed to the `add_tokens` method of slow tokenizers | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The explanations of the bug and its reproduction are contained in the following google colab: https://colab.research.google.com/drive/19SS6Tzlgo0vntFtM6ZsCYq8BNZ5Dy1cS?usp=sharing
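An inline sketch of the kind of comparison the colab makes (the checkpoint and token below are illustrative, not taken from the notebook):
```python
from tokenizers import AddedToken
from transformers import AutoTokenizer

slow = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
fast = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)

tok = AddedToken("new_tok", lstrip=True, rstrip=True)
slow.add_tokens([tok])
fast.add_tokens([tok])

# If the AddedToken arguments were honoured by both, the surrounding spaces
# would be handled identically; the issue reports that the slow tokenizer
# silently drops them.
print(slow.tokenize("hello new_tok !"))
print(fast.tokenize("hello new_tok !"))
```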
### Expected behavior
I would expect the fast and slow tokenizers to treat the `AddedToken`'s arguments in the same way.
I think the loss of information for the slow tokenizer occurs at this line: https://github.com/huggingface/transformers/blob/a413c725d40027134f28f92974ad7d61751f5640/src/transformers/tokenization_utils.py#L411 | 12-12-2022 15:38:54 | 12-12-2022 15:38:54 | I have not dropped this, its still on my TODO list. There are a lot of linked issues!<|||||>(The update is gonna take longer as I am refactoring the tokenizers)<|||||>I had the same issue. It seems that `AddedToken` is implemented [here](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/tokenization_utils_base.py#L80), if the package `tokenizers` is not available. However, the python implementation using dataclasses does not behave the same way as the rust implementation in `tokenizers`. |
transformers | 20,733 | closed | Verify that a test in `LayoutLMv3` 's tokenizer is checking what we want | I'm taking the liberty of opening an issue to share a question I've been keeping in the corner of my head, but now that I'll have less time to devote to `transformers` I prefer to share it before it's forgotten.
In the PR where the `LayoutLMv3` model was added, I was not very sure about the target value used for one of the tests that had to be overridden (the value was 1 in one of the previous commits and then changed to 0). The comment I am referring to is this one: https://github.com/huggingface/transformers/pull/17060#discussion_r872265358 .
Might be of interest to @ArthurZucker | 12-12-2022 15:17:36 | 12-12-2022 15:17:36 | Thanks! Will try to get familiar with this 😉 |
transformers | 20,732 | closed | Convert tokenizer outputs for Keras in doc example | The TF examples in `training.mdx` don't turn the tokenizer outputs into a `dict`, so Keras gets confused.
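A sketch of the kind of change (simplified; the exact snippet in `training.mdx` is not reproduced here):
```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized = tokenizer(["hello world", "goodbye"], padding=True, return_tensors="np")

# The BatchEncoding object trips up Keras' input handling, so convert it to a
# plain dict before handing it to fit():
features = dict(tokenized)
labels = np.array([0, 1])

# model.fit(features, labels)  # the model itself is built elsewhere in the tutorial
```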
Fixes #20709 | 12-12-2022 15:07:34 | 12-12-2022 15:07:34 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20732). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,731 | closed | Disambiguate test for required_input in tokenization base file. | # What does this PR do?
This is a prime example of why we should never rely on Python bool conversion magic, as it often fails with array-like types.
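A small illustration of the failure mode (not the actual test code):
```python
import numpy as np

required_input = np.array([101, 2023, 102])

# Truthiness of a multi-element array is ambiguous and raises a ValueError:
try:
    if required_input:
        pass
except ValueError as err:
    print(err)  # "The truth value of an array with more than one element is ambiguous..."

# An explicit check sidesteps the bool conversion entirely:
if required_input is not None and len(required_input) > 0:
    print("has input")
```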
Fixes #19136 | 12-12-2022 14:36:19 | 12-12-2022 14:36:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,730 | closed | Fix AutoModelTest.test_model_from_pretrained | # What does this PR do?
From @Narsil
> Hmmmm it's because safetensors is installed in that environment and it's loading the safetensors weights, which indeed have 1 less unexpected key (the duplicated key was removed)
I tried `gpt2` and `roberta-base`; they also have similar issues on some keys among `missing_keys`, `unexpected_keys`, etc.
I decided to stop trying and just give a condition. | 12-12-2022 13:16:52 | 12-12-2022 13:16:52 | The other way to fix would be to modify the original files to remove the duplicated layer. I think that was suggested by @sgugger .
I personally am ok with the proposed fix, it does seem the easiest route.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,729 | closed | Adding ValueError when incompatible parameters are used. | # What does this PR do?
Adding ValueError when incompatible parameters are used.
https://github.com/huggingface/transformers/pull/20662#pullrequestreview-1210120116
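A sketch of the kind of guard being added (the parameter names come from the linked discussion; the function below is a simplified stand-in for the pipeline's parameter handling, not the actual diff):
```python
def _sanitize_parameters(return_text=None, return_full_text=None, return_tensors=None, **kwargs):
    # These options describe mutually exclusive output formats, so reject
    # combinations that cannot both be honoured.
    if return_text is not None and return_full_text is not None:
        raise ValueError("`return_text` is mutually exclusive with `return_full_text`")
    if return_tensors is not None and return_text is not None:
        raise ValueError("`return_text` is mutually exclusive with `return_tensors`")
    return kwargs


try:
    _sanitize_parameters(return_text=True, return_full_text=False)
except ValueError as err:
    print(err)
```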
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-12-2022 13:08:57 | 12-12-2022 13:08:57 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20729). All of your documentation changes will be reflected on that endpoint.<|||||>@Narsil from what I read in #20662, it seems that what should actually be incompatible may be `return_text` and `return_tensors` instead of `return_text` and `return_full_text`?<|||||>Oh, them too, will create a new PR. But `return_text` and `return_full_text` are also exclusive I think. |
transformers | 20,728 | closed | Redundant to_channel_dimension_format() call makes preprocessing fail in case the image has height of 1 pixel | In the `resize()` function in image_transforms.py, at line 267, I think `image = to_channel_dimension_format(image, ChannelDimension.LAST)` is redundant, as this conversion is also applied in the following `to_pil_image()`.
This redundant call actually makes the clip preprocessing fail in special cases. The problem can be reproduced with the following code snippet:
```python
import torch
from transformers.models.clip import CLIPFeatureExtractor
vision_processor = CLIPFeatureExtractor.from_pretrained('openai/clip-vit-large-patch14')
images = [
    torch.rand(size=(3, 2, 10), dtype=torch.float),
    torch.rand(size=(3, 10, 1), dtype=torch.float),
    torch.rand(size=(3, 1, 10), dtype=torch.float),
]

for image in images:
    processed_image = vision_processor(images=image, return_tensors="pt")['pixel_values']
    print(processed_image.shape)
    assert processed_image.shape == torch.Size([1, 3, 224, 224])
```
The last image has a height of 1 pixel.
The second call to `to_channel_dimension_format()` will transpose the image, and the height dimension is wrongly treated as the channels dimension afterwards. Because of this, the following `normalize()` step results in an exception.
An image of height 1 pixel honestly doesn't make much sense, but it happened in my training on visual genome region descriptions and took me a while to track down the problem.
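To make the suggestion concrete, here is a rough, hypothetical sketch of the shape of the fix (this is not the actual `resize()` implementation, just an illustration of dropping the redundant conversion and letting `to_pil_image()` normalize the channel ordering once):

```python
import numpy as np

def resize(image, size, resample=None):
    # image = to_channel_dimension_format(image, ChannelDimension.LAST)  # redundant: to_pil_image() already does this
    pil_image = to_pil_image(image)  # single, authoritative conversion to channels-last
    resized = pil_image.resize((size[1], size[0]), resample=resample)  # PIL expects (width, height)
    return np.array(resized)
```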
# What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- vision models: @amyeroberts and @NielsRogge
-->
| 12-12-2022 11:29:12 | 12-12-2022 11:29:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @amyeroberts <|||||>sure thing! |
transformers | 20,727 | closed | Add custom stop token ids for generation | ### Update (using eos_token_id instead): https://github.com/huggingface/transformers/pull/20727#issuecomment-1355219288
# What does this PR do?
Hi 🤗 team!
This adds support for stop token ids, e.g. `model.generate(..., stop_token_ids=[10, 25])`, plus syntactic sugar for the generation pipelines, e.g. `pipeline(..., stop_tokens=['\n'])`. When generation detects the specified token ids for all examples in the batch, it stops.
## Rationale
* It's common to set a stop id/token for text generation tasks. For example, for dialogue we may want to stop generation when the speaker changes.
* It's convenient to have arguments for stop tokens, similar to `max_new_tokens`, without digging into `StoppingCriteria`.
* Some servers like [DeepSpeed MII](https://github.com/microsoft/DeepSpeed-MII/issues/109) use gRPC, and it's difficult to pass `StoppingCriteria` objects.
## Usage Example
```python
# in pipeline
prompt = """Hello I believe in"""
text_generator = pipeline("text-generation", model="hf-internal-testing/tiny-random-gpt2", stop_tokens=[' fe'])
text_generator(prompt)
# from generate
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("hf-internal-testing/tiny-random-gpt2")
gpt2_model = GPT2LMHeadModel.from_pretrained("hf-internal-testing/tiny-random-gpt2").to(torch_device)
input_ids = gpt2_tokenizer(prompt, return_tensors="pt").input_ids.to(torch_device)
stop_token_ids = gpt2_tokenizer.encode(" fe")
gpt2_model.generate(input_ids=input_ids, stop_token_ids=stop_token_ids)
```
## How to Test
```shell
pytest tests/generation/test_stopping_criteria.py::StoppingCriteriaTestCase::test_stop_token_id_criteria
pytest tests/generation/test_utils.py::GenerationIntegrationTests::test_stop_token_ids_stopping_criteria
pytest tests/pipelines/test_pipelines_text_generation.py::TextGenerationPipelineTests::test_stop_token_ids_stopping_criteria
pytest tests/pipelines/test_pipelines_text_generation.py::TextGenerationPipelineTests::test_stop_tokens_stopping_criteria
```
## Related PR(s)
There is a `stop_sequence` argument for the `TextGeneration` pipeline: https://github.com/huggingface/transformers/pull/18444
But it's limited to a single token, only in the text generation pipeline, and overwrites `eos_token_id`. Instead, we use `StoppingCriteria` directly.
This PR is a bit overlapping with above, so please let me know if this approach is not optimal.
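For readers less familiar with the mechanism this builds on, a minimal sketch of a stop-token criterion written against the public `StoppingCriteria` API is shown below. The class name and the batch-level stopping rule are illustrative assumptions, not the exact implementation in this PR.

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopTokenIdsCriteria(StoppingCriteria):
    """Illustrative only: stop once every sequence in the batch has emitted a stop token id."""

    def __init__(self, stop_token_ids):
        self.stop_token_ids = set(stop_token_ids)

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # Look at the last generated token of each sequence in the batch.
        last_tokens = input_ids[:, -1].tolist()
        return all(token in self.stop_token_ids for token in last_tokens)

# usage: model.generate(input_ids, stopping_criteria=StoppingCriteriaList([StopTokenIdsCriteria([198])]))
```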
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). **(☢️ noting I've tried to update the docs from the instructions, but they don't seem correct)**
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @Narsil
| 12-12-2022 05:20:03 | 12-12-2022 05:20:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante <|||||>Think we could actually allow `eos_token_id` to be both an integer and a list of integers no ? Both in the config and in the input.
<|||||>Hi @tokestermw 👋
Like my colleagues, I also think this would be a helpful feature! I also agree with @patrickvonplaten: allowing the existing argument (`eos_token_id`) to also accept a list of integers would result in a cleaner interface and fewer lines of code to maintain :) It is also easier to port to TF/FLAX, which do not use `StoppingCriteria`.
In a nutshell, if `eos_token_id` can be a list of integers, we can replace [the existing check](https://github.com/huggingface/transformers/blob/26dd041c6e45379141302e2d293ab4cd9cf805d4/src/transformers/generation/utils.py#L2154) with
```python
unfinished_sequences = unfinished_sequences.mul((sum(next_tokens == i for i in eos_token_id)).long())
```
as long as we always cast `eos_token_id` to a list before the generation loop. In other words, 2 lines of change (per generation method) would probably do the trick!
@tokestermw WDYT?<|||||>Got it thanks for the suggestion! I can certainly make it so we use
eos_token_id.
> It is also easier to port to TF/FLAX, which do not use StoppingCriteria.
ah good to know :)
I can look at this again this weekend
<|||||>Hi @gante,
* Made `eos_token_id` into `Union[int, List[int]]` type. I convert into a list at the beginning of the respective functions. Also, looks like `eos_token_id` is used in a few more places, e.g. `beam_search.py`.
* Some parts where we insert the `eos_token_id`, I only insert the first token id, [here](https://github.com/tokestermw/transformers/pull/1/files#diff-f208583b02617e3684c26a002cea6640d42ddd04bbd49eb55fa8f8460e701586R857) and [here](https://github.com/tokestermw/transformers/pull/1/files#diff-26783ca033d92b4ce2e01eb691dbf5b05dd972b0cbfdc69fc726cf77a9dcb011R1343)
You can see the changes here: https://github.com/tokestermw/transformers/pull/1/files
If this change looks good, I can merge into this PR, and start polishing (fixing tests, docs, remove dead code, etc.).
thanks!
<|||||>@tokestermw that's a comprehensive set of changes, it looks great to me! ❤️ <|||||>I was actually thinking more about directly adapting all those lines: https://github.com/huggingface/transformers/blob/17292440c069118fbdb992b9a17da2098fab5b87/src/transformers/generation/utils.py#L2154
To **also** accept a list of `eos_token_ids` so that we don't even need a new criterion class for this but can just make it work out of the box by passing `model.generate(..., eos_token_id=[0, 1])`<|||||>Awesome this looks nice to me, @gante @sgugger ok for you?<|||||>Is this feature already available on the transformers version available through pip (4.25.1)? I have tried enabling it and the generation continued on even though I set `eos_token_id`, in my case to
tokenizer.encode('\n', return_tensors='pt')[1]
(I'm also not sure why 2 integers are returned by `encode` instead of just 1)
**EDIT**
Nevermind, I got it working
n = tokenizer.encode('\n', return_tensors='pt')[0][1]
output = model.generate(input_ids, eos_token_id=n).cuda() |
transformers | 20,726 | closed | Enable `decoder_attention_mask` in `generate` function | # What does this PR do?
The documentation for `model_kwargs` in the generate function states that model specific keyword arguments will be forwarded to the `forward` function of the model.
https://github.com/huggingface/transformers/blob/4cf38148dc98b3df1df6eb2f06e4f02448026b19/src/transformers/generation/utils.py#L1216-L1219
However, this is currently not the case for the model specific keyword argument `decoder_attention_mask` when it is passed to the `generate` function.
This PR makes the necessary adjustments such that `decoder_attention_mask` can be passed to `generate` and will be used as an optional input in the `forward` function. More precisely, it is now possible to specify `decoder_input_ids` and `decoder_attention_mask` in the `generate` function for `BartForConditionalGeneration` such that some tokens in `decoder_input_ids` are masked.
To illustrate the change, we can use the (slightly modified) mask filling example from https://huggingface.co/docs/transformers/model_doc/bart#mask-filling
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
sentence = "UN Chief Says There Is No <mask> in Syria"
batch = tokenizer(sentence, return_tensors="pt")
padding_size = 3
decoder_input_ids = torch.tensor(
    [[model.config.decoder_start_token_id] + padding_size * [model.config.pad_token_id] + [model.config.bos_token_id]],
    dtype=torch.long,
)
decoder_attention_mask = torch.where(decoder_input_ids == model.config.pad_token_id, 0, 1)
generated_ids = model.generate(
    input_ids=batch["input_ids"], use_cache=False, max_new_tokens=20,
    decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask,
)
decoded = tokenizer.batch_decode(generated_ids)
for seq in decoded:
    print(seq)
```
Note that `use_cache=False` is required when `decoder_input_ids` is manually specified.
Output without PR:
```
</s><pad><pad><pad><s><s>,</s>
```
Clearly, padding tokens are not ignored because `decoder_attention_mask` is not forwarded to the `forward` function. Hence, the strange output.
Output with PR:
```
</s><pad><pad><pad><s>UN Chief Says There Is No Plan B for Peace in Syria</s>
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://discuss.huggingface.co/t/is-t5-expected-to-ignore-padding-tokens-in-decoder-input-ids-when-decoder-attention-mask-is-not-provided/10271
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
| 12-11-2022 16:20:44 | 12-11-2022 16:20:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey! Make sure to run `make fixup` to pass all the quality tests 😉 <|||||>Thank you both for the quick replies. I’m having difficulties setting up the development environment on my M1 Mac, similar to what is mentioned in #18355. I managed to run `make fixup` but there was still an error regarding TensorFlow. However, one style correction has been made. Please let me know if you have any other suggestions.<|||||>Style is okay now, but it seems like the `repo-consistency` is not. Running `make repo-consistency` should solve this <|||||>Thanks for the guidance. A small change was required for `repo-consistency`. I think it should be fine now.<|||||>To add some context, I needed this feature when implementing the _self-debiasing_ method from [this paper](https://arxiv.org/abs/2103.00453) with `BartForConditionalGeneration`.
This method uses a prefix in order to change the probability distribution of the generated tokens towards a specific bias. Given this biased probability distribution, it is possible to adjust the probability distribution of the text without the prefix using `LogitsProcessor` such that the generated text is less biased. The following images are from the original paper:
<p float="left">
<img src="https://user-images.githubusercontent.com/51292066/207309163-73dd40a1-99ad-449d-8220-1c2fcb5863ec.png" width="400" />
<img src="https://user-images.githubusercontent.com/51292066/207308497-3f93fe02-a162-4f27-b7a1-6fee0a5861db.png" width="400" />
</p>
When using this method with `BartForConditionalGeneration` we need to make sure that the generated tokens start at the same position for the text with and without the prefix because they are both processed in the same batch. To achieve this, it is necessary to manually specify `decoder_input_ids` where padding is applied to the left and padding tokens are ignored with `decoder_attention_mask`.
To illustrate this with an example, let's take the text `This guy is a jerk because he never listens` and mask the rude word jerk. As encoder input for `BartForConditionalGeneration`, we use
```
['This guy is a<mask> because he never listens', 'The following text contains rude, disrespectful, or unreasonable language:\nThis guy is a<mask> because he never listens']
```
As decoder input, we use
```
['', 'The following text contains rude, disrespectful, or unreasonable language:\n']
```
Padding is applied to the decoder input such that the decoder starts generating new tokens at the same position for both inputs. For `decoder_input_ids`, we get
```
[[2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0], [2, 0, 133, 511, 2788, 6308, 21820, 6, 26401, 6, 50, 24941, 2777, 35, 50118]]
```
and ignore padding by using the following `decoder_attention_mask`
```
[[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
```
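For completeness, here is an illustrative sketch of running both rows through `generate` with exactly these arrays (this is just a usage illustration reusing the ids and masks above, not the integration test added in this PR):

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

encoder_inputs = tokenizer(
    [
        "This guy is a<mask> because he never listens",
        "The following text contains rude, disrespectful, or unreasonable language:\nThis guy is a<mask> because he never listens",
    ],
    return_tensors="pt",
    padding=True,
)

# The two decoder prompts from above: [2, <pad> * 13, 0] and the tokenized prefix.
decoder_input_ids = torch.tensor(
    [[2] + [1] * 13 + [0],
     [2, 0, 133, 511, 2788, 6308, 21820, 6, 26401, 6, 50, 24941, 2777, 35, 50118]]
)
decoder_attention_mask = torch.where(decoder_input_ids == model.config.pad_token_id, 0, 1)

generated_ids = model.generate(
    **encoder_inputs,
    decoder_input_ids=decoder_input_ids,
    decoder_attention_mask=decoder_attention_mask,
    use_cache=False,
    max_new_tokens=20,
)
print(tokenizer.batch_decode(generated_ids))
```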
<|||||>Hi @samuelpullely 👋
Thank you for the PR and the use-case example with self-debiasing! The PR looks good to me, but it's missing one small detail: tests 🤗 If possible, I'd like to ask you to add an integration test, perhaps in the [Bart integration tests section](https://github.com/huggingface/transformers/blob/76d02feadbc99bbccd86e67b02728338a2469f22/tests/models/bart/test_modeling_bart.py#L841). It can be a copy of your example above -- it would go a long way ensuring we don't regress. Let me know if you need a hand.<|||||>Hi @gante, thanks for your feedback! I’ve added an integration test. Please let me know if I should adjust anything.<|||||>@sgugger can we merge this PR? :) |
transformers | 20,725 | closed | Add TVLT | # What does this PR do?
Add TVLT to transformers. This PR implements an original version of TVLT, the Textless Vision-Language Transformer, from the paper at https://arxiv.org/abs/2209.14156
I have provided the model weights here https://huggingface.co/TVLT/tvlt-base
# Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Yes.
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? It is discussed on slack.
- [x] Did you make sure to update the documentation with your changes? I have added the documentation. Please check if they make sense.
- [x] Did you write any new necessary tests? Yes. Please check if they make sense.
# Who can review?
Anyone. | 12-11-2022 09:26:41 | 12-11-2022 09:26:41 | Thanks a lot for contributing this model! One first remark is that we now try to avoid all-capitals in model class names, to keep all model names consistent across the library, so could you rename your classes to `TvltModel`, `TvltConfig` etc... (like `BertModel`, `BertConfig`).
Leaving you in the hands of @sanchit-gandhi and @ArthurZucker to finish cleaning the PR and all the tests.<|||||>@NielsRogge @ArthurZucker
I have a question.
I converted the pixel_feature_extractor to an image processor class. Should I write a tester for the image processor? I couldn't find any example of one for other models.
Also, feature extractors will be deprecated in v5, so I don't have to create one whose parent class is ImageProcessor, right?<|||||>Another question: ProcessorMixin is a multimodal processor, but its initialization requires a tokenizer.
However, TVLT only takes in vision and audio. Is there any way, or a processor, that can handle this?
Would it be good to propose supporting a combination of two feature extractors in the initialization of ProcessorMixin?<|||||>Hey!
1. With regards to testing, you should indeed add a tester. While the `FeatureExtractor`s were replaced with `ImageProcessor`s, I think we are still using the `test_feature_extraction_...` naming.
2. I think it indeed makes sense to support multiple `FeatureExtractors`, and it might also be good to support multiple tokenizers. We could also create a `MultiModalProcessorMixin` class, but it might be a bit too big for this PR.
I think the best solution for now is to just make your class inherit from `PushToHubMixin` and copy the relevant functions accordingly! <|||||>> With regards to testing, you should indeed add a tester. While the FeatureExctractor were replaced with ImageProcessors I think we are still using the test_feature_extraction_...
We can call them `test_image_processing_xxx.py` now, see #20716 as an example
> I think it indeed makes sense to support multiple FeatureExtractors, and it might also be good to support multiple tokenizers. We could also create a MultiModalProcessorMixin class, but it might be a bit too big for this PR.
I think the best solution for now is to just make your class inherit from PushToHubMixin and copy the relevant functions accordingly!
There's no need for a MultiModalProcessorMixin class I think, as the Processor class is exactly meant for multi-modal models. I think we just need to update processing_utils.py to handle the audio modality as well, cc @amyeroberts <|||||>@NielsRogge @zinengtang Yes, a single `TVLTProcessor` class is the way to go to handle processing of multiple modalities. These processor classes already handle audio e.g. [wav2vec2](https://github.com/huggingface/transformers/blob/d1d3ac94033b6ea1702b203dcd74beab68d42d83/src/transformers/models/wav2vec2/processing_wav2vec2.py#L62) so I don't think anything needs to be done to the `ProcessorMixin`. It should work provided `attributes` is modified e.g. [for CLIPProcessor](https://github.com/huggingface/transformers/blob/d1d3ac94033b6ea1702b203dcd74beab68d42d83/src/transformers/models/clip/processing_clip.py#L38). <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>A summary of what I have committed following the comments:
1. I addressed the comments by all reviewers. (Let me know if I miss anything.)
2. Checked all files and makefile and passed the auto checks.
3. Currently, the model input names are _pixel_value_ and _audio_value_, but audio models all seem to use _input_features_. Should we update to _input_features_? That would make the combination of input names a little more confusing, though; I am fine with either option.
4. I added TvltForAudioVisualClassification. Audio-visual classification could be useful for many tasks like video emotion analysis or video-audio retrieval, so I am thinking about adding it as a new task. Audiovisual modeling is a growing community, and this could be good for new audiovisual or vision+audio models in the future.
Let me know if you have any suggestions. @ydshieh @ArthurZucker @NielsRogge <|||||>> This is astounding work, @zinengtang!
>
> You seem to be very familiar with the Transformers code base already :D amazing job.
>
> I'll let @amyeroberts review the image processor, and ask our audio experts @sanchit-gandhi @ArthurZucker regarding the naming of the audio modality (whether we can go for audio_values or whether we should use something else).
>
> After that should be good to merge!
Thanks for the review! :)
You mentioned ViTMAE is more similar, but it does not implement attention mask arguments. So, I reverted back to ViLT.
Let me know if you have any other suggestions<|||||>> @zinengtang Thanks for this PR and adding this model! I've just made a few comments regarding the image processor. Overall looks good, mainly just a few formatting points.
Thanks so much for your review. They look good to me! I resolved all changes and you may check if you want. :)<|||||>Hi @zinengtang, could you push a commit to trigger the CI? Seems like not all tests are run and many are failing.
After that, I'll assign one team member for a final review.
Thanks!<|||||>Hi @zinengtang, thanks for the amazing job! Btw I guess that some files were accidentally added in the vilt model directory (https://github.com/huggingface/transformers/pull/20725/files#diff-5342d82acaa480e404377ccc91a49b5203a3119b02c0dac89bcf147cb32e950e). Could you please check?
<img width="370" alt="image" src="https://user-images.githubusercontent.com/18069263/213285759-fd9aab68-1df9-4ad9-99c8-f7ac9ea4f5f9.png">
<|||||>> Please make sure you address all comments (if there is a reason they don't work, please reply!) otherwise we won't be able to merge this.
Regarding this issue, I replied earlier that vitmae does not implement attention_mask, and similar models like videomae/vit do not either. Therefore, I used ViLT.<|||||>> Regarding this issue, I replied earlier that vitmae does not implement attention_mask, and similar models like videomae/vit do not either. Therefore, I used ViLT.
@amyeroberts here is my older comments. Let me know if you have any questions.<|||||>@zinengtang Great, thank you for clarifying :) <|||||>@sanchit-gandhi I have a question. How is it possible to move all masking code to TvltForPreTraining? The masking is done in encoding phase and therefore can only be done in TvltModel.<|||||>Another question is that @amyeroberts suggests me to put the random part in feature extractor. But it seems [wav2vec](https://github.com/huggingface/transformers/blob/f0fc7912980234f3711b261f13b4e77fa7a43fb5/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L133), as @sanchit-gandhi suggests, uses random in model forward phase. <|||||>@zinengtang My suggestion was to make sure any randomness in the image processor could be controlled by the user -- a seed could be passed in and each instance has its own state -- however the randomness was already there. I agree with @sanchit-gandhi that moving it out into the modeling stage would be better and should make things cleaner :) <|||||>Full motivations for moving the stochastic MAE masking into the modelling code can be found here: https://github.com/huggingface/transformers/pull/20725#discussion_r1090692949
@zinengtang I see your point! So the way we can arrange it is:
* We compute all the masks/permutations inside `TvltForPreTraining`
* We pass these masks/permutations to `TvltModel`
In general, we can try and keep **as much** of the MAE training code in `TvltForPreTraining`. Where we require it in `TvltModel`, we can add it accordingly!<|||||>Thanks @NielsRogge I have addressed the comments. For audio-based VQA, it kinda depends on the users on how to use them but usually audio input should be the query/question.
@amyeroberts @sanchit-gandhi thanks for the explanation! I have moved the masking generation to inside modeling file. Feel free to check if they match your expectation.<|||||>@amyeroberts Hey thanks for your suggestions!
But we cannot move masks to TvltForPreTraining since it has to be done in TvltModel. If we move masks from embedding to TvltModel, then we will have to break down TvltEmbeddings and directly use TvltPixelEmbeddings and TvltAudioEmbeddings.
Let me know if these make sense.<|||||>> If we move masks from embedding to TvltModel, then we will have to break down TvltEmbeddings and directly use TvltPixelEmbeddings and TvltAudioEmbeddings.
@zinengtang Yes, I think it makes sense to directly use `TvltPixelEmbeddings` and `TvltAudioEmbeddings` instead of having the `TvltEmbeddings` class. <|||||>OK I found that all my past reviews are 'pending' and maybe they were never sent out lol, which is my bad.
Anyway, I addressed the comments and let me know if they make sense @amyeroberts.<|||||>@NielsRogge what do you think about current state. Is there anything else left to address? Thanks!<|||||>@NielsRogge now I addressed the remaining comments. It makes sense to me that TvltForQuestionAnswering is not needed since it is the same as TvltForAudioVisualClassification.<|||||>@zinengtang Thanks for adding these final changes! I've left a comment on `test_feature_extraction_tvlt.py` on how to resolve one of the current issues on the circleCI run. The failing wav2vec2 tests have been resolved on main - rebasing should resolve them on this branch.
Once all the tests are green I think we're good to merge! <|||||>@amyeroberts Sounds great. Btw there seems to be a fail from other models
FAILED tests/models/hubert/test_modeling_tf_hubert.py::TFHubertRobustModelTest::test_dataset_conversion
Do you think it comes from this branch or main branch?<|||||>@zinengtang It's coming from main, not this branch :) I've just pushed a change on main - #21643 - which should resolve this temporarily for now. Could you rebase to add it to this branch? <|||||>@amyeroberts Now it passed the tests! Thanks so much for the help/suggestions all the way. :)<|||||>@zinengtang Thanks for all your work on adding this model, it's great to have it added to the repo! Make sure to spread the word about it being available :) |
transformers | 20,724 | closed | Tutorial on token classification throws casting error in Tensorflow 2.11 | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.9.0
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada, the tutorial at `https://huggingface.co/docs/transformers/tasks/token_classification` throws the following error in Tensorflow 2.11 but not in Tensorflow 2.9:
```
(0) UNIMPLEMENTED: Cast string to float is not supported
  [[{{node Cast_1}}]]
(1) CANCELLED: Function was cancelled before it was started
0 successful operations.
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Follow the TensorFlow version of the tutorial at `https://huggingface.co/docs/transformers/tasks/token_classification`.
### Expected behavior
Training should start, but it does not. | 12-11-2022 04:16:58 | 12-11-2022 04:16:58 | Can you please specify where the error happens? Which step?<|||||>The error is thrown after `model.fit`<|||||>Okay I'll have a look<|||||>Also cc @Rocketknight1 <|||||>Yeah, I should probably take this one. Investigating!<|||||>Managed to reproduce it.<|||||>This is actually a problem with our `AdamWeightDecay`, likely caused by the change in Keras optimizers in 2.11. Figuring out a fix now. |
transformers | 20,723 | closed | Add model resources for ViT | # What does this PR do?
Adds resources on ViT according to https://github.com/huggingface/transformers/issues/20055
Fixes https://github.com/huggingface/transformers/issues/20055 (partially)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu
| 12-10-2022 16:46:09 | 12-10-2022 16:46:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,722 | closed | Very small edit to change name to OpenAI GPT | # What does this PR do?
Adjusts a small typo in OpenAI GPT documentation
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-10-2022 16:08:05 | 12-10-2022 16:08:05 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20722). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,721 | closed | Question-answering example datasets | In the [examples
](https://github.com/huggingface/transformers/tree/0bae286de94f7131b4a2db3f85754b0961c4aaf5/examples/pytorch/question-answering)
Could you please add the possible datasets for each script? I can find this information neither in the scripts nor in the README, especially for long inputs, as I saw from the scripts that you use a tokenizer for that. Also, it would be great if you could provide the possible models for each script. | 12-10-2022 12:37:37 | 12-10-2022 12:37:37 | The examples are given as just that: examples. You will always need to slightly adapt them when changing the dataset as the data format might not be exactly the same. So there is no official list of possible datasets apart from the ones showcased in the README. |
transformers | 20,720 | closed | Made LUKE Tokenizer independent from RoBERTa | Related to #19303
Removed dependency on RoBERTa tokenizer in LUKE tokenizer.
Hi @sgugger, could you check it? | 12-10-2022 10:51:27 | 12-10-2022 10:51:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,719 | closed | fsdp fix | # What does this PR do?
1. FSDP fix: Fixes https://github.com/huggingface/transformers/issues/18767. If this argument is not specified and ``module`` is on CPU, FSDP issues a warning mentioning that this argument can be specified for faster initialization. | 12-10-2022 06:34:45 | 12-10-2022 06:34:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for the fix! Can you expand a bit the description so that users going back to your PR understand what's happening?
Done, Thanks!
|
transformers | 20,718 | closed | class BeamSearchScorer's finalize function | null | 12-10-2022 05:36:39 | 12-10-2022 05:36:39 | |
transformers | 20,717 | closed | Added resources for albert architecture | # What does this PR do?
Co-authored-by: Adia Wu <[email protected]>
Co-authored-by: Mollerup23 <[email protected]>
Fixes #20055
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu @younesbelkada | 12-10-2022 00:34:34 | 12-10-2022 00:34:34 | I tried
`pip install ".[docs]"`
`doc-builder build transformers .\docs\source\en\model_doc\albert.mdx --build_dir ~/tmp/test-build`
to pass the currently failing _Build PR Documentation_ CI/CD test.
However, it still fails. May I ask for any further suggestions?<|||||>@michaelbenayoun,
Hello, this is Adia who is one of the co-authors of this PR. Do you mind if I ask for any suggestion to solve Build PR Documentation / build / build_pr_documentation (pull_request) CI/CD test? Thank you very much for your time.
Best,
Adia Wu<|||||>Hi,
Not a specialist on the doc-builder but why did you inline all the `[[autodoc]]` lines? Also [this](https://github.com/huggingface/transformers/pull/20717/files#diff-338f8110a3c55b13adb4cdc9d5cf54fb5f5ee66f7242654a9c9fb53e49c99104L140) seems weird, here the plan is to document the `__call__` method, not to put `call` in bold.<|||||>Hello @michaelbenayoun, I think auto-formatting with Visual Studio Code resulted in inlining those [[autodoc]] code blocks. Do you think it is better to change those parts?
<|||||>You should set it back to the original version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,716 | closed | Add BLIP | # What does this PR do?
BLIP is a model from Salesforce, capable of performing visual question answering, image captioning and image-text retrieval. This model has also been used in several Stable Diffusion fine-tuned variants, such as Pokemon Stable Diffusion or Naruto Stable Diffusion, to generate text descriptions from images in order to create text-image paired datasets.
Original repo: https://github.com/salesforce/BLIP
- [x] add integration tests
- [x] Push weights
- [x] document everything
Users would be able to use BLIP for three main use cases:
1- Conditional Generation (Image captioning):
```
from PIL import Image
import requests
from transformers import BlipForConditionalGeneration, BlipProcessor
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
model = BlipForConditionalGeneration.from_pretrained("Salesfoce/blip-image-captioning-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
text = "a picture of" # the prefix is optional
inputs = processor(image, text, return_tensors="pt")
output = model.generate(**inputs)
print(processor.decode(output[0], skip_special_tokens=True))
>>> a picture of a woman and a dog sitting in a beach
```
1- bis Conditional Generation (Image captioning with no prefix!):
```
from PIL import Image
import requests
from transformers import BlipForConditionalGeneration, BlipProcessor
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
model = BlipForConditionalGeneration.from_pretrained("Salesfoce/blip-image-captioning-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
inputs = processor(image, return_tensors="pt")
output = model.generate(**inputs)
print(processor.decode(output[0], skip_special_tokens=True))
>>> an image of a woman and a dog sitting in a beach
```
2- Visual question answering
```
from PIL import Image
import requests
from transformers import BlipForQuestionAnswering, BlipProcessor
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
question = ["How many dogs are in this image?"]
inputs = processor(image, question, return_tensors="pt")
output = model.generate(**inputs)
print(processor.decode(output[0], skip_special_tokens=True))
>>> 1
```
3- Image text retrieval (score matching)
```
import torch
from PIL import Image
import requests
from transformers import BlipForImageTextRetrieval, BlipProcessor
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-vqa-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
question = ["A picture of a woman with a dog sitting in a beach"]
inputs = processor(image, question, return_tensors="pt")
out_itm = model(**inputs, use_itm_head=True)
out = model(**inputs, use_itm_head=False)
print(out) # cosine similarity score
>>> 0.21
print(torch.nn.functional.softmax(out_itm[0], dim=1)[:, 1])
>>> 0.46
```
cc @NielsRogge
Fixes https://github.com/salesforce/LAVIS/issues/64 | 12-09-2022 23:16:24 | 12-09-2022 23:16:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The PR is in a good shape! Would love to have a first round of review! 💪 <|||||>Thanks so much @sgugger @NielsRogge for your review, I can confirm the model weights and model cards are all up!<|||||>Thanks very much for the review @NielsRogge !! <|||||>The failing CI test seems to be related to https://github.com/huggingface/transformers/pull/20790 <|||||>Awesome contribution! Thank you @younesbelkada.
Just noticed Salesforce released BLIP2. Not sure how much work it would be to port to huggingface.
https://github.com/salesforce/LAVIS/tree/main/projects/blip2
|
transformers | 20,715 | closed | Replaces xxx_required with requires_backends | # What does this PR do?
Removes `torch_required` and `tf_required` decorators and replaces with more generic `requires_backends`.
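As a rough illustration of the new pattern (class and method names below are placeholders, not code from this PR), a backend-guarded method calls the existing `requires_backends` helper instead of being wrapped in a decorator:

```python
from transformers.utils import requires_backends

class SomeTorchBackedClass:
    def some_torch_method(self):
        # Replaces the old @torch_required decorator: raises a clear error if torch is missing.
        requires_backends(self, ["torch"])
        ...
```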
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 12-09-2022 20:14:30 | 12-09-2022 20:14:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,714 | closed | AutoTokenizer.from_pretrained is confused by custom model configs | ### System Info
transformers: 4.23.1, OS: macOS x86, Python: 3.9
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When loading a tokenizer with `AutoTokenizer.from_pretrained("bert-base-uncased", config=my_model_config_instance)` it fails in tokenization_auto.py line 640 with:
```
File "venv/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 640, in <genexpr>
f"Model type should be one of {', '.join(c.__name__ for c in TOKENIZER_MAPPING.keys())}."
AttributeError: 'NoneType' object has no attribute '__name__'
```
I registered the custom model & config using:
```python
CONFIG_MAPPING.register("my_model", MyModelConfig)
TOKENIZER_MAPPING.register(MyModelConfig, (BertTokenizer, BertTokenizerFast))
MODEL_MAPPING.register(MyModelConfig, MyModelModel)
SPECIAL_MODEL_TYPE_TO_MODULE_NAME["my_model"] = "my_model.modeling_my_model"
```
### Expected behavior
I'd expect it to load `bert-base-uncased`. We always supply the config as we use this code for multiple models in `transformers` as well as our custom models and the loading code can't differentiate the two at the point it happens.
I think it's occurring because `configuration_auto.config_class_to_model_type()` doesn't respect `SPECIAL_MODEL_TYPE_TO_MODULE_NAME`, so it looks for a class called `MyModelConfig` in `CONFIG_MAPPING_NAMES` when it should be looking for `my_model.modeling_my_model.MyModelConfig`, and also `CONFIG_MAPPING_NAMES` and/or `config_class_to_model_type` doesn't seem to respect additional configs loaded in via `CONFIG_MAPPING.register()`.
The error still occurs if I download the files for the `bert-base-uncased` tokenizer and use the path to it on disk rather than the name `bert-base-uncased`. | 12-09-2022 19:30:17 | 12-09-2022 19:30:17 | I'll have a look thanks for posting <|||||>Bump for the stale bot.<|||||>Any updates on this?<|||||>I did not have time to dive into this, cc @younesbelkada do you think you can have a look?<|||||>If there's some guidance or documentation on how unified the configuration stuff should be for custom models then I can look at fixing places where it's inconsistent, but I'm not sure how consistent your team wants it to be, because I don't understand the full set of constraints.<|||||>I would mostly focus on
> Note that the first argument used when registering your custom config to [AutoConfig](https://huggingface.co/docs/transformers/v4.26.0/en/model_doc/auto#transformers.AutoConfig) needs to match the model_type of your custom config, and the first argument used when registering your custom models to any auto model class needs to match the config_class of those models.
from [this](https://huggingface.co/docs/transformers/custom_models), but you probably already looked there<|||||>Yeah I think we're hitting the right points in the register calls, but then the loading side isn't fully respecting the custom dicts (or the register call is missing adding to a custom dict, not sure which). I can try to take a look next week.<|||||>Thanks for the issue @Craigacp
A workaround to this is to hack the function `config_class_to_model_type` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/configuration_auto.py#L566-L571):
```python
def config_class_to_model_type(config):
    """Converts a config class name to the corresponding model type"""
    for key, cls in CONFIG_MAPPING_NAMES.items():
        if cls == config:
            return key
    return None
```
That is, use `CONFIG_MAPPING` instead of `CONFIG_MAPPING_NAMES`, as `CONFIG_MAPPING` gets updated when calling `.register`.
You can apply this quick hack with the snippet below without having to change the source code:
```python
from transformers import AutoTokenizer, PretrainedConfig, PreTrainedModel, CONFIG_MAPPING, TOKENIZER_MAPPING, BertTokenizer, BertTokenizerFast, MODEL_MAPPING
from transformers.models.auto.configuration_auto import CONFIG_MAPPING_NAMES
class MyModel(PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.config = config
        self.my_custom_variable = config.my_custom_variable

class MyConfig(PretrainedConfig):
    model_type = "my_model"

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.vocab_size = 1000
        self.my_custom_variable = 42
config = MyConfig()
model = MyModel(config)
CONFIG_MAPPING.register("my_model", MyConfig.__name__)
CONFIG_MAPPING_NAMES["my_model"] = MyConfig.__name__
MyConfig.register_for_auto_class()
TOKENIZER_MAPPING.register(MyConfig, (BertTokenizer, BertTokenizerFast))
MODEL_MAPPING.register(MyConfig, MyModel)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", config=config)
```
But I am honestly unsure about this approach. Also, I don't really understand what exactly you want to achieve; could you maybe share with us more details on what you are trying to achieve?
Thanks
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,713 | closed | Add a progress bar for large model loading | # What does this PR do?
As requested in #20669, this PR adds a progress bar when loading large models. The progress bar can be removed with `transformers.utils.disable_progress_bar()`.
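For example, the utility named above can be used to silence it (assuming it is exposed exactly as written in this description):

```python
import transformers

# Turn off the model-loading progress bar, e.g. to keep CI logs quiet.
transformers.utils.disable_progress_bar()
```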
Fixes #20669 | 12-09-2022 17:00:22 | 12-09-2022 17:00:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,712 | closed | Add vision requirement to image transforms | # What does this PR do?
Addresses an issue when importing functions from the `image_transforms` module if Pillow isn't installed. Namely:
* functions which don't require Pillow have errors when importing. This was because other objects were only imported if Pillow was available e.g. `ChannelDimension`. Resolved by rearranging imports.
* Unhelpful error message hiding that the issue is with Pillow missing from the environment. Resolved by adding a `vision_required` decorator so that errors are raised only for the select functions that need Pillow.
Fixes #20627
This needs to be merged in before the transforms can be removed from the init in #20704
Snippet below shows new behaviour when running in an environment without Pillow installed:
```
>>> import numpy as np
>>>
>>> # Show we can import both methods without issue
>>> from transformers.image_transforms import rescale
>>> from transformers.image_transforms import center_crop
>>>
>>> img = np.random.randint(0, 256, (3, 244, 360))
>>>
>>> # We can call rescale successfully without having Pillow
>>> rescale_img = rescale(img, 10)
>>> print(rescale_img.shape)
(3, 244, 360)
>>> # center_crop call raises error
>>> cropped_img = center_crop(img, (100, 100))
Traceback (most recent call last):
File "/Users/amyroberts/code/transformers/../scripts/test_image_transforms_imports.py", line 10, in <module>
cropped_img = center_crop(img, (100, 100))
File "/Users/amyroberts/code/transformers/src/transformers/utils/import_utils.py", line 1073, in wrapper
raise ImportError(f"Method `{func.__name__}` requires Pillow.")
ImportError: Method `center_crop` requires Pillow.
```
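For readers unfamiliar with how such a guard works, here is a minimal, illustrative sketch of a Pillow-requirement decorator. The decorator name and error message mirror the description above, but the implementation details are assumptions, not the exact code added in this PR:
```python
import functools
import importlib.util


def vision_required(func):
    """Raise a helpful error at call time (not import time) when Pillow is missing."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if importlib.util.find_spec("PIL") is None:
            raise ImportError(f"Method `{func.__name__}` requires Pillow.")
        return func(*args, **kwargs)

    return wrapper


@vision_required
def center_crop(image, size):
    ...  # Pillow-dependent processing would live here
```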
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 12-09-2022 16:33:46 | 12-09-2022 16:33:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>A quick question: If we add `requires_backends(center_crop, ["vision"])` in some methods in `src/transformers/image_transforms.py`, shouldn't we do the same in `src/transformers/image_utils.py`, say, `load_image`?
<|||||>> A quick question: If we add `requires_backends(center_crop, ["vision"])` in some methods in `src/transformers/image_transforms.py`, shouldn't we do the same in `src/transformers/image_utils.py`, say, `load_image`?
Yep. I'll add those in too. |
transformers | 20,711 | closed | add model resources for CPMAnt | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
- **introduction**: [CPM-Ant](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live) is an open-source Chinese pre-trained language model (PLM) with 10B parameters.
- **task**: We add code, tests and docs for the CPM-Ant model. The model can be used for text generation with the "text-generation" pipeline (a usage sketch is shown below).
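A minimal usage sketch for the task bullet above. The checkpoint name is a placeholder assumption for illustration, not necessarily the final Hub repo id:
```python
from transformers import pipeline

# Hypothetical checkpoint id, used only to illustrate the intended API.
generator = pipeline("text-generation", model="openbmb/cpm-ant-10b")
print(generator("今天天气真好,", max_new_tokens=30))
```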
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 12-09-2022 16:14:30 | 12-09-2022 16:14:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20711). All of your documentation changes will be reflected on that endpoint.<|||||>> Thanks for addressing some of the comments! Some tests are still failing for few reasons 1- You are assigning an attribute that uses `torch` on the config file, I suspect this is not needed 2- You need to make sure that integration tests are not failing, I suggest you to run `pytest tests/models/cpmant/test_modeling_cpmant.py` and understand why these tests are failing. Thanks again for your efforts!
Some new tests have been added in ```test_modeling_cpmant.py``` . |
transformers | 20,710 | closed | Change a logic in pipeline test regarding TF | # What does this PR do?
- **Currently**, the tiny models in pipeline tests are created using `model_class(config)`, and for TF models, the weights are not created at this point. Then we set the device (**which turns out to be CPU instead of GPU!**), and the weights are created in CPU context --> we get the expected exceptions.
- In the future, we will use the tiny model from Hub repos
- The models are loaded using `from_pretrained`
- If GPU available, those weights are initialized in GPU context automatically for TF models, including embedding layers
- From what @Rocketknight1 says at the end, we won't get the expected exceptions in this situation (TF, layer weights loaded in GPU context); a small illustration is sketched below.
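A small, hypothetical illustration of the behaviour referenced in the last bullet (a GPU machine is needed to observe the second case):
```python
import tensorflow as tf

layer = tf.keras.layers.Embedding(input_dim=10, output_dim=4)
bad_ids = tf.constant([[42]])  # out-of-range index, vocabulary size is only 10

with tf.device("/CPU:0"):
    try:
        layer(bad_ids)
    except tf.errors.InvalidArgumentError:
        print("CPU: the invalid index raises an error")

# Under tf.device("/GPU:0"), the same call silently returns zeros instead of raising,
# so the exception the pipeline test expects never appears.
```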
**In order to use the tiny models from the Hub without any pipeline test failure, we will have to skip this check under the above described situation.**
From @Rocketknight1
> Embedding layers in Keras have different behaviours on CPU and GPU when you pass invalid indices. On CPU, the layer checks inputs and throws an error if you're out of range, but on GPU you just get a zeros tensor as output with no error | 12-09-2022 15:58:27 | 12-09-2022 15:58:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante to this one - he's been working on a way to check for invalid indices even for embedding layers on GPU |
transformers | 20,709 | closed | Keras finetune examples cannot generate hashable key | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-4.15.0-200-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.1+cu102 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante @Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Use the Keras [example](https://huggingface.co/docs/transformers/training#train-a-tensorflow-model-with-keras)
```python
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
from tensorflow.keras.optimizers import Adam
from datasets import load_dataset
import tensorflow as tf
import numpy as np
dataset = load_dataset("glue", "cola")
dataset = dataset["train"] # Just take the training split for now
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
labels = np.array(dataset["label"]) # Label is already an array of 0 and 1
# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5))
model.fit(tokenized_data, labels)
```
```python
All model checkpoint layers were used when initializing TFBertForSequenceClassification.
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss.
Traceback (most recent call last):
File "../test_mirrored.py", line 22, in <module>
model.fit(tokenized_data, labels)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/polymorphism/function_cache.py", line 117, in lookup
dispatch_key = self._dispatch_table.dispatch(key)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/polymorphism/type_dispatch.py", line 78, in dispatch
if request in self._dispatch_table:
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/polymorphism/function_cache.py", line 77, in __hash__
return hash((self.call_context, self.function_type))
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/polymorphism/function_type.py", line 246, in __hash__
return hash(
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/polymorphism/function_type.py", line 106, in __hash__
return hash((self.name, self.kind, self.optional, self.type_constraint))
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/trace_type/default_types.py", line 207, in __hash__
return hash(self.components)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/trace_type/default_types.py", line 207, in __hash__
return hash(self.components)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/trace_type/default_types.py", line 584, in __hash__
return hash((self.identifier, self.base))
ValueError: Cannot generate a hashable key for IteratorSpec(({'input_ids': TensorSpec(shape=(None, 47), dtype=tf.int64, name=None), 'token_type_ids': TensorSpec(shape=(None, 47), dtype=tf.int64, name=None), 'attention_mask': TensorSpec(shape=(None, 47), dtype=tf.int64, name=None)}, TensorSpec(shape=(None,), dtype=tf.int64, name=None)),) because the _serialize() method returned an unsupproted value of type <class 'transformers.tokenization_utils_base.BatchEncoding'>
```
I also had to change `dataset['text']` to `dataset['example']`
### Expected behavior
Train successfully. | 12-09-2022 15:39:44 | 12-09-2022 15:39:44 | cc @Rocketknight1 <|||||>Derp, that's a problem with the example, I'll push a fix! The problem is that Keras recognizes `dict` objects but not our `BatchEncoding` returned by the `tokenizer`, even though `BatchEncoding` is a subclass of `dict`.
If you replace the last line with `model.fit(dict(tokenized_data), labels)` it should work.<|||||>Should be fixed by #20732<|||||>@Rocketknight1 I'm getting " 'dict' object is not callable" typeError when I used this solution, any idea why?<|||||>@baburz Sorry for the delay! Can you paste the exact code you ran? |
transformers | 20,708 | closed | Fix rendering issue in quicktour | # What does this PR do?
Fixes #20700. I think the two new lines are what makes the problem happen. | 12-09-2022 15:23:35 | 12-09-2022 15:23:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Ok so the extra line is what causes the issue, but then `black` is unhappy with the formatting.<|||||>Ok, so I just split the content in two blocks at the end, since `doc-builder` wants no more than one empty line and `black` wants two :man_shrugging: |
transformers | 20,707 | open | [Whisper] Fix multilingual tokeniser | # What does this PR do?
Fixes #20703
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 12-09-2022 14:56:13 | 12-09-2022 14:56:13 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20707). All of your documentation changes will be reflected on that endpoint.<|||||>This looks good, could you also push the normalizer file? 😉 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, when can we expect this to be merged? The wrong normalisation basically makes everything that Whisper models give for languages other than English useless. Thanks<|||||>Hey, no real timeline. It was dropped as this was not requested. If you really need this however, feel free to take the PR over ! 🤗 <|||||>Hey @thomas-ferraz,
What you can do as a temporary workaround is set `_normalize=False` when you decode with the tokenizer (this is the default behaviour):
```python
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
```
If you don't need normalisation, you can stop here - your transcriptions won't be normalised.
If you need normalisation, you can manually normalise using the `BasicTextNormalizer` (the **multilingual** normaliser) as follows:
```python
from transformers.models.whisper.english_normalizer import BasicTextNormalizer
normalizer = BasicTextNormalizer()
norm_transcriptions = [normalizer(input_str) for input_str in transcription]
```
Hope that helps!<|||||>This should definitely be re-opened and merged. <|||||>Will take a look when I get the chance! |
transformers | 20,706 | closed | [Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem! | Thanks to all of you, Transformers just passed 75k :star2: last week!
Since the last survey, a lot has happened: the [diffusers](https://github.com/huggingface/diffusers), [evaluate](https://github.com/huggingface/evaluate) and [skops](https://github.com/skops-dev/skops) libraries were born. `timm` joined the Hugging Face ecosystem. There were 25 new releases of `transformers`, 21 new releases of `datasets`, 13 new releases of `accelerate`.
If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts:
[**hf.co/oss-survey**](https://docs.google.com/forms/d/e/1FAIpQLSf4xFQKtpjr6I_l7OfNofqiR8s-WG6tcNbkchDJJf5gYD72zQ/viewform?usp=sf_link)
(please reply in the above feedback form rather than to this thread)
Thank you all on behalf of the HuggingFace team! 🤗 | 12-09-2022 14:42:53 | 12-09-2022 14:42:53 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>multi-gpu pre-training in one machine for Large GPT from scratch without horovod (**Model Parallelism**)<|||||>Closing as this survey has wrapped up |
transformers | 20,705 | closed | [`ViTHybrid`] fix last `accelerate` slow test | # What does this PR do?
Fixes the last test that I forgot to run on a multi-GPU setup! Sorry for the multiple iterations.
Fixes:
1- https://github.com/huggingface/transformers/actions/runs/3650247864
2- In `test_model_parallelism`, the test splits the model into different sub-modules according to the class attribute `_no_split_modules`. Sometimes a manual device assignment is needed to avoid errors such as `tensors not on the same device` (see the sketch after this list).
3- Similar to https://github.com/huggingface/transformers/pull/19912 (check the last modification)
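For illustration, the kind of manual device assignment meant in point 2 looks roughly like the fragment below; the class and attribute names are hypothetical and not the actual ViTHybrid code:
```python
import torch
import torch.nn as nn


class TinyBlock(nn.Module):
    """Hypothetical sub-module illustrating the manual device assignment from point 2."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.layernorm = nn.LayerNorm(hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # When the model is split across devices, the incoming tensor is moved to the
        # device of this block's weights before they are combined.
        hidden_states = hidden_states.to(self.layernorm.weight.device)
        return self.layernorm(hidden_states)
```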
cc @sgugger @ydshieh | 12-09-2022 14:34:01 | 12-09-2022 14:34:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,704 | closed | Remove image_transforms functions from init | # What does this PR do?
Removes functions in the `image_transforms` library from `src/transformers/__init__.py`.
The functionality this module provides consists of utilities for processing images, which are not the primary goal of the library.
Addresses an issue raised in #20627 - which highlighted that some functions were in the init and others weren't.
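In practice this means importing the helpers from the submodule rather than from the top-level package, e.g.:
```python
# After this change, image helpers are imported from the submodule:
from transformers.image_transforms import center_crop, rescale

# rather than from the top-level package:
# from transformers import center_crop  # no longer exposed at the top level
```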
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 12-09-2022 14:26:06 | 12-09-2022 14:26:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,703 | closed | [Whisper] Multilingual Tokeniser uses wrong normaliser | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.9
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.5.1 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
### Who can help?
@sanchit-gandhi @ArthurZucker (cc @Vaibhavs10 for info)
### Reproduction
All of the English-only and multilingual Whisper models have `normalizer.json` files in their model repositories on the HF Hub, e.g. for Whisper tiny: https://huggingface.co/openai/whisper-tiny/blob/main/normalizer.json
This means that when we load the tokenisers for any of these models from pre-trained, we default to using the "English Text Normaliser" specified by this `normalizer.json` file:
https://github.com/huggingface/transformers/blob/7319850902ba9b2a44c36ccddd044f98abd9b597/src/transformers/models/whisper/tokenization_whisper.py#L300-L302
and then:
https://github.com/huggingface/transformers/blob/7319850902ba9b2a44c36ccddd044f98abd9b597/src/transformers/models/whisper/tokenization_whisper.py#L488
However, this English text normaliser should **only** be used for English-only models. A separate, "Basic Text Normaliser" should be used in the multilingual setting. This is in accordance with the official Whisper implementation and paper (see Appendix C on page 21 of the [Whisper paper](https://cdn.openai.com/papers/whisper.pdf)).
In short, the English normaliser is **too stringent** for multilingual languages, removing diacritics and other linguistic features that change the meaning of the words:
```python
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
input_str = "Ça va?"
norm_str = tokenizer._normalize(input_str)
print("Un-normalised:", input_str)
print("Normalised: ", norm_str)
```
**Print Output**:
```
Un-normalised: Ça va?
Normalised: ca va
```
We should not ever apply normalisation to the ASR output that changes the meaning of the words. This is not allowed when evaluating systems and is undesirable behaviour.
In the Whisper fine-tuning event, we've seen participants using the multilingual models, fine-tuning on multilingual languages, and normalising according to the default normaliser, thus giving erroneous and invalid evaluation results that have had to be repeated.
### Expected behavior
The basic text normaliser does not remove such diacritics, but does lower case and strip punctuation (as intended):
```python
from transformers.models.whisper.english_normalizer import BasicTextNormalizer
normalizer = BasicTextNormalizer()
basic_norm_str = normalizer(input_str)
print("Un-normalised: ", input_str)
print("Basic normalised:", basic_norm_str)
```
**Print Output:**
```
Un-normalised: Ça va?
Basic normalised: ça va
```
We should revert to using the `BasicTextNormalizer` for the multilingual models. We can do this by:
1. Removing the English `normalizer.json` files from the multilingual Whisper models on the HF Hub
2. Defaulting to using the `BasicTextNormalizer` when the normaliser file is `None` | 12-09-2022 14:16:32 | 12-09-2022 14:16:32 | I see, that’s indeed correct. Adding it to my whisper todo, this will probably make the tests fail again <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
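A minimal sketch of the defaulting behaviour proposed in point 2 of the issue above. It is illustrative only and assumes the tokenizer keeps exposing the `english_spelling_normalizer` attribute loaded from `normalizer.json`:
```python
from transformers.models.whisper.english_normalizer import BasicTextNormalizer, EnglishTextNormalizer


def pick_normalizer(english_spelling_normalizer):
    # Multilingual checkpoints would ship no normalizer.json, so fall back to the basic normaliser.
    if english_spelling_normalizer is None:
        return BasicTextNormalizer()
    return EnglishTextNormalizer(english_spelling_normalizer)
```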
transformers | 20,702 | closed | Fix bug: replace left over FE references | # What does this PR do?
Fixes failing Flava processor tests introduced by #20590 - replaces any existing feature extractor references with image processors.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 12-09-2022 12:10:18 | 12-09-2022 12:10:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,701 | closed | [Tests] Improve test_attention_outputs | # What does this PR do?
I noticed several vision models overwrite `test_attention_outputs`, but we have a `has_attentions` flag exactly for this purpose.
Hence, I've adapted the test in `test_modeling_common.py` and `test_modeling_tf_common.py` to improve this. | 12-09-2022 10:36:48 | 12-09-2022 10:36:48 | _The documentation is not available anymore as the PR was closed or merged._ |
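A hypothetical sketch of the pattern this enables in the shared test (not the exact diff in this PR):
```python
# Inside the common ModelTesterMixin, models without attentions can be skipped once,
# instead of each model's test class overriding test_attention_outputs.
def test_attention_outputs(self):
    if not self.has_attentions:
        self.skipTest(reason="Model does not output attentions")
    # ... the usual attention-shape checks follow here ...
```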
transformers | 20,699 | closed | Use `config.layer_norm_eps` in some `nn.LayerNorm`. | # What does this PR do?
⚠️⚠️⚠️ This changes the `eps` of those LayerNorm layers from (the default) `1e-5` to `1e-12`, and the outputs will have slight differences before/after this PR. ⚠️⚠️⚠️
----
Similar to #20554, but this time instead of removing the attribute from config, we use `config.layer_norm_eps` in some `nn.LayerNorm`.
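Concretely, the kind of change this implies is sketched below (illustrative, using a BERT-style config):
```python
import torch.nn as nn
from transformers import BertConfig

config = BertConfig()  # any config exposing layer_norm_eps

# Before: the module silently falls back to PyTorch's default eps of 1e-5.
ln_default = nn.LayerNorm(config.hidden_size)

# After: the eps comes from the config (1e-12 here), which is what shifts the outputs slightly.
ln_from_config = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
```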
| 12-09-2022 09:42:33 | 12-09-2022 09:42:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Just to confirm:
> - these changes don't have an impact on the integration tests, right? This only results in different outputs when one would train these models from scratch?
It has impact even for integration tests: the change is the **constant** `eps` being changed, which affects the forward call both in inference as well as in training.
- technically we should check the `layer_norm_eps` for each paper to match the original code
I agree - but I am not sure, for recent models, whether all these attributes are set according to the papers, or people just used add-new-model-like templates ...<|||||>This is too breaking I think. We need to be more careful that this attribute is used consistently in newly added models, but I don't think we should touch old models like this, as it will change the results of the forward pass.<|||||>OK! I will keep this list of models to skip in the WIP PR where we add a test for checking unused config attributes.<|||||>Close as it is too breaking! |
transformers | 20,698 | closed | How to use pipeline for Custom token-classification model? | **Model description**
I add a simple custom `pytorch-crf` layer on top of the `TokenClassification` model. It will make the model more robust.
I train the model successfully, but a problem appears when I test the model: the folder doesn't have a `config.json` file inside it, so the `pipeline` function gives the error
`Error:AttributeError: 'BERT_CRF' object has no attribute 'config'`
**CODE**
```python
# Imports assumed from the surrounding description (they were not shown in the original snippet)
import torch
import torch.nn as nn
from torch.nn.functional import log_softmax as log_soft
from torchcrf import CRF  # pytorch-crf package

class BERT_CRF(nn.Module):
def __init__(self, bert_model, num_labels):
super(BERT_CRF, self).__init__()
self.bert = bert_model
self.dropout = nn.Dropout(0.25)
self.classifier = nn.Linear(768, num_labels)
self.crf = CRF(num_labels, batch_first = True)
def forward(self, input_ids, attention_mask, labels=None, token_type_ids=None):
outputs = self.bert(input_ids, attention_mask=attention_mask)
sequence_output = torch.stack((outputs[1][-1], outputs[1][-2], outputs[1][-3], outputs[1][-4])).mean(dim=0)
sequence_output = self.dropout(sequence_output)
emission = self.classifier(sequence_output) # [32,256,17]
if labels is not None:
labels=labels.reshape(attention_mask.size()[0],attention_mask.size()[1])
loss = -self.crf(log_soft(emission, 2), labels, mask=attention_mask.type(torch.uint8), reduction='mean')
prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
return [loss, prediction]
else:
prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
return prediction
```
```
tokenizer = AutoTokenizer.from_pretrained("fine-tuned_model",model_max_length=256)
bert_model = BertForTokenClassification.from_pretrained('spanbert_base',id2label=id2label,label2id=label2id)
bert_model.config.output_hidden_states=True
model = BERT_CRF(bert_model, num_labels=21)
model.load_state_dict(torch.load("fine-tuned_model/pytorch_model.bin"))
model.eval()
```
`token_classifier = pipeline("token-classification", model=model, aggregation_strategy="max",tokenizer=tokenizer,grouped_entities=True)`
`AttributeError: 'BERT_CRF' object has no attribute 'config'`
| 12-09-2022 06:22:34 | 12-09-2022 06:22:34 | You should use subclasses of `PreTrainedModel` (and `PretrainedConfig` if necessary). Here is the [documentation](https://huggingface.co/docs/transformers/create_a_model) on that and [here](https://huggingface.co/docs/transformers/add_new_pipeline) is an example of custom pipeline if you need it.<|||||>@sgugger , I am kind of in a similar situation, I looked up the custom `Pipeline` class, I wanted to understand if there is additional documentation of the methods in the class, I do see `_sanitary_parameter`, is it mostly to add preprocessing inputs ?, for context I am trying to train a custom NER model as well.<|||||>@sgugger , never mind I was able to write my own postprocess and preprocess function and also inherit a of the implementation from task specific classes, thank you.<|||||>> @sgugger , never mind I was able to write my own postprocess and preprocess function and also inherit a of the implementation from task specific classes, thank you.
Hi @rajathpatel23 could you please upload your code/notebook for reference?<|||||>Hi @pratikchhapolika , I will try to upload the notebook soon over the weekend<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
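A minimal sketch of the subclassing suggested in the comments above, so that the model carries the `config` attribute the pipeline looks for. The class names are illustrative assumptions, not the poster's actual code, and the CRF decoding logic is omitted:
```python
import torch.nn as nn
from transformers import BertConfig, BertModel, PreTrainedModel
from transformers.modeling_outputs import TokenClassifierOutput


class BertCrfConfig(BertConfig):
    model_type = "bert_crf"


class BertCrfForTokenClassification(PreTrainedModel):
    config_class = BertCrfConfig

    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(0.25)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        # A CRF layer (e.g. from the pytorch-crf package) could be added on top of the logits.

    def forward(self, input_ids=None, attention_mask=None, **kwargs):
        sequence_output = self.bert(input_ids, attention_mask=attention_mask)[0]
        logits = self.classifier(self.dropout(sequence_output))
        return TokenClassifierOutput(logits=logits)
```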
transformers | 20,697 | closed | Added resources on albert model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Co-author: @Adia Wu <[email protected]>
Fixes #20055
## Before submitting
- [o] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [o] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [o] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [o] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [o] Did you write any new necessary tests?
## Who can review?
@stevhliu @younesbelkada
| 12-09-2022 05:15:37 | 12-09-2022 05:15:37 | Thank you @younesbelkada ! Would you mind if I ask you how I can pass the
ci/circleci: tests_pipelines_tf tests?
I tried
`pip3 install --upgrade pip`
`pip3 install --upgrade tensorflow`<|||||>Thanks for your PR! Could you focus it solely on the new resources added? There are multiple changes that are not desired.<|||||>Thank you! Will try!
So, I deleted those undesired behaviors, and now I will look for more resources to add!
<|||||>Do you mind if I open a new Pull Request in order to contain only meaningful commits?<|||||>Yes please, that'd be great! |
transformers | 20,696 | closed | added model resources for CPMAnt | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
- **introduction**: [CPM-Ant](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live) is an open-source Chinese pre-trained language model (PLM) with 10B parameters.
- **task**: We add code, tests and docs for cpmant model. The model can be used for text generation with "text-generation" pipeline.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 12-09-2022 03:18:13 | 12-09-2022 03:18:13 | Add some description to the model. |
transformers | 20,695 | closed | NER example | https://github.com/huggingface/transformers/blob/9e56aff58a742b48fc8edea8d28d5b80330efbcc/examples/pytorch/token-classification/run_ner_no_trainer.py#L601
I think modifying it this way would be better.
```python
true_predictions = [
[label_list[p] for (p, l) in zip(prediction, label) if p != -100]
for prediction, label in zip(predictions, labels)
]
``` | 12-09-2022 03:13:07 | 12-09-2022 03:13:07 | |
transformers | 20,694 | closed | A bug with subscript while maintaining beam_indices. | #### This is the position with this bug: https://github.com/huggingface/transformers/blob/31d452c68b34c2567b62924ee0df40a83cbc52d5/src/transformers/generation/utils.py#L2875
#### The problem caused by this bug:
I want to get all decoder_hidden_states (with dimension [max_seq_len * (1 + decoder_layers) * (num_beam * batch_size) * embedding_size]) and beam_indices to keep track of the hidden states of each generated word.
After some examples stop early (\<eos\> is generated), some beam_indices incorrectly become zero. This bug is caused by the wrong subscript (`beam_indices[beam_idx[i]]`) when the model updates the beam_indices; it should be `beam_indices[i]`. | 12-09-2022 02:39:24 | 12-09-2022 02:39:24 | cc @gante, but note that there is little we can do to help without a reproducer of the bug.<|||||>Hey @aj666aj 👋
Can I have a script to reproduce your problem? I'll struggle to pinpoint the issue without it, which will make me deprioritize this issue :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,693 | closed | torch.jit.script for BertEmbeddings works on MacOS but fails on Linux | ### System Info
transformers : 4.23.1
torch : 1.13.0
python : 3.9.13
**Versions are the same on MacOS and Linux.**
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
###### On MacOS ######
import torch
from transformers.models.bert.modeling_bert import BertConfig, BertEmbeddings
config = BertConfig()
embed = BertEmbeddings(config)
scripted = torch.jit.script(embed)
print(scripted)
OUTPUT = """
/opt/anaconda3/envs/mobi/lib/python3.9/site-packages/torch/jit/annotations.py:299: UserWarning: TorchScript will treat type annotations of Tensor dtype-specific subtypes as if they are normal Tensors. dtype constraints are not enforced in compilation either.
warnings.warn("TorchScript will treat type annotations of Tensor "
RecursiveScriptModule(
original_name=BertEmbeddings
(word_embeddings): RecursiveScriptModule(original_name=Embedding)
(position_embeddings): RecursiveScriptModule(original_name=Embedding)
(token_type_embeddings): RecursiveScriptModule(original_name=Embedding)
(LayerNorm): RecursiveScriptModule(original_name=LayerNorm)
(dropout): RecursiveScriptModule(original_name=Dropout)
)
"""
```
```python
###### On Linux ######
import torch
from transformers.models.bert.modeling_bert import BertConfig, BertEmbeddings
config = BertConfig()
embed = BertEmbeddings(config)
scripted = torch.jit.script(embed)
print(scripted)
OUTPUT = """
/miniconda3/envs/mobile/lib/python3.9/site-packages/torch/jit/annotations.py:299: UserWarning: TorchScript will treat type annotations of Tensor dtype-specific subtypes as if they are normal Tensors. dtype constraints are not enforced in compilation either.
warnings.warn("TorchScript will treat type annotations of Tensor "
Traceback (most recent call last):
File "/ocean/projects/tra220029p/tejaswin/ViLT/script_bertembeddings.py", line 7, in <module>
scripted = torch.jit.script(embed)
File "/ocean/projects/tra220029p/tejaswin/miniconda3/envs/mobile/lib/python3.9/site-packages/torch/jit/_script.py", line 1286, in script
return torch.jit._recursive.create_script_module(
File "/ocean/projects/tra220029p/tejaswin/miniconda3/envs/mobile/lib/python3.9/site-packages/torch/jit/_recursive.py", line 476, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/ocean/projects/tra220029p/tejaswin/miniconda3/envs/mobile/lib/python3.9/site-packages/torch/jit/_recursive.py", line 542, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/ocean/projects/tra220029p/tejaswin/miniconda3/envs/mobile/lib/python3.9/site-packages/torch/jit/_recursive.py", line 393, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
'Optional[Tensor]' object has no attribute or method 'size'.:
File "/ocean/projects/tra220029p/tejaswin/miniconda3/envs/mobile/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 212
input_shape = input_ids.size()
else:
input_shape = inputs_embeds.size()[:-1]
~~~~~~~~~~~~~~~~~~ <--- HERE
seq_length = input_shape[1]
"""
```
### Expected behavior
The code should work on both platforms. | 12-09-2022 02:32:25 | 12-09-2022 02:32:25 | Looks like the inconsistency comes from PyTorch, if all versions are the same.<|||||>Thanks @sgugger . I have a temporary workaround on Linux. Will raise it on the PyTorch forums.<|||||>> Thanks @sgugger . I have a temporary workaround on Linux. Will raise it on the PyTorch forums.
How did you solve this problem? I also encountered the same problem. Can you ask me for experience?
|
transformers | 20,700 | closed | [docs] transformers/quicktour rendering issue | There is a rendering issue in https://huggingface.co/docs/transformers/quicktour#trainer-a-pytorch-optimized-training-loop
step 5.
The markdown looks like [this](https://github.com/huggingface/transformers/blob/9e56aff58a742b48fc8edea8d28d5b80330efbcc/docs/source/en/quicktour.mdx?plain=1#L445-L451)
```python
>>> def tokenize_dataset(dataset):
... return tokenizer(dataset["text"])
>>> dataset = dataset.map(tokenize_dataset, batched=True)
```
but `dataset = ...` renders as a separate triple-indented quote
| 12-09-2022 02:25:00 | 12-09-2022 02:25:00 | Transfering to the transformers repo as it's more relevant there cc @sgugger <|||||>Should be fixed by the PR mentioned above.<|||||>Probably not worth looking at more, but just by way of comparison, there is multiple returns on this page that seem to work (again referencing `dataset.map`):
https://huggingface.co/docs/transformers/training#prepare-a-dataset
https://github.com/huggingface/transformers/blob/74330083b5eea68eadf25bcbad3bcbb094f60c57/docs/source/en/training.mdx?plain=1#L50-L54
I have no idea if it matters, but when viewed as markdown, there are some sections of the quickstart that don't render right. E.g., see the
```
```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the...
```
in this [section](https://github.com/huggingface/transformers/blob/9e56aff58a742b48fc8edea8d28d5b80330efbcc/docs/source/en/quicktour.mdx#autotokenizer)
<|||||>Thanks for raising this issue!
It isn't as important when viewed as Markdown because we use a special syntax (`<frameworkcontent>`) to generate the PyTorch and TensorFlow code blocks in our docs. |
transformers | 20,692 | closed | A (possible) bug of sentence permutation in BART pretraining script | ### System Info
None
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
None
### Expected behavior
As described in the [original BART paper](https://arxiv.org/pdf/1910.13461.pdf), the sentence permutation should be: A document is divided into sentences based on **full stops**, and these sentences are shuffled in a random order.
However, in the [examples/flax/language-modeling](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_bart_dlm_flax.py#L318), it uses pad token as the `end_sentence_mask`.
| 12-09-2022 01:27:45 | 12-09-2022 01:27:45 | Hey @StevenTang1998!
Great question! In our Flax pre-training script, we use the PAD token as the end-of-sentence indicator:
https://github.com/huggingface/transformers/blob/76924384af6081e58460231c3c637f9c83cabf97/examples/flax/language-modeling/run_bart_dlm_flax.py#L626-L627
This way, the PAD token is appended after the end of a sentence (full stop, exclamation marks, question marks etc) and used to indicate sentence boundaries. So in splitting a document by the PAD token, we are in effect splitting sentences on end of sentence punctuation (equivalent to the original paper!).
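As a toy illustration of that idea (not the actual pre-processing code), permuting sentinel-delimited "sentences" in a token sequence can be done like this, assuming the sentinel id is 0:
```python
import numpy as np

tokens = np.array([5, 6, 7, 0, 8, 9, 0, 10, 11, 12, 0])   # 0 plays the role of the PAD sentinel
boundaries = np.where(tokens == 0)[0] + 1                  # split right after each sentinel
sentences = np.split(tokens, boundaries)[:-1]              # drop the trailing empty chunk
np.random.shuffle(sentences)                               # shuffle sentence order
permuted = np.concatenate(sentences)
print(permuted)
```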
Hope this addresses your concerns! Let me know if you have any questions and I'd be happy to answer 🤗<|||||>Thanks for kind reply! I understand it now!
But I have a concern that the PAD token may affect the results (through the position embedding?) although it will not be attended to.<|||||>OK, thank you for your reply! Wish you a good day! |
transformers | 20,691 | closed | Doing data preprocessing in a separated run | ### System Info
I am trying to run the file run_speech_recognition_ctc.py on a custom dataset. I use the argument preprocessing_only to run the data preprocessing as a separate step. My question is how to start the model training (as a second step) since there is no previous checkpoint.
Thanks in advance.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
none
### Expected behavior
none | 12-08-2022 21:19:47 | 12-08-2022 21:19:47 | cc @sanchit-gandhi <|||||>Hey @fahad7033! Cool to see that you're using the CTC example script for training 🤗 The argument `--preprocessing_only` will run the fine-tuning script up to the end of the dataset pre-processing: https://github.com/huggingface/transformers/blob/0ba94aceb6e1ab448e0acc896764a4496759cb14/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L656
Once this run has completed, disable the flag `--preprocessing_only` (remove it from your args or set `--preprocessing_only="False"`) and re-run the training script. This time, the training script will use the cached dataset (i.e. it will re-use the pre-processed dataset files that you prepared in your pre-processing run) and then commence training.
It's worth noting that using the `--preprocessing_only` flag is only recommended in distributed training when there is risk of a timeout. If this happens, we switch to a non-distributed set-up and set the `--preprocessing_only` flag. We can then go back to the distributed training set-up and have our dataset ready in cache for training.
If you are not running distributed training or aren't at risk of a timeout (i.e. a very large dataset), it'll be faster and easier for you just to run the script once without the `--preprocessing_only` argument.
Let me know if you have any other questions, happy to help!<|||||>Thank you so much sanchit-gandhi for your response.
I do have a large dataset, so it is better for me to do it in 2 steps.
After completing the run with --preprocessing_only="True", there is no cached file in the output directory, so when I re-run the script with --preprocessing_only disabled, the original dataset is processed again.
Also, I tried to review the script and it seems the code processes the original dataset regardless of whether the flag --preprocessing_only is true or false.
<|||||>When data preprocessing is completed I got this message:
INFO:__main__:Data preprocessing finished. Files cached at {'train': [], 'eval': []}<|||||>Hey @fahad7033! I've tried to reproduce this behaviour with a minimum working example.
System info:
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 2.0.0.dev20221210+cu117 (True)
Script uses a tiny subset of the LibriSpeech ASR dataset (~9MB) and fine-tunes on a tiny Wav2Vec2 CTC model:
```
python run_speech_recognition_ctc.py \
--dataset_name="hf-internal-testing/librispeech_asr_dummy" \
--model_name_or_path="hf-internal-testing/tiny-random-wav2vec2" \
--dataset_config_name="clean" \
--train_split_name="validation" \
--eval_split_name="validation" \
--output_dir="./" \
--max_steps="10" \
--per_device_train_batch_size="16" \
--per_device_eval_batch_size="16" \
--learning_rate="3e-4" \
--warmup_steps="5" \
--evaluation_strategy="steps" \
--length_column_name="input_length" \
--save_strategy="no" \
--eval_steps="5" \
--preprocessing_only="True" \
--preprocessing_num_workers="4" \
--freeze_feature_encoder \
--fp16 \
--overwrite_output_dir\
--group_by_length \
--do_train \
--do_eval \
```
<details>
<summary> Output: </summary>
```
12/19/2022 15:29:32 - INFO - __main__ - Data preprocessing finished. Files cached at
{'train': [{'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-dc486168c3937e95.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-53095567e8277865.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-e089d2a96576c6bb.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-6d3d1c061f60c29b.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00000_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00001_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00002_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00003_of_00004.arrow'}],
'eval': [{'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-dc486168c3937e95.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-53095567e8277865.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-e089d2a96576c6bb.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-6d3d1c061f60c29b.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00000_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00001_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00002_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00003_of_00004.arrow'}]}
```
</details>
We can see here that the dataset has been correctly prepared and cached, so the script is working for me with this toy example. Do you have a reproducible script that I could use to re-create your run? It's impossible for me to say what the issue is without being able to reproduce the error on my side!
Also re-iterating a point raised in my previous message: unless you're fine-tuning using a **large dataset** on **multiple GPUs**, there is no need to use the flag `--preprocessing_only`. For a large dataset on a single GPU, it's better not to use this flag and just run training directly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,690 | closed | Add missing image transforms to init | # What does this PR do?
Adds functionality in the image transforms library to the init to allow direct imports e.g.:
`from transformers import center_crop`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 12-08-2022 21:16:04 | 12-08-2022 21:16:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,689 | closed | Best model selection changed | ### System Info
```
OS: Ubuntu 18.04.2 LTS
python: 3.8.0
torch 1.10.1+cu113
torchaudio 0.10.1+cu113
torchvision 0.11.2+cu113
transformers 4.15.0
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am training an NLI model (but I am pretty sure this is task independent). Here are my training arguments
```
training_args = TrainingArguments(
    output_dir=args.output, # output directory
    num_train_epochs=1, # total # of training epochs
    per_device_train_batch_size=args.batch_size, # batch size per device during training
    per_device_eval_batch_size=args.batch_size, # batch size for evaluation
    warmup_steps=0, # number of warmup steps for learning rate scheduler
    weight_decay=0.01, # strength of weight decay
    logging_dir=os.path.join(args.output, 'logs'), # directory for storing logs
    logging_steps=args.eval_steps,
    learning_rate=1e-05,
    evaluation_strategy="steps",
    save_strategy="steps",
    eval_steps=args.eval_steps,
    save_steps=args.eval_steps,
    gradient_checkpointing=True,
    load_best_model_at_end=True,
    metric_for_best_model='loss', # default changed from this at some point?
    disable_tqdm=True,
    save_total_limit=args.save_total_limit,
)
```
With no argument for `metric_for_best_model`, or with it explicitly set to `loss` as above, the results are the same: the trainer always chooses the best model based on the eval loss, not the training loss.
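For reference, my reading of the relevant part of `Trainer._save_checkpoint` (paraphrased from the source, so treat the exact lines as an approximation) is that the metric is always looked up in the **evaluation** metrics:
```python
# Sketch of how the best checkpoint appears to be selected (not the verbatim Trainer code)
metric_to_check = args.metric_for_best_model
if not metric_to_check.startswith("eval_"):
    metric_to_check = f"eval_{metric_to_check}"   # "loss" becomes "eval_loss"
metric_value = metrics[metric_to_check]           # `metrics` come from evaluation, not training
if best_metric is None or metric_value < best_metric:  # assuming greater_is_better=False
    best_metric = metric_value
    best_model_checkpoint = output_dir
```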
### Expected behavior
I expected it to save the model based on the best training loss; is there any way to do this? In earlier versions of this library that I had used, that used to be the default behaviour, but after looking at the details it doesn't seem to be possible any more. Is that intentional? Is there an option here that would use the training loss that I am just not using correctly?
https://github.com/huggingface/transformers/blame/e3cc4487fe66e03ec85970ea2db8e5fb34c455f4/src/transformers/trainer.py#L2228 | 12-08-2022 19:23:17 | 12-08-2022 19:23:17 | It was never possible, it has always been on the evaluation loss (or any other metric).<|||||>@sgugger thanks for getting back to me so quickly! Did the default metric change at any point? I am just trying to determine why the "best model" being saved is different when I haven't changed anything in my data/training arguments. I thought maybe it was selecting on the training loss before because that is the metric that looked closest to the checkpoint it selected before<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,688 | closed | skip `test_multi_gpu_data_parallel_forward` for `MaskFormerSwinModelTest` | # What does this PR do?
`MaskFormerSwinModel` outputs `hidden_states_spatial_dimensions`, which is ``tuple(tuple(int, int))``, and can't be collected by `nn.DataParallel` to form a final output value.
(When I remove this attribute, this test passes.) | 12-08-2022 18:34:24 | 12-08-2022 18:34:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,687 | closed | Update CI to PyTorch `1.13.0` | # What does this PR do?
The job runs with PT 1.13 show that everything is fine, except:
```python
test_model_parallelism
(line 980) RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cuda:1)
```
Once #20686 is merged, we can merge this PR to use PT 1.13 for CI.
| 12-08-2022 17:25:47 | 12-08-2022 17:25:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,686 | closed | Fix CIs for PyTorch 1.13 | # What does this PR do?
Before we can update CI to use PyTorch 1.13, there are a few things to fix.
This PR prevents `test_model_parallelism` from failing with some models, where PyTorch 1.13 more strictly requires the indices to be on the same device as the indexed tensor.
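For illustration, the kind of change involved looks roughly like this (a simplified sketch, not the actual diff; the tensor names are placeholders):
```python
# Before: fails on PyTorch 1.13 when the model is sharded across several GPUs,
# because `token_indices` may live on a different device than `hidden_states`.
selected = hidden_states[token_indices]

# After: move the indices onto the device of the indexed tensor first.
selected = hidden_states[token_indices.to(hidden_states.device)]
```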
| 12-08-2022 17:13:44 | 12-08-2022 17:13:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hello and have a good time
Can anyone here help me with the "multi_label_classification" and "single _label_classification" functions?
I am writing a code for dynamic quantization, and I have encountered problems with these functions and their writing.<|||||>Hi @fkoeini
First of all, your question is unrelated to this PR, and here (this PR page) is not the correct place for your question above.
Also, from your description, the [Hugging Face Forums](https://discuss.huggingface.co/) is the place to post the question. On GitHub, it's for bugs or feature requests :-) Thank you!
transformers | 20,685 | closed | Training ConformerCTC suitable for online inference | ### Feature request
Provide a mechanism (or docs if already possible) for training a Conformer model with CTC loss function, such that when inferring live using blocked buffered data, you get the same output as if passing the whole data in one go. Also, discuss if this is resilient to sample offsets.
### Motivation
I would like to use a Conformer model trained with CTC loss live using buffered data coming of a sensor.
### Your contribution
Maybe, if I had some guidance.
transformers | 20,684 | closed | Enable bf16 option for XLA devices | # What does this PR do?
XLA devices like TPU and NeuronCore support BF16 natively. This PR enables the --bf16 option to work for XLA devices.
Since BF16 doesn't require gradient scaling, the gradient scaling path is disabled for XLA devices.
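For illustration, the general idea is sketched below (a simplified sketch of the intent, not the exact change in `Trainer`):
```python
import torch

# fp16 gradients can under/overflow, so the loss is scaled; bf16 has the same exponent
# range as fp32 and therefore does not need a GradScaler at all.
if args.fp16:
    scaler = torch.cuda.amp.GradScaler()
elif args.bf16:
    scaler = None
```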
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 12-08-2022 15:46:38 | 12-08-2022 15:46:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> I don't see how the gradient scaling path is disabled, could you share more information on that? A scaler is still defined at line 591.
Line 568 disables the code section that enables grad scaler when XLA device is detected (is_torch_tpu_available()).<|||||>This PR accidentally disabled gradient scaling when using FP16 on XLA devices. <|||||>Indeed. Do you want to make a PR with a fix @Lokiiiiii ? |
transformers | 20,683 | closed | Add `keep_in_fp32_modules` support | # What does this PR do?
This PR partially addresses #20287: although half-precision and int8 conversion work extremely well for most models, for some architectures (e.g. T5) the casting leads to a drastic performance degradation.
This can be fixed by manually force-casting some modules in `float32`. For FLAN-T5, @larsmennen and @navjotts have found that keeping only these weights in `fp32` makes it possible to run the largest models in fp16 or int8 with no performance degradation.
This PR introduces a new util in the `from_pretrained` method, termed `keep_in_fp32_modules`, that partially addresses this issue.
How does this util work? For T5:
```
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("t5-small", torch_dtype=torch.float16, keep_in_fp32_modules=["wo"])
print(model.decoder.block[0].layer[2].DenseReluDense.wo.weight.dtype)
>>> torch.float32
```
When using `keep_in_fp32_modules`, `low_cpu_mem_usage` needs to be force-set to `True`. This is because if `low_cpu_mem_usage=False`, it is the PyTorch function `_load_from_state_dict` that is called under the hood on each sub-module. This function calls PyTorch's `copy_`, which keeps the tensor in its native `dtype` regardless of the `dtype` of the input module:
```
import torch
param = torch.Tensor([0.1, 0.2, 0.3]).to(torch.float16)
to_copy_param = torch.Tensor([0.2, 0.1, 0.3]).to(torch.float32)
param.copy_(to_copy_param)
print(param.dtype)
>>> torch.float16
```
Keeping this as a draft for now as this util needs to be manually patched with fixes such as https://github.com/huggingface/transformers/issues/20287#issuecomment-1342219429 , otherwise users will encounter issues about incompatible `dtype` between input and weights
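For reference, one possible shape of such a manual patch is a forward pre-hook that casts floating-point inputs to match the fp32 weights (a sketch only; the module selection below is a placeholder, not the final implementation):
```python
import torch

def cast_inputs_to_fp32(module, inputs):
    # Forward pre-hook: make the inputs match the fp32 weights kept by `keep_in_fp32_modules`.
    return tuple(
        x.to(torch.float32) if torch.is_tensor(x) and x.is_floating_point() else x
        for x in inputs
    )

for name, module in model.named_modules():
    if name.endswith(".wo"):  # the T5 modules kept in fp32 in this example
        module.register_forward_pre_hook(cast_inputs_to_fp32)
```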
cc @sgugger | 12-08-2022 15:40:14 | 12-08-2022 15:40:14 | What about adding hooks on each converted module, that will take care of converting the input / output to the correct `dtype` ?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>As suggested in #20287 / model loaded in bfloat16 should keep their weights in `bfloat16` and not cast them in `fp32`. This is addressed in e3498da<|||||>Thanks @younesbelkada and @sgugger !! Tested this locally; can confirm this works with patch 1&2 from https://github.com/huggingface/transformers/issues/20287#issuecomment-1342219429
The only problem I encountered is that in:
https://github.com/huggingface/transformers/blob/0f75387d633619b8c0cf6955b07dac54d2e17473/src/transformers/modeling_utils.py#L2326
You get an error as `keep_in_fp32_modules` is an unexpected keyword to the underlying model class (locally i just added it quickly to test). Do you want to add this in so people can use it in their model class to determine where to apply patches like 1&2? Or alternatively don't pass it on and then people can just query the dtype.<|||||>Thanks so much @larsmennen for confirming that the tests pass! We should be close merging this 💪
I think that your failing test should be fixed with my latest commit ( cb89c42 ) but I am not sure, could you try again with the latest commit? 🙏 <|||||>hmm that doesn't fix it. I think you just need to pop the argument from model_kwargs, otherwise it gets passed to the underlying model (i'm assuming you don't want that? but cmiiw)
I.e. after
https://github.com/huggingface/transformers/blob/7d47df2e52c6b55d8f19e5b2bd8b5e472a4f0a82/src/transformers/modeling_utils.py#L1981
if you add
```py
keep_in_fp32_modules = kwargs.pop("keep_in_fp32_modules", None)
```
I tested w/ that modification on top of [7d47df2](https://github.com/huggingface/transformers/pull/20683/commits/7d47df2e52c6b55d8f19e5b2bd8b5e472a4f0a82) and that works! Thanks for the quick action @younesbelkada ! 🙏 <|||||>@larsmennen how are you loading your model ? The description above is slightly misleading as initially the plan was to add a kwarg when loading the model as follows:
```
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("t5-small", torch_dtype=torch.float16, keep_in_fp32_modules=["wo"])
```
but now this is not needed, you should just load your model like:
```
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("t5-small", device_map="auto", load_in_8bit=True)
```<|||||>@younesbelkada ah i see! I was passing the kwarg yes, so that explains.<|||||>@larsmennen this PR will be merged as soon as all the tests will be green !
Would you mind opening a PR addressing your suggestions (patch 1 & 2 from the discussion at #20287 )? <|||||>All slow tests from T5 (and BLOOM just in case we didn't break anything else) pass 🟢
Merging once the CI tests are green<|||||>I tried this one, latest version of transformers (27.4), cuda 10.2 and I get this error:
`model1a_CPU = T5ForConditionalGeneration.from_pretrained(model_path, low_cpu_mem_usage=True,torch_dtype=torch.float16, keep_in_fp32_modules=["wo"]).to("cuda")
TypeError: __init__() got an unexpected keyword argument 'keep_in_fp32_modules'`
<|||||>`keep_in_fp32_modules` is not an argument you can pass to `from_pretrained`, this is all done internally.<|||||>You need to do somthing like:
```python
from transformers import T5ForConditionalGeneration
T5ForConditionalGeneration._keep_in_fp32_modules = ["wo"]
# your code here
```<|||||>Except this is already done for T5 ;-) |
transformers | 20,682 | closed | Feature: Adding support for MultiWorkerMirroredStrategy in TensorFlow Training Args | # What does this PR do?
Adding support for MultiWorkerMirroredStrategy in TensorFlow Training Args. This is an existing stable Distribution strategy provided by TensorFlow.
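For context, the strategy itself is the standard TensorFlow one; a minimal usage sketch (independent of the exact flag wired into the training args, which is left out here) looks like:
```python
import tensorflow as tf

# Workers discover each other through the standard TF_CONFIG environment variable.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # The Keras model must be built and compiled inside the strategy scope.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
```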
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @jplu
| 12-08-2022 15:18:48 | 12-08-2022 15:18:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20682). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,681 | closed | Whitelist Transformers private method in DummyObject | # What does this PR do?
Fixes #20671
As reported in #20671, calling `AutoModel.from_config` on a model with a missing specific soft dependency does not raise the appropriate error. This is because this method ends up calling `ModelClass._from_config` and the `DummyObject` class does not raise the error on all private attributes (basically because we need the `__xxx__` attribute to stay the same). This PR whitelists `_from_config` to fix the issue. | 12-08-2022 15:10:31 | 12-08-2022 15:10:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>You can't do that kind of check without triggering recursion errors (cause it calls methods like `__getattribute__` inside that method ;-) ).
transformers | 20,680 | closed | Fix expected values for TF-ESM tests | I computed the expected values for these tests on my local machine with TF32 enabled - my bad! This replaces them with the correct float32 expected outputs. | 12-08-2022 14:42:58 | 12-08-2022 14:42:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,679 | closed | [`ViTHybrid`] Fix `accelerate` slow tests | # What does this PR do?
Fixes failing `ViTHybrid` `accelerate` tests
Uses the same procedure as DPTHybrid for `backbone_featmap_shape` - Now all slow tests should pass :-)
cc @sgugger @ydshieh | 12-08-2022 14:21:07 | 12-08-2022 14:21:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,678 | closed | added model resources for CPMAnt | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-08-2022 14:05:43 | 12-08-2022 14:05:43 | Hi there, thanks for your PR! It contains a lot of modifications that have nothing to do with the title (bad rebase?) so you might need to open a fresh PR :-)<|||||>> Hi there, thanks for your PR! It contains a lot of modifications that have nothing to do with the title (bad rebase?) so you might need to open a fresh PR :-)
Should I pass all checks?
|
transformers | 20,677 | closed | Bump certifi from 2021.10.8 to 2022.12.7 in /examples/research_projects/decision_transformer | Bumps [certifi](https://github.com/certifi/python-certifi) from 2021.10.8 to 2022.12.7.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/9e9e840925d7b8e76c76fdac1fab7e6e88c1c3b8"><code>9e9e840</code></a> 2022.12.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b81bdb269f1edb791bcd4ec8a9d0c053758f961a"><code>b81bdb2</code></a> 2022.09.24</li>
<li><a href="https://github.com/certifi/python-certifi/commit/939a28ffc57b1613770f572b584745c7b6d43e7d"><code>939a28f</code></a> 2022.09.14</li>
<li><a href="https://github.com/certifi/python-certifi/commit/aca828a78e73235a513dff9ebc181a47ef7dbf7b"><code>aca828a</code></a> 2022.06.15.2</li>
<li><a href="https://github.com/certifi/python-certifi/commit/de0eae12a6d5794a4c1e33052af6717707ce1fcc"><code>de0eae1</code></a> Only use importlib.resources's new files() / Traversable API on Python ≥3.11 ...</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b8eb5e9af9143b22b7f651942b393e369ed4c52a"><code>b8eb5e9</code></a> 2022.06.15.1</li>
<li><a href="https://github.com/certifi/python-certifi/commit/47fb7ab715965684e035292d2ad3386aabdc4d25"><code>47fb7ab</code></a> Fix deprecation warning on Python 3.11 (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/199">#199</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b0b48e059995f455ac1e79b3ad373ad4ef355516"><code>b0b48e0</code></a> fixes <a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/198">#198</a> -- update link in license</li>
<li><a href="https://github.com/certifi/python-certifi/commit/9d514b4cad79357071c89d7dc4dc1b4df72bb997"><code>9d514b4</code></a> 2022.06.15</li>
<li><a href="https://github.com/certifi/python-certifi/commit/4151e8849481f396537c34812068e89b32731e52"><code>4151e88</code></a> Add py.typed to MANIFEST.in to package in sdist (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/196">#196</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2021.10.08...2022.12.07">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 12-08-2022 14:04:28 | 12-08-2022 14:04:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,676 | closed | fix text config and model loading. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-08-2022 13:43:57 | 12-08-2022 13:43:57 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20676). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,675 | closed | [Backbones] Improve out features | # What does this PR do?
This PR makes sure backbones by default return the feature map of the last stage in case `config.out_features = None`. | 12-08-2022 13:16:54 | 12-08-2022 13:16:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,674 | closed | How to write a custom configuration for hugging face model for Token Classification | **Model description**
I add a simple custom `pytorch-crf` layer on top of a `TokenClassification` model for `NER`. It will make the model more robust.
I train the model and I get the following error:
```
***** Running training *****
Num examples = 4
Num Epochs = 2
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 2
Gradient Accumulation steps = 1
Total optimization steps = 4
TypeError: __init__() missing 3 required positional arguments: 'id2label', 'label2id', and 'num_labels'
```
**Code**
```python
import torch
import torch.nn as nn
from torchcrf import CRF
from transformers import (
    AutoConfig,
    AutoModel,
    BertTokenizer,
    BertForTokenClassification,  # note: shadowed below by the custom class of the same name
    PretrainedConfig,
    PreTrainedModel,
    Trainer,
    TrainingArguments,
)

model_checkpoint = "spanbert"
tokenizer = BertTokenizer.from_pretrained(model_checkpoint, add_prefix_space=True)
bert_model = BertForTokenClassification.from_pretrained(
    model_checkpoint, id2label=id2label, label2id=label2id)
bert_model.config.output_hidden_states = True


class BertClassifierConfig(PretrainedConfig):
    model_type = "BertForTokenClassification"

    def __init__(self, id2label, label2id, num_labels, **kwargs):
        self.num_labels = num_labels
        self.id2label = id2label
        self.label2id = label2id
        self.output_hidden_states = True
        super().__init__(**kwargs)
```
**Model**
```python
class BertForTokenClassification(PreTrainedModel):
    config_class = BertClassifierConfig

    def __init__(self, config, bert_model, num_labels):
        super(BertForTokenClassification, self).__init__(config)
        self.bert = bert_model
        self.dropout = nn.Dropout(0.25)
        self.classifier = nn.Linear(768, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None, token_type_ids=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        sequence_output = torch.stack((outputs[1][-1], outputs[1][-2], outputs[1][-3], outputs[1][-4])).mean(dim=0)
        sequence_output = self.dropout(sequence_output)
        emission = self.classifier(sequence_output)  # [32,256,17]
        labels = labels.reshape(attention_mask.size()[0], attention_mask.size()[1])
        if labels is not None:
            loss = -self.crf(log_soft(emission, 2), labels, mask=attention_mask.type(torch.uint8), reduction='mean')
            prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
            return [loss, prediction]
        else:
            prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
            return prediction
```
**Saving**
```python
configuration = BertClassifierConfig(id2label, label2id, num_labels=len(label2id))
model = BertForTokenClassification(configuration, bert_model, num_labels=len(label2id))
model.to(device)

args = TrainingArguments(
    "test0000",
    # evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=2,
    weight_decay=0.01,
    per_device_train_batch_size=2,
    # per_device_eval_batch_size=32
    fp16=True
    # bf16=True  # Ampere GPU
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_data,
    # eval_dataset=train_data,
    # data_collator=data_collator,
    # compute_metrics=compute_metrics,
    tokenizer=tokenizer)
```
**Saving**
```python
trainer.train()
trainer.save_model("modeltest")

AutoConfig.register("BertForTokenClassification", BertClassifierConfig)
AutoModel.register(BertClassifierConfig, BertForTokenClassification)
```
**ERROR**
```
***** Running training *****
Num examples = 4
Num Epochs = 2
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 2
Gradient Accumulation steps = 1
Total optimization steps = 4
TypeError: __init__() missing 3 required positional arguments: 'id2label', 'label2id', and 'num_labels'
```
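For reference, a likely cause (my assumption, not something confirmed in this thread) is that the custom config cannot be re-instantiated from its serialized dict during training and checkpointing, because its `__init__` has required positional arguments. Custom configurations are expected to give every argument a default, e.g.:
```python
class BertClassifierConfig(PretrainedConfig):
    model_type = "BertForTokenClassification"

    def __init__(self, id2label=None, label2id=None, num_labels=2, **kwargs):
        self.output_hidden_states = True
        # PretrainedConfig handles num_labels / id2label / label2id when passed as kwargs.
        super().__init__(num_labels=num_labels, id2label=id2label, label2id=label2id, **kwargs)
```
With defaults in place, reconstructing the config from its saved dict should no longer fail with the `TypeError` above.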
| 12-08-2022 12:49:25 | 12-08-2022 12:49:25 | Please use the [forums](https://discuss.huggingface.co/) to help debug your code.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,673 | closed | Bump certifi from 2020.6.20 to 2022.12.7 in /examples/research_projects/visual_bert | [//]: # (dependabot-start)
⚠️ **Dependabot is rebasing this PR** ⚠️
Rebasing might not happen immediately, so don't worry if this takes some time.
Note: if you make any changes to this PR yourself, they will take precedence over the rebase.
---
[//]: # (dependabot-end)
Bumps [certifi](https://github.com/certifi/python-certifi) from 2020.6.20 to 2022.12.7.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/9e9e840925d7b8e76c76fdac1fab7e6e88c1c3b8"><code>9e9e840</code></a> 2022.12.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b81bdb269f1edb791bcd4ec8a9d0c053758f961a"><code>b81bdb2</code></a> 2022.09.24</li>
<li><a href="https://github.com/certifi/python-certifi/commit/939a28ffc57b1613770f572b584745c7b6d43e7d"><code>939a28f</code></a> 2022.09.14</li>
<li><a href="https://github.com/certifi/python-certifi/commit/aca828a78e73235a513dff9ebc181a47ef7dbf7b"><code>aca828a</code></a> 2022.06.15.2</li>
<li><a href="https://github.com/certifi/python-certifi/commit/de0eae12a6d5794a4c1e33052af6717707ce1fcc"><code>de0eae1</code></a> Only use importlib.resources's new files() / Traversable API on Python ≥3.11 ...</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b8eb5e9af9143b22b7f651942b393e369ed4c52a"><code>b8eb5e9</code></a> 2022.06.15.1</li>
<li><a href="https://github.com/certifi/python-certifi/commit/47fb7ab715965684e035292d2ad3386aabdc4d25"><code>47fb7ab</code></a> Fix deprecation warning on Python 3.11 (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/199">#199</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b0b48e059995f455ac1e79b3ad373ad4ef355516"><code>b0b48e0</code></a> fixes <a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/198">#198</a> -- update link in license</li>
<li><a href="https://github.com/certifi/python-certifi/commit/9d514b4cad79357071c89d7dc4dc1b4df72bb997"><code>9d514b4</code></a> 2022.06.15</li>
<li><a href="https://github.com/certifi/python-certifi/commit/4151e8849481f396537c34812068e89b32731e52"><code>4151e88</code></a> Add py.typed to MANIFEST.in to package in sdist (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/196">#196</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2020.06.20...2022.12.07">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 12-08-2022 12:16:26 | 12-08-2022 12:16:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,672 | closed | Bump certifi from 2020.6.20 to 2022.12.7 in /examples/research_projects/lxmert | Bumps [certifi](https://github.com/certifi/python-certifi) from 2020.6.20 to 2022.12.7.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/9e9e840925d7b8e76c76fdac1fab7e6e88c1c3b8"><code>9e9e840</code></a> 2022.12.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b81bdb269f1edb791bcd4ec8a9d0c053758f961a"><code>b81bdb2</code></a> 2022.09.24</li>
<li><a href="https://github.com/certifi/python-certifi/commit/939a28ffc57b1613770f572b584745c7b6d43e7d"><code>939a28f</code></a> 2022.09.14</li>
<li><a href="https://github.com/certifi/python-certifi/commit/aca828a78e73235a513dff9ebc181a47ef7dbf7b"><code>aca828a</code></a> 2022.06.15.2</li>
<li><a href="https://github.com/certifi/python-certifi/commit/de0eae12a6d5794a4c1e33052af6717707ce1fcc"><code>de0eae1</code></a> Only use importlib.resources's new files() / Traversable API on Python ≥3.11 ...</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b8eb5e9af9143b22b7f651942b393e369ed4c52a"><code>b8eb5e9</code></a> 2022.06.15.1</li>
<li><a href="https://github.com/certifi/python-certifi/commit/47fb7ab715965684e035292d2ad3386aabdc4d25"><code>47fb7ab</code></a> Fix deprecation warning on Python 3.11 (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/199">#199</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b0b48e059995f455ac1e79b3ad373ad4ef355516"><code>b0b48e0</code></a> fixes <a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/198">#198</a> -- update link in license</li>
<li><a href="https://github.com/certifi/python-certifi/commit/9d514b4cad79357071c89d7dc4dc1b4df72bb997"><code>9d514b4</code></a> 2022.06.15</li>
<li><a href="https://github.com/certifi/python-certifi/commit/4151e8849481f396537c34812068e89b32731e52"><code>4151e88</code></a> Add py.typed to MANIFEST.in to package in sdist (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/196">#196</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2020.06.20...2022.12.07">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 12-08-2022 11:51:47 | 12-08-2022 11:51:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,671 | closed | Calling `AutoModel.from_config()` method for a model requiring timm does not raise ImportError although it should | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- Huggingface_hub version: 0.11.0.dev0
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.2 (cpu)
- Jax version: 0.3.14
- JaxLib version: 0.3.14
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`pip uninstall timm`, and then:
```python
from transformers import AutoModel, AutoConfig
cfg = AutoConfig.from_pretrained("hf-internal-testing/tiny-random-detr")
model = AutoModel.from_config(cfg)
```
raising:
```
Traceback (most recent call last):
File "<tmp 1>", line 18, in <module>
model = AutoModel.from_config(cfg)
File "/home/fxmarty/hf_internship/transformers/src/transformers/models/auto/auto_factory.py", line 410, in from_config
return model_class._from_config(config, **kwargs)
File "/home/fxmarty/hf_internship/transformers/src/transformers/utils/import_utils.py", line 1008, in __getattribute__
return super().__getattribute__(key)
AttributeError: type object 'DetrModel' has no attribute '_from_config'
```
### Expected behavior
It should raise:
```
ImportError:
DetrModel requires the timm library but it was not found in your environment. You can install it with pip:
`pip install timm`. Please note that you may need to restart your runtime after installation.
```
as in https://github.com/huggingface/transformers/blob/main/src/transformers/utils/dummy_timm_and_vision_objects.py#L78 | 12-08-2022 11:04:14 | 12-08-2022 11:04:14 | Indeed, I can see why and it's an easy fix. Will make a PR in a couple of hours!<|||||>Can you try the PR mentioned above?<|||||>Works well thanks for the fix! |
transformers | 20,670 | closed | T5 for Q&A produces truncated sentence | Dear all, I am fine-tuning T5 for Q&A task using the MedQuAD ([GitHub - abachaa/MedQuAD: Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites](https://github.com/abachaa/MedQuAD)) dataset. In the dataset, there are many long answers with thousands of words. I have used pytorch_lightning to train the T5-large model. I have two questions.
For example, I set both the max_length, max_input_length, max_output_length to 128.
How to deal with those long answers? I just left them as is, assuming the T5Tokenizer handles them automatically; I would expect the tokenizer simply truncates an answer at the position of the 128th word (or the 127th). Is it possible to manually split an answer into different parts, each part having 128 words, so that all these sub-answers serve as separate answers to the same question?
Another question is that I get incomplete (truncated) answers when using the fine-tuned model in inference, even though the predicted answer is shorter than 128 words. I found a message posted 2 years ago saying that one should add `</s>` at the end of texts when fine-tuning T5. I followed that, but then got a warning message that duplicated `</s>` tokens were found. I am assuming that this is because the tokenizer truncates an answer text, so `</s>` is missing in the truncated answer, such that the end token is not produced in the predicted answer. However, I am not sure. Can anybody point out how to address this issue?
Any suggestions are highly appreciated.
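For reference, a minimal sketch of the kind of target tokenization I am referring to (this is an assumption about what should happen inside `get_dataset`, not the code I currently use):
```python
# Sketch: encoding a (possibly very long) answer as the target sequence.
target_encoding = tokenizer(
    answer_text,
    max_length=256,
    padding="max_length",
    truncation=True,   # long answers are simply cut off here
    return_tensors="pt",
)
# Recent tokenizers append </s> automatically; appending it manually to the text is
# presumably what triggers the "duplicated" warning mentioned above.
```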
```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader
import torch
import numpy as np
import time
from pathlib import Path
from transformers import (
    Adafactor,
    T5ForConditionalGeneration,
    T5Tokenizer,
    get_linear_schedule_with_warmup
)
from torch.utils.data import RandomSampler
from question_answering.utils import *


class T5FineTuner(pl.LightningModule):
    def __init__(self, hyparams):
        super(T5FineTuner, self).__init__()
        self.hyparams = hyparams
        self.model = T5ForConditionalGeneration.from_pretrained(hyparams.model_name_or_path)
        self.tokenizer = T5Tokenizer.from_pretrained(hyparams.tokenizer_name_or_path)
        if self.hyparams.freeze_embeds:
            self.freeze_embeds()
        if self.hyparams.freeze_encoder:
            self.freeze_params(self.model.get_encoder())
            # assert_all_frozen()
        self.step_count = 0
        self.output_dir = Path(self.hyparams.output_dir)
        n_observations_per_split = {
            'train': self.hyparams.n_train,
            'validation': self.hyparams.n_val,
            'test': self.hyparams.n_test
        }
        self.n_obs = {k: v if v >= 0 else None for k, v in n_observations_per_split.items()}
        self.em_score_list = []
        self.subset_score_list = []
        data_folder = r'C:\Datasets\MedQuAD-master'
        self.train_data, self.val_data, self.test_data = load_medqa_data(data_folder)

    def freeze_params(self, model):
        for param in model.parameters():
            param.requires_grad = False

    def freeze_embeds(self):
        try:
            self.freeze_params(self.model.model.shared)
            for d in [self.model.model.encoder, self.model.model.decoder]:
                self.freeze_params(d.embed_positions)
                self.freeze_params(d.embed_tokens)
        except AttributeError:
            self.freeze_params(self.model.shared)
            for d in [self.model.encoder, self.model.decoder]:
                self.freeze_params(d.embed_tokens)

    def lmap(self, f, x):
        return list(map(f, x))

    def is_logger(self):
        return self.trainer.proc_rank <= 0

    def forward(self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, labels=None):
        return self.model(
            input_ids,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
            decoder_attention_mask=decoder_attention_mask,
            labels=labels
        )

    def _step(self, batch):
        labels = batch['target_ids']
        labels[labels[:, :] == self.tokenizer.pad_token_id] = -100
        outputs = self(
            input_ids=batch['source_ids'],
            attention_mask=batch['source_mask'],
            labels=labels,
            decoder_attention_mask=batch['target_mask']
        )
        loss = outputs[0]
        return loss

    def ids_to_clean_text(self, generated_ids):
        gen_text = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
        return self.lmap(str.strip, gen_text)

    def _generative_step(self, batch):
        t0 = time.time()
        generated_ids = self.model.generate(
            batch["source_ids"],
            attention_mask=batch["source_mask"],
            use_cache=True,
            decoder_attention_mask=batch['target_mask'],
            max_length=128,
            num_beams=2,
            early_stopping=True
        )
        preds = self.ids_to_clean_text(generated_ids)
        targets = self.ids_to_clean_text(batch["target_ids"])
        gen_time = (time.time() - t0) / batch["source_ids"].shape[0]
        loss = self._step(batch)
        base_metrics = {'val_loss': loss}
        summ_len = np.mean(self.lmap(len, generated_ids))
        base_metrics.update(gen_time=gen_time, gen_len=summ_len, preds=preds, target=targets)
        em_score, subset_match_score = calculate_scores(preds, targets)
        self.em_score_list.append(em_score)
        self.subset_score_list.append(subset_match_score)
        em_score = torch.tensor(em_score, dtype=torch.float32)
        subset_match_score = torch.tensor(subset_match_score, dtype=torch.float32)
        base_metrics.update(em_score=em_score, subset_match_score=subset_match_score)
        # rouge_results = self.rouge_metric.compute()
        # rouge_dict = self.parse_score(rouge_results)
        return base_metrics

    def training_step(self, batch, batch_idx):
        loss = self._step(batch)
        tensorboard_logs = {'train_loss': loss}
        return {'loss': loss, 'log': tensorboard_logs}

    def training_epoch_end(self, outputs):
        avg_train_loss = torch.stack([x['loss'] for x in outputs]).mean()
        tensorboard_logs = {'avg_train_loss': avg_train_loss}
        # return {'avg_train_loss': avg_train_loss, 'log': tensorboard_logs, 'progress_bar': tensorboard_logs}

    def validation_step(self, batch, batch_idx):
        return self._generative_step(batch)

    def validation_epoch_end(self, outputs):
        avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        tensorboard_logs = {'val_loss': avg_loss}
        if len(self.em_score_list) <= 2:
            average_em_score = sum(self.em_score_list) / len(self.em_score_list)
            average_subset_match_score = sum(self.subset_score_list) / len(self.subset_score_list)
        else:
            latest_em_score = self.em_score_list[:-2]
            latest_subset_score = self.subset_score_list[:-2]
            average_em_score = sum(latest_em_score) / len(latest_em_score)
            average_subset_match_score = sum(latest_subset_score) / len(latest_subset_score)
        average_em_score = torch.tensor(average_em_score, dtype=torch.float32)
        average_subset_match_score = torch.tensor(average_subset_match_score, dtype=torch.float32)
        tensorboard_logs.update(em_score=average_em_score, subset_match_score=average_subset_match_score)
        self.target_gen = []
        self.prediction_gen = []
        return {
            'avg_val_loss': avg_loss,
            'em_score': average_em_score,
            'subset_match_socre': average_subset_match_score,
            'log': tensorboard_logs,
            'progress_bar': tensorboard_logs
        }

    def configure_optimizers(self):
        model = self.model
        no_decay = ["bias", "LayerNorm.weight"]
        optimizer_grouped_parameters = [
            {
                "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
                "weight_decay": self.hyparams.weight_decay,
            },
            {
                "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
                "weight_decay": 0.0,
            },
        ]
        optimizer = Adafactor(optimizer_grouped_parameters, lr=self.hyparams.learning_rate, scale_parameter=False,
                              relative_step=False)
        self.opt = optimizer
        return [optimizer]

    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure=None,
                       on_tpu=False, using_native_amp=False, using_lbfgs=False):
        optimizer.step(closure=optimizer_closure)
        optimizer.zero_grad()
        self.lr_scheduler.step()

    def get_tqdm_dict(self):
        tqdm_dict = {"loss": "{:.3f}".format(self.trainer.avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]}
        return tqdm_dict

    def train_dataloader(self):
        n_samples = self.n_obs['train']
        train_dataset = get_dataset(tokenizer=self.tokenizer, data=self.train_data, num_samples=n_samples,
                                    args=self.hyparams)
        sampler = RandomSampler(train_dataset)
        dataloader = DataLoader(train_dataset, sampler=sampler, batch_size=self.hyparams.train_batch_size,
                                drop_last=True, num_workers=4)
        # t_total = (
        #     (len(dataloader.dataset) // (self.hyparams.train_batch_size * max(1, self.hyparams.n_gpu)))
        #     // self.hyparams.gradient_accumulation_steps
        #     * float(self.hyparams.num_train_epochs)
        # )
        t_total = 100000
        scheduler = get_linear_schedule_with_warmup(
            self.opt, num_warmup_steps=self.hyparams.warmup_steps, num_training_steps=t_total
        )
        self.lr_scheduler = scheduler
        return dataloader

    def val_dataloader(self):
        n_samples = self.n_obs['validation']
        validation_dataset = get_dataset(tokenizer=self.tokenizer, data=self.val_data, num_samples=n_samples,
                                         args=self.hyparams)
        sampler = RandomSampler(validation_dataset)
        return DataLoader(validation_dataset, shuffle=False, batch_size=self.hyparams.eval_batch_size, sampler=sampler, num_workers=4)

    def test_dataloader(self):
        n_samples = self.n_obs['test']
        test_dataset = get_dataset(tokenizer=self.tokenizer, data=self.test_data, num_samples=n_samples, args=self.hyparams)
        return DataLoader(test_dataset, batch_size=self.hyparams.eval_batch_size, num_workers=4)

    def on_save_checkpoint(self, checkpoint):
        save_path = self.output_dir.joinpath("best_tfmr")
        self.model.config.save_step = self.step_count
        self.model.save_pretrained(save_path)
        self.tokenizer.save_pretrained(save_path)


import os
import argparse
import pytorch_lightning as pl
from question_answering.t5_closed_book import T5FineTuner

if __name__ == '__main__':
    os.environ['REQUESTS_CA_BUNDLE'] = r'C:\ProgramData\NORCE\cer\NORCE_CA.cer'
    # os.environ["PL_TORCH_DISTRIBUTED_BACKEND"] = "gloo"
    # nltk.download('punkt')
    args_dict = dict(
        output_dir="",  # path to save the checkpoints
        model_name_or_path='t5-large',
        tokenizer_name_or_path='t5-large',
        max_input_length=128,
        max_output_length=256,
        freeze_encoder=False,
        freeze_embeds=False,
        learning_rate=1e-5,
        weight_decay=0.0,
        adam_epsilon=1e-8,
        warmup_steps=0,
        train_batch_size=4,
        eval_batch_size=4,
        num_train_epochs=2,
        gradient_accumulation_steps=10,
        n_gpu=1,
        resume_from_checkpoint=None,
        val_check_interval=0.5,
        n_val=4000,
        n_train=-1,
        n_test=-1,
        early_stop_callback=False,
        fp_16=False,  # if you want to enable 16-bit training then install apex and set this to true
        opt_level='O1',
        # you can find out more on optimisation levels here https://nvidia.github.io/apex/amp.html#opt-levels-and-properties
        max_grad_norm=1.0,  # if you enable 16-bit training then set this to a sensible value, 0.5 is a good default
        seed=101,
    )
    args_dict.update({'output_dir': 't5_large_MedQuAD_256', 'num_train_epochs': 100,
                      'train_batch_size': 8, 'eval_batch_size': 8, 'learning_rate': 1e-3})
    # 'resume_from_checkpoint': 't5_trivia_qa_closedbook/checkpointepoch=53.ckpt'})
    args = argparse.Namespace(**args_dict)
    checkpoint_callback = pl.callbacks.ModelCheckpoint(dirpath=args.output_dir, monitor="em_score", mode="max", save_top_k=1)

    ## If resuming from checkpoint, add an arg resume_from_checkpoint
    train_params = dict(
        accumulate_grad_batches=args.gradient_accumulation_steps,
        gpus=args.n_gpu,
        max_epochs=args.num_train_epochs,
        # early_stop_callback=False,
        precision=16 if args.fp_16 else 32,
        # amp_level=args.opt_level,
        # resume_from_checkpoint=args.resume_from_checkpoint,
        gradient_clip_val=args.max_grad_norm,
        checkpoint_callback=checkpoint_callback,
        val_check_interval=args.val_check_interval,
        # accelerator='dp'
        # logger=wandb_logger,
        # callbacks=[LoggingCallback()],
    )

    model = T5FineTuner(args)
    trainer = pl.Trainer(**train_params)
    trainer.fit(model)
```
| 12-08-2022 10:55:59 | 12-08-2022 10:55:59 | It is so strange that the format of code does not look correct, even I have put them in ``.<|||||>Hi there, you should use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only :-)<|||||>> Hi there, you should use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only :-)
Hi, I am so sorry about it. I have actually asked the same question in the forums, but didn't get answers. So I just want to try my luck here. I will close the issue. |