repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 21,169 | closed | Bump future from 0.18.2 to 0.18.3 in /examples/research_projects/lxmert | Bumps [future](https://github.com/PythonCharmers/python-future) from 0.18.2 to 0.18.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/PythonCharmers/python-future/releases">future's releases</a>.</em></p>
<blockquote>
<h2>v0.18.3</h2>
<p>This is a minor bug-fix release containing a number of fixes:</p>
<ul>
<li>Backport fix for bpo-38804 (c91d70b)</li>
<li>Fix bug in fix_print.py fixer (dffc579)</li>
<li>Fix bug in fix_raise.py fixer (3401099)</li>
<li>Fix newint bool in py3 (fe645ba)</li>
<li>Fix bug in super() with metaclasses (6e27aac)</li>
<li>docs: fix simple typo, reqest -> request (974eb1f)</li>
<li>Correct <strong>eq</strong> (c780bf5)</li>
<li>Pass if lint fails (2abe00d)</li>
<li>Update docker image and parcel out to constant variable. Add comment to update version constant (45cf382)</li>
<li>fix order (f96a219)</li>
<li>Add flake8 to image (046ff18)</li>
<li>Make lint.sh executable (58cc984)</li>
<li>Add docker push to optimize CI (01e8440)</li>
<li>Build System (42b3025)</li>
<li>Add docs build status badge to README.md (3f40bd7)</li>
<li>Use same docs requirements in tox (18ecc5a)</li>
<li>Add docs/requirements.txt (5f9893f)</li>
<li>Add PY37_PLUS, PY38_PLUS, and PY39_PLUS (bee0247)</li>
<li>fix 2.6 test, better comment (ddedcb9)</li>
<li>fix 2.6 test (3f1ff7e)</li>
<li>remove nan test (4dbded1)</li>
<li>include list test values (e3f1a12)</li>
<li>fix other python2 test issues (c051026)</li>
<li>fix missing subTest (f006cad)</li>
<li>import from old imp library on older python versions (fc84fa8)</li>
<li>replace fstrings with format for python 3.4,3.5 (4a687ea)</li>
<li>minor style/spelling fixes (8302d8c)</li>
<li>improve cmp function, add unittest (0d95a40)</li>
<li>Pin typing==3.7.4.1 for Python 3.3 compatiblity (1a48f1b)</li>
<li>Fix various py26 unit test failures (9ca5a14)</li>
<li>Add initial contributing guide with docs build instruction (e55f915)</li>
<li>Add docs building to tox.ini (3ee9e7f)</li>
<li>Support NumPy's specialized int types in builtins.round (b4b54f0)</li>
<li>Added r""" to the docstring to avoid warnings in python3 (5f94572)</li>
<li>Add <strong>subclasscheck</strong> for past.types.basestring (c9bc0ff)</li>
<li>Correct example in README (681e78c)</li>
<li>Add simple documentation (6c6e3ae)</li>
<li>Add pre-commit hooks (a9c6a37)</li>
<li>Handling of <strong>next</strong> and next by future.utils.get_next was reversed (52b0ff9)</li>
<li>Add a test for our fix (461d77e)</li>
<li>Compare headers to correct definition of str (3eaa8fd)</li>
<li><a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/322">#322</a> Add support for negative ndigits in round; additionally, fixing a bug so that it handles passing in Decimal properly (a4911b9)</li>
<li>Add tkFileDialog to future.movers.tkinter (f6a6549)</li>
<li>Sort before comparing dicts in TestChainMap (6126997)</li>
<li>Fix typo (4dfa099)</li>
<li>Fix formatting in "What's new" (1663dfa)</li>
<li>Fix typo (4236061)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/PythonCharmers/python-future/commit/af1db970b0879b59e7aeb798c27a623144561cff"><code>af1db97</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/613">#613</a> from PythonCharmers/lwan/0.18.3-release</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/079ee9b75441d36447cec9981fa1b0032862f64d"><code>079ee9b</code></a> Prepare for 0.18.3 release</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/02f7a8143d5b68f50a1cca44d8f5a58c1925a515"><code>02f7a81</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/610">#610</a> from wshanks/wshanks-patch-1</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/c91d70b34ef0402aef3e9d04364ba98509dca76f"><code>c91d70b</code></a> Backport fix for bpo-38804</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/80523f383fbba1c6de0551e19d0277e73e69573c"><code>80523f3</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/569">#569</a> from jmadler/master</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/5e5af71549c7a7fa0e28f881046e081e231e455d"><code>5e5af71</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/582">#582</a> from r3m0t/patch-6</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/17e4bbd7c676a9a8efd20601e51675c95f74b330"><code>17e4bbd</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/PythonCharmers/python-future/issues/596">#596</a> from abjonnes/fix-print-trailing-comma</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/1b427ba70191927706282840835e31ae0733ee7e"><code>1b427ba</code></a> Merge branch 'xZise-official-count' into master</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/c8eb497336c76d300c6753b47c7f5de505660d7a"><code>c8eb497</code></a> Merge branch 'official-count' of <a href="https://github.com/xZise/python-future">https://github.com/xZise/python-future</a> into ...</li>
<li><a href="https://github.com/PythonCharmers/python-future/commit/dffc579dbb7c882fc01fa0c0dfa6b59acef7827d"><code>dffc579</code></a> Fix bug in fix_print.py fixer</li>
<li>Additional commits viewable in <a href="https://github.com/PythonCharmers/python-future/compare/v0.18.2...v0.18.3">compare view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-18-2023 16:14:42 | 01-18-2023 16:14:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21169). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,168 | closed | Bump torch from 1.9.0+cpu to 1.13.1 in /examples/flax/vision | Bumps [torch](https://github.com/pytorch/pytorch) from 1.9.0+cpu to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a></li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple's new M1 chips are now available</h2>
<h1>Pytorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package.</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>Following Requirements needs to be met prior to final RC Cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones(for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>)before first RC cut is completed. After RC cut is completed following script should be executed from builder repo in order to validate the presence of the fixes in the release branch :</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/pytorch/pytorch/commits/v1.13.1">compare view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-18-2023 16:14:42 | 01-18-2023 16:14:42 | OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.
If you change your mind, just re-open this PR and I'll resolve any conflicts on it.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21168). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,167 | closed | Bump torch from 1.9.0+cpu to 1.13.1 in /examples/research_projects/jax-projects/hybrid_clip | Bumps [torch](https://github.com/pytorch/pytorch) from 1.9.0+cpu to 1.13.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/releases">torch's releases</a>.</em></p>
<blockquote>
<h2>PyTorch 1.13.1 Release, small bug fix release</h2>
<p>This release is meant to fix the following issues (regressions / silent correctness):</p>
<ul>
<li>RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88669">#88669</a></li>
<li>Installation via pip on Amazon Linux 2, regression <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88869">#88869</a></li>
<li>Installation using poetry on Mac M1, failure <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88049">#88049</a></li>
<li>Missing masked tensor documentation <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89734">#89734</a></li>
<li>torch.jit.annotations.parse_type_line is not safe (command injection) <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88868">#88868</a></li>
<li>Use the Python frame safely in _pythonCallstack <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88993">#88993</a></li>
<li>Double-backward with full_backward_hook causes RuntimeError <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88312">#88312</a></li>
<li>Fix logical error in get_default_qat_qconfig <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88876">#88876</a></li>
<li>Fix cuda/cpu check on NoneType and unit test <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88854">#88854</a> and <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88970">#88970</a></li>
<li>Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/88504">#88504</a></li>
<li>Onnx operator_export_type on the new registry <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/87735">#87735</a></li>
<li>torchrun AttributeError caused by file_based_local_timer on Windows <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/85427">#85427</a></li>
</ul>
<p>The <a href="https://github-redirect.dependabot.com/pytorch/pytorch/issues/89855">release tracker</a> should contain all relevant pull requests related to this release as well as links to related issues</p>
<h2>PyTorch 1.13: beta versions of functorch and improved support for Apple's new M1 chips are now available</h2>
<h1>Pytorch 1.13 Release Notes</h1>
<ul>
<li>Highlights</li>
<li>Backwards Incompatible Changes</li>
<li>New Features</li>
<li>Improvements</li>
<li>Performance</li>
<li>Documentation</li>
<li>Developers</li>
</ul>
<h1>Highlights</h1>
<p>We are excited to announce the release of PyTorch 1.13! This includes stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.</p>
<p>Summary:</p>
<ul>
<li>
<p>The BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models and Nested Tensors is now enabled by default.</p>
</li>
<li>
<p>Timely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia®, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.</p>
</li>
<li>
<p>Previously, functorch was released out-of-tree in a separate package. After installing PyTorch, a user will be able to <code>import functorch</code> and use functorch without needing to install another package.</p>
</li>
<li>
<p>PyTorch is offering native builds for Apple® silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.</p>
</li>
</ul>
<table>
<thead>
<tr>
<th>Stable</th>
<th>Beta</th>
<th>Prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Better Transformer<!-- raw HTML omitted --><!-- raw HTML omitted -->CUDA 10.2 and 11.3 CI/CD Deprecation <!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs<!-- raw HTML omitted --><!-- raw HTML omitted -->Extend NNC to support channels last and bf16<!-- raw HTML omitted --><!-- raw HTML omitted -->Functorch now in PyTorch Core Library<!-- raw HTML omitted --><!-- raw HTML omitted -->Beta Support for M1 devices<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
<td><!-- raw HTML omitted --><!-- raw HTML omitted -->Arm® Compute Library backend support for AWS Graviton<!-- raw HTML omitted --><!-- raw HTML omitted --> CUDA Sanitizer<!-- raw HTML omitted --><!-- raw HTML omitted --></td>
</tr>
</tbody>
</table>
<p>You can check the blogpost that shows the new features <a href="https://pytorch.org/blog/PyTorch-1.13-release/">here</a>.</p>
<h1>Backwards Incompatible changes</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pytorch/pytorch/blob/master/RELEASE.md">torch's changelog</a>.</em></p>
<blockquote>
<h1>Releasing PyTorch</h1>
<!-- raw HTML omitted -->
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#general-overview">General Overview</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-a-release-branch-preparations">Cutting a release branch preparations</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cutting-release-branches">Cutting release branches</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchpytorch"><code>pytorch/pytorch</code></a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#pytorchbuilder--pytorch-domain-libraries"><code>pytorch/builder</code> / PyTorch domain libraries</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-pytorch">Making release branch specific changes for PyTorch</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#making-release-branch-specific-changes-for-domain-libraries">Making release branch specific changes for domain libraries</a></li>
</ul>
</li>
<li><a href="#drafting-rcs-release-candidates-for-pytorch-and-domain-libraries">Drafting RCs (https://github.com/pytorch/pytorch/blob/master/Release Candidates) for PyTorch and domain libraries</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-storage">Release Candidate Storage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#release-candidate-health-validation">Release Candidate health validation</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#cherry-picking-fixes">Cherry Picking Fixes</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#promoting-rcs-to-stable">Promoting RCs to Stable</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#additional-steps-to-prepare-for-release-day">Additional Steps to prepare for release day</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#modify-release-matrix">Modify release matrix</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#open-google-colab-issue">Open Google Colab issue</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-releases">Patch Releases</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-criteria">Patch Release Criteria</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#patch-release-process">Patch Release Process</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#triage">Triage</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#issue-tracker-for-patch-releases">Issue Tracker for Patch releases</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-a-release-schedule--cherry-picking">Building a release schedule / cherry picking</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#building-binaries--promotion-to-stable">Building Binaries / Promotion to Stable</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#hardware--software-support-in-binary-build-matrix">Hardware / Software Support in Binary Build Matrix</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#python">Python</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#tldr">TL;DR</a></li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#accelerator-software">Accelerator Software</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-support-cases">Special support cases</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#special-topics">Special Topics</a>
<ul>
<li><a href="https://github.com/pytorch/pytorch/blob/master/#updating-submodules-for-a-release">Updating submodules for a release</a></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
<h2>General Overview</h2>
<p>Releasing a new version of PyTorch generally entails 3 major steps:</p>
<ol start="0">
<li>Cutting a release branch preparations</li>
<li>Cutting a release branch and making release branch specific changes</li>
<li>Drafting RCs (Release Candidates), and merging cherry picks</li>
<li>Promoting RCs to stable and performing release day tasks</li>
</ol>
<h2>Cutting a release branch preparations</h2>
<p>Following Requirements needs to be met prior to final RC Cut:</p>
<ul>
<li>Resolve all outstanding issues in the milestones(for example <a href="https://github.com/pytorch/pytorch/milestone/28">1.11.0</a>)before first RC cut is completed. After RC cut is completed following script should be executed from builder repo in order to validate the presence of the fixes in the release branch :</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/pytorch/pytorch/commits/v1.13.1">compare view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-18-2023 16:14:41 | 01-18-2023 16:14:41 | OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.
If you change your mind, just re-open this PR and I'll resolve any conflicts on it.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21167). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,166 | closed | Fix doctest CI | # What does this PR do?
Fix the doctest CI failure caused by a change in #20757. The change in the multi-label example has a minor issue; see the comments in the review. | 01-18-2023 14:00:19 | 01-18-2023 14:00:19 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 21,165 | closed | rag model evaluation program bug | ### System Info
transformers=4.25.1
huggingface-hub=0.10.1
tokenizers =0.13.2
python=3.7
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'RagTokenizer'.
The class this function is called from is 'BartTokenizerFast'.
Loading passages from https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr
Traceback (most recent call last):
File "/home/nano/transformers/examples/research_projects/rag/eval_rag.py", line 321, in <module>
main(args)
File "/home/nano/transformers/examples/research_projects/rag/eval_rag.py", line 295, in main
retriever = RagRetriever.from_pretrained(checkpoint, **model_kwargs)
File "/home/nano/transformers/src/transformers/models/rag/retrieval_rag.py", line 429, in from_pretrained
index = cls._build_index(config)
File "/home/nano/transformers/src/transformers/models/rag/retrieval_rag.py", line 400, in _build_index
config.index_path or LEGACY_INDEX_PATH,
File "/home/nano/transformers/src/transformers/models/rag/retrieval_rag.py", line 108, in __init__
self.passages = self._load_passages()
File "/home/nano/transformers/src/transformers/models/rag/retrieval_rag.py", line 133, in _load_passages
passages_path = self._resolve_path(self.index_path, self.PASSAGE_FILENAME)
File "/home/nano/transformers/src/transformers/models/rag/retrieval_rag.py", line 117, in _resolve_path
resolved_archive_file = cached_file(index_path, filename)
File "/home/nano/transformers/src/transformers/utils/hub.py", line 420, in cached_file
local_files_only=local_files_only,
File "/home/nano/miniconda3/envs/rag/lib/python3.7/site-packages/huggingface_hub/file_download.py", line 1022, in hf_hub_download
cache_dir, repo_folder_name(repo_id=repo_id, repo_type=repo_type)
File "/home/nano/miniconda3/envs/rag/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py", line 92, in _inner_fn
validate_repo_id(arg_value)
File "/home/nano/miniconda3/envs/rag/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py", line 137, in validate_repo_id
"Repo id must be in the form 'repo_name' or 'namespace/repo_name':"
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr'. Use `repo_type` argument if needed.
### Expected behavior
I ran the evaluation program of the RAG model, and after adding the hyperparameters according to the example, it raises the error above. | 01-18-2023 07:32:33 | 01-18-2023 07:32:33 | This example relies on an earlier version of Transformers and HuggingFace Hub; you should downgrade them.<|||||>@sgugger I'm sorry, can you give some advice about which versions to use? I have tried several versions myself without success.<|||||>It looks like this example was released along with Transformers 3.2.0 or 3.3.0.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,164 | closed | deleted references of self.vocab_size and self.type_vocab_size for multiple models [TF implementation] | # What does this PR do?
This PR deletes references to `self.vocab_size` and `self.type_vocab_size` for these models [TensorFlow implementation]: bert, albert, lxmert, electra, tapas, convbert, layoutlm, gpt2, camembert, clip, ctrl, deberta, deberta_v2, distilbert, esm, funnel, gptj, groupvit, longformer, mobilebert, mpnet, openai, rembert, roberta, roberta_prelayernorm, roformer, xlm_roberta, xlnet.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.([link](https://github.com/huggingface/transformers/issues/21053))
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante | 01-18-2023 06:26:50 | 01-18-2023 06:26:50 | Hi, @gante as you said [here](https://github.com/huggingface/transformers/pull/21065#issuecomment-1385233073) I made changes for `src/transformers/models/albert/modeling_tf_albert.py`, could you please check it? If it's ok then I will push other changes of other models.
The failed test `ci/circleci: check_repository_consistency` are due to the fact that I only changed Embeddings for albert model and it is different from TFBertEmbeddings , the test will be successful when I change them too. <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@susnato before moving forward, let's make sure our `transformers` master agrees with the set of changes we are about to make :)
_________________________________
@sgugger this PR contains a change I want to apply across TF models, and @susnato is kindly doing the bulk of the work. In essence, some `config` attributes are not constant throughout the model's life, like `config.vocab_size`, and our TF implementation stores them as immutable class attributes (e.g. `self.vocab_size = config.vocab_size`). PT doesn't have this problem, since it simply stores `self.config = config` in the layers, which benefits from the updates the mutable `config` dictionary may receive elsewhere.
The proposed change is to make TF implementation closer to PT implementation and store `self.config` in the layers, as opposed to individual configuration parameters. This also solves the bug that triggered this discussion, where the vocabulary size was not being correctly updated and causing exceptions.
Let us know if you are okay with us making this change over most model architectures <|||||>@susnato you might need to run `make fixup` locally, to automatically format the code and make our CI happy<|||||>Hi @gante, I added all the models I found to have self.vocab_size, removed the references to self.vocab_size and self.type_vocab_size, and all the tests pass! Could you please check it? <|||||>> LGTM +1
>
> Can we edit the PR title to a shorter one before merging? sweat_smile
@gante Done!<|||||>Awesome!
Thank you for all the work you've put into fixing this, @susnato 🤗
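To make the pattern discussed in this PR thread concrete, here is a schematic sketch (simplified pseudo-layers for illustration, not the actual modeling code) of the change from caching individual config attributes to storing the config object itself:
```python
import tensorflow as tf


class EmbeddingsBefore(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        # Snapshot taken at init time: if config.vocab_size is updated later
        # (e.g. after resizing the token embeddings), this value goes stale.
        self.vocab_size = config.vocab_size


class EmbeddingsAfter(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        # Keep a reference to the mutable config and read config.vocab_size
        # wherever it is needed, so later updates are picked up automatically.
        self.config = config
```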
transformers | 21,163 | closed | Output of finetuned facebook/wav2vec2-xls-r-300m model is getting incorrect | ### System Info
Transformers 4.23.1
Pytorch 1.12.1
Datasets 2.4.0
Tokenizers 0.13.2
I have fine-tuned the facebook/wav2vec2-xls-r-300m model on my own dataset. I have cross-checked my dataset twice to confirm it is in the correct format, like the LibriSpeech dataset. I fine-tuned it following this guide from @patrickvonplaten:
https://huggingface.co/blog/fine-tune-wav2vec2-english

### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
a
### Expected behavior
In our case the model outputs something like "THTMTHTET", whereas the expected transcript is "IT IS PERMITTED TO TURN THE WHEELPAN BELOW MM WITH THE CONTROLS IN TABLE IT IS PERMITTED TO TURN BELOW THE LAST TURNING' GROOVE".
That is the text we need. | 01-18-2023 04:53:38 | 01-18-2023 04:53:38 | cc @sanchit-gandhi <|||||>Hey @Shubhambugade09,
Could you please try provide a google colab that easily reproduces your training run or maybe instead use the forum specific fine-tuning questions: https://discuss.huggingface.co/
Thanks! <|||||>Hey @patrickvonplaten
I have sent you the Jupyter notebook at [[email protected]](mailto:[email protected]); please check it.
Thanks! <|||||>Hey @Shubhambugade09, would you mind posting a link to your Colab here so I can review it? It would be awesome to post the link here rather than sending as email so that the discussion remains public, this way helping other people who might be experiencing the same issue ๐ค Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,162 | closed | using raw string for regex to search <extra_id> | # What does this PR do?
Similar to this: https://github.com/huggingface/transformers/pull/21125
This change replaces the regex pattern written in a Unicode string with a raw string for these two files:
* `tokenization_t5.py`
* `test_tokenization_t5.py`
I also checked, and there are no more occurrences. :)
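For illustration (a small sketch, not code from the PR), the difference the raw-string prefix makes for a pattern like the `<extra_id_N>` sentinels:
```python
import re

text = "Translate: <extra_id_0> walks in <extra_id_1> park"

# With a raw string, the backslash in \d reaches the regex engine untouched.
pattern = re.compile(r"<extra_id_\d+>")
print(pattern.findall(text))  # ['<extra_id_0>', '<extra_id_1>']

# Without the r prefix, "\d" is an invalid string escape: it still works today,
# but emits a DeprecationWarning on recent Python versions, so r"..." is safer.
```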
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Hi, @ArthurZucker @sgugger. Would you be happy to review this PR?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-18-2023 04:09:08 | 01-18-2023 04:09:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,161 | closed | CodeGen Tokenizer Deletes Newline Symbols | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
- Python version: 3.9.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1.post201 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.3 (cpu)
- Jax version: 0.4.1
- JaxLib version: 0.4.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@rooa @patrickvonplaten @patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The CodeGen tokenizer seems to remove the newline symbol in certain scenarios.
In particular, `decode(encode(text))` does not always equal the original `text`.
The following is the smallest example that reproduces the error but other text examples will have this issue as well.
```python
from transformers import CodeGenTokenizer
# other checkpoints in the CodeGen series have the same issue
tokenizer = CodeGenTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
# new line (10), space (32), space (32)
text = "\n "
print([ord(c) for c in text])
# output: [10, 32, 32]
encoded = tokenizer.encode(text)
print(encoded)
# output: [50286]
decoded = tokenizer.decode(encoded)
print([ord(c) for c in decoded])
# actual: [32, 32]
# expected: [10, 32, 32]
```
### Expected behavior
Expected: the decoded string is equal to the original string.
Actual: the decoded string is missing the leading new line symbol. | 01-18-2023 03:11:00 | 01-18-2023 03:11:00 | Hi, @hellodanylo this looks like the same issue described in issue #21120, where the tokenizer strips the whitespace and \n in front and at the end of the sentence. To overcome it use `tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")`.<|||||>cc @ArthurZucker <|||||>@susnato thanks, using the fast tokenizer did solve this issue.
For future readers, you can get the fast tokenizer using either of the following ways:
```
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
tokenizer = CodeGenTokenizerFast.from_pretrained("Salesforce/codegen-350M-multi")
```
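As a quick sanity check, here is a sketch reusing the same checkpoint; the expected outputs follow the behaviour reported in this thread:
```python
from transformers import AutoTokenizer, CodeGenTokenizer

text = "\n  "  # newline followed by two spaces

slow = CodeGenTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
fast = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")

# Per the report above: the slow tokenizer drops the leading newline,
# while the fast tokenizer round-trips the text unchanged.
print(slow.decode(slow.encode(text)) == text)  # False
print(fast.decode(fast.encode(text)) == text)  # True
```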
If this bug is not specific to CodeGen tokenizer, then we can close this as duplicate of #21120?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,160 | closed | Fix typos in documentation | Fixes to typos in documentation: @sgugger, @stevhliu and @MKhalusova | 01-17-2023 21:18:48 | 01-17-2023 21:18:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Please make sure to run `make style` on your branch so that the quality scripts pass. Thank you!<|||||>Done, thanks @sgugger |
transformers | 21,159 | closed | Feature Request: VideoMAEForMaskedVideoModeling | ### Feature request
Basically, it would be nice if we could fill in the masked video.
### Motivation
I doubt I'm the only person that would like to try/train this model for inpainting masked video.
### Your contribution
I guess I could contribute it since I went to the torchside | 01-17-2023 20:55:13 | 01-17-2023 20:55:13 | Hi,
This is already available, the class is called [VideoMAEForPreTraining](https://huggingface.co/docs/transformers/model_doc/videomae#transformers.VideoMAEForPreTraining). To reconstruct pixel values, you can load the following model:
```
from transformers import VideoMAEForPreTraining
model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base")
```
To visualize a masked video, you can borrow the code from the [original implementation](https://github.com/MCG-NJU/VideoMAE/blob/45dcd7f10183099669baa481c6d33165842d8bf3/run_videomae_vis.py#L167). |
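For reference, a minimal usage sketch of the above (a random dummy clip and a random mask, rather than the tube masking used in the paper):
```python
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForPreTraining

image_processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base")

# dummy clip of 16 frames; replace with real video frames in practice
num_frames = 16
video = list(np.random.randint(0, 256, (num_frames, 3, 224, 224)))
pixel_values = image_processor(video, return_tensors="pt").pixel_values

# one boolean entry per spatio-temporal patch (tube)
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss  # reconstruction loss on the masked patches
```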
transformers | 21,158 | closed | Adapt repository creation to latest hf_hub | # What does this PR do?
This PR adapts the code to push the trained model to the Hub to the latest APIs in `huggingface_hub`. In particular, `Repository` is no longer responsible for the distant repo creation, so this PR switches to the use of `create_repo`. We relied on this behavior in:
- the Trainer
- all PyTorch no_trainer examples
- all Flax examples
- some tests
It also uses the new `token` keyword argument instead of `use_auth_token`, which is the reason for the version bump (`Repository` in v0.10 still expects `use_auth_token`). I updated all the uses I could find to pass from `use_auth_token` to `token` as part of this PR. | 01-17-2023 20:51:22 | 01-17-2023 20:51:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> It also uses the new token keyword argument instead of use_auth_token,
It is indeed good practice to drop `use_auth_token` whenever possible when using `huggingface_hub` (in favor of `token`). Just to mention it, using `use_auth_token` would still work (without deprecation yet) so if you missed some occurrences, it will not break. |
transformers | 21,157 | closed | Make `parallelism` for CircleCI jobs work - but keep it to be `1` for now | # What does this PR do?
Enable `parallelism` for CircleCI jobs. So far I have only enabled it for the torch/tf/flax jobs. It can be switched off easily. | 01-17-2023 19:02:13 | 01-17-2023 19:02:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for looking into this. As said internally I'm really not a fan of the split test reports this gets us at the end.
I can probably try to concatenate the failed tests (with details) at the end, although so far I don't have a clear idea of the feasibility.
But at least we can switch back to N=1 if we really need to (i.e., if we have real difficulty reading the test failures).<|||||>Ready for review again. So far it still uses `parallelism=1` - this PR just provides the necessary change for using `parallelism=N`. Hopefully I can figure out a way to make a better report format in another PR, and we can finally go for N > 1.<|||||>> Thanks! Another thing to look for before enabling it is if we just pay 8x the price for parallelism=8 and not way more than that.
@sgugger I think the expectation to check is if we just pay 1x~2x the price for parallelism=N (say 8). Using 8 executors means less runtime on each executor (ideally 1/8), so the total should be the same (but there is definitely some overhead, like in cache loading / pip install steps) |
transformers | 21,156 | closed | [HF Trainer] [PyTorch FSDP] Add support for backward_prefetch, forward_prefetch, limit_all_gathers | ### **Feature request**
Can we add Trainer support for the following [FSDP](https://pytorch.org/docs/1.13/fsdp.html?highlight=fsdp#module-torch.distributed.fsdp) features? `backward_prefetch`, `forward_prefetch` and `limit_all_gathers`
### **Motivation**
`backward_prefetch`, `forward_prefetch` and `limit_all_gathers` are important to achieve best performance when training at scale with FSDP. | 01-17-2023 18:37:17 | 01-17-2023 18:37:17 | cc @pacman100 <|||||>There are also 2 newly added[ sharding strategies](https://pytorch.org/docs/master/fsdp.html#torch.distributed.fsdp.ShardingStrategy) with PyTorch 2.0: `HYBRID_SHARD` and `_HYBRID_SHARD_ZERO2` that can impact performance. <|||||>The ^^ above fix only addes backward_prefetch and forward_prefetch options in fsdp, limit_all_gathers and other sharding strategies is not available with current pytorch version used in repo.<|||||>Hi! These two fixes still doesn't allow the following two strategies
> There are also 2 newly added[ sharding strategies](https://pytorch.org/docs/master/fsdp.html#torch.distributed.fsdp.ShardingStrategy) with PyTorch 2.0: HYBRID_SHARD and _HYBRID_SHARD_ZERO2 that can impact performance.
We could potentially enable it by adding two conditions here? https://github.com/huggingface/transformers/blob/118e9810687dd713b6be07af79e80eeb1d916908/src/transformers/trainer.py#L453-L458<|||||>> Hi! These two fixes still doesn't allow the following two strategies
>
> > There are also 2 newly added[ sharding strategies](https://pytorch.org/docs/master/fsdp.html#torch.distributed.fsdp.ShardingStrategy) with PyTorch 2.0: HYBRID_SHARD and _HYBRID_SHARD_ZERO2 that can impact performance.
>
> We could potentially enable it by adding two conditions here?
>
> https://github.com/huggingface/transformers/blob/118e9810687dd713b6be07af79e80eeb1d916908/src/transformers/trainer.py#L453-L458
Yes, I will open a PR shortly. |
transformers | 21,155 | closed | Update examples with image processors | # What does this PR do?
Updates all of the feature extractor references to image processors in the `examples` folder.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 01-17-2023 18:27:37 | 01-17-2023 18:27:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This broke the deepspeed tests. Fix is here: https://github.com/huggingface/transformers/pull/21283
when modifying examples and breaking back compat please scan the slow tests and adjust those too. Thank you!
The reason this is important is because Deepspeed CI runs our slow deepspeed tests as normal CI so when we break things their CI is broken.<|||||>@stas00 - thanks for applying a quick fix and apologies about the break. I'll make sure to remember the slow tests next time! <|||||>Thank you, Amy!
`tests/extended/` and `tests/deepspeed` are the ones that heavily rely on `examples/pytorch`
|
transformers | 21,154 | closed | CLI: update hub PR URL | # What does this PR do?
Keeps up with hub API changes, and updates the instruction to get the PR URL (which was returning an object) | 01-17-2023 15:33:21 | 01-17-2023 15:33:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>It is already recent enough: the change was added in `v0.10.0` (see `1.` in `v0.10.0` breaking changes), which is our current minimum version |
transformers | 21,153 | closed | Change variable name to prevent shadowing | This PR replaces the `input` variable with `input_string` to prevent shadowing caused by the `input()` function.
Thanks to @LysandreJik for catching it. | 01-17-2023 15:00:56 | 01-17-2023 15:00:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Since the changes are small, I guess it's good to merge but still confirming. <|||||>Yes, one core maintainer approval is enough to merge. |
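A toy illustration of the shadowing problem being fixed here (not the actual transformers code):

```python
def read_answer() -> str:
    input_string = "yes"  # renamed; `input = "yes"` would shadow the `input()` builtin
    # with the shadowed name, a later call such as input("Continue? ") would raise
    # TypeError: 'str' object is not callable
    return input_string

print(read_answer())
```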
transformers | 21,152 | closed | [DOCTEST] Refactor doctest for simplicity and safety | # What does this PR do?
This is a draft PR to simplify the testing of documentation, which would rely on the `doctest` API.
The documentation tests related to any newly added model will also be run as part of the non-slow CI, just to make sure that we fix everything in one go (and that contributors have an easier time doing it).
I am also learning about GitHub workflows and CI jobs, which is a good exercise! | 01-17-2023 14:55:59 | 01-17-2023 14:55:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21152). All of your documentation changes will be reflected on that endpoint.
transformers | 21,151 | closed | Contrastive Search in .generate() function doesn't work with Half | ### System Info
The CLI fails but this is irrelevant to the problem
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Load any model like so
```
from transformers import AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "<PATH>",
    torch_dtype=torch.float16,
)
```
2. Perform generation using contrastive search
```
# assumes: tokenized_input = tokenizer(prompt, return_tensors="pt")
gen_tokens = model.generate(
    tokenized_input.input_ids,
    top_k=4,
    penalty_alpha=0.6,
)
```
### Expected behavior
Contrastive search probably should work with torch.float16 (if not just let me know - idk if there are stability issues).
This can be fixed by adding the following code to https://github.com/huggingface/transformers/blob/25ddd91b249014d818fb2ed3d4ba856ed9a5653e/src/transformers/generation/utils.py#L1873
```
# conditionally convert from float16
if context_hidden.dtype == torch.float16:
    context_hidden = context_hidden.to(dtype=torch.float32)
if next_hidden.dtype == torch.float16:
    next_hidden = next_hidden.to(dtype=torch.float32)
if top_k_probs.dtype == torch.float16:
    top_k_probs = top_k_probs.to(dtype=torch.float32)
``` | 01-17-2023 14:44:12 | 01-17-2023 14:44:12 | Hey @sam-ulrich1 ๐
To be candid, fp16 was not a concern when writing contrastive search :) I've tried adding your suggested change and running the script below, but that was not enough to fix it
```py
from transformers import GPT2Tokenizer, OPTForCausalLM
import torch
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m", padding_side='left')
model = OPTForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16)
inputs = tokenizer(["My cat is"], return_tensors="pt")
outputs = model.generate(**inputs, top_k=4, penalty_alpha=0.6)
print(tokenizer.batch_decode(outputs.sequences))
```
Would you be able to share a snippet of what you're trying to run? :)<|||||>Odd! It works on my machine (pun intended)!
Let me get my version and other info and I can make a PR if you'd like. That way we can work from code not just snippets<|||||>@gante Could you share your traceback? I'll take a look at this later today<|||||>@sam-ulrich1 haha roles reversed, usually I'm the one asking for tracebacks!
```
Traceback (most recent call last):
File "/home/joao/transformers/../joao_scripts/dbg.py", line 17, in <module>
outputs = model.generate(**inputs, top_k=4, penalty_alpha=0.6)
File "/home/joao/hf/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/joao/transformers/src/transformers/generation/utils.py", line 1372, in generate
return self.contrastive_search(
File "/home/joao/hf/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/joao/transformers/src/transformers/generation/utils.py", line 1769, in contrastive_search
outputs = self(
File "/home/joao/hf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/joao/transformers/src/transformers/models/opt/modeling_opt.py", line 934, in forward
outputs = self.model.decoder(
File "/home/joao/hf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/joao/transformers/src/transformers/models/opt/modeling_opt.py", line 645, in forward
inputs_embeds = self.project_in(inputs_embeds)
File "/home/joao/hf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/joao/hf/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
```<|||||>Ya I got a kick out of that too!
It actually looks like that is an OPT issue with Half. I'm playing around with CodeGen so that would be my reference but I know other models are affected as well. Currently the problem I'm targeting is `"baddbmm_with_gemm" not implemented for 'Half'`
I'll take a look at the OPT thing as well but if it's out of scope I'll probably start another issue to keep the tracking simple.<|||||>@gante I'm not gonna get this done today but I'll get it knocked out by the end of the week. I just have a bit busier week than I expected <|||||>@sam-ulrich1 no worries :) and let me know if you need a hand!<|||||>@gante How do I run the tests in the repo? I added the below test at the below link so that I can validate my fix. I want to run this test on the CodeGen model but I've never worked with a testing setup like this
https://github.com/huggingface/transformers/blob/0359e2e15f4504513fd2995bdd6dd654c747b313/tests/generation/test_utils.py#L1432
```
    def test_contrastive_generate_fp16(self):
        # check `generate()` and `contrastive_search()` are equal
        for model_class in self.all_generative_model_classes:
            # won't fix: FSMT and Reformer have a different cache variable type (and format).
            if any(model_name in model_class.__name__.lower() for model_name in ["fsmt", "reformer"]):
                return
            config, input_ids, attention_mask, max_length = self._get_input_ids_and_config()
            # NOTE: contrastive search only works with cache on at the moment.
            if not hasattr(config, "use_cache"):
                return
            config.use_cache = True
            config.is_decoder = True
            config.torch_dtype = torch.float16
            # test old generation output for backwards compatibility
            model = model_class(config).to(torch_device).eval()
            output_contrastive, output_generate = self._contrastive_generate(
                model=model, input_ids=input_ids, attention_mask=attention_mask, max_length=max_length
            )
            self.assertListEqual(output_contrastive.tolist(), output_generate.tolist())
```<|||||>@sam-ulrich1 try `py.test tests/ -k contrastive_generate_fp16 -vv`, assuming you are in `.../transformers/`.
(`tests/` is the folder containing the test files, `-k` filters tests by name, `contrastive_generate_fp16` is the test name filter based on your test name)<|||||>Thanks!<|||||>@gante Okay it seems to be fixed but there is one model that fails the test for (what appears to be) an unrelated problem. What's the procedure for this? Can y'all accept a PR if not all the tests pass?
Here's the failing model:
```
FAILED tests/models/git/test_modeling_git.py::GitModelTest::test_contrastive_generate_fp16 - RuntimeError: output with shape [10, 1, 1, 1] doesn't match the broadcast shape [10, 1, 1, 4]
```
And the pytest stack trace:
```python
___________________________________________________________________________________________________________ GitModelTest.test_contrastive_generate_fp16 ____________________________________________________________________________________________________________
self = <tests.models.git.test_modeling_git.GitModelTest testMethod=test_contrastive_generate_fp16>
def test_contrastive_generate_fp16(self):
# check `generate()` and `contrastive_search()` are equal
for model_class in self.all_generative_model_classes:
# won't fix: FSMT and Reformer have a different cache variable type (and format).
if any(model_name in model_class.__name__.lower() for model_name in ["fsmt", "reformer"]):
return
config, input_ids, attention_mask, max_length = self._get_input_ids_and_config()
# NOTE: contrastive search only works with cache on at the moment.
if not hasattr(config, "use_cache"):
return
config.use_cache = True
config.is_decoder = True
config.torch_dtype = torch.float16
print(config)
# test old generation output for backwards compatibility
model = model_class(config).to(torch_device).eval()
> output_contrastive, output_generate = self._contrastive_generate(
model=model, input_ids=input_ids, attention_mask=attention_mask, max_length=max_length
)
tests/generation/test_utils.py:1453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/generation/test_utils.py:655: in _contrastive_generate
output_generate = model.generate(
../../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27: in decorate_context
return func(*args, **kwargs)
src/transformers/generation/utils.py:1321: in generate
return self.contrastive_search(
../../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27: in decorate_context
return func(*args, **kwargs)
src/transformers/generation/utils.py:1804: in contrastive_search
outputs = self(
../../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/nn/modules/module.py:1194: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/git/modeling_git.py:1478: in forward
outputs = self.git(
../../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/nn/modules/module.py:1194: in _call_impl
return forward_call(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GitModel(
(embeddings): GitEmbeddings(
(word_embeddings): Embedding(99, 32, padding_idx=98)
(position_embedd...n_features=768, out_features=32, bias=True)
(1): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
)
)
)
input_ids = tensor([[36],
[64],
[41],
[89],
[58],
[72],
[41],
[ 2],
[36],
[64]], device='cuda:0')
attention_mask = tensor([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]], device='cuda:0')
position_ids = None, pixel_values = None, head_mask = [None, None, None, None, None], inputs_embeds = None, past_key_values = None, use_cache = True, output_attentions = False, output_hidden_states = True, return_dict = True
@add_start_docstrings_to_model_forward(GIT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
@replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=_CONFIG_FOR_DOC)
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
pixel_values: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optional[torch.Tensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPooling]:
r"""
past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
`decoder_input_ids` of shape `(batch_size, sequence_length)`.
use_cache (`bool`, *optional*):
If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
`past_key_values`).
Returns:
Examples:
```python
>>> from transformers import AutoProcessor, AutoModel
>>> import requests
>>> from PIL import Image
>>> processor = AutoProcessor.from_pretrained("microsoft/git-base")
>>> model = AutoModel.from_pretrained("microsoft/git-base")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> text = "this is an image of two cats"
>>> inputs = processor(text, images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
```"""
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
use_cache = use_cache if use_cache is not None else self.config.use_cache
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
if input_ids is not None and inputs_embeds is not None:
raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
elif input_ids is not None:
input_shape = input_ids.size()
elif inputs_embeds is not None:
input_shape = inputs_embeds.size()[:-1]
else:
raise ValueError("You have to specify either input_ids or inputs_embeds")
seq_length = input_shape[1]
# past_key_values_length
past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
# Prepare head mask if needed
# 1.0 in head_mask indicate we keep the head
# attention_probs has shape bsz x n_heads x N x N
# input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
# and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
projected_visual_features = None
if pixel_values is not None:
if pixel_values.ndim == 4:
# here we assume pixel_values is of shape (batch_size, num_channels, height, width)
visual_features = self.image_encoder(pixel_values).last_hidden_state
elif pixel_values.ndim == 5:
# here we assume pixel_values is of shape (batch_size, num_frames, num_channels, height, width)
visual_features = []
for frame_idx in range(pixel_values.shape[1]):
visual_features_frame = self.image_encoder(pixel_values[:, frame_idx, :, :]).last_hidden_state
visual_features_frame += self.img_temperal_embedding[frame_idx]
visual_features.append(visual_features_frame)
# finally, concatenate all features along sequence dimension
visual_features = torch.cat(visual_features, dim=1)
else:
raise ValueError("pixel_values must be of rank 4 or 5")
projected_visual_features = self.visual_projection(visual_features)
embedding_output = self.embeddings(
input_ids=input_ids,
position_ids=position_ids,
inputs_embeds=inputs_embeds,
past_key_values_length=past_key_values_length,
)
if projected_visual_features is None:
projected_visual_features = torch.zeros(
(embedding_output.shape[0], 0, embedding_output.shape[2]),
dtype=embedding_output.dtype,
device=embedding_output.device,
)
# Repeat visual features to match embedding batch size.
projected_visual_features = projected_visual_features.repeat(
embedding_output.size(0) // projected_visual_features.size(0), 1, 1
)
# concatenate patch token and text token embeddings
hidden_states = torch.cat((projected_visual_features, embedding_output), dim=1)
# By default, an additive causal mask is created
# for masking the future (one direction).
tgt_mask = self._generate_future_mask(seq_length, embedding_output.dtype, embedding_output.device)
# Create an attention mask of shape (batch_size, 1, tgt_seq_len, src_seq_len)
combined_attention_mask = self.create_attention_mask(
tgt=embedding_output,
memory=projected_visual_features,
tgt_mask=tgt_mask,
past_key_values_length=past_key_values_length,
)
if attention_mask is not None:
# if the user provides an attention mask, we add it to the default one
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
expanded_attn_mask = _expand_mask(attention_mask, embedding_output.dtype, tgt_len=input_shape[-1]).to(
embedding_output.device
)
if past_key_values_length > 0:
expanded_attn_mask = expanded_attn_mask[:, :, -past_key_values_length:, :]
else:
> combined_attention_mask[:, :, -input_shape[1] :, -input_shape[1] :] += expanded_attn_mask
E RuntimeError: output with shape [10, 1, 1, 1] doesn't match the broadcast shape [10, 1, 1, 4]
```<|||||>Oh yeah, GIT is a bit different -- it's a multimodal model that requires careful manipulations at generate time. Open a PR with what you have now, I think I can figure out what's wrong with GIT after I have access to the changes :)<|||||>Jumping here, the error `RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'` is just that `Half` only works on `GPU` and should not be used on cpu ๐ <|||||>That would make a lot of sense! I didn't address that error in this fix. I focused on `"baddbmm_with_gemm" not implemented for 'Half'` but I can take a look at that error over the weekend if you'd like<|||||>@gante Fix is here rebased to latest commit on main but the PR guidelines are kinda long so I won't be able to create the PR until later
https://github.com/gage-technologies/transformers<|||||>I am having this issue as well. I tried 4.26 and 4.25.1. I am gonna try @sam-ulrich1 solution.<|||||>The fix did not help. Neither using DeepSpeed nor using vanilla Transformers. Using bfloat16 gives me expected results(but I need float16 for DeepSpeed)<|||||>I take back what I said. I am not having this issue at all. With or withou t @sam-ulrich1 fix, it is working fine. The issue is with DeepSpeed. <|||||>I'm also facing a similar issue:
```py
from transformers import pipeline  # import added for completeness

generator = pipeline("text2text-generation", model="philschmid/flan-t5-xxl-sharded-fp16", model_kwargs={"load_in_8bit":True, "device_map": "auto"})
output = generator(prompt, penalty_alpha=0.6, top_k=4, max_length=256)
```
Gives me the error:
```
RuntimeError: "softmax_lastdim_kernel_impl" not implemented for 'Half'
```
So contrastive search seems not compatible with loading the model in 8-bit. Is that expected or a bug? <|||||>@sam-ulrich1 do you have some updates on your end? I can open a PR from the changes in your fork, if you're interested :)<|||||>@gante Shoot! Sorry man this slipped my mind. Let me take a look.at the PR guidelines again and see if I can get mine rebased and prepped and if not then I'm happy to let you.
Thanks man!<|||||>Just to flag, the error I faced [here ](https://github.com/huggingface/transformers/issues/21151#issuecomment-1427589596) still exists with @sam-ulrich1's fix. Should I open a new Issue as this may be related specifically to 8-bit?<|||||>@gante I'm gonna look at this today. Sorry man, I've been slammed with work the past month<|||||>@gante If you want to just snag my changes go ahead otherwise I will eventually get to this it's just been a really tough few weeks<|||||>BTW I'm not sure if this fix is still needed, I am unable to reproduce the issue on `main`.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tok = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2", torch_dtype=torch.float16).to("cuda")
inputs = tok(["This cat is"], return_tensors="pt").to("cuda")
gen_out = model.generate(**inputs, top_k=4, penalty_alpha=0.6)
```
If someone else comes across this issue, please let me know ๐ <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,150 | closed | OPT: Fix batched generation with FLAX | # What does this PR do?
Fixes #20666
OPT numerical masking was using `-inf` where the attention mask was `0`, which in turn caused the hidden states to be `nan` and derail the whole generation. Changing to a common masking value (`-1e9`) fixes the issue. I've also taken the opportunity to re-enable the commented out tests :) | 01-17-2023 12:15:22 | 01-17-2023 12:15:22 | _The documentation is not available anymore as the PR was closed or merged._ |
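A hedged sketch of the masking idea (illustrative, not the exact diff): replacing `-inf` with a large finite negative keeps the softmax output finite even for heavily masked rows.

```python
import jax
import jax.numpy as jnp

attention_mask = jnp.array([[1, 1, 0, 0]])                 # 1 = keep, 0 = padding
additive_bias = jnp.where(attention_mask == 0, -1e9, 0.0)  # previously -inf
scores = jnp.zeros((1, 4)) + additive_bias
probs = jax.nn.softmax(scores, axis=-1)                    # finite probabilities, no NaN propagation
print(probs)
```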
transformers | 21,149 | closed | Generate: TF contrastive search must pop `use_cache` from `model_kwargs` | # What does this PR do?
Fixes a slow test that broke with #20994
(actually, more like ~25 slow tests ๐
) | 01-17-2023 11:19:07 | 01-17-2023 11:19:07 | _The documentation is not available anymore as the PR was closed or merged._ |
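A hedged, simplified sketch of the kind of cleanup the title of this PR describes (not the exact diff): contrastive search manages the cache itself, so the stray flag is dropped before the kwargs are forwarded to the model.

```python
model_kwargs = {"attention_mask": None, "use_cache": True}
model_kwargs.pop("use_cache", None)  # contrastive search sets caching on its own
assert "use_cache" not in model_kwargs
```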
transformers | 21,148 | closed | Fixed num_channels!=3 normalization training [#20630] | # What does this PR do?
Forked from #20630 (which fixes https://github.com/huggingface/transformers/issues/20580 and https://github.com/huggingface/transformers/issues/19913) | 01-17-2023 09:16:39 | 01-17-2023 09:16:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge CI is green, but I haven't reviewed the content of the PR yet<|||||>@NielsRogge
I think it is already there: when I clicked `Squash and merge`, it shows at the end
```
Co-authored-by: Lay Jain <[email protected]>
Co-authored-by: ydshieh <[email protected]>
```<|||||>cc @layjain <|||||>The CI got fixed on the other PR so I merged it. Is there still a need for this one?<|||||>@sgugger OK, this morning it was not running even though I pushed a commit. No more need of this PR - though I still have a question regarding the logic. We can fix it if @NielsRogge thinks what I pointed out is indeed a bug.
transformers | 21,147 | closed | Fine tunning of Donut rvl-cdip throughing an error. module 'google.protobuf.descriptor' has no attribute '_internal_create_key' | When I am trying to run these lines....
**from transformers import DonutProcessor, VisionEncoderDecoderModel, BartConfig
processor = DonutProcessor.from_pretrained("nielsr/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("nielsr/donut-base", config=config)**
Getting this error
AttributeError Traceback (most recent call last)
/tmp/ipykernel_3566149/2070905111.py in <module>
2 from transformers import AutoTokenizer, AutoModel
3
----> 4 tokenizer = AutoTokenizer.from_pretrained("nielsr/donut-base")
5
6 model = AutoModel.from_pretrained("nielsr/donut-base")
~/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
653 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported."
654 )
--> 655 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
656
657 # Otherwise we have to be creative.
~/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1799 logger.info(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}")
1800
-> 1801 return cls._from_pretrained(
1802 resolved_vocab_files,
1803 pretrained_model_name_or_path,
~/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, local_files_only, _commit_hash, *init_inputs, **kwargs)
1954 # Instantiate tokenizer.
1955 try:
-> 1956 tokenizer = cls(*init_inputs, **init_kwargs)
1957 except OSError:
1958 raise OSError(
~/.local/lib/python3.8/site-packages/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py in __init__(self, vocab_file, tokenizer_file, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, **kwargs)
153 mask_token = AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token
154
--> 155 super().__init__(
156 vocab_file,
157 tokenizer_file=tokenizer_file,
~/.local/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs)
116 # We need to create and convert a slow tokenizer to build the backend
117 slow_tokenizer = self.slow_tokenizer_class(*args, **kwargs)
--> 118 fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
119 else:
120 raise ValueError(
~/.local/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py in convert_slow_tokenizer(transformer_tokenizer)
1160 converter_class = SLOW_TO_FAST_CONVERTERS[tokenizer_class_name]
1161
-> 1162 return converter_class(transformer_tokenizer).converted()
~/.local/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py in __init__(self, *args)
436 super().__init__(*args)
437
--> 438 from .utils import sentencepiece_model_pb2 as model_pb2
439
440 m = model_pb2.ModelProto()
~/.local/lib/python3.8/site-packages/transformers/utils/sentencepiece_model_pb2.py in <module>
32 syntax="proto2",
33 serialized_options=b"H\003",
---> 34 create_key=_descriptor._internal_create_key,
35 serialized_pb=(
36 b'\n\x19sentencepiece_model.proto\x12\rsentencepiece"\xa1\n\n\x0bTrainerSpec\x12\r\n\x05input\x18\x01'
AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
In the env I have installed these libraries
aiohttp==3.8.3
aiosignal==1.3.1
async-timeout==4.0.2
attrs==22.2.0
brotlipy==0.7.0
certifi @ file:///croot/certifi_1671487769961/work/certifi
cffi @ file:///croot/cffi_1670423208954/work
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
cryptography @ file:///croot/cryptography_1673298753778/work
datasets==2.8.0
dill==0.3.6
filelock==3.9.0
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
frozenlist==1.3.3
fsspec==2022.11.0
huggingface-hub==0.11.1
idna @ file:///croot/idna_1666125576474/work
mkl-fft==1.3.1
mkl-random @ file:///home/builder/ci_310/mkl_random_1641843545607/work
mkl-service==2.4.0
multidict==6.0.4
multiprocess==0.70.14
numpy @ file:///croot/numpy_and_numpy_base_1672336185480/work
packaging==23.0
pandas==1.5.2
Pillow==9.3.0
protobuf==3.19.6
pyarrow==10.0.1
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pyOpenSSL @ file:///opt/conda/conda-bld/pyopenssl_1643788558760/work
PySocks @ file:///home/builder/ci_310/pysocks_1640793678128/work
python-dateutil==2.8.2
pytz==2022.7.1
PyYAML==6.0
regex==2022.10.31
requests @ file:///opt/conda/conda-bld/requests_1657734628632/work
responses==0.18.0
sentencepiece==0.1.97
six @ file:///tmp/build/80754af9/six_1644875935023/work
tokenizers==0.13.2
torch==1.13.1
torchaudio==0.13.1
torchvision==0.14.1
tqdm==4.64.1
transformers @ git+https://github.com/huggingface/transformers.git@2411f0e465e761790879e605a4256f3d4afb7f82
typing_extensions @ file:///croot/typing_extensions_1669924550328/work
urllib3 @ file:///croot/urllib3_1670526988650/work
xxhash==3.2.0
yarl==1.8.2 | 01-17-2023 08:31:29 | 01-17-2023 08:31:29 | This looks like an error we had with recent versions of protobuf. Are you absolutely sure the env information you are pasting is correct? Could you try doing `pip install protobuf<=3.20.2 --upgrade` in your env?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,146 | closed | fix the issue that the output dict of jit model could not get [:2] | "TypeError: unhashable type: 'slice'"
Signed-off-by: Wang, Yi A <[email protected]>
- pipelines: @Narsil
| 01-17-2023 07:13:07 | 01-17-2023 07:13:07 | @yao-matrix<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,145 | closed | Unable to load weights from pytorch checkpoint file | ### System Info
I trained a model and uploaded the checkpoint to the Hub [here](https://huggingface.co/taskydata/deberta-v3-base_10xp3nirstbbflanseuni_10xc4). When I try to load the model, I get the following error message:
`OSError: Unable to load weights from pytorch checkpoint file for '/root/.cache/huggingface/hub/models--taskydata--deberta-v3-base_10xp3nirstbbflanseuni_10xc4/snapshots/f5d6b49731ea0b36601f151dd67623380462a3cb/pytorch_model.bin' at '/root/.cache/huggingface/hub/models--taskydata--deberta-v3-base_10xp3nirstbbflanseuni_10xc4/snapshots/f5d6b49731ea0b36601f151dd67623380462a3cb/pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.`
Also, when I try to run it using the inference API, I get:
`Could not load model taskydata/deberta-v3-base_10xp3nirstbbflanseuni_10xc4 with any of the following classes: (<class 'transformers.models.deberta_v2.modeling_deberta_v2.DebertaV2ForSequenceClassification'>, <class 'transformers.models.deberta_v2.modeling_tf_deberta_v2.TFDebertaV2ForSequenceClassification'>).`
Transformer version: `4.25.1`
Any help on how to resolve this would be greatly appreciated. Thanks!
cc. @LysandreJik @younesbelkada @ArthurZucker
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Mentioned above.
### Expected behavior
The model should be loading since all the files are uploaded. | 01-17-2023 03:16:52 | 01-17-2023 03:16:52 | Hi @manandey,
Here is the error I get:
```
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
```
Your saved model is probably corrupted, could you tell us how did you saved the model, or alternatively give it another try?<|||||>Hi @younesbelkada, when I try to load the model by connecting my colab to the GCP instance where the checkpoints are saved, the model seems to be loading perfectly fine as can be seen in the attached snapshot. But when I try to download the checkpoint and upload it to hub, the downloaded files seem to become corrupted.
To download the checkpoint, I am using something like this: `!tar -zcvf checkpoints.tar.gz checkpoint/checkpoint-300000`

<|||||>If I understood correctly you are manually uploading the weights on the Hub?
Can you maybe try:
```
model.push_to_hub("taskydata/deberta-v3-base_10xp3nirstbbflanseuni_10xc4")
```
after the lines that you have attached above.
Make sure to login from your notebook with:
```
from huggingface_hub import notebook_login
notebook_login()
```<|||||>Thanks a lot, @younesbelkada! It worked! <|||||>Very happy that it worked! Thanks for guiding us precisely through your issue |
transformers | 21,144 | closed | Accept batched tensor of images as input to image processor | # What does this PR do?
Adds functionality to `image_utils` so that a batched tensor of images can be accepted as input to the image processors.
Fixes #21142 #14650
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 01-16-2023 19:01:46 | 01-16-2023 19:01:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,143 | closed | Fixes to TF collators | This PR makes a couple of fixes to TF data collators:
1) Fixes an incorrect call to `is_integer` when the default data collator receives `tf.Tensor` input and is outputting `tf.Tensor` as well
2) Prefer "np" tensors rather than "tf" tensors when calling our collators via `to_tf_dataset`. This is because data preprocessing is generally done with `np.ndarray` rather than `tf.Tensor` anyway, and Keras/`tf.data` can do the final conversion to `tf.Tensor` for us. | 01-16-2023 16:37:25 | 01-16-2023 16:37:25 | _The documentation is not available anymore as the PR was closed or merged._ |
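For context, a hedged sketch of the kind of setup this touches (checkpoint and toy data are illustrative; passing `return_tensors="np"` to the collator mirrors the pattern in 2) above and is not the exact change in this PR):

```python
from datasets import Dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
raw = Dataset.from_dict({"text": ["a short example", "another, slightly longer example"], "label": [0, 1]})
tokenized = raw.map(lambda batch: tokenizer(batch["text"]), batched=True)

collator = DataCollatorWithPadding(tokenizer, return_tensors="np")  # NumPy output; tf.data converts later
tf_dataset = tokenized.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=2,
    collate_fn=collator,
)
```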
transformers | 21,142 | closed | Error when passing a tensor of images to CLIPProcessor | ### System Info
- huggingface_hub version: 0.11.1
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.10.8
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /root/.huggingface/token
- Has saved token ?: False
- Configured git credential helpers: !f()
- FastAI: N/A
- Tensorflow: 2.11.0
- Torch: 1.13.1
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
### Who can help?
@ArthurZucker @amyeroberts @NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Run the following script:
```python
from transformers import CLIPProcessor
import torch
model_name_or_path = "openai/clip-vit-large-patch14"
processor: CLIPProcessor = CLIPProcessor.from_pretrained(
model_name_or_path
)
dummy_input = torch.randn(10, 3, 224, 224)
dummy_output = processor(images=dummy_input, return_tensors="pt")
```
2. Look at the monitor to see the error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File /opt/conda/envs/main/lib/python3.10/site-packages/PIL/Image.py:2953, in fromarray(obj, mode)
2952 try:
-> 2953 mode, rawmode = _fromarray_typemap[typekey]
2954 except KeyError as e:
KeyError: ((1, 1, 224, 224), '|u1')
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
Cell In[2], line 10
4 processor: CLIPProcessor = CLIPProcessor.from_pretrained(
5 model_name_or_path
6 )
8 dummy_input = torch.randn(10, 3, 224, 224)
---> 10 dummy_output = processor(images=dummy_input, return_tensors="pt")
File /opt/conda/envs/main/lib/python3.10/site-packages/transformers/models/clip/processing_clip.py:85, in CLIPProcessor.__call__(self, text, images, return_tensors, **kwargs)
82 encoding = self.tokenizer(text, return_tensors=return_tensors, **kwargs)
84 if images is not None:
---> 85 image_features = self.feature_extractor(images, return_tensors=return_tensors, **kwargs)
87 if text is not None and images is not None:
88 encoding["pixel_values"] = image_features.pixel_values
...
-> 2955 raise TypeError("Cannot handle this data type: %s, %s" % typekey) from e
2956 else:
2957 rawmode = mode
TypeError: Cannot handle this data type: (1, 1, 224, 224), |u1
```
### Expected behavior
The function should return a preprocessed tensor containing a batch of images. | 01-16-2023 16:05:32 | 01-16-2023 16:05:32 | Seems like `is_batched` is at fault for this. Doc seems a bit lacking ๐
Should probably be replaced with :
```python
def is_batched(img):
if isinstance(img, (list, tuple)):
return is_valid_image(img[0])
return is_valid_image(img)
```
Pretty sure this is expected as our tests are run on `lists` or `tuples` of images. <|||||>@ArthurZucker @AntreasAntoniou Yep - it's down to how batches are checked, and the processing classes (feature extractors, tokenizers, image processors) expect either a single object e.g. image or a list/tuple of objects. It should be possible to take a batched tensor and create a list from it, we just have to be careful in our assumptions. I'll look into it.
The example:
```
def is_batched(img):
if isinstance(img, (list, tuple)):
return is_valid_image(img[0])
return is_valid_image(img)
```
won't work, as the check for `is_batched` determines whether the input needs to be wrapped in a list. This is because the image processors iterate over a list of images i.e. a single image would return `True` here and break things downstream. |
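A hedged sketch of the direction described above (hypothetical helper, not necessarily the merged fix): unbind a batched 4D array into a list of images before the usual per-image loop.

```python
import numpy as np

def make_list_of_images(images):
    if isinstance(images, (list, tuple)):
        return list(images)
    if hasattr(images, "ndim") and images.ndim == 4:  # batched (B, C, H, W) or (B, H, W, C) array/tensor
        return [image for image in images]
    return [images]

batch = np.random.randn(10, 3, 224, 224)
print(len(make_list_of_images(batch)))  # 10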
transformers | 21,141 | closed | feat: add standalone guide on XLA support. | # What does this PR do?
We have had XLA support for our TF generation models (GPT2, Whisper, for example) for a while. This PR adds a standalone guide in the doc to discuss it.
Cc: @Rocketknight1 @amyeroberts @ydshieh | 01-16-2023 14:44:47 | 01-16-2023 14:44:47 | @sayakpaul
It seems there is an issue with your CircleCI permissions.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>@ydshieh I don't have CircleCI signed in nor have I installed it here: https://github.com/settings/applications. <|||||>Could you login CircleCI with your GitHub account, and follow `huggingface/transformers`?
See https://circleci.com/docs/projects/
(but it's kinda strange - you have opened a lot of PRs before, so not sure why we have this issue now)<|||||>Should be all good now I guess:
<img width="896" alt="image" src="https://user-images.githubusercontent.com/22957388/212717447-19e1b042-d39f-44f0-bf8c-faef7b97c2ea.png">
<|||||>Yeah, then we can try to push an empty commit to see if the CI will run :-)
```
git commit --allow-empty -m "Empty commit to trigger CI"
git push
```
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger addressed your comments including [this one](https://github.com/huggingface/transformers/pull/21141/files#r1072172489). Let me know if I am good to merge given all tests are green. <|||||>Yes, all good! |
transformers | 21,140 | closed | Rename test_feature_extraction files | # What does this PR do?
Renames the tests for image processors: `test_feature_extraction_xxx.py` -> `test_image_processing_xxx.py`
A follow up PR will change the feature extractor references to equivalent image processor ones in the files.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 01-16-2023 14:09:05 | 01-16-2023 14:09:05 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @amyeroberts !
I think there are a few lines in `tests/utils/test_add_new_model_like.py` to be changed, like
https://github.com/amyeroberts/transformers/blob/6b10c045e259a47f3786ceb11089fe418828346e/tests/utils/test_add_new_model_like.py#L62
<|||||>@ydshieh Done! |
transformers | 21,139 | closed | Added clefourrier as ref point for graph models in bug reports | # What does this PR do?
Added myself as entry point for graph models in issue doc. | 01-16-2023 13:35:53 | 01-16-2023 13:35:53 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Also, I missed this file
```
.github/PULL_REQUEST_TEMPLATE.md
```
if you think it is relevant.<|||||>Add @sgugger for final review as I never changed these files before, and this is also more administrative decision :-)<|||||>Did not see the comment section in PULL_REQUEST_TEMPLATE when I opened them in browser view - edited!
Edit: However, not sure what I could add to `feature-request`?<|||||>> Did not see the comment section in PULL_REQUEST_TEMPLATE when I opened them in browser view - edited!
>
> Edit: However, not sure what I could add to `feature-request`?
My bad, my brain is not completely recovered from all the `drink` I had last week. |
transformers | 21,138 | closed | Feature Request: Flax Whisper | ### Feature request
It already has TF and Pytorch support. Would be nice to use it on Flax as-well.
| 01-16-2023 11:28:06 | 01-16-2023 11:28:06 | cc @sanchit-gandhi <|||||>Hey @OhadRubin! Thanks to the fantastic work by @andyehrenberg it's nearly complete: https://github.com/huggingface/transformers/pull/20479
Will do a final review tomorrow and then get it merged asap!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,137 | closed | microsoft/markuplm-base-finetuned-websrc fails when used in a `question-answering` pipeline | ### System Info
Python 3.9.7
Transformers 4.25.1
`microsoft/markuplm-base-finetuned-websrc` fails when used in a `question-answering` pipeline (see test script below).
```
Traceback (most recent call last):
File "/Users/juliensimon/markuplm/app.py", line 24, in <module>
result = pipe(question="What are the trending stocks?", context=page)
File "/Users/juliensimon/Envs/vscodedemo/lib/python3.9/site-packages/transformers/pipelines/question_answering.py", line 392, in __call__
return super().__call__(examples[0], **kwargs)
File "/Users/juliensimon/Envs/vscodedemo/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1074, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/Users/juliensimon/Envs/vscodedemo/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1095, in run_single
for model_inputs in self.preprocess(inputs, **preprocess_params):
File "/Users/juliensimon/Envs/vscodedemo/lib/python3.9/site-packages/transformers/pipelines/question_answering.py", line 403, in preprocess
max_seq_len = min(self.tokenizer.model_max_length, 384)
AttributeError: 'MarkupLMProcessor' object has no attribute 'model_max_length'
```
Setting `max_seq_length` in the pipeline call solves the issue, but a similar one happens with the `is_fast` attribute. Both are indeed not defined in [MarkupLMProcessor](https://github.com/huggingface/transformers/blob/05b8e25fffd61feecb21928578ad39e63af21b4f/src/transformers/models/markuplm/processing_markuplm.py#L25).
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import requests
from transformers import (
pipeline,
AutoModelForQuestionAnswering,
MarkupLMProcessor,
)
def get_page(url):
response = requests.get(url)
return response.text
model = "microsoft/markuplm-base-finetuned-websrc"
processor = MarkupLMProcessor.from_pretrained(model)
model = AutoModelForQuestionAnswering.from_pretrained(model)
pipe = pipeline("question-answering", model=model, tokenizer=processor)
url = "https://finance.yahoo.com"
page = get_page(url)
result = pipe(question="What are the trending stocks?", context=page)
```
### Expected behavior
I would expect the pipeline to work. If this model isn't supported, then we should make it clear in the [docs](https://huggingface.co/docs/transformers/model_doc/markuplm). | 01-16-2023 11:11:57 | 01-16-2023 11:11:57 | Hi,
MarkupLM isn't supported by the QA pipeline, similar to how LayoutLM (and v2 and v3) aren't supported by it.
The reason for this is that MarkupLM requires additional inputs besides the ones that text models require, like `input_ids` and `attention_mask`. For LayoutLM, we created a separate `DocumentQuestionAnsweringPipeline` to account for this.<|||||>Thanks for the answer. Until we have a proper pipeline for this model, could we add your explanation to the docs? If I couldn't figure it out, I suspect many more users will hit the same issue :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge I would like to work on this issue if it is still available.<|||||>@NielsRogge I can create a `MarkupQuestionAnsweingPipeline` for this.<|||||>This model is highly specific actually, not sure if a pipeline for it makes sense.
Will remove the good first issue from this, might be better to just overwrite the preprocess and postprocess steps of the QA pipeline to make it work for MarkupLM. |
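In the meantime, a hedged sketch of running extractive QA with MarkupLM manually, without a pipeline (argument names such as `questions=` are assumed from the MarkupLM docs, and the HTML is a toy example):

```python
import torch
from transformers import MarkupLMProcessor, MarkupLMForQuestionAnswering

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base-finetuned-websrc")
model = MarkupLMForQuestionAnswering.from_pretrained("microsoft/markuplm-base-finetuned-websrc")

html = "<html><body><p>The capital of France is Paris.</p></body></html>"
question = "What is the capital of France?"

encoding = processor(html, questions=question, return_tensors="pt")  # builds input_ids + xpath features
with torch.no_grad():
    outputs = model(**encoding)

start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(processor.decode(encoding["input_ids"][0][start : end + 1]).strip())
```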
transformers | 21,136 | closed | Fix `RealmModelIntegrationTest.test_inference_open_qa` | # What does this PR do?
The test `RealmModelIntegrationTest::test_inference_open_qa` fails since my PR #21000. This integration test creates a config manually by `config = RealmConfig()` and passes it to `from_pretrained`, therefore it doesn't have the attribute `searcher_seq_len` (which is removed in #21000).
Using `from_pretrained` without specifying `config` will load from the Hub checkpoint which has `searcher_seq_len` in the config (loaded via `super().__init__(..., **kwargs)`), and fix the test.
| 01-16-2023 11:03:28 | 01-16-2023 11:03:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,135 | closed | AttributeError: module 'tensorflow' has no attribute 'Tensor' when using documentation code (Tokenizer.batch_decode) | ### System Info
Windows 10, VSCode
### Who can help?
_No response_
### Information
I was referring to the documentation on huggingface to run the facebook OPT model here:
https://huggingface.co/docs/transformers/main/en/model_doc/opt#transformers.OPTForCausalLM
And I've received the following error on my Windows 10 machine in VScode.
```
Traceback (most recent call last):
File "c:/Users/Admin/Desktop/Projects/NLP_Cron/script_chat.py", line 11, in <module>
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\tokenization_utils_base.py", line 3429, in batch_decode
return [
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\tokenization_utils_base.py", line 3430, in <listcomp>
self.decode(
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\tokenization_utils_base.py", line 3466, in decode
token_ids = to_py_obj(token_ids)
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\utils\generic.py", line 160, in to_py_obj
elif is_tf_tensor(obj):
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\utils\generic.py", line 136, in is_tf_tensor
return False if not is_tf_available() else _is_tensorflow(x)
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\utils\generic.py", line 129, in _is_tensorflow
return isinstance(x, tf.Tensor)
AttributeError: module 'tensorflow' has no attribute 'Tensor'
```
I first thought it was specific to this model, but I'm facing the same issue on other models.
I have tried uninstalling TensorFlow and reinstalling it.
I have upgraded the "transformers" library as well, but to no avail. This seems to be a recent problem.
The virtual environment I'm using says I have these versions of tensorflow and transformers.
- transformers 4.25.1
- tensorflow 2.11.0
### Reproduction
Steps to reproduce the behaviour:
1. Go to https://huggingface.co/docs/transformers/main/en/model_doc/opt#transformers.tensorflow
2. Run the example snippet consisting of this code
```
from transformers import GPT2Tokenizer, OPTForCausalLM
model = OPTForCausalLM.from_pretrained("facebook/opt-350m")
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")
prompt = "Hey, are you consciours? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
Assuming required libraries are installed error message shows up.
### Expected behavior
Expected the output as per shown in documentation. | 01-16-2023 11:01:09 | 01-16-2023 11:01:09 | Hi, @gaurav-95
Please run `transformers-cli env` in terminal and share the full system info so it's easier to reproduce the error.<|||||>transformers-cli env gives me this.
```
Traceback (most recent call last):
File "C:\Users\Admin\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Admin\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\Scripts\transformers-cli.exe\__main__.py", line 4, in <module>
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\commands\transformers_cli.py", line 24, in <module>
from .pt_to_tf import PTtoTFCommand
File "C:\Users\Admin\Desktop\Projects\NLP_Cron\cronenv\lib\site-packages\transformers\commands\pt_to_tf.py", line 46, in <module>
tf.config.experimental.enable_tensor_float_32_execution(False)
AttributeError: module 'tensorflow' has no attribute 'config'
```
Could you elaborate on what system info do you need?
Im running on an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz 1.19 GHz
20.0 GB (19.8 GB usable) RAM
No dedicated gpu in machine. My virtualenvironment is called "cronenv"
Update: I was able to run the same code on a google colab notebook, seems like a problem with my environment.<|||||>Hi, @gaurav-95
Actually if you run the above code it should output something like this,
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Since you are not getting it could you please check your transformers installation? (just run `import transformers` and check if it successfully imports or gives an error)
<|||||>Thanks for getting back and hinting towards the problem. I can confirm there was something wrong with my python installation.
Steps that resolved it for me.
1. I made a requirements file of the existing install
2. I deleted the existing virtual environment.
3. Re-installed python.
4. Re-installed dependencies from saved requirements file.
5. Ran code and it works now!<|||||>> Thanks for getting back and hinting towards the problem. I can confirm there was something wrong with my python installation.
>
> Steps that resolved it for me.
>
> 1. I made a requirements file of the existing install
> 2. I deleted the existing virtual environment.
> 3. Re-installed python.
> 4. Re-installed dependencies from saved requirements file.
> 5. Ran code and it works now!
LOL, I think the same thing happened to me as well??? I'll give this a try :) BTW how did you delete the env?
transformers | 21,134 | closed | Add ConvNeXt-V2 Model | # What does this PR do?
Adds ConvNeXt-V2 to transformers.
original repo: https://github.com/facebookresearch/ConvNeXt-V2
paper: https://arxiv.org/abs/2301.00808
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@alaradirik
| 01-16-2023 10:56:07 | 01-16-2023 10:56:07 | It does look like the model code is exactly the same at a first glance (saw everything is copied from ConvNext). If that is the case, yes to re-using the code of ConvNext, but if we need to make modifications in the convnext modeling file, we should add ConvNext V2 as a new model like in the PR.<|||||>> It does look like the model code is exactly the same at a first glance (saw everything is copied from ConvNext). If that is the case, yes to re-using the code of ConvNext, but if we need to make modifications in the convnext modeling file, we should add ConvNext V2 as a new model like in the PR.
Yes, the code is almost the same, but it adds a Global Response Normalization (GRN) module and removes the layer_scale_parameter from the ConvNeXtV2Layer. Makes more sense to add it as a new model then.
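For readers, here is a sketch of the GRN block paraphrased from the ConvNeXt-V2 paper (the layer added in this PR may differ in naming and details):
```python
import torch
from torch import nn

class GRN(nn.Module):
    """Global Response Normalization over channels-last features, paraphrased from the
    ConvNeXt-V2 paper; treat it as a sketch, not the exact layer added in this PR."""

    def __init__(self, dim: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, H, W, channels)
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)   # per-channel global response
        nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6)     # divisive normalization across channels
        return self.gamma * (x * nx) + self.beta + x         # calibration plus residual

print(GRN(8)(torch.randn(2, 4, 4, 8)).shape)  # torch.Size([2, 4, 4, 8])
```
In the paper, GRN makes the LayerScale parameter unnecessary, which is why layer_scale_parameter is dropped here.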
CC @IMvision12 <|||||>Thanks for the review @alaradirik I will address all comments!<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,133 | closed | [GIT] Fix training | # What does this PR do?
This PR ensures that GIT can be properly fine-tuned. As GIT is a causal, GPT-like model that is also conditioned on CLIP-embedded image patches, one needs to only compute a loss on the predicted text tokens. | 01-16-2023 10:51:22 | 01-16-2023 10:51:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, fixed now! |
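To make the loss masking described in the GIT PR above concrete, here is a hedged sketch (not the PR's actual code): positions that should not contribute to the loss are labeled -100, the default `ignore_index` of PyTorch's cross-entropy, so only the predicted text tokens are scored.
```python
import torch

# Hedged sketch: mask out the image-conditioned prefix positions with -100 so that
# CrossEntropyLoss (ignore_index=-100 by default) only scores the caption tokens.
num_image_positions = 4                       # assumed number of image-patch positions
text_token_ids = torch.tensor([101, 2023, 2003, 1037, 4937, 102])  # example token ids
labels = torch.cat([torch.full((num_image_positions,), -100), text_token_ids])
print(labels)  # tensor([-100, -100, -100, -100,  101, 2023, 2003, 1037, 4937,  102])
```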
transformers | 21,132 | closed | Fixing batching pipelines on single items for ChunkPipeline | # What does this PR do?
Fixing #20783
The issue was that the iterator would not be called because of the input type.
Regardless of the input type, `ChunkPipeline` will always potentially be iterating over
its inputs in order to use batching.
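As a usage illustration (an arbitrary example, not the exact reproducer from #20783), a single non-list input combined with `batch_size` now goes through the same chunk-iterating path as a list of inputs, for chunk pipelines such as zero-shot classification:
```python
from transformers import pipeline

# Illustrative only: model and labels are arbitrary choices for this sketch.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The new GPU driver crashes on startup",            # a single item, not a list
    candidate_labels=["software bug", "hardware failure", "billing"],
    batch_size=2,                                        # previously tripped up single items
)
print(result["labels"][0])
```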
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 01-16-2023 09:20:09 | 01-16-2023 09:20:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,131 | closed | could set max_length or max_seq_length bigger than 512 in NER? | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification
for fine-tuning in NER,
can I set --max_length or --max_seq_length bigger than 512?
File "/data/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 237, in forward
embeddings += position_embeddings
RuntimeError: The size of tensor a (2048) must match the size of tensor b (512) at non-singleton dimension 1
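For context, this size mismatch comes from BERT's learned absolute position embeddings, whose size is fixed by `max_position_embeddings` (512 for the stock BERT checkpoints), so sequences longer than that cannot be embedded without changing or interpolating that table. A quick illustrative check (using `bert-base-cased` as an example):
```python
from transformers import AutoConfig

# Inputs longer than this value break the `embeddings += position_embeddings` addition above.
config = AutoConfig.from_pretrained("bert-base-cased")
print(config.max_position_embeddings)  # 512
```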
### Expected behavior
as I said before | 01-16-2023 06:31:13 | 01-16-2023 06:31:13 | Please use the [forums](https://discuss.huggingface.co/) for such questions, as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,130 | closed | Small simplification to TopKLogitsWarper | # What does this PR do?
The max of `top_k` and `min_tokens_to_keep` performed on every call can just be done once up-front.
Apologies if there's some reason for it being this way that I overlooked!
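For reference, a minimal sketch of the idea (illustrative, not the library's exact class): the `max` against `min_tokens_to_keep` is input-independent, so it moves to `__init__`, while the per-call `min` against the vocabulary size stays in `__call__`.
```python
import torch

class TopKWarperSketch:
    """Illustrative sketch mirroring the shape of TopKLogitsWarper, not its exact code."""

    def __init__(self, top_k: int, filter_value: float = -float("inf"), min_tokens_to_keep: int = 1):
        self.top_k = max(top_k, min_tokens_to_keep)  # done once, up-front
        self.filter_value = filter_value

    def __call__(self, input_ids, scores: torch.FloatTensor) -> torch.FloatTensor:
        top_k = min(self.top_k, scores.size(-1))  # still clamped to vocab size per call
        indices_to_remove = scores < torch.topk(scores, top_k)[0][..., -1, None]
        return scores.masked_fill(indices_to_remove, self.filter_value)

print(TopKWarperSketch(top_k=3)(None, torch.randn(1, 10)))
```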
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] (N/A) Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] (N/A) Did you write any new necessary tests?
## Who can review?
@sgugger @patrickvonplaten | 01-16-2023 00:11:53 | 01-16-2023 00:11:53 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante |
transformers | 21,129 | closed | Error 429 | ### System Info
transformers==4.25.1
Code block:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("flax-community/roberta-base-thai")
model = AutoModelForMaskedLM.from_pretrained("flax-community/roberta-base-thai")
```
Error:
```
OSError: There was a specific connection error when trying to load flax-community/roberta-base-thai:
429 Client Error: Too Many Requests for url: https://huggingface.co/flax-community/roberta-base-thai/resolve/main/config.json
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Import package and run code.
### Expected behavior
No errors expected. | 01-15-2023 18:11:03 | 01-15-2023 18:11:03 | Hi, @milohpeng the script you gave seems to run fine on my side, maybe it was an issue regarding your internet connection, can you try now again and check if it runs or not? Or run `transformers-cli env` in terminal and give your system info so we can reproduce the error.<|||||>Here is the information as requested,
- `transformers` version: 4.25.1
- Platform: Linux-5.4.56.bsk.10-amd64-x86_64-with-debian-10.12
- Python version: 3.7.3
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
I'm not sure whether my IP has been blacklisted...<|||||>@milohpeng I used the same environment as you and managed to get the results without any errors, maybe you tried to download the model so many times that they blacklisted your IP.
I can't help you with this but maybe @sgugger can.<|||||>This should not consistently happen, as the error reflected (429) requires **a lot** of requests in a short amount of time. @milohpeng are you using a shared IP by any chance?<|||||>Hey @sgugger yes I am, I'm using my company's IP to access Huggingface. Is there anything I can do to reverse this as my colleagues seem to be affected as well? Thanks in advance! <|||||>Could you provide us with the IP in question, so that I can investigate further with our infra team? You can email it to me (sylvain @ hf.co without the spaces) if you don't want it public. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,128 | closed | TypeError: Descriptors cannot not be created directly. - protobuf version bug | ### System Info
- `transformers` version: 4.25.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.8.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: I have GPU but I didn't run any code other than `import transformers`
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
To reproduce the behavior:
1. install latest version of `protobuf` using `pip install -U protobuf`. I have `4.21.12`
2. run `python -c "import transformers"`
3. you will see the following error
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\off99\anaconda3\lib\site-packages\transformers\__init__.py", line 30, in <module>
from . import dependency_versions_check
File "C:\Users\off99\anaconda3\lib\site-packages\transformers\dependency_versions_check.py", line 17, in <module>
from .utils.versions import require_version, require_version_core
File "C:\Users\off99\anaconda3\lib\site-packages\transformers\utils\__init__.py", line 34, in <module>
from .generic import (
File "C:\Users\off99\anaconda3\lib\site-packages\transformers\utils\generic.py", line 33, in <module>
import tensorflow as tf
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
from tensorflow.python.tools import module_util as _module_util
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 37, in <module>
from tensorflow.python.eager import context
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\python\eager\context.py", line 29, in <module>
from tensorflow.core.framework import function_pb2
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\core\framework\function_pb2.py", line 16, in <module>
from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\core\framework\attr_value_pb2.py", line 16, in <module>
from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\core\framework\tensor_pb2.py", line 16, in <module>
from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\core\framework\resource_handle_pb2.py", line 16, in <module>
from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
File "C:\Users\off99\anaconda3\lib\site-packages\tensorflow\core\framework\tensor_shape_pb2.py", line 36, in <module>
_descriptor.FieldDescriptor(
File "C:\Users\off99\anaconda3\lib\site-packages\google\protobuf\descriptor.py", line 560, in __new__
_message.Message._CheckCalledFromGeneratedFile()
-> <module 'google._upb._message' from 'C:\\Users\\off99\\anaconda3\\lib\\site-packages\\google\\_upb\\_message.cp38-win_amd64.pyd'...
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
```
I don't want to downgrade version as suggested by the error message because other packages depend on latest version of `protobuf` specifically `google-cloud-documentai`.
### Expected behavior
No error
Please suggest the best course of action. | 01-15-2023 18:01:39 | 01-15-2023 18:01:39 | Hi, @off99555 You might need to create a separate environment and then install transformers where you can have a different protobuf version than rest of your system.<|||||>@susnato but that is not considered trivial or acceptable, no? https://stackoverflow.com/a/6572017/2593810
Do you mind sharing how you would do it in a way that is not so hacky?<|||||>@off99555 I would install Anaconda from [here](https://docs.anaconda.com/anaconda/install/windows/) and then I would create a new environment using `conda create --name venv python=3.9` in terminal and then activate it using `conda activate venv` , and finally I would install transformers there with the right protobuf version. (Installing anything in the venv will not interfere with you local python installation)
Btw never install anything in your base environment(it is the environment that you get by default after installing anaconda).
I hope it helps.<|||||>@susnato but I'm using the `google-cloud-documentai` package in the same project that uses `transformers` package. They both require different versions of `protobuf`. I cannot just install different versions of `protobuf` on different environments because I need both environments to be activated which would cause the version collision issue again.<|||||>Hi, @off99555
I checked and it's actually grpcio-status which is causing the problem (grpcio-status==1.51.1 requires protobuf>=4.21.6); you can run `pip install grpcio-status==1.33.2 protobuf==3.19.6` to fix the issue.
btw, `google-cloud-documentai` will be version 2.7.0 then.
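A quick way to confirm that dependency chain from Python, if anyone wants to check their own environment (illustrative, standard `importlib.metadata` calls):
```python
from importlib.metadata import requires, version

print("protobuf:", version("protobuf"))
# list which protobuf constraint grpcio-status declares
print([r for r in (requires("grpcio-status") or []) if r.lower().startswith("protobuf")])
```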
let me know if it worked or not.<|||||>I am not sure why you are opening the issue here. It is TensorFlow that has problem with this version of Transformers, not Transformers itself. The issue should be raised in the TensorFlow repository. We have just pinned protobuf to avoid this problem.<|||||>> I am not sure why you are opening the issue here. It is TensorFlow that has problem with this version of Transformers, not Transformers itself. The issue should be raised in the TensorFlow repository. We have just pinned protobuf to avoid this problem.
It's because I don't know exactly what is the cause. I was confused. All I know is that I run `import transformers` and it gave me the error so I open the issue here.
Thanks for pointing out that it's related to tensorflow. So the problem is simply that tensorflow uses old version of `protobuf` whereas `grpcio-status` is using the latest `protobuf` version.
So I just downgraded `grpcio-status` according to what @susnato suggests and it seems to work. Thank you!<|||||>Just wanted to flag that maybe the protobuf version in this repo _should_ be updated โ I'm trying to write a gRPC service and am currently getting around this issue by passing `use_fast=False` to my HF pipeline. I can't downgrade my `grpcio-tools` package seeing the last version to not support Protobuf v3 was released in 2020, and would prefer to not have my dependencies _that_ out-of-date. So I believe using such an old PB version will impact anyone trying to write microservices using HF, not just the transitive dependencies of other packages like Tensorflow.<|||||>@Nickersoft We sadly cannot remove the pin on `protobuf` until TensorFlow starts supporting version 4.0. Looks like TensorFlow 2.12 will solve this, so we just have to wait for it to be out.
In the meantime, you can always remove the pin in a source install.<|||||>@sgugger also wanted to flag that unpinning protobuf might not be sufficient to fully resolve this issue. [This](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/sentencepiece_model_pb2.py) file will need to be regenerated with newer version of protobuf as well (not sure if there are other such files in transformers code base).<|||||>cc @Narsil for this. Could you open a new issue specific to this @yigor and re-ping me and Narsil there? This way we'll be sure this does not slip through the crack.<|||||>The new sentence piece proto generated file is this : https://github.com/google/sentencepiece/blob/master/python/src/sentencepiece/sentencepiece_model_pb2.py<|||||>My specific version of this error only relates to the "google/pegasus-cnn_dailymail" model. The second fix suggested in the error message worked for me - I just added
```
import os
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"]="python"
```
at the head of my Jupyter Notebook |
transformers | 21,127 | closed | MT5ForConditionalGeneration: forward() got an unexpected keyword argument 'penalty_alpha' | When I use the MT5ForConditionalGeneration class to load an mt5 model and then specify the `penalty_alpha` parameter in the generate function, the library raises the error `forward() got an unexpected keyword argument 'penalty_alpha'`. But when I load the same model with `AutoModelForSeq2SeqLM` class, it doesn't raise that issue.
This shouldn't be happening because the `AutoModel` class automatically selects the relevant `MT5ForConditionalGeneration` for all T5 models. So why does this raise an issue when I use `MT5ForConditionalGeneration` directly?
Also, this is particularly interesting because when you fine-tune a t5 model using the `MT5ForConditionalGeneration` class but load that model(after training) using the `AutoModelForSeq2SeqLM` and then use the `penalty_alpha`, it still raises the same issue.
Code:
```
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

model = MT5ForConditionalGeneration.from_pretrained(f"{model_dir}")
tokenizer = MT5Tokenizer.from_pretrained(f"{model_dir}")

input_ids = tokenizer.encode(
    source_text, return_tensors="pt", add_special_tokens=True
)
input_ids = input_ids.to(device)

generated_ids = model.generate(
    input_ids=input_ids,
    penalty_alpha=0.6, top_k=4
)

preds = [
    tokenizer.decode(
        g,
        skip_special_tokens=skip_special_tokens,
        clean_up_tokenization_spaces=clean_up_tokenization_spaces,
    )
    for g in generated_ids
]
```
Transformers version: 4.16.2
Python version: 3.9.15 | 01-14-2023 19:45:18 | 01-14-2023 19:45:18 | cc @gante <|||||>Hi, @Mahyar-Ali
I have transformers - 4.25.1 and used the code you gave (above) and it ran without any error, then I switched back to transformers - 4.16.2(same as your version) and it gave the error regarding "penalty alpha", I believe it is fixed in later versions of transformers, since I got the output in version 4.25.1 .
Maybe you can try to upgrade to latest version and check if it works for you or not.<|||||>It's working now. Didn't pay attention to the version (as it was working well for `AutoModel`). Thanks!<|||||>@susnato thank you for pitching in ;) |
transformers | 21,126 | closed | CUDA out of memory for bart-large while using deepspeed with Zero stage 3 | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: GPU 0: Tesla T4
- Using distributed or parallel set-up in script?: no
### Who can help?
@stas00
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Trying to run the following in colab [link](https://colab.research.google.com/drive/1zHogV6VnqGPV5LoQoDzTy9D2OafYpWkz?usp=sharing)
It is a pretty stock example of running summarization training with bart-large-cnn with deepspeed turned on. I was hoping to train t5-3b on an A100, but I am running into issues with a smaller model on a T4.
### Expected behavior
Running this on a Tesla T4 with 16 GB of GPU RAM one would expect that bart-large-cnn would work with Zero stage 3 enabled. Instead I get OutOfMemoryError: CUDA out of memory.
Zero stage 3 seems to be enabled per following output in the log file:
...
[2023-01-14 18:34:57,363] [INFO] [config.py:1024:print] zero_enabled ................. True
[2023-01-14 18:34:57,364] [INFO] [config.py:1024:print] zero_optimization_stage ...... 3 | 01-14-2023 18:51:41 | 01-14-2023 18:51:41 | Really, this is a question for the Deepspeed Issues since HF Trainer only provides the integration, but why do you expect that this is supposed to fit into 16GB? Colab barely has any cpu ram so very often there isn't really any free memory to offload to.
But let's figure out the math first and then it'd be easy to manage expectations.
How many params does this model have? e.g. if the checkpoint is saved in half precision - from `1.63 GB` model size it'd mean you'd need `1.63*9=14.67`, i.e. ~15GB just for the weights. If it were in fp32, then half of that, and then it should fit w/o using zero at all.
You typically need `n_params * 18` just for the weights in mixed precision training. And then more for activations.
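A rough back-of-the-envelope sketch combining that ~18 bytes/param rule with the unique-parameter count shown just below (an approximation, not an exact accounting):
```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
n_params = sum(dict((p.data_ptr(), p.numel()) for p in model.parameters()).values())
est_gib = n_params * 18 / 2**30  # ~18 bytes/param rule of thumb for mixed-precision Adam training
print(f"{n_params / 1e6:.0f}M params -> ~{est_gib:.1f} GiB before activations")
# For a ~400M-parameter model this lands around 7 GiB, which is consistent with the
# ~7 GB usage reported later in this thread at batch size 1; activations add more on top.
```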
Also I don't see where you override the batch size - you should set it to `1` to start with and only increase that if you don't OOM.
p.s. to count the unique params:
```
sum(dict((p.data_ptr(), p.numel()) for p in model.parameters()).values())
```<|||||>A bit more data.
When using the standard GPU and high RAM environment before any training I see:
Gen RAM Free: 53.5 GB | Proc size: 93.4 MB
GPU RAM Free: 15109MB | Used: 0MB | Util 0% | Total 15109MB
When explicitly setting the batch size to 1 it works even on the smaller GPU (T4). Once the training starts it keeps the GPU RAM utilization at about 7GB from what I can see:
GPU RAM Free: 8067MB | Used: 7042MB | Util 47% | Total 15109MB
For batch size 2:
GPU RAM Free: 6407MB | Used: 8702MB | Util 58% | Total 15109MB
For batch size 4:
GPU RAM Free: 1975MB | Used: 13134MB | Util 87% | Total 15109MB
At batch size 6 we are hitting the limits:
GPU RAM Free: 157MB | Used: 14952MB | Util 99% | Total 15109MB
So, when using auto for batch size, deepspeed determines optimal batch size to be 8 and runs out of CUDA memory.
This is definitely not a transformer issue.
I guess lesson learned is to ramp up the batch size as opposed to assuming that deepspeed will calculate the optimal size.<|||||>I'm glad to see you sorted it out, @xpact
> So, when using auto for batch size, deepspeed determines optimal batch size to be 8 and runs out of CUDA memory.
The `auto` values in the ds config file are used differently depending on their key. Their main purpose is to avoid situations where the command line args and ds_config mismatch, so basically they are just substituted with the command line args.
Only the `auto` values from the `zero_optimization` section are actually "optimized" to the model size - this is the only exception.
Each `auto` key is documented here: https://huggingface.co/docs/transformers/main/main_classes/deepspeed#deepspeed-trainer-integration and this is exclusively HF Trainer feature (and of Accelerate) - i.e. deepspeed has no idea what to do with `auto` values.
-------------------
In general the batch size is typically one of the most important hparams - and you always want to define it explicitly and not rely on any defaults.<|||||>You can also add `skip_memory_metrics=False` to your training args and it'll print you the full memory usage stats at the end of each run.
transformers | 21,125 | closed | Use raw string for regex in tokenization_t5_fast.py | # What does this PR do?
This change replaces the regex pattern, previously written as a plain (non-raw) string literal, with a raw string, to suppress the `DeprecationWarning`/`SyntaxError` caused by unrecognized backslash escapes in the pattern.
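A minimal illustration of the difference (not the actual pattern from `tokenization_t5_fast.py`):
```python
import re

# In a plain string literal, \d and \s are unrecognized escapes: Python passes them
# through but emits a DeprecationWarning (slated to become an error in future versions).
pattern = re.compile("(\d+|\s+)")

# With a raw string the regex engine receives \d and \s exactly as written, no warning.
pattern = re.compile(r"(\d+|\s+)")
```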
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger and @raghavanone who organized the original code.
| 01-14-2023 13:02:46 | 01-14-2023 13:02:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,124 | closed | [LongT5] Remove duplicate encoder_attention_mask default value check | # What does this PR do?
This PR performs a minor code clean-up by removing duplicate code in the LongT5 stack.
Currently, if the stack is a decoder block with `encoder_hidden_states` provided, the check for the existence of `encoder_attention_mask` is done twice:
1. https://github.com/huggingface/transformers/blob/c8f35a9ce37bd03f37fcf8336172bdcbe7ffc86a/src/transformers/models/longt5/modeling_longt5.py#L1452
2. https://github.com/huggingface/transformers/blob/c8f35a9ce37bd03f37fcf8336172bdcbe7ffc86a/src/transformers/models/longt5/modeling_longt5.py#L1479
Because the conditions on the second check are identical to those of the first check (`self.is_decoder is True`, `encoder_attention_mask is None`, `encoder_hidden_states is not None`), the second check can never fire: `encoder_attention_mask` has already been filled in by the first check by the time the second one runs.
This PR proposes removing the first check and only using the second one, which sits where the extended encoder attention mask is built anyway.
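A paraphrased, runnable sketch of the guarded default that appears twice (not the literal LongT5 code):
```python
import torch

def default_encoder_mask(encoder_hidden_states, encoder_attention_mask, is_decoder=True):
    # Paraphrased guard: only build an all-ones mask when none was passed. Running the
    # same guard twice is harmless but redundant - the second call sees a non-None mask.
    if is_decoder and encoder_hidden_states is not None and encoder_attention_mask is None:
        batch_size, encoder_seq_length, _ = encoder_hidden_states.shape
        encoder_attention_mask = torch.ones(
            batch_size, encoder_seq_length, device=encoder_hidden_states.device, dtype=torch.long
        )
    return encoder_attention_mask

hidden = torch.randn(2, 7, 16)
mask = default_encoder_mask(hidden, None)   # first check creates the mask
mask = default_encoder_mask(hidden, mask)   # the identical second check is a no-op
```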
## Who can review?
@ArthurZucker , @younesbelkada
| 01-14-2023 10:44:59 | 01-14-2023 10:44:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,123 | closed | Ernie-M | ### Model description
Ernie-M looks pretty good in multilingual benchmarks, beating XLM-Roberta.
PaddlePaddle recently added ERNIE-M to the Hugging Face Hub, so we can use it with paddlenlp.transformers.
It would be nice to have the model supported in Hugging Face Transformers as well.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://huggingface.co/PaddlePaddle/ernie-m-base
https://huggingface.co/PaddlePaddle/ernie-m-large | 01-14-2023 10:26:42 | 01-14-2023 10:26:42 | https://github.com/PaddlePaddle/ERNIE/blob/ernie-kit-open-v1.0/erniekit/modules/ernie.py
has more implementation details.<|||||>Hi, @shermansiu is there any pytorch/tf implementation of this model?<|||||>None that I'm aware of.
Anyways, the author of [ERNIE-Pytorch](https://github.com/nghuyong/ERNIE-Pytorch) ported over a few other Ernie models to Huggingface. I'm sure it could be adapted for this. And the PaddlePaddle syntax is quite similar to that of PyTorch, so I'm sure it should be relatively easy, though it'll probably take some time.<|||||>@shermansiu Thanks for the resources!
I am currently trying to port the model to huggingface(pytorch), (done till Embedding Layer with acceptable tolerance of 1e-3)<|||||>Hi @KnutJaegersberg, Ernie-M is implemented! |
transformers | 21,122 | closed | FELIX: Flexible Text Editing Through Tagging and Insertion | ### Model description
FELIX is an encoder-only text editing model, which allows for faster editing and summarization than sequence-to-sequence models, because the summarization can be computed in parallel instead of autoregressively.
- [Blog](https://ai.googleblog.com/2021/05/introducing-felix-flexible-text-editing.html?hl=hr&m=1)
- [Paper](https://aclanthology.org/2020.findings-emnlp.111/)
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
https://github.com/google-research/google-research/tree/master/felix
No weights are available, but code to train it is available. A component of FELIX is BERT, so training FELIX is a matter of fine-tuning a pre-trained BERT model.
@Jmallins | 01-14-2023 08:44:34 | 01-14-2023 08:44:34 | Because FELIX can be applied to any encoder-only model, perhaps what is needed is an `AutoModelForSequenceEditing`/`<ModelName>ModelForSequenceEditing`?<|||||>Closing this as it's a duplicate of #11632 |
transformers | 21,121 | closed | Add Epsilon- and Eta-Sampling | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Implements epsilon- and eta-sampling, as seen in [[Truncation Sampling as Language Model Desmoothing](https://arxiv.org/abs/2210.15191)](https://arxiv.org/abs/2210.15191). The code is adapted from the author's official repository [here](https://github.com/john-hewitt/truncation-sampling).
Resolves #21092.
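A usage sketch, assuming the new knobs are exposed through `generate` as `epsilon_cutoff`/`eta_cutoff` (double-check the released docs for the final names and defaults):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The meaning of life is", return_tensors="pt")

# eta-sampling: truncate tokens whose probability falls below an entropy-dependent cutoff
out = model.generate(**inputs, do_sample=True, eta_cutoff=3e-4, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```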
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@gante
| 01-14-2023 07:48:18 | 01-14-2023 07:48:18 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21121). All of your documentation changes will be reflected on that endpoint.<|||||>@gante I incorporated your changes! I rebased with `main` because there were some merge conflicts, which caused some of the review snippets above to get "outdated," but it's done!<|||||>@sgugger this PR adds a new sampling-based generation strategy that can be effectively implemented through logits processors<|||||>Yeah, no problem!๐<|||||>@shermansiu now step 4 remains :) Would you like to work on it? (we can retweet and share your posts)<|||||>I'm interested, but it feels like it's best added to https://github.com/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb than in its own notebook, as other truncation sampling approaches (top-p and top-k) are already there.<|||||>We don't update the notebooks for our blog posts, so they stay consistent with their content :) But we will certainly update our [new docs](https://huggingface.co/docs/transformers/main/en/generation_strategies)
The new docs only contain simple examples (for now), so the benefits of the new generation strategy should be showcased in a notebook. In the near future, we will have an advanced text generation doc with clear examples for each flag (but it won't be ready in the next 2-3 months)<|||||>I have a lot on my to-do list right now... Although I'd love to contribute to the notebook, I think it's unrealistic for me to be able to put something out soon.<|||||>Yesterday's ACL 2023 tutorial on "Generating Text from Large Language Models" covers eta-sampling and more! John Hewitt, the first author of the eta-sampling paper, was one of the presenters for that tutorial!
Site: https://rycolab.io/classes/acl-2023-tutorial/
Slides: https://drive.google.com/file/d/1UHbGcjzBURG1n2DufC7iDTmGNjIz5Dp_/view |
transformers | 21,120 | open | `PreTrainedTokenizer` (slow) strip tokens that are around `unique_no_split_tokens` | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. load a `PreTrainedTokenizer` that contains `unique_no_split_tokens`, e.g. `EleutherAI/gpt-j-6B`.
```python
tokenizer = transformers.GPT2Tokenizer.from_pretrained('EleutherAI/gpt-j-6B')
```
2. use the tokenizer to split a string that contains a `unique_no_split_tokens`, e.g. `" <|extratoken_1|> "`.
```python
print(tokenizer(" <|extratoken_1|> ").input_ids)
```
### Expected behavior
The tokenizer splits the string into 3 tokens (`" "`, `"<|extratoken_1|>"` and `" "`), and gives their ids (`[220, 50257, 220]`). This is the behavior of `PreTrainedTokenizerFast`.
But the actual behavior is that the `PreTrainedTokenizer` only gives the id of `"<|extratoken_1|>"`, i.e. `50257` | 01-14-2023 06:51:03 | 01-14-2023 06:51:03 | This is probably due to the following line, which is still not fixed in the HEAD.
https://github.com/huggingface/transformers/blob/f58248b8240571bbbb0918ddd36cc3fdf061df11/src/transformers/tokenization_utils.py#L532-L537<|||||>This bug strips away `\n` around my special token, making my model believe that there is no newline in my text.<|||||>@ArthurZucker I can pick up this, Let me know what should be possible fix ?
<|||||>There is indeed a discrepancy between the `fast` and `slow` version.
The problem here is that the tokens are indeed part of the `no_split_tokens`, but they are not `AddedToken`.
I am not really sure if the `fast` or `slow` has the correct behavior
<|||||>The cleanest way is to have the tokens as `AddedTokens` because you can handle the `rstrip` and `lstrip` arguments<|||||>@ArthurZucker I think `decode(encode(text)) == text` should be true by default, because some use cases (e.g. code generation) require the correct formatting of text. "Automatic formatting" should not be done by default to avoid breaking such use cases.
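A hedged illustration of that suggestion using the `AddedToken` API (whether the slow tokenizer actually honors these flags for non-special added tokens is exactly what this issue is about, so treat this as the intended interface rather than a fix):
```python
from tokenizers import AddedToken
from transformers import AutoTokenizer

# "<|mytoken|>" is a made-up token used purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2", use_fast=False)
tokenizer.add_tokens([AddedToken("<|mytoken|>", lstrip=False, rstrip=False)])
print(tokenizer(" <|mytoken|> ").input_ids)
```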
From another point of view, I guess most pre-trained models use a fast tokenizer (as the name `fast` implies), so these models also expect the behavior of the `fast` version.<|||||>> I think decode(encode(text)) == text should be true by default
This is untrue for pretty much all tokenizers, since tokenization is a destructive operation. At the very least you get back the normalized text (with some minimal unicode clean up) but for some tokenizers like BERT you will have whitespace simplified or text lowercased.<|||||>> > I think decode(encode(text)) == text should be true by default
>
> This is untrue for pretty much all tokenizers, since tokenization is a destructive operation. At the very least you get back the normalized text (with some minimal unicode clean up) but for some tokenizers like BERT you will have whitespace simplified or text lowercased.
I agree that minimal unicode clean up is acceptable (mostly because that does not break my use cases), but whitespace simplification or text lowercasing is not by default enabled, so by default users do get a mostly conservative tokenizer.
But to add new tokens, the most simple way (`add_tokens('mytoken')` with `special_tokens=False` by default) in a slow tokenizer accidentally (from the view of a user) breaks this conservative behavior, and I think this is unexpected by users.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Is there any progress on this issue? @ArthurZucker <|||||>Not yet! I finally have time so this week should be good! <|||||>Is there any progress on this issue?<|||||>Hey, to follow progress is suggest you check #23909, which should try to adresse this. <|||||>Quick update, this is gonna take a bit more time as a more in-depth refactoring is needed |
transformers | 21,119 | closed | GPT2 tokenizer decode swallows space | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu)
- Jax version: 0.4.1
- JaxLib version: 0.4.1
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokens = tokenizer.encode("y is ?")
print(tokens) # prints [88, 318, 5633]
print(tokenizer.decode(tokens)) # prints "y is?" (wrong!)
```
### Expected behavior
It should roundtrip back to the same string, matching the behavior of OpenAI's reference implementation: https://github.com/openai/gpt-2/blob/master/src/encoder.py
OpenAI's implementation encodes to the same tokens, but correctly decodes them to `"y is ?"` | 01-14-2023 01:34:38 | 01-14-2023 01:34:38 | Hi, @daniel-ziegler use `clean_up_tokenization_spaces=False` in `decode` so the code will be -
`print(tokenizer.decode(tokens, clean_up_tokenization_spaces=False))`
this solves the problem,
thanks,
susnato.<|||||>Indeed! Thanks for jumping in this quickly @susnato<|||||>Thanks, that does work. I still think it's a bug that that isn't the default behavior for reversible tokenizers like GPT2's -- there's only one standard decoding behavior for them and it should be the default.<|||||>@daniel-ziegler, I think it's because most tokenizers don't preserve structure such as spaces, and the huggingface team didn't want to have different implementations for both types of tokenizers (which would make the code more complicated!), so it's True by default.
transformers | 21,118 | closed | Issue with ESM and DDP due to unused positional embeddings when rotary embeddings specified | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-136-generic-x86_64-with-glibc2.31
- Python version: 3.10.4
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, 1 GPU
- Using distributed or parallel set-up in script?: Yes, DDP
### Who can help?
@ArthurZucker @Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Using the `ESMModel` class results in the creation of an unused parameter. In the DDP setting, this causes errors because some parameters don't receive a grad. Within `EsmEmbeddings`, `self.positional_embeddings` should not be instantiated as an `nn.Embedding` if `config.position_embedding_type` is not `absolute`.
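For reference, the usual DDP escape hatch for such unused parameters (also suggested in the replies below) looks roughly like this - a sketch assuming a single-process `gloo` group and an example ESM checkpoint:
```python
import os
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from transformers import AutoModel

# Illustrative single-process setup; in real training torchrun handles this.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = AutoModel.from_pretrained("facebook/esm2_t6_8M_UR50D")  # example ESM checkpoint
# find_unused_parameters=True tells DDP to tolerate parameters that never receive a
# gradient, e.g. absolute position embeddings left unused when rotary embeddings are active.
ddp_model = DDP(model, find_unused_parameters=True)
```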
### Expected behavior
We would expect instantiaton of `EsmEmbeddings` to not create the unused `nn.Embedding` module as this can create cryptic errors later due to unused parameters. | 01-13-2023 21:30:15 | 01-13-2023 21:30:15 | Note that you can avoid any error by passing `find_unused_parameters=True` when wrapping your model in DDP.<|||||>This is part of the model design that I believe is in the original repo as well, but if it's still causing issues with @sgugger's fix, we can look into making a PR to remove the base positional embeddings when the model is using rotary embeddings instead!
If it's working for you with that fix, though, feel free to close the issue.<|||||>Hi @Rocketknight1, it would fix the issue but due to other features of the particular model (ESM within a larger module) in use, the flag results in `RuntimeError: Expected to mark a variable ready only once`. At any rate, it turned out this portion of ESM was the only problematic portion, but feel free to close the issue since that find unused params fix will (probably) be sufficient for most users.<|||||>Alright, I'll do that for now - but if anyone comes across this issue and `find_unused_parameters=True` is not helping, feel free to reopen and comment here! |
transformers | 21,117 | closed | Problem running a project with transformers | ### System Info
I am trying to run a project that uses transformers (whisper), I get this error message:
```
ValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.
```
I have tried:
```
pip3 install numpy
pip3 install -I transformers --no-cache-dir --force-reinstall
```
When I run the same python interpreter (pip3 and python3 are from the same interpreter) and import numpy, it is successful:
```
$ head -1 `which whisper`
#!/usr/local/opt/[email protected]/bin/python3.10
$ /usr/local/opt/[email protected]/bin/python3.10
Python 3.10.9 (main, Dec 15 2022, 18:25:35) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import numpy.version as version
>>> version.version
'1.24.1'
```
I was trying to dig deeper and looked at transformers/utils/versions.py. I looked at
```python
got_ver = importlib_metadata.version(pkg)
```
And got_ver indeed returns None, even though I can use numpy:
```
Python 3.10.9 (main, Dec 15 2022, 18:25:35) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import importlib.metadata as importlib_metadata
>>> numpy.version.version
'1.24.1'
>>> print(importlib_metadata.version("numpy"))
None
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
Python 3.10.9 (main, Dec 15 2022, 18:25:35) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import importlib.metadata as importlib_metadata
>>> numpy.version.version
'1.24.1'
>>> print(importlib_metadata.version("numpy"))
None
```
### Expected behavior
The last line would not return None and the transformer library would not return this:
```
>>> import transformers
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/site-packages/transformers/__init__.py", line 30, in <module>
from . import dependency_versions_check
File "/usr/local/lib/python3.10/site-packages/transformers/dependency_versions_check.py", line 41, in <module>
require_version_core(deps[pkg])
File "/usr/local/lib/python3.10/site-packages/transformers/utils/versions.py", line 123, in require_version_core
return require_version(requirement, hint)
File "/usr/local/lib/python3.10/site-packages/transformers/utils/versions.py", line 117, in require_version
_compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
File "/usr/local/lib/python3.10/site-packages/transformers/utils/versions.py", line 45, in _compare_versions
raise ValueError(
ValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy.
```
| 01-13-2023 20:38:21 | 01-13-2023 20:38:21 | I am unable to reproduce on my side, even in a new environment an installing numpy 1.24.1. Could you maybe try a full re-install in a fresh environment?<|||||>It is a system-wide Python install, I currently don't have time to reinstall the whole system.
I hot-fixed it by removing numpy from transformers/dependency_versions_check.py, I changed pkgs_to_check_at_runtime to not include numpy.
```python
pkgs_to_check_at_runtime = "python tqdm regex requests packaging filelock tokenizers".split()
```
After that, everything works (including, obviously, numpy), so it is really just the check that is broken.
Sharing for others who might stumble on the issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am having the same issue, and have reproduced the steps above.
Doing the hot-fix of removing Numpy from the dependency checks seems to have worked ๐ <|||||>Maybe check your `site-packages/` directory first? This issue has also happened to me because both `numpy-1.24.2.dist-info/` and `numpy-1.24.2-py3.10.egg-info/` exist in `site-packages/`, and `numpy-1.24.2-py3.10.egg-info/` is an empty directory. I assume this mainly when `importlib_metadata` happens to seek metadata in a different directory, and the directory relative happens to be empty, it might return `None`. You can try to check your directory by this
```
$ ls /opt/homebrew/lib/python3.10/site-packages/ | grep numpy
```
In addition, you may use `rm -rf` to delete the empty one or the distracting one. |
transformers | 21,116 | closed | PushToHubCallback is hanging on the training completion | ### System Info
Adding a PushToHubCallback callback when training a TF model in a Jupyter notebook results in a cell hanging upon training completion. Nothing is pushed to Hub. Here's the callback:
```
push_to_hub_callback = PushToHubCallback(
output_dir="my_food_classifier",
tokenizer=image_processor,
)
callbacks = [metric_callback, push_to_hub_callback]
model.fit(
tf_train_dataset,
validation_data=tf_eval_dataset,
epochs=num_epochs,
callbacks=callbacks
)
```
A Jupyter notebook where this can be reproduced is linked below, however, I'm getting the same result, when running this as a script, not in a notebook environment.
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Rocketknight1 @amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1OlFUm41Tqfz4v4oHJ4XzJEpSwmlGq9z3#scrollTo=daKTh8apJHU_
### Expected behavior
I would expect that the callback would save and push the model to the Hub once per epoch, and, possibly, upon training completion. | 01-13-2023 20:16:27 | 01-13-2023 20:16:27 | `PushToHubCallback(..., save_strategy="no")` seems to fix the hanging problem but in that case the model will only be saved at the end of the training...<|||||>Investigating this now - PushToHubCallback shouldn't be hanging like this.<|||||>I tried running this locally and it ran successfully for me - is it possible that model uploading from Colab was just slow, and you didn't give it time to complete? In some cases the progress bar on the final upload doesn't display, though I have some ideas on how to fix that!<|||||>Hi @Rocketknight1 , thanks for looking into the issue! When I reported this, I gave it about 40-45 minutes after it finished training. At that point, nothing was uploaded to Hub and the notebook cell just kept spinning with no end and I interrupted the execution. However, I have tried to reproduce it again just now, andโฆ it worked. Checkpoints uploaded during training, and final model took less than 2 minutes to upload. I have not modified anything in the code. No clue why it wasnโt working before, and why it does now.
My only guess is that perhaps there was some sort of connection issue. Could it be that the callback was not able to reach Hub at that moment and kept trying? <|||||>As a piece of anecdotal evidence - in case it's useful - I tried running @MKhalusova example a few days ago (added full script below), and also had issues with hanging. It ran seemlessly today - no idea what changed ๐คทโโ๏ธ
Two notes:
* [The model card only](https://huggingface.co/amyeroberts/my_food_classifier) shows performance for epoch 0, even though the save strategy is "epoch". I'm guessing this is because of [not starting an upload job whilst another is happening](https://github.com/huggingface/transformers/blob/91c2278b97a16e7dcde28fd0fce72969560f587b/src/transformers/keras_callbacks.py#L374)?
* I've never seen a progress bar for upload either at the end of an epoch or at the end of training using `PushToHubCallback`
```python
import numpy as np
import tensorflow as tf
import evaluate
from datasets import load_dataset
from transformers import AutoImageProcessor, DefaultDataCollator, TFAutoModelForImageClassification, create_optimizer
from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback
BATCH_SIZE = 16
NUM_BATCHES = 5
NUM_EPOCHS = 5
LEARNING_RATE = 3e-5
WEIGHT_DECAY_RATE = 0.01
SEED = 42
# Load in the dataset
food = load_dataset("food101", split="train[:5000]")
food = food.train_test_split(test_size=0.2)
labels = food["train"].features["label"].names
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
label2id[label] = str(i)
id2label[str(i)] = label
checkpoint = "google/vit-base-patch16-224-in21k"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
accuracy = evaluate.load("accuracy")
def process(examples):
examples.update(image_processor(examples['image'], ))
return examples
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
return accuracy.compute(predictions=predictions, references=labels)
food = food.map(process, batched=True).shuffle(seed=SEED)
data_collator = DefaultDataCollator(return_tensors="tf")
model = TFAutoModelForImageClassification.from_pretrained(
checkpoint,
id2label=id2label,
label2id=label2id,
)
tf_train_dataset = food["train"].select(range(BATCH_SIZE * NUM_BATCHES)).to_tf_dataset(
columns=['pixel_values'],
label_cols=["label"],
shuffle=True,
batch_size=BATCH_SIZE,
collate_fn=data_collator
)
tf_eval_dataset = food["test"].select(range(BATCH_SIZE * NUM_BATCHES)).to_tf_dataset(
columns=['pixel_values'],
label_cols=["label"],
shuffle=True,
batch_size=BATCH_SIZE,
collate_fn=data_collator
)
optimizer, lr_schedule = create_optimizer(
init_lr=LEARNING_RATE,
num_train_steps=len(food["train"]) * NUM_EPOCHS,
weight_decay_rate=WEIGHT_DECAY_RATE,
num_warmup_steps=0,
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss)
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset)
push_to_hub_callback = PushToHubCallback(
output_dir="amyeroberts/my_food_classifier",
tokenizer=image_processor,
)
model.fit(
tf_train_dataset,
validation_data=tf_eval_dataset,
epochs=NUM_EPOCHS,
callbacks=[metric_callback, push_to_hub_callback]
)
```<|||||>@amyeroberts We generally avoid displaying progress bars from that callback, because the upload of each checkpoint runs in the background while the next epoch is training. As a result, the callback progress bar would run into the Keras progress bar and cause chaos in the console output.
However, I think the callback was supposed to display a progress bar for the final upload after training is finished, when there's no risk of running into the Keras bars. This is also the only upload that will actually cause any delays. I'll put that on my list to investigate!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
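For reference, a minimal configuration sketch of the two callback behaviours discussed in this thread (per-epoch pushes vs. the `save_strategy="no"` workaround). It assumes you are logged in to the Hub and reuses a ViT image processor like the script above; the output directory name is a placeholder.

```python
from transformers import AutoImageProcessor
from transformers.keras_callbacks import PushToHubCallback

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

# Default behaviour discussed above: push a checkpoint after every epoch;
# uploads run in the background while the next epoch trains.
per_epoch_callback = PushToHubCallback(
    output_dir="my_food_classifier", save_strategy="epoch", tokenizer=image_processor
)

# Workaround mentioned above: skip per-epoch pushes; the callback still pushes once when training ends.
final_only_callback = PushToHubCallback(
    output_dir="my_food_classifier", save_strategy="no", tokenizer=image_processor
)
```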
transformers | 21,115 | closed | Fixed typo in docstring | # What does this PR do?
Missing 'to' in 'pad the inputs the maximum length'.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-13-2023 19:24:57 | 01-13-2023 19:24:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sgugger,
I've looked at the CircleCI test and it seems like it failed because of a timeout when installing from PIP, not because of styling. Is there any way to rerun the tests?<|||||>Indeed, everything is green on re-launch. Thanks for your contribution! |
transformers | 21,114 | closed | Deprecating `position_ids` in GPTJ | ### System Info
latest transformers version, pytorch
### Who can help?
Hi @ArthurZucker and @younesbelkada, let me know if someone else should be tagged in this, esp. considering this is a harmless "bug", not something really urgent.
Basically what the title says: I think the position_ids in the [GPT-J code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gptj/modeling_gptj.py) should be deprecated, since they are not used anywhere as far as I can tell. Something along the lines of what bloom does at:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/bloom/modeling_bloom.py#L891
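For reference, a small sketch of the kind of deprecation pattern the Bloom code linked above applies (a paraphrase, not the actual library code; the helper name here is made up for illustration):

```python
import warnings


def _pop_deprecated_position_ids(deprecated_arguments: dict) -> None:
    # Pop the argument instead of using it, and tell the caller it is a no-op.
    if deprecated_arguments.pop("position_ids", False) is not False:
        warnings.warn(
            "`position_ids` have no functionality in this model and will be removed in a future version.",
            FutureWarning,
        )
```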
I can also submit a PR if you want, just wanted to make sure I didn't overlook anything.
Let me know,
Clément
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
Cleaner code, no useless no-op | 01-13-2023 18:19:11 | 01-13-2023 18:19:11 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Friendly ping @ArthurZucker and @younesbelkada <|||||>I agree with you, opened a PR for this. Thanks for reporting. |
transformers | 21,113 | closed | Fixing offline mode for pipeline (when inferring task). | # What does this PR do?
```python
pipe = pipeline(model="xx")
```
Was actually using network even when `TRANSFORMERS_OFFLINE=1` was used.
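For context, a minimal sketch of the offline pattern this PR is about; the model id is a placeholder and must already be in the local cache, and passing an explicit task avoids the Hub call that task inference needs:

```python
import os

os.environ["TRANSFORMERS_OFFLINE"] = "1"  # read at import time, so set it before importing transformers

from transformers import pipeline

# With an explicit task, only the locally cached files are needed; inferring the task
# from the model id is what previously triggered a network call.
pipe = pipeline(task="text-classification", model="some-already-cached-model-id")
```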
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 01-13-2023 18:16:49 | 01-13-2023 18:16:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> So the current `test_offline_mode` should be split in two, with one test just doing TRANSFORMERS_OFFLINE=1 and somehow testing there are no calls to the Hub, and a second test `test_cached_is_used_when_offline` where we do the mock but don't touch TRANSFORMERS_OFFLINE. The test you're adding should be like `test_offline_mode` and not mock anything (unless it's to check there are no calls to the internet).
For the added test this is indeed the only thing proving it fails when the code doesn't check the offline mode.
For the other tests if I understand correctly it's `TRANSFORMERS_OFFLINE=1` -> Users asks us to not touch internet, regardless of if internet is available or not. We should FAIL if we're hitting the internet (hence the mock).
If internet is not available, regardless of `TRANSFORMERS_OFFLINE` we should default to the cached version.
That's ok for the `from_pretrained` but I don't think this is doable with the pipeline task, because it's not included directly in the model + config, right ? Only the README.md has that information, of which we do not have a cached version, correct ? (Don't think we should either).
If that's correct, then I'm ok with splitting the tests, but the mock should still probably be in both tests: one to fake a buggy internet connection, the other to make sure we trigger a failure when we actually use the internet even after being explicitly asked not to, no ?
(We could change the mock error strings to reflect that difference)<|||||>Agreed! And yes the pipeline task for those tests needs to be passed, it can't be retrieved in offline mode or when the internet fails.<|||||>Made the changes. Is that what you had in mind? |
transformers | 21,112 | closed | Refactoring of the text generate API docs | This is part 2 of the refactoring efforts in the text generation docs. The first part (here - https://github.com/huggingface/transformers/pull/21090) adds an introductory guide with examples.
The second part of the refactor (this PR) reduces repetitive examples and somewhat trims down the API reference doc. Only documentation is affected by this PR.
The text generation API doc page can be trimmed down even further if we remove the docstrings of individual methods like greedy_search(), contrastive_search(), etc. At the moment, in 99% of the cases, one can use generate() directly. If it were 100%, I would remove these from the docs. However, I have kept them in the doc for the moment.
| 01-13-2023 17:29:47 | 01-13-2023 17:29:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,111 | closed | [VideoMAE] Fix docstring | # What does this PR do?
This PR fixes a docstring for VideoMAE (the model doesn't have a CLS token).
Fixes #21016 | 01-13-2023 16:31:33 | 01-13-2023 16:31:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Cc @ydshieh seems like the CI has only run 7 checks. Any idea why?<|||||>Because it doesn't touch any code. Tests are only run for code changes.<|||||>Because it doesn't touch any code. Tests are only run for code changes. |
transformers | 21,110 | open | Add support for BLIP and GIT in image-to-text and VQA pipelines | ### Feature request
BLIP and GIT are 2 recent additions in the library, providing state-of-the-art performance for tasks like image captioning and visual question answering (VQA). GIT is even capable of video captioning and video QA.
Hence it makes sense to support them in our image-to-text and VQA pipelines.
### Motivation
Having support for better models in pipelines is very desired!
See also a request for it here: https://discuss.huggingface.co/t/support-for-different-models-in-text-to-image-pipeline/29504
### Your contribution
I can assist in adding support, see #18446 as a very similar case | 01-13-2023 15:08:12 | 01-13-2023 15:08:12 | Hi @NielsRogge , can work on it?<|||||>Sure @atturaioe feel free to start working on it <|||||> I am writing to inquire about the possibility of me starting work on this issue. @NielsRogge can I contribute?<|||||>@NielsRogge is this issue still open for contribution?<|||||>Yes <|||||>@NielsRogge If nobody is working on it, I would like to pick up the issue.<|||||>I would like to pick the issue if its still available.<|||||>@NielsRogge is this issue still open to contribute . I would like to work on it <|||||>Support for BLIP in the image-to-text pipeline has been added in #21904. GIT can be added as explained in [this comment](https://github.com/huggingface/transformers/issues/21514#issuecomment-1446420536), feel free to open a PR.
Support for the VQA pipeline still needs to be added for both; contributions are also welcome there.<|||||>@NielsRogge can I work on this issue??<|||||>Hello @NielsRogge !
I would like to work on this issue (add support for VQA to GIT model) as a first contribution.
But before I start, I have a question :
Currently the only model implementing the VQA pipeline is `ViltForQuestionAnswering`, it does the task using [classification](https://github.com/huggingface/transformers/blob/4baa34c18f18274fe028ad5a5511ea3fba9eeece/src/transformers/models/vilt/modeling_vilt.py#L1079)
However in [GIT paper](https://arxiv.org/abs/2205.14100) they say that :
> For VQA, the input question is treated as a text prefix, and the answer is generated in an auto-regressive way. Furthermore, we present a new generation-based scheme for ImageNet classification, where the predicted labels come directly from our generative model without pre-defining the vocabulary.
So I wonder if I should implement it as a classifier or should I follow the paper ?
Thanks<|||||>Hi @marechaux, we will need to implement the 2 different approaches in the VQA pipeline. ViLT and GIT indeed solve VQA entirely different (ViLT is a classifier whereas GIT is a generative GPT-like model).<|||||>> Support for BLIP in the image-to-text pipeline has been added in #21904. GIT can be added as explained in [this comment](https://github.com/huggingface/transformers/issues/21514#issuecomment-1446420536), feel free to open a PR.
>
> Support for the VQA pipeline still needs to be added for both, also there contributions are welcome.
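(For reference, the generative VQA approach described in the paper excerpt quoted above looks roughly like this; the checkpoint, question and decoding settings are illustrative, not part of the original discussion.)

```python
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

processor = AutoProcessor.from_pretrained("microsoft/git-base-textvqa")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-textvqa")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# The question is encoded as a text prefix (with a leading CLS token) and the answer is generated.
question = "how many cats are there?"
question_ids = processor(text=question, add_special_tokens=False).input_ids
input_ids = torch.tensor([[processor.tokenizer.cls_token_id] + question_ids])

generated_ids = model.generate(pixel_values=pixel_values, input_ids=input_ids, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```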
Hey @NielsRogge, took a shot at this. Am I correct in understanding that the ideal implementation of "microsoft/git-base" in the image-to-text pipeline would look something like this?
```python
from transformers import AutoProcessor, GitForVision2Seq
processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = GitForVision2Seq.from_pretrained("microsoft/git-base")
pipe = pipeline("image-to-text", model=model, image_processor=processor.image_processor, tokenizer=processor.tokenizer)
print(pipe("https://www.wideopenpets.com/wp-content/uploads/sites/6/2021/12/Popular-Horse-Feature-Image.png"))
```
If so, I got this to work by:
1. Adding the GitForVision2Seq class and making it available for imports / in MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES
2. Updating src/transformers/models/git/processing_git.py to use a custom GITImageProcessor. This GITImageProcessor is an exact copy of the CLIPImageProcessor that GitProcessor already wraps, with the only difference being how the GITImageProcessor.preprocess method returns data when being called by the ImageToTextPipeline.preprocess method (Basically adding the input_ids key with a value of None ).
So the GITImageProcessor.preprocess method ends with this:
```python
data = {"pixel_values": images}
return_data = BatchFeature(data=data, tensor_type=return_tensors)
return_data['input_ids'] = None
return return_data
```
rather than the CLIPImageProcessor.preprocess method, which returns this
```python
data = {"pixel_values": images}
return BatchFeature(data=data, tensor_type=return_tensors)
```
Curious to hear your thoughts on this approach. How would this affect other GIT image processing workflows (i.e. VQA, etc.)? Could we use a conditional to account for those?<|||||>Thanks for taking a stab at this. I'm fine with adding a `GitForVision2Seq` (as proposed by @Narsil), however it'd be great not to have to add a custom `GITImageProcessor`. What's the reason this is added? Is it only to include "input_ids" which are set to `None`?<|||||>Exactly this - 'only to include "input_ids" which are set to None?'
I see how adding an entirely new GITImageProcessor seems excessive when all it would do is add the Input_ids : None key value pair to data being returned from the .preprocess method.
As you describe here, https://github.com/huggingface/transformers/issues/21514#issuecomment-1446359970, Once we hit the preprocess method in ImageToTextPipeline and map the model to git, the model_inputs are returned (via the CLIPImageProcessor through the GITProcessor in processing_git.py) without the input_ids key. So AFAIK, the best we can do is modify the return value of the CLIPImageProcessor.preprocess method without changing the CLIPImageProcessor class by replicating the CLIPImageProcessor, rebranding it as a GITImageProcessor, and modify the .preprocess method.
Let me know if that works or if you feel there is a better approach. Is the idea that there would be some way to do this within GitForVision2Seq?
As an aside, I read some best practices for working in the transformers library (https://huggingface.co/transformers/v4.10.1/add_new_model.html#general-overview-of-transformers). Would it be preferable to copy the entire CLIPImageProcessor class as GITImageProcessor within processing_git.py or do something more like this within processing_git.py.
```python
class GITImageProcessor(CLIPImageProcessor):
def preprocess(self, *args, **kwargs):
# Call the original preprocess method
return_data = super().preprocess(*args, **kwargs)
# Add 'input_ids' key to the data
return_data['input_ids'] = None
return return_data
```<|||||>Hmm I don't get why `input_ids` need to be set to `None`. Could you clarify?
[This example](https://huggingface.co/docs/transformers/model_doc/git#transformers.GitForCausalLM.forward.example) shows that you only need to pass `pixel_values` to the `generate` method to do image captioning.<|||||>Hello, it seems that the BLIP for the image to text pipeline has been completed, however that the VQA pipeline for both BLIP & GIT are not complete, along with the image to text pipeline for GIT. @marechaux how is the VQA going for GIT?<|||||>Hi! I'm also interested in helping out if we can divide the work :) <|||||>Hey @NielsRogge , I was working on VQA pipeline for BLIP but i am confused how can i give `pixel_values` to `_forward` method in `VisualQuestionAnsweringPipeline` [(src)](https://github.com/Tanmaypatil123/transformers/blob/main/src/transformers/pipelines/visual_question_answering.py#L19) because BLIP requires pixel values and those are generated by preprocessor . Sorry if this is silly question because this is my first open source contribution .<|||||>Hi @Tanmaypatil123 there's already this PR: #23348. Feel free to take it over/improve it<|||||>Hello, can I work on this?<|||||>Hi Team, Can I start working on it ? |
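A captioning sketch along the lines of the documented example linked above, showing that only `pixel_values` are needed; the checkpoint name and `max_length` are illustrative choices:

```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

# Only pixel_values are needed at preprocessing time; no input_ids have to be injected.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```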
transformers | 21,109 | closed | Add visualbert in visual question answering model mapping | Following the doc https://huggingface.co/docs/transformers/v4.25.1/en/model_doc/visual_bert#transformers.VisualBertForQuestionAnswering I reckon this one was missing.
## Who can review?
@ydshieh do I need to write any additional test for this?
| 01-13-2023 14:25:36 | 01-13-2023 14:25:36 | No, but we should check if the pipeline testing (in the PR CI job page) runs against this newly added `visual_bert`.
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>VisualBERT isn't supported by the VQA pipeline, the pipeline is currently very specifically implemented to work with ViLT. |
transformers | 21,108 | closed | QuestionAnsweringPipeline top_k returns single result | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.15.0-1025-gcp-x86_64-with-glibc2.31
- Python version: 3.9.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.9.0+cu111 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes: tesla T4
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When using a QuestionAnsweringPipeline with the `top_k` parameter set to a number greater than 1, the model can still return a single answer in the form of a dictionary.
Example to reproduce bug:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, QuestionAnsweringPipeline
pipeline = QuestionAnsweringPipeline(
model=AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad"),
tokenizer=AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
)
pipeline([{
"context": " 1 ",
"question": "What is Anne's age?"
}], top_k=10)
```
### Expected behavior
When the `top_k` parameter is set, I would expect the call to the model to return a list containing the best predictions, up to the tenth when possible. If the model only outputs one answer, I would expect this answer to be within a list.
When there are no possible answers, the returned value is an empty list. When there are multiple answers, the returned value is also a list. Outputting a dictionary creates an edge case that needs to be handled when, for example, iterating over the outputs of the model | 01-13-2023 14:12:37 | 01-13-2023 14:12:37 | You are entirely correct to wish for it.
The current behavior is linked to our commitment to not break things.
If you actually check out the code, you'll see this is an exception because there's only 1 returned value (Because only 1 return value is possible).
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L596-L597
There's actually another one: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L391-L392
Since we're not breaking things, and this pipeline was written a long time ago, it was not necessarily aligned with other pipelines. So these quirks are unfortunately necessary.
This is the one thing I would like to modify in V5 (cleanup pipelines return types to make them extremely consistent).
I hope you understand the current state of things. Not sure if there's anything to be done about it. |
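Until the return types are unified, a small caller-side workaround sketch for the quirk described above (model name taken from the reproduction): wrap non-list outputs in a list before iterating.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
result = qa(question="What is Anne's age?", context=" 1 ", top_k=10)

# A single answer comes back as a dict, several answers come back as a list.
answers = result if isinstance(result, list) else [result]
for answer in answers:
    print(answer["answer"], answer["score"])
```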
transformers | 21,107 | closed | Update `TFTapasEmbeddings` | # What does this PR do?
Update `TFTapasEmbeddings` to fix `test_embeddings_out_of_bounds_raise_exception` for `TFTapas`. | 01-13-2023 12:11:47 | 01-13-2023 12:11:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,106 | closed | Update modeling doc strings FE -> IP | # What does this PR do?
Replaces references of feature extractors with image processors that had been missed in the first batch of changes
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 01-13-2023 12:04:01 | 01-13-2023 12:04:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,105 | closed | Make `test_save_pretrained_signatures` slow test | # What does this PR do?
`UtilsFunctionsTest.test_save_pretrained_signatures` in `tests/test_modeling_tf_common.py` was introduced in Oct. 2022; it runs for about 60 seconds and has failed many times **on Push CI** due to the timeout limit in the Push CI job setting (`PYTEST_TIMEOUT`).
Notice that, on CircleCI, we have `PYTEST_TIMEOUT=120` - as we use flag `-n 8` to run tests in parallel. I don't want to set `120` for Push CI at this moment (without running with it first), as it might make the CI slow down (a lot). | 01-13-2023 10:25:02 | 01-13-2023 10:25:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks! I'd like to have @Rocketknight1 and @gante approve on this before we merge (or find a way to make this test slower). |
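For illustration only (not the actual diff), marking a test as slow in this repository boils down to the `@slow` decorator from the testing utilities, which keeps it out of the time-limited CI runs:

```python
import unittest

from transformers.testing_utils import slow


class UtilsFunctionsTest(unittest.TestCase):
    @slow
    def test_save_pretrained_signatures(self):
        ...  # body unchanged; the decorator moves the test to the scheduled slow-test run
```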
transformers | 21,104 | closed | [Tokenizers] Fix a small typo | # What does this PR do?
Fixes #21073, there was a typo in the `__repr__` method of the `PretrainedTokenizer` which uses the previous `model_max_len` instead of `model_max_length` | 01-13-2023 09:03:36 | 01-13-2023 09:03:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,103 | closed | Fine-tuning wav2vec2 model: eval_loss & eval_wer keep increasing | ### System Info
- transformers version: 4.22.0.dev0
- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Both have same issue
- $ pip freeze |grep datasets
datasets==2.4.0
### Who can help?
@patrickvonplaten
@anton-l
@sanchit-gandhi
@OllieBroadhurst
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. I am fine-tuning the pre-trained model : **facebook/wav2vec2-large-robust-ft-libri-960h** by using customized dataset (3k hours Singapore English dataset).
2. I split the large dataset into multiple small batches, each batch contained about 100k+ samples.
3. Load the previous tuned output with next batch of dataset to run the training script at [ctc_finetune.py](https://drive.google.com/file/d/1NogO0G8-RtLGaisfcmBrXh6ESKZbWfZK/view?usp=sharing)
- The first time, I loaded the pre-trained model **facebook/wav2vec2-large-robust-ft-libri-960h** and the first batch of the dataset
4. After that run the evaluation on the tuned model with fixed eval_dataset
5. Loop step 3&4 until end of the last batch dataset.
- Except the customized dataset, my training script [ctc_finetune.py](https://drive.google.com/file/d/1NogO0G8-RtLGaisfcmBrXh6ESKZbWfZK/view?usp=sharing) which is almost same as original one
- Train and eval log at [train_eval.log](https://drive.google.com/file/d/1nsuWis0r6inuyF80aZDn7FaXActfMAUJ/view?usp=share_link)
- I upload all my scripts and training logs in the link at https://drive.google.com/drive/folders/1M5xE4L_HBxBQynWyl6f1c-tJtat027d1?usp=share_link
- I cannot figure out what went wrong. I am not sure whether it's a library bug or a configuration mistake on my side.
- I would appreciate it if someone could take a look at it.
### Expected behavior
**After 20 EPOCH for each train and eval, here is the output as follows:**
dataset batch | epoch | train_samples | train_loss | eval_loss | eval_wer | eval_samples(Fixed for all the eval)
-- | -- | -- | -- | -- | -- | --
1 | 20 | 104398 | 27.24 | 268.3798 | 0.5587 | 76946
2 | 20 | 104211 | 32.74 | 389.0578 | 0.6787 | 76946
3 | 20 | 104223 | 27.24 | 436.5064 | 0.7194 | 76946
4 | 20 | 104174 | 24.86 | 469.7542 | 0.7437 | 76946
5 | 20 | 104018 | 23.01 | 484.5408 | 0.7627 | 76946
6 | 20 | 104158 | 21.79 | 508.0651 | 0.7728 | 76946
7 | 20 | 104166 | 21.21 | 503.9046 | 0.7799 | 76946
8 | 20 | 104280 | 20.44 | 516.5866 | 0.7919 | 76946
9 | 20 | 104111 | 19.6 | 508.3685 | 0.7888 | 76946
10 | 20 | 104073 | 19.31 | 525.6768 | 0.7914 | 76946
11 | 20 | 104144 | 19.04 | 534.6445 | 0.7979 | 76946
12 | 20 | 104298 | 18.8 | 525.5178 | 0.7936 | 76946
13 | 20 | 104230 | 18.62 | 520.3677 | 0.7952 | 76946
14 | 20 | 104053 | 17.95 | 526.2173 | 0.8025 | 76946
- I expected the **eval_loss** and **eval_wer** to decrease gradually.
- The result above was obtained by running train and eval separately.
- I had also tried to run the evaluation during training; the result was similar, with the **eval_loss** and **eval_wer** **increasing** unexpectedly, as in the following table:
epoch | train_loss | eval_loss | eval_wer
-- | -- | -- | --
1 | 138.01 | 58.21 | 0.21
2 | 114.08 | 75.16 | 0.27
3 | 98.49 | 86.66 | 0.31
4 | 85.66 | 95.94 | 0.33
5 | 76.16 | 105.48 | 0.34
6 | 66.63 | 107.92 | 0.34
7 | 60.2 | 118.42 | 0.37
8 | 55.34 | 121.54 | 0.37
9 | 49.62 | 128.89 | 0.37
10 | 43.41 | 137.79 | 0.39
11 | 40.14 | 133.79 | 0.38
12 | 36.3 | 143.32 | 0.4
13 | 33.48 | 144.25 | 0.38
14 | 30.32 | 152.04 | 0.4
15 | 27.34 | 158.57 | 0.4
16 | 24.78 | 153.13 | 0.38
17 | 22.8 | 159.58 | 0.39
18 | 21.36 | 165.38 | 0.39
19 | 20.27 | 167.18 | 0.38
20 | 20.34 | 167.7 | 0.38
| 01-13-2023 03:39:17 | 01-13-2023 03:39:17 | It's hard to work out the exact issue but every time you run `ctc_finetune.py` you're resetting the learning rate and performing warmup all over again. This means that your training parameters are going to be all over the show so you only really want to run a training script once.
With a dataset that large, you're probably better off [sharding](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) the datasets and then saving the shards to disk in sizes of around 1GB.
```python
num_shards = 50 # or whatever
ds = ... # some large dataset
for shard_idx in range(num_shards):
shard = ds.shard(num_shards, shard_idx, contiguous=True)
shard.save_to_disk(f"shard_{shard_idx}")
```
Then load the dataset by concatenating the shards. Something like:
```python
from datasets import load_from_disk, concatenate_datasets
ds = concatenate_datasets([load_from_disk(shard_fp) for shard_fp in shard_paths])
```
Then running the custom training script again after making these changes or following the guide [here](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2#prepare-data-tokenizer-feature-extractor) and tweaking the notebook.
From a performance perspective, you're probably more likely to get better results with the [XLS-R weights](https://huggingface.co/facebook/wav2vec2-xls-r-300m) so try that first.
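A hedged sketch of how the XLS-R weights suggested above are typically loaded for CTC fine-tuning; it reuses the `processor` from your fine-tuning script, and the extra config values are the common choices from the XLS-R blog rather than anything required:

```python
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
```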
Also the [forums](https://discuss.huggingface.co/) are probably a better source of help for this than here.<|||||>Also note that CTC training is not very robust, I'd recommend to try out a bunch of different hyper-paremeters. The most important one to get right is usually the learning rate and dropout <|||||>Also cc @sanchit-gandhi <|||||>> ```python
> ds = concatenate_datasets([load_from_disk(shard_fp) for shard_fp in shard_paths])
> ```
Thanks **OllieBroadhurst** and **Patrickvonplaten** for the guide.
- I had tried to create the whole dataset as one ds file; creating it worked on my end, except that it was extremely slow (about a few days).
- With the large ds file, when I ran the training script above, it somehow hung and got stuck forever while loading the ds. That's why I split the dataset into small batches.
- I haven't tried the **shading** soultion, just curiously, by concatenate_datasets, will it be the same effect as creating the one ds file?
<|||||>As a rule of thumb, using datasets of 1GB or less shouldn't cause problems. Sharding just chunks the dataset up into smaller datasets, not too different to what you were doing before. Replace the line of code where you save your large dataset with the code that splits it into shards and saves each shard. No need to save the whole thing as one large dataset.
The `concatenate_datasets` should work fine. The contents of the dataset wont be loaded into memory, it's only done so while it's being iterated over during training.
<|||||>Thank you **OllieBroadhurst** for the prompt response.
- Yes I concatenated all the small ds in to one as you suggested. The process was pretty fast, only took seconds to reach
the line of **data_collator = DataCollatorCTCWithPadding(processor=processor)**, and it was stucking there or the **Trainer initialization** for a long time.
- It took long time to go through this portion of code as follows:
```python
# Instantiate custom data collator
data_collator = DataCollatorCTCWithPadding(processor=processor)

# Initialize Trainer
trainer = Trainer(
    model=model,
    data_collator=data_collator,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    tokenizer=feature_extractor,
)
```
- It seems it only use single thread to execute the data process in somewhere. Does it possible to speed up the data process ? (eg, by multiple threads)
<|||||>This is almost certainly the trainer which is pretty complex so it's hard to tell why. You can check if it's the dataset by using the top few rows, like `train_dataset.select(range(100))` and seeing if a smaller model works. Also check your memory usage maybe.
I'm not 100% what you mean by "data process"?<|||||>> This is almost certainly the trainer which is pretty complex so it's hard to tell why. You can check if it's the dataset by using the top few rows, like `train_dataset.select(range(100))` and seeing if a smaller model works. Also check your memory usage maybe.
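A small sketch of the subset check suggested above; it reuses the variable names from the script snippet earlier in this thread (so it is not self-contained), and the slice size of 100 is just illustrative:

```python
# Quick sanity check on a tiny slice before committing to the full 3k-hour run.
small_train = train_dataset.select(range(100))
small_eval = eval_dataset.select(range(100))

trainer = Trainer(
    model=model,
    data_collator=data_collator,
    args=training_args,
    train_dataset=small_train,
    eval_dataset=small_eval,
    tokenizer=feature_extractor,
)
trainer.train()
```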
>
> I'm not 100% what you mean by "data process"?
- It only takes 7GB out of 128GB total memory.
- And 2 out of 32 cores taken, 2GB out of 48GB used for each GPU(total 2) .
- Distributed launch the training with local_rank=2 as follows :
python -m torch.distributed.launch **--nproc_per_node=2** ctc_finetune.py --train --eval -lr 2
- ~20 hours passed, so far, it is still stucking in same steps mentioned above.
- If I use small dataset, it works. For the size of 100K samples dataset, it will take about **30**minutes to pass through the above steps.
- I just felt that the hang might be caused by the iteration over the dataset elements somewhere in the training code. However, I did not change any code in the libraries. Not sure if it is caused by my configuration.
- I had put my model output configs at [Json_Configs](https://drive.google.com/drive/folders/1p19ohigIUxlGyLei72ZfABXVrI611h3I?usp=share_link). Appreciate if you can have a look on it. <|||||>Start by eliminating any multi-device influence. Try `python -m torch.distributed.launch ctc_finetune.py --train --eval -lr 2`. If that doesn't work, try `python -m ctc_finetune.py --train --eval`.<|||||>Thanks OllieBroadhurst
- I will try out your suggestion and let you know the result.
- For the stucking issue, after 21hours, it finally went through the stucking point above, and started the training iteration as follow:
1%|โ | 42500/**4338800** [7:42:00<**590:50:23**, 2.02it/s]Saving model checkpoint to /mnt/workspace/output/train_eval/checkpoint-42500
Configuration saved in /mnt/workspace/output/train_eval/checkpoint-42500/config.json
{'loss': 286.0071, 'learning_rate': 4.9099999999999994e-05, 'epoch': 0.0}
{'loss': 114.5247, 'learning_rate': 0.0002977528714424097, 'epoch': 0.16}
{'loss': 111.7823, 'learning_rate': 0.0002977182757507265, 'epoch': 0.17}
{'loss': 112.0589, 'learning_rate': 0.00029754529729231053, 'epoch': 0.18}
{'loss': 110.4228, 'learning_rate': 0.0002973033350246782, 'epoch': 0.19}
{'loss': 109.3662, 'learning_rate': 0.00029723414364131185, 'epoch': 0.2}
- Currently the estimated training is about 590 hours, I need increase the **train_batch_size** to reduce the time (the GPU load was only taken 13G/48G for each GPU).
- Is there other way to reduce the training time, other than train_batch_size ? <|||||>You probably won't need the full 590 hours. You would most likely stop when the `eval_loss` starts plateauing. Try the XLS-R model if you aren't already, the loss should converge quicker. Feel free to increase `train_batch_size` but also consider the `fp16` argument which will help things a lot.
You might also want to increase the steps between evaluation which can take a lot of time depending on your eval dataset size.<|||||>> Start by eliminating any multi-device influence. Try `python -m torch.distributed.launch ctc_finetune.py --train --eval -lr 2`. If that doesn't work, try `python -m ctc_finetune.py --train --eval`.
Hello **OllieBroadhurst** , thanks for the guiding.
- I had tried `python -m torch.distributed.launch ctc_finetune.py --train --eval -lr 2` as your suggestion. it ended with same stucking as previously.
- I traced the caller and identified the stucked point as follow:
1. `processor = AutoProcessor.from_pretrained(training_args.output_dir)`
2. `data_collator = DataCollatorCTCWithPadding(processor=processor)`
3. `trainer = Trainer(****)`
4. `train_result = trainer.train(resume_from_checkpoint=checkpoint)`
`-> find_executable_batch_size -> self._inner_training_loop`
` ->train_dataloader = self.get_train_dataloader()` ----- **this was the stuck point, looks it was stucking at DataLoader()**
- I suspected it might be caused by the line **self._remove_unused_columns(train_dataset, description="training")**, which is inside the caller `get_train_dataloader()`, as my dataset has one unused column, **input_length**. However, I set `remove_unused_columns=False` in TrainingArguments and reran it, and the hang was still there.
- And ideal about the stucking cause ?<|||||>
> You probably won't need the full 590 hours. You would most likely stop when the `eval_loss` starts plateauing. Try the XLS-R model if you aren't already, the loss should converge quicker. Feel free to increase `train_batch_size` but also consider the `fp16` argument which will help things a lot.
>
> You might also want to increase the steps between evaluation which can take a lot of time depending on your eval dataset size.
- Yes, fp16 set to True in my script
- And I increased the `train_batch_size` and `eval_batch_size` to maximize my training PC load.
- For the model fine-tuning with customized Singapre English dataset, what base model should I started with ?
- XLS-R, wav2vec2-large-robust-ft-libri-960h or wav2vec2-large-robust, what's the different if I pick up one of them as my base model? <|||||>I would set `group_by_length` to `False` if you haven't already. This can take a very, very long time for large datasets like yours.
`wav2vec2-large-robust` hasn't been fine-tuned, `wav2vec2-large-robust-ft-libri-960h` has been fine-tuned on English (Librispeech). `XLS-R` has been pretrained on 436 000 hours of audio from multiple language. It means that you'll get the best "head start" using those weights.<|||||>> I would set `group_by_length` to `False` if you haven't already. This can take a very, very long time for large datasets like yours.
>
> `wav2vec2-large-robust` hasn't been fine-tuned, `wav2vec2-large-robust-ft-libri-960h` has been fine-tuned on English (Librispeech). `XLS-R` has been pretrained on 436 000 hours of audio from multiple language. It means that you'll get the best "head start" using those weights.
- Thanks so much for the issue troubleshooting. The `group_by_length= True` for my settings currently. I think it could be the cause. Will disable it and give a try.
- Can I used fine tuned model such as `wav2vec2-large-robust-ft-libri-960h` to continue fine-tuning with my own dataset ?
<|||||>You can go with whatever weights/architecture you like! `wav2vec2-large-robust-ft-libri-960h` is really great for English but I haven't tried it on other languages yet, feel free to give it a shot.<|||||>Thank you **OllieBroadhurst** for the advice.
- Yup, my intention is to do the **incremental** training, in another word, to accumulate the multiple runs of training result to the existing model.
- In order to achieve the goal above, I am not sure whether it's possible to fine tune the new dataset with the fine-tuned model
- The answer seems you already gave to me at your first reply : **you're resetting the learning rate and performing warmup all over again**.
- So my question is how can I achieve the **incremental** like training?
- In my case, should I prepare all the datasets in one go ? Means start with **wav2vec2-large-robust + libri-960h dataset + my custermized dataset**
<|||||>Really great point from @OllieBroadhurst about the learning rate reset! Related to this, it's worth making sure you're reloading your optimiser states if you're resuming training from a checkpoint to avoid the momentum (optimiser) terms being reset each time.
Regarding incremental training, is this purely to save disk space (i.e. download smaller chunks of the data at a time)? Maybe you could try running training using [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet)? This way, you won't ever have to download anymore than 1 batch worth of data to your device at a time. You can check out the script [run_speech_recognition_ctc_streaming.py](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_streaming.py). It works in much the same way as the non-streaming script, so should look quite familiar! This will enable you to train your model in one go.
The main changes in using this script revolve around the fact that we don't know the **size** of our dataset a-priori, so we can't specify the number of training epochs. Instead, we have to specify a number of train steps. You can work out how many train steps you require: 1 epoch is `floor( total num samples / train batch size )` samples. <|||||>Hello **Sanchit-gandhi**, thanks for the elaboration.
- For the incremental training, what I am looking for is **to have a methodology to add on the addition new dataset** to existing pre-trained / fine-tuned model.
- More specific, for example I want to fine-tune the model **wav2vec2-large-robust-ft-libri-960h** with my own dataset. Here are two options in my mind to run the fine-tuning:
1. Load the model wav2vec2-large-robust-ft-libri-960h + my dataset.
2. Load the original model **wav2vec2-large-robust** + **libri-960h dataset** + my dataset
- For the **option 1**, my concern is the **learning rate reset** issue
- For the **option 2**, my concern is, since we already have fine-tuned weight `wav2vec2-large-robust-ft-libri-960h`, it's no point to re-run it again.
- Hope my explanation is clear. Do you have other option to recommend?<|||||>Hey @lgq-liao, thanks for the clarification. Indeed if you load the checkpoint [`wav2vec2-large-robust-ft-libri-960h`](https://huggingface.co/facebook/wav2vec2-large-robust-ft-libri-960h) there is little point in fine-tuning on LS 960h. Here, it only really makes sense to fine-tune on your own dataset.
If you're take the fine-tuned checkpoint `wav2vec2-large-robust-ft-libri-960h` and train it further on your additional dataset, you don't need to worry too much about matching the learning rates. I would just treat it as a new fine-tuning run and apply a standard learning rate procedure here. Since your dataset is likely out-of-domain with LS 960h, in effect it's like starting a new training run.<|||||>
@sanchit-gandhi : got you. Thanks for the guiding.
@OllieBroadhurst : The dataset loading stuck issue goes away after I set `group_by_length=False ` :+1:
<|||||>The training was running 140+ hours and it looks good before the **EPOCH 12.36**.
Would you please tell me what was wrong after the EPOCH 12.36?
Here is the log as follows:
```
{'loss': 142.6154, 'learning_rate': 0.00028539729505169863, 'epoch': 0.0}
{'loss': 116.9755, 'learning_rate': 0.0002705168020679468, 'epoch': 1.0}
{'eval_loss': 80.37191009521484, 'eval_wer': 0.07217616514530197, 'eval_runtime': 325.7806, 'eval_samples_per_second': 236.19, 'eval_steps_per_second': 7.382, 'epoch': 1.0}
.
{'loss': 105.4153, 'learning_rate': 0.0002554974150664697, 'epoch': 2.0}
{'eval_loss': 73.21361541748047, 'eval_wer': 0.06538271103380858, 'eval_runtime': 316.0957, 'eval_samples_per_second': 243.426, 'eval_steps_per_second': 7.608, 'epoch': 2.0}
.
{'loss': 96.2782, 'learning_rate': 0.0002404780280649926, 'epoch': 3.0}
{'eval_loss': 66.80726623535156, 'eval_wer': 0.06313430751456828, 'eval_runtime': 316.3667, 'eval_samples_per_second': 243.218, 'eval_steps_per_second': 7.602, 'epoch': 3.0}
.
{'loss': 89.7653, 'learning_rate': 0.00022545891802067946, 'epoch': 4.0}
{'eval_loss': 61.151939392089844, 'eval_wer': 0.06073101679637412, 'eval_runtime': 315.8389, 'eval_samples_per_second': 243.624, 'eval_steps_per_second': 7.615, 'epoch': 4.0}
.
{'loss': 83.1802, 'learning_rate': 0.00021043939254062035, 'epoch': 5.0}
{'eval_loss': 57.170997619628906, 'eval_wer': 0.0541305368999708, 'eval_runtime': 316.7452, 'eval_samples_per_second': 242.927, 'eval_steps_per_second': 7.593, 'epoch': 5.0}
.
{'loss': 74.4893, 'learning_rate': 0.00019541972858197931, 'epoch': 6.0}
{'eval_loss': 53.97002410888672, 'eval_wer': 0.056226592354666295, 'eval_runtime': 316.7992, 'eval_samples_per_second': 242.886, 'eval_steps_per_second': 7.592, 'epoch': 6.0}
.
{'loss': 73.0806, 'learning_rate': 0.0001804002031019202, 'epoch': 7.0}
{'eval_loss': 49.42509841918945, 'eval_wer': 0.04158086508309317, 'eval_runtime': 316.5823, 'eval_samples_per_second': 243.052, 'eval_steps_per_second': 7.597, 'epoch': 7.0}
.
{'loss': 65.0651, 'learning_rate': 0.0001653810930576071, 'epoch': 8.0}
{'eval_loss': 44.11850357055664, 'eval_wer': 0.040122132365076744, 'eval_runtime': 316.0374, 'eval_samples_per_second': 243.471, 'eval_steps_per_second': 7.61, 'epoch': 8.0}
.
{'loss': 58.1436, 'learning_rate': 0.00015036170605613, 'epoch': 9.0}
{'eval_loss': 40.9358024597168, 'eval_wer': 0.04316401538715452, 'eval_runtime': 315.3451, 'eval_samples_per_second': 244.006, 'eval_steps_per_second': 7.627, 'epoch': 9.0}
.
{'loss': 54.2643, 'learning_rate': 0.00013534231905465286, 'epoch': 10.0}
'eval_loss': 36.22915267944336, 'eval_wer': 0.03583353434814072, 'eval_runtime': 316.2894, 'eval_samples_per_second': 243.277, 'eval_steps_per_second': 7.604, 'epoch': 10.0}
.
{'loss': 47.4946, 'learning_rate': 0.00012032320901033972, 'epoch': 11.0}
{'eval_loss': 32.77891159057617, 'eval_wer': 0.030601647898231492, 'eval_runtime': 316.6529, 'eval_samples_per_second': 242.998, 'eval_steps_per_second': 7.595, 'epoch': 11.0}
.
{'loss': 45.0098, 'learning_rate': 0.00010530368353028065, 'epoch': 12.0}
{'eval_loss': 28.909757614135742, 'eval_wer': 0.029566950626531415, 'eval_runtime': 316.2782, 'eval_samples_per_second': 243.286, 'eval_steps_per_second': 7.604, 'epoch': 12.0}
.
{'loss': 42.008, 'learning_rate': 0.0001001817762186115, 'epoch': 12.34}
{'loss': 40.349, 'learning_rate': 0.00010011267540620383, 'epoch': 12.34}
> {'loss': 41.2924, 'learning_rate': 9.983682607090103e-05, 'epoch': 12.36}
{'loss': 41.5023, 'learning_rate': 9.94907680945347e-05, 'epoch': 12.39}
.
{'loss': 39.5593, 'learning_rate': 9.042360598227473e-05, 'epoch': 12.99}
{'loss': 40.342, 'learning_rate': 9.035436669128508e-05, 'epoch': 12.99}
{'loss': 38.5417, 'learning_rate': 9.028512740029543e-05, 'epoch': 13.0}
{'eval_loss': 25.61539649963379, 'eval_wer': 0.03266723374001803, 'eval_runtime': 316.6491, 'eval_samples_per_second': 243.001, 'eval_steps_per_second': 7.595, 'epoch': 13.0}
.
{'loss': 36.2269, 'learning_rate': 9.021588810930574e-05, 'epoch': 13.0}
{'loss': 37.1206, 'learning_rate': 9.014664881831609e-05, 'epoch': 13.01}
{'loss': 36.0957, 'learning_rate': 9.007754800590842e-05, 'epoch': 13.01}
{'loss': 38.5678, 'learning_rate': 9.000830871491875e-05, 'epoch': 13.02}
.
{'loss': 42.091, 'learning_rate': 8.717143648449039e-05, 'epoch': 13.21}
{'loss': 42.3242, 'learning_rate': 8.696427252584932e-05, 'epoch': 13.22}
{'loss': 130.8234, 'learning_rate': 8.689517171344164e-05, 'epoch': 13.22}
{'loss': 585.9929, 'learning_rate': 8.684545790251107e-05, 'epoch': 13.23}
> {'loss': 0.0, 'learning_rate': 8.67762186115214e-05, 'epoch': 13.23}
{'loss': 0.0, 'learning_rate': 8.670697932053174e-05, 'epoch': 13.24}
{'loss': 0.0, 'learning_rate': 8.663774002954209e-05, 'epoch': 13.24}
{'loss': 0.0, 'learning_rate': 8.656850073855243e-05, 'epoch': 13.25}
```
<|||||>This is odd. It seems like logging changed from once per epoch to many times per epoch - probably based on the number of steps? In my mind this can only be the case if you reran training from an old checkpoint while changing `TrainingArguments`.
The `0.0` loss means that your training became unstable. It's hard to tell why because things seemed to be converging nicely until then. If you _did_ run training again and used new data, then check that none of your target transcripts are longer than the output sequence length of the model (~120 characters I think?) and that there aren't any missing values.<|||||>Thanks @OllieBroadhurst .
> probably based on the number of steps?
- Yes, the `logging_steps=500` in my configuration, it could be the cause of multiple times logging
> In my mind this can only be the case if you reran training from an old checkpoint while changing TrainingArguments.
- True, in fact, I had changed the `group_by_length=False` once and resumed the running from the exisiting checkpoint.
- Let me add more new dataset and start a fresh run to see whether the issue goes aways
> The 0.0 loss means that your training became unstable. It's hard to tell why because things seemed to be converging nicely until then. If you did run training again and used new data, then check that none of your target transcripts are longer than the output sequence length of the model (~120 characters I think?) and that there aren't any missing values.
- Currently, I filtered the dataset with maximum length is 6 seconds by the `vectorized_datasets.filter`. May I know how can I map the length in seconds to characters?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>The final re-run result looks good as expected. I'd like to close this issue. Thank you all for the help. |
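Regarding the open question above about relating audio length to transcript length, a heavily hedged filtering sketch; it assumes `input_length` is a sample count at 16 kHz and that the transcript lives in a `text` column, so rename both to match your own columns, and the 120-character bound is only the rough figure mentioned earlier in the thread:

```python
MAX_INPUT_SECONDS = 6.0
MAX_TARGET_CHARS = 120  # rough bound taken from the comment above; adjust to your model


def is_short_enough(example):
    # Keep clips under the audio limit and transcripts under the character limit.
    return (example["input_length"] / 16_000 <= MAX_INPUT_SECONDS) and (len(example["text"]) <= MAX_TARGET_CHARS)


vectorized_datasets = vectorized_datasets.filter(is_short_enough)
```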
transformers | 21,102 | closed | Fix `torchscript` tests for `AltCLIP` | # What does this PR do?
Fix `torchscript` tests for `AltCLIP`.
This model uses `roberta` as text model, which has
```python
self.register_buffer(
"token_type_ids", torch.zeros(self.position_ids.size(), dtype=torch.long), persistent=False
)
```
and this requires the change in this PR to pass the test.
See [current failing job run page](https://github.com/huggingface/transformers/actions/runs/3889079663/jobs/6637067165) | 01-12-2023 17:04:57 | 01-12-2023 17:04:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,101 | closed | Slightly better WandbCallback | # What does this PR do?
Allows for more environment variables to be used with the `WandbCallback`. Prioritizes variables set in `TrainingArguments`
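For context, a short sketch of the environment-variable pattern the W&B integration reads; the project and run names are placeholders, and `WANDB_PROJECT` is one of the documented variables the callback picks up:

```python
import os

os.environ["WANDB_PROJECT"] = "my-project"  # read by the WandbCallback at setup time

from transformers import TrainingArguments

args = TrainingArguments(output_dir="out", report_to="wandb", run_name="my-run")
```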
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 01-12-2023 13:36:06 | 01-12-2023 13:36:06 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21101). All of your documentation changes will be reflected on that endpoint.<|||||>@stevhliu, I seemed to have messed up the code quality steps. I'll fix that soon.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,100 | open | Models for low resource languages | ### Model description
Hi, I was wondering if there is any way to leverage SOTA models like [this](https://huggingface.co/facebook/bart-large-mnli) one by Facebook and come up with a model for a low-resource language, say Filipino, using something like student-teacher methods.
My main aim is to come up with some **Zero-Shot models** for such languages, with accuracy similar to what these models achieve for languages like English.
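As a point of comparison, a sketch of the existing multilingual zero-shot route (the checkpoint name is one public example, and the Filipino input sentence is just an illustration):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="joeddav/xlm-roberta-large-xnli")
print(classifier("Mahilig ako sa basketball.", candidate_labels=["sports", "politics", "food"]))
```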
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | 01-12-2023 13:22:00 | 01-12-2023 13:22:00 | |
transformers | 21,099 | closed | [Time-Series] informer model | # What does this PR do?
Adding Time Series Informer model https://arxiv.org/abs/2012.07436
Related issue: #20903
@kashif :) | 01-12-2023 11:22:34 | 01-12-2023 11:22:34 | very cool! Having a look shortly!<|||||>> very cool! Having a look shortly!
Wow you saw it fast! right now it's just the template of the vanilla TS transformer.
BTW I sent you an email :)<|||||>Hi, @NielsRogge and @kashif ๐
Maybe you have an example for a conversion script?
I'm following the [How to add a model to 🤗 Transformers?](https://huggingface.co/docs/transformers/add_new_model) guide, section six "Write a conversion script":
> Don't hesitate to ask the Hugging Face team to point you to a similar already existing conversion script for your model.
Thank you so much,
Eli<|||||>thanks! having a look!<|||||>> thanks! having a look!
Work is still in progress, but you might have a look if you have time :)
And by the way, your implemention and the vanilla TS are helping me a lot!<|||||>Hi @kashif, I fixed the final attention output of ProbSparseAttention, and added the ProbMask. In more detail:
# Major
1. Added calculation of the final `attn_output` using `v_aggregated`, meaning steps 7 & 8 in the following:
<img width="446" alt="image" src="https://user-images.githubusercontent.com/17675462/218316971-ce4dbe9e-d677-48e6-9184-0043ca179e6a.png">
**Reference:** [Informer paper](https://arxiv.org/abs/2012.07436), **Section:** "Implement of the ProbSparse self-attention"
2. Added ProbMask, function name `_prepare_decoder_prob_attention_mask`.
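For reference, a condensed sketch of what steps 7 & 8 boil down to in my port (shapes and names are illustrative only; the real code also handles the head dimension):
```python
import torch

def probsparse_final_output(v, attn_weights, top_u_index, masked=False):
    # v: (batch, seq_len, head_dim)
    # attn_weights: (batch, u, seq_len) softmax scores for the top-u "active" queries
    # top_u_index: (batch, u) positions of those queries
    if masked:
        # step 7 in decoder self-attention: lazy queries take a cumulative sum of V
        context = v.cumsum(dim=1)
    else:
        # step 7: lazy queries are approximated with the mean of V
        context = v.mean(dim=1, keepdim=True).expand(-1, v.size(1), -1).clone()
    # step 8: overwrite the rows of the active queries with their real attention output
    v_aggregated = torch.bmm(attn_weights, v)  # (batch, u, head_dim)
    context[torch.arange(v.size(0)).unsqueeze(-1), top_u_index] = v_aggregated
    return context
```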
# Minor
1. Comment-in attention dropout in `ProbSparseAttention` since the original impl didn't use it.
2. Removed `attention_mask` for the encoder, since the original impl doesn't apply it to the encoder, only to the decoder.
3. Added `self.attn = config.attn` in the decoder, to later check whether to create a ProbMask or the standard causal mask.
4. Removed unused code from the original impl.
Now the tests are failing mostly because of assertion errors. Before continuing to fix them, I would appreciate it if you could have a look :)
Thanks,
Eli<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>thanks @sgugger will get it fixed!<|||||>@kashif fixed what I could from Sylvain comments.
The main thing is that some tests are breaking after this fix https://github.com/huggingface/transformers/pull/21099/commits/b4cbddfa05e3bd739b79569cd3c3b89e316f2451
<|||||>@sgugger I have fixed all the assert issues in the PR #21846 and will fix the copies here when that gets merged |
transformers | 21,098 | closed | TokenGT for graph classification | # What does this PR do?
Adds the TokenGT model for graph classification in Transformers.
Done:
- [x] Architecture ported
- [x] Collator (the model has no tokenizer) and preprocessing
Todo:
- [ ] Test results against original implementation, to make sure they are within precision range. Edit: exactly same results :fire:
- [ ] Add checkpoints and make sure they load properly
- [ ] Update doc
- [ ] Update test suite
- [ ] Add model card for the checkpoints once added
## Dependencies
Cython - this could be ported to Python, but preprocessing will be considerably slower, as well as collation if preprocessing is done on the fly.
Linked to #21079
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (Discussed on Slack)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Not tagging anyone for now as this is a draft. | 01-12-2023 09:48:12 | 01-12-2023 09:48:12 | @Raman-Kumar Here is a first draft to get you started!
I suggest you start by finding a checkpoint, then try to compare the execution step by step with the original model to make sure results are the same (I can provide you with the script I used for Graphormer if you need). I also added some todos in the code, which can help you get started too! Feel free to compare everything with the Graphormer PR to get an idea of the process and things to do!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>๐ Okay doing that ...
@clefourrier may need a slightly more relaxed timeline, studying a few more things<|||||>@Raman-Kumar Take the time you need, there is no urgency on my side; feel free to ping me if you need help later on!<|||||>Sorry for accidentally closing it<|||||>Closing and replacing with #21745
transformers | 21,097 | closed | Errors when using transformers dev install | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.0-57-generic-x86_64-with-glibc2.35
- Python version: 3.9.0
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.0 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger @Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. `python -m pip uninstall transformers`
2. `python -m pip install -e ".[dev]"`
3. `make fix-copies`
Here's the error message when I run the copy checking function:
```
$ python utils/check_copies.py --fix_and_overwrite
2023-01-11 23:05:36.590052: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-11 23:05:36.726808: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-01-11 23:05:36.726831: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2023-01-11 23:05:37.362737: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-01-11 23:05:37.362802: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-01-11 23:05:37.362811: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
```
I played around with it and a similar error occurs whenever I try to import transformers:
```
>>> import transformers
2023-01-11 23:21:57.034819: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-11 23:21:57.183106: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-01-11 23:21:57.183131: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2023-01-11 23:21:57.863217: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-01-11 23:21:57.863294: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-01-11 23:21:57.863305: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
```
And after some more poking, it looks like whenever I use Tensorflow (but not Pytorch) this error pops up:
```
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
2023-01-11 23:26:25.903590: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:267] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2023-01-11 23:26:25.903681: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ziggy-ThinkPad-T480): /proc/driver/nvidia/version does not exist
2023-01-11 23:26:25.905701: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
```
I have no idea why it keeps trying to use CUDA related functions when I don't have a GPU. Does the dev install assume that I have a GPU?
### Expected behavior
I expect the script to output something of this sort:
```
python utils/check_copies.py --fix_and_overwrite
Detected changes, rewriting src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py.
```
The above output was from a colab notebook (with gpu) that I installed the dev environment onto. | 01-12-2023 04:32:01 | 01-12-2023 04:32:01 | This is not an error message, but a warning. It comes from TensorFlow warning you that you don't have any GPU installed. This is not linked to Transformers.<|||||>Thanks! I see now that it was just taking a very long time to run, so I assumed the message was an error.<|||||>This is actually quite an odd error, though - I don't get it when I run `fix_copies`, and I can't think of why TF would even be imported in that script. Marking this as 'mysterious', and I'll come back to it if I ever figure out why it was happening to you!<|||||>These warnings also happen to me when I dev install transformers on Google Colab.
Moreover, it seems that the first generation with `model.generate` is slower when this happens. There is a freeze of a few seconds in this first inference. |
transformers | 21,096 | closed | WIP: Added basic eos token based pooling | # What does this PR do?
This PR is still a WIP. This is based on [this issue](https://github.com/huggingface/transformers/issues/21029). The main problem is that when new tokens are added to the tokenizer and text model and then learned, such as with [textual inversion](https://textual-inversion.github.io/), the CLIP text model pools at the wrong location: pooling ends up at the newly added token's position rather than at the eos token position.
Fixes #21029
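To illustrate the problem this PR addresses, roughly (a standalone sketch, not the actual model code):
```python
import torch

eos_token_id = 2
# sequence containing a newly added token with id 10 (larger than the eos id)
input_ids = torch.tensor([[0, 10, 5, eos_token_id, 1]])
last_hidden_state = torch.randn(1, 5, 4)
batch_idx = torch.arange(input_ids.size(0))

# current behaviour (simplified): pool at the position of the largest token id,
# which here is the added token at position 1, not the eos token
pooled_old = last_hidden_state[batch_idx, input_ids.argmax(dim=-1)]

# behaviour this PR aims for (sketch): pool at the first eos position (index 3)
eos_positions = (input_ids == eos_token_id).int().argmax(dim=-1)
pooled_new = last_hidden_state[batch_idx, eos_positions]
```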
## Before submitting
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Models:
- text models: @ArthurZucker and @younesbelkada | 01-12-2023 02:41:10 | 01-12-2023 02:41:10 | Current informal tests to confirm that behavior isn't changed:
In original
```
from transformers import CLIPTokenizer, CLIPTextModel
tokenizer=CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer")
text_encoder=CLIPTextModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="text_encoder").to("cuda")
encoded = text_encoder(**tokenizer(["hi my name is bob"], padding=True, return_tensors="pt").to("cuda"))
encoded.pooler_output
```
gives
```
tensor([[ 4.5786e-01, 6.4382e-02, 6.3140e-01, 6.2113e-01, 9.6310e-01,
-9.7793e-02, -7.2665e-01, -7.6867e-01, 6.6914e-02, 6.3608e-01,
1.4786e+00, -1.9321e-01, -1.1228e+00, 1.8028e+00, 8.4215e-01,
-5.4223e-01, -6.0589e-01, 1.1507e+00, 1.3731e-01, 8.6263e-01,
...
```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Ok! Passed the first test and got the same result as above. The main problem I have with the current solution is we need to get the eos_token_id from the tokenizer like
```
encoded = text_encoder(**tokenizer(["hi my name is bob"], padding=True, return_tensors="pt", eos_token_id=tokenizer.eos_token_id).to("cuda"))
```
There might be ways to save eos_token_id in the transformer part but for now, I think this does fix the problem. Will make more tests in the future and improve quality.<|||||>Hey, quick tip, you can also initialize a class variable `eos_token_id` for example, which can be fetch in the `config` at `__init__` time.
Now regarding the issue, I have to ask why not use the same technique as in `CLIP`? You mention pros and cons, would you mind giving more details?
<|||||>@ArthurZucker Thanks for the tip! Will fix that asap.
And for the original implementation of CLIP, sorry let me clarify. I meant the original implementation of how CompVis/latent-diffusion trains on added tokens. In there, they force the placeholder token/added token to be a single token that is already within the embeddings during the textual inversion training.
Pro: With this approach, it'll work for the current implementation of clip
Con: It's a pretty strict requirement that the current textual inv script in diffusers doesn't have.
But overall, I think this pr, once I fix things up, won't cause any problems to the existing implementations because we want to pool on the eos token anyway and it'll also end up working for the textual inversion scripts in diffusers which will be nice.<|||||>@ArthurZucker Hi! In the config, I noticed that the eos_token_id for the clip text model can be different from the tokenizer as follows
```
CLIPTextConfig {
"_name_or_path": "CompVis/stable-diffusion-v1-4",
"architectures": [
"CLIPTextModel"
],
"attention_dropout": 0.0,
"bos_token_id": 0,
"dropout": 0.0,
"eos_token_id": 2,
"hidden_act": "quick_gelu",
"hidden_size": 768,
"initializer_factor": 1.0,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 77,
"model_type": "clip_text_model",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"projection_dim": 512,
"torch_dtype": "float32",
"transformers_version": "4.26.0.dev0",
"vocab_size": 49408
}
```
is there some reason for this? I'll try testing out vocab_size-1(altho I don't think it's a good idea) for now<|||||>Will fix the checks.<|||||>Pulled from upstream so that some tests can pass. And changed a code bit and added documentation. I do think a better solution might be to fix the eos_token_id in the config. I'll try figuring out how to do that<|||||>Actually, this made the code diff a bit messy so closing this now and will make a new pr.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21096). All of your documentation changes will be reflected on that endpoint.<|||||>Also the reason for the possible difference between the `model.config.xxx_token_id` and `tokenizer.config.xxx_token_id` is because they are not linked together. We usually make sure that they have the same value but nothing is forcing that. Biggest reason I see is dependency, and simplicity since you could use other tokenizer with the same model and vice versa. ๐ <|||||>Thanks for the comment! Will post q on the new pr just for other people who want to follow. |
transformers | 21,095 | closed | Clarify and add missing typical_p argument docstring. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This adds an argument docstring of locally typical sampling (implemented in #15504) that was missing in `src/transformers/configuration_utils.py` and clarifies the existing docstring in `src/transformers/generation/configuration_utils.py`.
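For context, `typical_p` is used like the other sampling parameters in `generate` (a quick sketch, values picked arbitrarily):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Locally typical sampling keeps tokens whose information content", return_tensors="pt")
# typical_p < 1.0 enables typical decoding; do_sample must be True for it to have any effect
outputs = model.generate(**inputs, do_sample=True, typical_p=0.9, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```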
## Who can review?
@stevhliu @patrickvonplaten | 01-12-2023 02:40:35 | 01-12-2023 02:40:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante as well |
transformers | 21,094 | closed | device_map='auto' causes memory to not be freed with torch.cuda.empty_cache() | ### System Info
If I load a model like this
```
model = AutoModelForCausalLM.from_pretrained("models/opt-13b", device_map='auto', load_in_8bit=True)
```
and then do
```
model = None
torch.cuda.empty_cache()
```
the VRAM is not freed. The only way I have found to release it is to kill the program and start it over.
Freeing the memory like this works if the model is loaded the normal way with
```
model = AutoModelForCausalLM.from_pretrained("models/opt-13b", low_cpu_mem_usage=True, torch_dtype=torch.float16).cuda()
```
@ArthurZucker @younesbelkada
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code snippet is provided in the description above.
### Expected behavior
VRAM should be freed with `torch.cuda.empty_cache()`. | 01-12-2023 02:10:39 | 01-12-2023 02:10:39 | That is an interesting bug, but should probably be addressed in `accelerate` as using `device_map = "auto"` is reliant on the accelerate library. <|||||>I think it's the 8bit part that may be causing the issue, actually ;-) <|||||>I was able to reproduce the issue with `accelerate` loaded models (without 8-bit):
```
import torch
from transformers import AutoModelForCausalLM
model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", device_map="auto", load_in_8bit=True)
model_accelerate = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", device_map="auto", torch_dtype=torch.float16)
model_torch = AutoModelForCausalLM.from_pretrained(
"facebook/opt-350m", low_cpu_mem_usage=True, torch_dtype=torch.float16
).cuda()
del model_accelerate
torch.cuda.empty_cache()
del model_8bit
torch.cuda.empty_cache()
del model_torch
torch.cuda.empty_cache()
```
With this script the GPU VRAM is freed only after the lines:
```
del model_torch
torch.cuda.empty_cache()
```
I also profiled the GPU memory and observed that the allocated memory decreases after the aforementioned line.
<img width="1134" alt="Screenshot 2023-01-16 at 11 06 14" src="https://user-images.githubusercontent.com/49240599/212652122-796186e4-c476-42b6-b628-7def9d7ea3d0.png">
Note that the 8bit Linear modules seem to behave correctly with respect to `torch.cuda.empty_cache`, i.e.:
```
import torch
from transformers import AutoModelForCausalLM
import bitsandbytes as bnb
model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", device_map="auto", load_in_8bit=True)
model_accelerate = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", device_map="auto", torch_dtype=torch.float16)
model_torch = AutoModelForCausalLM.from_pretrained(
"facebook/opt-350m", low_cpu_mem_usage=True, torch_dtype=torch.float16
).cuda()
linear_8bit = bnb.nn.Linear8bitLt(10000, 10000).to("cuda")
del model_accelerate
torch.cuda.empty_cache()
del model_torch
torch.cuda.empty_cache()
del model_8bit
torch.cuda.empty_cache()
del linear_8bit
torch.cuda.empty_cache()
```
The VRAM goes correctly down after the line:
```
del linear_8bit
torch.cuda.empty_cache()
```
So the issue should be on `accelerate` side but not sure where exactly, will investigate more but if you have more insights @sgugger @muellerzr would love to hear from you!<|||||>I have tried and can indeed reproduce without the 8bit loading. I don't know why the cache appears nonempty, but iterating on a loop (re-creating the model and then deleting it several times) does not result in a memory increase, so the memory is reused when we need it.
If someone manages to find more about this, I'd be very interested to learn why this is the case.<|||||>```
gc.collect()
torch.cuda.empty_cache()
gc.collect()
```
may free memory.<|||||>I can confirm that ~690MB seems to be freed (monitoring with `nvidia-smi`) - which corresponds to the size of the weights of opt-350 in fp16 - thanks to the trick proposed by @git-longcat
```
import torch
import gc
from transformers import AutoModelForCausalLM
model_accelerate = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", device_map={"":0}, torch_dtype=torch.float16)
model_torch = AutoModelForCausalLM.from_pretrained(
"facebook/opt-350m", low_cpu_mem_usage=True, torch_dtype=torch.float16
).cuda()
del model_accelerate
gc.collect()
torch.cuda.empty_cache()
del model_torch
torch.cuda.empty_cache()
```
@oobabooga can you try on your side and let us know if this trick works?
It seems crucial to call `gc.collect()` before `torch.cuda.empty_cache()` and not after <|||||>@git-longcat @younesbelkada I confirm that this works. Thank you.
In fact, all I needed was
```
model = None
gc.collect()
torch.cuda.empty_cache()
```
This doesn't work:
```
model = None
torch.cuda.empty_cache()
gc.collect()
```
This resolves the issue for me. I don't know if it can still be considered a bug or not.<|||||>@younesbelkada @oobabooga in Accelerate dev now (install via `pip install git+https://github.com/huggingface/accelerate`) I've introduced a `release_memory` util that will perform the above for `n` objects easily:
```python
>>> import torch
>>> from accelerate.utils import release_memory
>>> a = torch.ones(1000, 1000).cuda()
>>> b = torch.ones(1000, 1000).cuda()
>>> a, b, model = release_memory(a, b, model)
```
Just ensure that the objects are in the same order they were passed in, so the memory will be fully overridden. (see https://github.com/huggingface/accelerate/pull/990 for more information)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,093 | closed | NaN when training "t5-small" with parallelize() on multiple GPUs | ### System Info
* Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.27
* Python version: 3.10.8
* transformers version: 4.25.1
* huggingface-hub version: 0.11.1
* PyTorch version (GPU?): pytorch_1.13.1-cuda11.6-cudnn8-runtime
* Using GPU in script?: yes
* Using distributed or parallel set-up in script?: yes
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I use "t5-small" to generate target text from source text, if I set ```model.parallelize()``` before getting the loss, the loss will be ```nan```. But if I just set ```model.cuda()```, the loss will be normal. Is there anything wrong with the ```parallelize()``` function? Because as far as I know, pytorch does not need any special settings for backward() and parameter updates when using model parallelism.
Here is a toy sample:
```python
## Data
source = ['[ equal statistic job customer ostrich orange badger blue bull daisy giraffe hamster ivy rabbit possum whale cashew rectangle oval square pecan ] [ you orchid orange frog grey ivy racoon potato whale flax cylinder fennel pumpkin leopard ] [ desire otter mole albatross bat buffalo cat daisy grey hedgehog holly racoon squirrel potato whale apricot cylinder heart fennel raisin lilac ] [ action watch speak otter orange alligator bat bull clover daisy fish green horse racoon potato whale apricot rectangle circle fennel pecan lavender ] [ emotion ostrich orange alligator badger black cobra daisy fish green hazel horse squirrel potato wolf rectangle oval fennel pecan leopard ]', '[ problem ostrich mule badger black bull fox green hazel ivy rabbit potato whale sunflower sphere circle triangle pecan lemur ] [ company frequency time visual laws owl mule alligator badger black clover deer donkey emu grey horse racoon possum whale apricot rectangle oval fennel pecan lavender ] [ lion otter orange baboon bull cat daisy flamingo green iris racoon possum vulture apricot rectangle fennel raisin lemur ] [ equal orchid orange blossom camel chameleon deer fish green jackal possum whale rectangle oval triangle pecan ] [ equal conversation speak orchid orange bat camel deer fish grey iris racoon possum whale flax rectangle triangle pumpkin lemur ]']
target = ['Later on in a career, 300 people are clients. You have their attention. The only attention needed is to make up for attrition or to continue growth at the desired rate. With that being said, be on the lookout for me in some crazy Hawaiian shirts. Maybe a fun tie when I have to dress up.', 'To his detriment, I donโt remember anything else. I thought of the rule again a few days ago because of a Hawaiian style, Sandlot movie shirt. I was the person wearing it, and yes, I did receive all kinds of attention. Or maybe, it was the shirt. Either way people were talking to me.']
modelName = "t5-small"
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(modelName, model_max_length=512)
source_tokens = [tokenizer(i) for i in source]
target_tokens = [tokenizer(i) for i in target]
# Model & Optimizer
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained(modelName)
model.parallelize() #Model Parallelism
# model.to('cuda:0')
import torch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
## Train
if __name__ == '__main__':
for epoch in range(10):
for i in range(2):
loss = model(input_ids=torch.tensor(source_tokens[i]['input_ids']).unsqueeze(0).to('cuda:0'),
attention_mask=torch.tensor(source_tokens[i]['attention_mask']).unsqueeze(0).to('cuda:0'),
labels=torch.tensor(target_tokens[i]['input_ids']).unsqueeze(0).to('cuda:0')).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(loss)
```
### Expected behavior
The loss shouldn't be nan when setting ```model.parallelize()```; it should be the same as when setting ```model.to('cuda:0')```. | 01-12-2023 00:29:01 | 01-12-2023 00:29:01 | `parallelize()` is a deprecated function and should not be used. You should use the `accelerate` library, see [here](https://github.com/huggingface/accelerate) <|||||>@ArthurZucker
Thank you for your help. However, I use parallelize() because it can distribute the layers of a model across different GPUs, as shown below:

Some models in T5ForConditionalGeneration are so big (for example, t5-11b is 45G) that they can't be put on a single GPU.
But when I use accelerate to distribute the model, I find it seems to use data parallelism, and still puts all layers of a model on a single GPU, as shown below:

Could you please tell me how I can write code that trains the model with model parallelism? Thank you!<|||||>Hi @MrZhangKY
In this case you can use `device_map='balanced'`, the script below worked for me (no NaN loss) on 2xNVIDIA T4 GPU:
```python
## Data
source = ['[ equal statistic job customer ostrich orange badger blue bull daisy giraffe hamster ivy rabbit possum whale cashew rectangle oval square pecan ] [ you orchid orange frog grey ivy racoon potato whale flax cylinder fennel pumpkin leopard ] [ desire otter mole albatross bat buffalo cat daisy grey hedgehog holly racoon squirrel potato whale apricot cylinder heart fennel raisin lilac ] [ action watch speak otter orange alligator bat bull clover daisy fish green horse racoon potato whale apricot rectangle circle fennel pecan lavender ] [ emotion ostrich orange alligator badger black cobra daisy fish green hazel horse squirrel potato wolf rectangle oval fennel pecan leopard ]', '[ problem ostrich mule badger black bull fox green hazel ivy rabbit potato whale sunflower sphere circle triangle pecan lemur ] [ company frequency time visual laws owl mule alligator badger black clover deer donkey emu grey horse racoon possum whale apricot rectangle oval fennel pecan lavender ] [ lion otter orange baboon bull cat daisy flamingo green iris racoon possum vulture apricot rectangle fennel raisin lemur ] [ equal orchid orange blossom camel chameleon deer fish green jackal possum whale rectangle oval triangle pecan ] [ equal conversation speak orchid orange bat camel deer fish grey iris racoon possum whale flax rectangle triangle pumpkin lemur ]']
target = ['Later on in a career, 300 people are clients. You have their attention. The only attention needed is to make up for attrition or to continue growth at the desired rate. With that being said, be on the lookout for me in some crazy Hawaiian shirts. Maybe a fun tie when I have to dress up.', 'To his detriment, I donโt remember anything else. I thought of the rule again a few days ago because of a Hawaiian style, Sandlot movie shirt. I was the person wearing it, and yes, I did receive all kinds of attention. Or maybe, it was the shirt. Either way people were talking to me.']
modelName = "t5-small"
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(modelName, model_max_length=512)
source_tokens = [tokenizer(i) for i in source]
target_tokens = [tokenizer(i) for i in target]
# Model & Optimizer
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained(modelName, device_map="balanced")
print(set(model.hf_device_map.values()))
# model.parallelize() #Model Parallelism
# model.to('cuda:0')
import torch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
## Train
if __name__ == '__main__':
for epoch in range(10):
for i in range(2):
loss = model(input_ids=torch.tensor(source_tokens[i]['input_ids']).unsqueeze(0),
attention_mask=torch.tensor(source_tokens[i]['attention_mask']).unsqueeze(0),
labels=torch.tensor(target_tokens[i]['input_ids']).unsqueeze(0).to(0)).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(loss)
```
Make sure to have your model dispatched by printing `set(model.hf_device_map.values())`, or you can manually inspect `set(model.hf_device_map)`
If you want to set a custom device map you can pass a dictionary such as:
```
custom_device_map = {
"shared": 0,
"encoder": 0,
"decoder": 1,
"decoder.embed_tokens":0,
"lm_head": 0,
}
```
and pass it at initialization: `model = T5ForConditionalGeneration.from_pretrained(modelName, device_map=custom_device_map)`. Although note that you need to manually set `"decoder.embed_tokens":0,` since the `embed_tokens` are shared between the encoder and decoder, so you need to make sure they are on the same device (maybe this can be addressed in the future but I think this is intended - otherwise you would need 2 copies of the embedding layer even though they are the same).<|||||>@younesbelkada
Thank you very much for your help!
However, when I run the code using 4*A6000 GPUs, there are some errors:
```cmd
{0, 1, 2}
tensor(10.3775, grad_fn=<ToCopyBackward0>)
/opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [67,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same indexSelectLargeIndex assertion "srcIndex < srcSelectDimSize failed" is repeated for threads [97,0,0] through [127,0,0] of block [67,0,0] ...]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[1], line 31
29 for epoch in range(10):
30 for i in range(2):
---> 31 loss = model(input_ids=torch.tensor(source_tokens[i]['input_ids']).unsqueeze(0),
32 attention_mask=torch.tensor(source_tokens[i]['attention_mask']).unsqueeze(0),
33 labels=torch.tensor(target_tokens[i]['input_ids']).unsqueeze(0).to(0)).loss
34 loss.backward()
35 optimizer.step()
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:156, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
154 output = old_forward(*args, **kwargs)
155 else:
--> 156 output = old_forward(*args, **kwargs)
157 return module._hf_hook.post_forward(module, output)
File /opt/conda/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:1648, in T5ForConditionalGeneration.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1645 decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device)
1647 # Decode
-> 1648 decoder_outputs = self.decoder(
1649 input_ids=decoder_input_ids,
1650 attention_mask=decoder_attention_mask,
1651 inputs_embeds=decoder_inputs_embeds,
1652 past_key_values=past_key_values,
1653 encoder_hidden_states=hidden_states,
1654 encoder_attention_mask=attention_mask,
1655 head_mask=decoder_head_mask,
1656 cross_attn_head_mask=cross_attn_head_mask,
1657 use_cache=use_cache,
1658 output_attentions=output_attentions,
1659 output_hidden_states=output_hidden_states,
1660 return_dict=return_dict,
1661 )
1663 sequence_output = decoder_outputs[0]
1665 # Set device for model parallelism
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:988, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
985 position_bias = None
986 encoder_decoder_position_bias = None
--> 988 hidden_states = self.dropout(inputs_embeds)
990 for i, (layer_module, past_key_value) in enumerate(zip(self.block, past_key_values)):
991 layer_head_mask = head_mask[i]
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:151, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
149 @functools.wraps(old_forward)
150 def new_forward(*args, **kwargs):
--> 151 args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
152 if module._hf_hook.no_grad:
153 with torch.no_grad():
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:266, in AlignDevicesHook.pre_forward(self, module, *args, **kwargs)
261 for name, _ in named_module_tensors(
262 module, include_buffers=self.offload_buffers, recurse=self.place_submodules
263 ):
264 set_module_tensor_to_device(module, name, self.execution_device, value=self.weights_map[name])
--> 266 return send_to_device(args, self.execution_device), send_to_device(kwargs, self.execution_device)
File /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:131, in send_to_device(tensor, device, non_blocking)
128 def _has_to_method(t):
129 return hasattr(t, "to")
--> 131 return recursively_apply(_send_to_device, tensor, device, non_blocking, test_type=_has_to_method)
File /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:80, in recursively_apply(func, data, test_type, error_on_other_type, *args, **kwargs)
58 """
59 Recursively apply a function on a data structure that is a nested list/tuple/dictionary of a given base type.
60
(...)
77 The same data structure as `data` with `func` applied to every object of type `main_type`.
78 """
79 if isinstance(data, (tuple, list)):
---> 80 return honor_type(
81 data,
82 (
83 recursively_apply(
84 func, o, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs
85 )
86 for o in data
87 ),
88 )
89 elif isinstance(data, Mapping):
90 return type(data)(
91 {
92 k: recursively_apply(
(...)
96 }
97 )
File /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:51, in honor_type(obj, generator)
47 """
48 Cast a generator to the same type as obj (list, tuple or namedtuple)
49 """
50 try:
---> 51 return type(obj)(generator)
52 except TypeError:
53 # Some objects may not be able to instantiate from a generator directly
54 return type(obj)(*list(generator))
File /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:83, in <genexpr>(.0)
58 """
59 Recursively apply a function on a data structure that is a nested list/tuple/dictionary of a given base type.
60
(...)
77 The same data structure as `data` with `func` applied to every object of type `main_type`.
78 """
79 if isinstance(data, (tuple, list)):
80 return honor_type(
81 data,
82 (
---> 83 recursively_apply(
84 func, o, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs
85 )
86 for o in data
87 ),
88 )
89 elif isinstance(data, Mapping):
90 return type(data)(
91 {
92 k: recursively_apply(
(...)
96 }
97 )
File /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:99, in recursively_apply(func, data, test_type, error_on_other_type, *args, **kwargs)
90 return type(data)(
91 {
92 k: recursively_apply(
(...)
96 }
97 )
98 elif test_type(data):
---> 99 return func(data, *args, **kwargs)
100 elif error_on_other_type:
101 raise TypeError(
102 f"Can't apply {func.__name__} on object of type {type(data)}, only of nested list/tuple/dicts of objects "
103 f"that satisfy {test_type.__name__}."
104 )
File /opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py:124, in send_to_device.<locals>._send_to_device(t, device, non_blocking)
122 def _send_to_device(t, device, non_blocking):
123 try:
--> 124 return t.to(device, non_blocking=non_blocking)
125 except TypeError: # .to() doesn't accept non_blocking as kwarg
126 return t.to(device)
RuntimeError: CUDA error: device-side assert triggered
```
The code I run:
```python
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
## Data
source = ['[ equal statistic job customer ostrich orange badger blue bull daisy giraffe hamster ivy rabbit possum whale cashew rectangle oval square pecan ] [ you orchid orange frog grey ivy racoon potato whale flax cylinder fennel pumpkin leopard ] [ desire otter mole albatross bat buffalo cat daisy grey hedgehog holly racoon squirrel potato whale apricot cylinder heart fennel raisin lilac ] [ action watch speak otter orange alligator bat bull clover daisy fish green horse racoon potato whale apricot rectangle circle fennel pecan lavender ] [ emotion ostrich orange alligator badger black cobra daisy fish green hazel horse squirrel potato wolf rectangle oval fennel pecan leopard ]', '[ problem ostrich mule badger black bull fox green hazel ivy rabbit potato whale sunflower sphere circle triangle pecan lemur ] [ company frequency time visual laws owl mule alligator badger black clover deer donkey emu grey horse racoon possum whale apricot rectangle oval fennel pecan lavender ] [ lion otter orange baboon bull cat daisy flamingo green iris racoon possum vulture apricot rectangle fennel raisin lemur ] [ equal orchid orange blossom camel chameleon deer fish green jackal possum whale rectangle oval triangle pecan ] [ equal conversation speak orchid orange bat camel deer fish grey iris racoon possum whale flax rectangle triangle pumpkin lemur ]']
target = ['Later on in a career, 300 people are clients. You have their attention. The only attention needed is to make up for attrition or to continue growth at the desired rate. With that being said, be on the lookout for me in some crazy Hawaiian shirts. Maybe a fun tie when I have to dress up.', 'To his detriment, I donโt remember anything else. I thought of the rule again a few days ago because of a Hawaiian style, Sandlot movie shirt. I was the person wearing it, and yes, I did receive all kinds of attention. Or maybe, it was the shirt. Either way people were talking to me.']
modelName = "t5-small"
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(modelName, model_max_length=512)
source_tokens = [tokenizer(i) for i in source]
target_tokens = [tokenizer(i) for i in target]
# Model & Optimizer
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained(modelName, device_map="balanced")
print(set(model.hf_device_map.values()))
# model.parallelize() #Model Parallelism
# model.to('cuda:0')
import torch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
## Train
if __name__ == '__main__':
for epoch in range(10):
for i in range(2):
loss = model(input_ids=torch.tensor(source_tokens[i]['input_ids']).unsqueeze(0),
attention_mask=torch.tensor(source_tokens[i]['attention_mask']).unsqueeze(0),
labels=torch.tensor(target_tokens[i]['input_ids']).unsqueeze(0).to(0)).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(loss)
```
Is there any problem when using more than 2 GPUs?<|||||>Interesting, can you run the same script on CPU?
Whenever you have a `RuntimeError: CUDA error: device-side assert triggered` a good practice is to run the same script on CPU and check the error message<|||||>@younesbelkada
It's strange. When I change the environment to 2 GPUs, it works....<|||||>@younesbelkada
I think there are some problems when using more than 2 GPUs (for example, 4 GPUs). Do you have plans to fix this problem?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue is still occurring on the newest transformers version: 4.26.1. I also managed to train on two GPUs, but when I increase the number of GPUs, I get the error "RuntimeError: CUDA error: device-side assert triggered". |
transformers | 21,092 | closed | Add Epsilon and Eta sampling | ### Feature request
I would like to add Epsilon and Eta sampling from [Truncation Sampling as Language Model Desmoothing](https://arxiv.org/abs/2210.15191). They are novel truncation sampling decoding algorithms that have led to better human judgement scores and less repetition than nucleus sampling.
- [Official repository](https://github.com/john-hewitt/truncation-sampling)
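For reference, a rough standalone sketch of the truncation rule from the paper (my own simplification, not the authors' reference code; in `transformers` this would presumably be implemented as a `LogitsWarper`):
```python
import torch

def truncation_filter(logits, epsilon=3e-4, adaptive=False):
    """Mask out (set to -inf) tokens whose probability falls below a threshold before sampling."""
    probs = torch.softmax(logits, dim=-1)
    if adaptive:
        # eta-sampling: entropy-adaptive threshold eta = min(epsilon, sqrt(epsilon) * exp(-entropy))
        entropy = -(probs * probs.clamp_min(1e-10).log()).sum(dim=-1, keepdim=True)
        threshold = torch.minimum(
            torch.tensor(epsilon), torch.sqrt(torch.tensor(epsilon)) * torch.exp(-entropy)
        )
    else:
        # epsilon-sampling: fixed probability threshold
        threshold = torch.tensor(epsilon)
    filtered = logits.masked_fill(probs < threshold, float("-inf"))
    # always keep the most probable token so there is something left to sample
    top1 = probs.argmax(dim=-1, keepdim=True)
    return filtered.scatter(-1, top1, logits.gather(-1, top1))

logits = torch.randn(2, 50257)  # e.g. a GPT-2-sized vocabulary
probs = torch.softmax(truncation_filter(logits, epsilon=3e-4, adaptive=True), dim=-1)
next_tokens = torch.multinomial(probs, num_samples=1)
```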
### Motivation
I would like to generate more human-like and less repetitious text with Huggingface models.
### Your contribution
I am able to submit a PR for this. | 01-11-2023 22:23:25 | 01-11-2023 22:23:25 | cc @gante <|||||>Yesterday's ACL 2023 tutorial on "Generating Text from Large Language Models" covers eta-sampling and more! John Hewitt, the first author of the eta-sampling paper, was one of the presenters for that tutorial!
Site: https://rycolab.io/classes/acl-2023-tutorial/
Slides: https://drive.google.com/file/d/1UHbGcjzBURG1n2DufC7iDTmGNjIz5Dp_/view |
transformers | 21,091 | closed | Support for custom model transform in `PreTrainedModel.from_pretrained` | Add the possibility to pass a custom transform with signature `(model: torch.nn.modules.module.Module, **kwargs) -> None` that can do any transform on the model.
This can be seen as an extension of the bitsandbytes integration, to be able to pass any transform that e.g. modifies the model's keys. A direct application is an easier integration of SmoothQuant or k-bit quantization in transformers. Defining the transform should be left to an external library.
Some other necessary code modifications could be missing; I see the bitsandbytes integration modified a bit more.
This is just a draft to see if it's a good idea or not.
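To make the idea concrete, usage could look roughly like this (the `model_transform` keyword is hypothetical and does not exist on `main`; exact name and placement are up for discussion):
```python
import torch.nn as nn
from transformers import AutoModelForCausalLM

def freeze_input_embeddings(model: nn.Module, **kwargs) -> None:
    """Toy transform with the proposed signature: modifies the model in place."""
    for param in model.get_input_embeddings().parameters():
        param.requires_grad = False

# hypothetical keyword explored in this draft; the transform would run after the
# model is instantiated and before the state dict is loaded
model = AutoModelForCausalLM.from_pretrained("gpt2", model_transform=freeze_input_embeddings)
```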
## Before submitting
- [ ] write doc
- [ ] write tests
- [ ] check if it works with fx.GraphModule
## Who can review?
@sgugger I'd be happy to have your opinion on this. | 01-11-2023 17:59:16 | 01-11-2023 17:59:16 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21091). All of your documentation changes will be reflected on that endpoint.<|||||>I see the point, but then isn't bitsandbytes integration itself against the transformers philosophy as well? Why was bitsandbytes integrated in `from_pretrained`? This PR allows as an opt-in to plug external code in transformers, it does not change the one-model-one-file experience for existing models using the vanilla code.
I'd see this kind of modularity as a compromise to allow to plug in transformers modifications to the modeling that users may want to do programmatically, but that we don't want to host in the canonical modeling implementations.
Edit: Ok so from our offline discussion, I understand that the idea in this PR breaks the "load from transformers" on the Hub, which is something we want to avoid. Model loading should work out of the box from the Hub. An alternative idea is to have a quantization config directly in the model's config, and do the transform from there. I'll have a look at this.<|||||>To fully put the result of our offline discussion in the open, the idea would be to:
- migrate from several arguments (like `load_in_8bit=True`) in `from_pretrained` to a quantization config as there are many other quantization methods we would like to support.
- when we quantize the model in `from_pretrained`, set it in the config (so for now indicate with flags which 8bit parameter have been used and when we migrate to a quantization config, put it in the model config)
- this way when the quantized model is pushed to the Hub, the information about how it was quantized is there. We can thus adapt the `from_pretrained` method to prepare the model just after its instantation and before the state dict is loaded using this information.
I'll also add this to the agenda of our next core maintainer meeting to make sure we are all fully aligned on this :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,090 | closed | Add: An introductory guide for text generation | Context: current [text generation doc](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation) is large, difficult to navigate, and may be overwhelming to users who are new to text generation.
I suggest splitting the document into two parts:
1. an introductory guide that explains the basics and provides some examples
2. trimmed down API reference to be used for looking up specific parameter descriptions (https://github.com/huggingface/transformers/pull/21112)
This PR is the first part. It adds a guide that introduces readers to the text generation strategies, explains the generation configuration defaults, and provides some examples. The second part will follow in a separate PR.
| 01-11-2023 17:55:46 | 01-11-2023 17:55:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi! Thank you all for the invaluable feedback! I have addressed all the comments and suggestions. Please let me know if there's anything else I need to improve or if we can merge this PR and https://github.com/huggingface/transformers/pull/21112. <|||||>I think both can be merged. Thank you @MKhalusova โค๏ธ |
transformers | 21,089 | closed | Different behavior in DistilBERT when using "inputs_embeds" | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-3.10.0-1160.80.1.0.1.el7.x86_64-x86_64-with-glibc2.27
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (True)
@ArthurZucker @younesbelkada
### Reproduction
Do a forward pass with distilBERT passing "inputs_embeds" instead of "input_ids", where "inputs_embeds" contains the output of the forward over the word embedding matrix, i.e. just picking the token embeddings.
When doing this, one would expect the same behaviour as other popular models like BERT or RoBERTa, but distilBERT skips "positional embeddings addition + layerNorm + dropout" as it skips the self.embeddings() call.
I would expect that passing inputs_embeds instead of input_ids has a coherent behaviour across different architectures, but it is not the case, at least for distilBERT. I am not sure if other models have this issue.
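A minimal script showing what I mean (a sketch, using the standard base checkpoint):
```python
import torch
from transformers import AutoTokenizer, DistilBertModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased").eval()
inputs = tokenizer("hello world", return_tensors="pt")

with torch.no_grad():
    out_ids = model(**inputs).last_hidden_state
    # pass the raw word embeddings instead of input_ids
    word_embeds = model.embeddings.word_embeddings(inputs["input_ids"])
    out_embeds = model(inputs_embeds=word_embeds, attention_mask=inputs["attention_mask"]).last_hidden_state

# the analogous comparison with BertModel prints True; here it prints False because
# position embeddings + LayerNorm + dropout are skipped in the inputs_embeds path
print(torch.allclose(out_ids, out_embeds, atol=1e-5))
```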
### Expected behavior
Properly build the input embedding by adding positional embeddings + layerNorm + dropout (as happens in modeling_bert or modeling_roberta). This does not happen as the call to self.embeddings() is skipped. | 01-11-2023 15:58:27 | 01-11-2023 15:58:27 | Hey, can you provide a reproducing script ? <|||||>@ArthurZucker I don't think I'll have time soon to prepare a reproducing script. But I'm happy to give more details on the issue found.
In distilbert when input_embeds is not None, the self.embedding layer is skipped completely:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/distilbert/modeling_distilbert.py
```
if inputs_embeds is None:
    inputs_embeds = self.embeddings(input_ids)  # (bs, seq_length, dim)

return self.transformer(
    x=inputs_embeds,
    attn_mask=attention_mask,
    head_mask=head_mask,
    output_attentions=output_attentions,
    output_hidden_states=output_hidden_states,
    return_dict=return_dict,
)
```
However, in BERT the self.embedding() call always happens and does the following:
https://github.com/huggingface/transformers/blob/7d2a5fa749d22f403fe6ceac7d62c003240aee45/src/transformers/models/bert/modeling_bert.py
```
if inputs_embeds is None:
    inputs_embeds = self.word_embeddings(input_ids)
token_type_embeddings = self.token_type_embeddings(token_type_ids)

embeddings = inputs_embeds + token_type_embeddings
if self.position_embedding_type == "absolute":
    position_embeddings = self.position_embeddings(position_ids)
    embeddings += position_embeddings
embeddings = self.LayerNorm(embeddings)
embeddings = self.dropout(embeddings)
```
So, when passing inputs_embeds, the call to self.word_embeddings() does not take place (it is assumed that inputs_embeds already contains the word embeddings), but the positional embedding addition, layer normalization and dropout still happen afterwards.
In summary, passing inputs_embeds to BERT (and other architectures) assumes that inputs_embeds are the word embeddings. However, in DistilBERT, if you pass the word embeddings as inputs_embeds you would be skipping the positional embedding addition, layer norm and dropout.
The docs do not say anything about this difference, and it does not seem reasonable to have to do the skipped operations manually before passing inputs_embeds to distilBERT.
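For reference, "doing the skipped operations manually" would look roughly like the sketch below (untested; it assumes the submodule names currently used inside DistilBERT's `Embeddings` class, i.e. `word_embeddings`, `position_embeddings`, `LayerNorm` and `dropout`):
```python
# Rough, untested sketch of the manual workaround. The submodule names below are
# an assumption based on the current modeling_distilbert.py source.
import torch
from transformers import AutoTokenizer, DistilBertModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("hello world", return_tensors="pt")
input_ids = inputs["input_ids"]

emb = model.embeddings
position_ids = torch.arange(input_ids.size(1), dtype=torch.long).unsqueeze(0)

# Redo by hand what the input_ids path would normally do inside the Embeddings module
inputs_embeds = emb.word_embeddings(input_ids)
inputs_embeds = inputs_embeds + emb.position_embeddings(position_ids)
inputs_embeds = emb.dropout(emb.LayerNorm(inputs_embeds))

outputs = model(inputs_embeds=inputs_embeds, attention_mask=inputs["attention_mask"])
```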
<|||||>Okay, this does not really need a reproduction script and I agree with you: the expected behaviour is that if you pre-compute the `inputs_embeds`, the output of the model should certainly be the same as if you were passing the input ids. This is true for most of our models. I have to check whether there is a particular reason for this in DistilBert, otherwise I will push a fix! <|||||>Great news @ArthurZucker ! |
transformers | 21,088 | closed | fix typo in comment | Signed-off-by: xiaoyang zhu <[email protected]>
fix typo in comment
| 01-11-2023 13:36:13 | 01-11-2023 13:36:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,087 | closed | GIT batch prediction seems to be broken | ### System Info
```
name : transformers
version : 4.26.0.dev0
```
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm trying to run image captioning in batches. The easiest way to try that was to change the example for video captioning [here](https://huggingface.co/docs/transformers/main/en/model_doc/git#transformers.GitForCausalLM.forward.example-3). According to [source code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/git/modeling_git.py#L1234), `pixel_values` must be in the shape of `(batch_size, num_frames, num_channels, height, width)` or `(batch_size, num_channels, height, width)`. But reshaping the `pixel_values` from the example to turn video captioning into batch image captioning throws the following error:
```RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 6 but got size 1 for tensor number 1 in the list.```
during `hidden_states = torch.cat((projected_visual_features, embedding_output), dim=1)` (line 1268 of modeling_git.py).
Here is the code for reproducibility:
```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image
import numpy as np
from huggingface_hub import hf_hub_download
from decord import VideoReader, cpu
processor = AutoProcessor.from_pretrained("microsoft/git-base-vatex")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-vatex")
# set seed for reproducibility
np.random.seed(45)
def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
converted_len = int(clip_len * frame_sample_rate)
end_idx = np.random.randint(converted_len, seg_len)
start_idx = end_idx - converted_len
indices = np.linspace(start_idx, end_idx, num=clip_len)
indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
return indices
def sample_frames(file_path, num_frames):
videoreader = VideoReader(file_path, num_threads=1, ctx=cpu(0))
videoreader.seek(0)
indices = sample_frame_indices(clip_len=num_frames, frame_sample_rate=4, seg_len=len(videoreader))
frames = videoreader.get_batch(indices).asnumpy()
return list(frames)
# load video
file_path = hf_hub_download(
repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
# sample frames
num_frames = model.config.num_image_with_embedding
print(num_frames)
frames = sample_frames(file_path, num_frames)
# pixel_values = processor(images=frames, return_tensors="pt").pixel_values.reshape((num_frames, 1, 3, 224, 224))
pixel_values = processor(images=frames, return_tensors="pt").pixel_values.reshape((num_frames, 3, 224, 224))
print(pixel_values.size())
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print("Generated caption:", processor.batch_decode(generated_ids, skip_special_tokens=True))
```
### Expected behavior
Expected it to generate ids per image in the batch. | 01-11-2023 12:43:12 | 01-11-2023 12:43:12 | Hi,
If you want to run inference on individual frames, you'll need to use a model that expects individual frames, not videos.
Here you're loading [microsoft/git-base-vatex](https://huggingface.co/microsoft/git-base-vatex), hence, it expects `pixel_values` of shape (batch_size, num_frames, num_channels, height, width).
To run inference on a batch of images, you can use models which are trained on image captioning datasets, like [microsoft/git-base](https://huggingface.co/microsoft/git-base), [microsoft/git-base-coco](https://huggingface.co/microsoft/git-base-coco), [microsoft/git-base-textcaps](https://huggingface.co/microsoft/git-base-textcaps) (as well as any of the large variants).
Edit; after investigating it still seems like there's an error. Looking into this<|||||>Sorry, I should have also shown that it doesn't work on captioning models (even though I have tested it on my side, both `git-base-coco` and `git-large-coco`). My bad!
I appreciate that you're looking into this ๐ <|||||>For some reason Github isn't automatically linking the PR that will fix it: #21071 <|||||>Update: seems that the PR above doesn't fix it. So issue remains open<|||||>Ok figured this out! The problem is that you're not passing `input_ids` of the same batch size. By default, the generate method will just use the start token ID (which for GIT equals model.config.bos_token_id = 101). However when sending a batch of images through the model, we also need to prepare a batch of start tokens.
The following works:
```
from transformers import AutoProcessor, AutoModelForCausalLM
import requests
from PIL import Image
import torch
processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values
pixel_values = torch.stack([pixel_values, pixel_values], dim=0).squeeze()
start_token_id = model.config.bos_token_id
generated_ids = model.generate(pixel_values=pixel_values, input_ids=torch.tensor([[start_token_id], [start_token_id]]), max_length=50)
generated_captions = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_captions)
```
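(For an arbitrary number of images, the same idea should generalize to building the start tokens with something like `input_ids=torch.full((pixel_values.shape[0], 1), start_token_id)`.)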
I'll add a corresponding test to make sure this is tested. |
transformers | 21,086 | closed | AutoModels for region-to-phrase-alignment and natural-language-for-visual-reasoning | ### Feature request
Hi! Region to phrase alignment and natural language for visual reasoning have no AutoModels yet. @sgugger is it OK to open a PR and solve this?
### Motivation
This is an issue that I faced when exporting VisualBert to ONNX, since the task mapping can't be done without it. Adding it would allow us to make the export for every task.
### Your contribution
I'll open a PR and solve it myself if allowed | 01-11-2023 11:06:15 | 01-11-2023 11:06:15 | Note that the ONNX conversion is now done directly in the optimum library, so this is probably where you would need to add something.<|||||>That's true, I'm doing the ONNX conversion from optimum. But optimum references AutoModels directly from transformers. I'd just program the AutoModel subclasses without any optimum stuff, of course.<|||||>@sgugger Sorry for insisting, I know you have lots of issues opened already. Just tagging you to see if you still think this should be solved from Optimum<|||||>I think the optimum library should provide an API for models that don't have an auto API yes (if it does not already), as there will always be such models and we won't add a new auto class for just one model.
cc @michaelbenayoun who might have more information.<|||||>Hi @mszsorondo,
Could you open a PR on the Optimum repo, with your request explained please?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,085 | closed | `import decord` crashes Python kernel/process when moving X-CLIP or other video classification models to CUDA GPU | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-6.0.12-76060006-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger for the docs and @NielsRogge for X-CLIP specifically.
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I was trying to run inference on GPU with X-CLIP on my own dataset. To do so I followed the [example code](https://huggingface.co/docs/transformers/main/en/model_doc/xclip#transformers.XCLIPModel.forward.example) in the docs, which uses decord as library to load videos into memory. I tested it and it ran perfectly, _but_ I noticed the model was on CPU.
A simple `model.to(torch.device("cuda"))` ought to do the trick, right? Well no, here is the rabbit hole I went down: https://github.com/huggingface/transformers/issues/21054.
My conclusion is that a simple `import decord` before trying to load the model into GPU memory is enough to make the `python` process crash, be it in the terminal or a Jupyter kernel.
This is both with the latest decord version from PyPI (which can only run on CPU) and with the latest decord version compiled from source with CUDA enabled.
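In other words, a minimal reproduction is essentially the following (a sketch; the checkpoint is just an example, any video model on this setup should do):
```python
# Minimal reproduction sketch: the mere `import decord` before moving the model is enough.
import decord  # noqa: F401  (imported before the .to() call)
import torch
from transformers import XCLIPModel

model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")
model.to(torch.device("cuda"))  # the Python process crashes here when decord was imported first
```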
To fix this, I used the [code](https://colab.research.google.com/gist/nateraw/c327cb6ff6b074e6ddc8068d19c0367d/pyav-io.ipynb#scrollTo=fzGRpWaUqnTL) generously provided by @nateraw built on pyAV, while discussing how to integrate videos in the datasets library (https://github.com/huggingface/datasets/issues/5225).
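For context, the PyAV-based replacement boils down to something like the sketch below (rough and untested, not the exact notebook code; choosing which frame indices to sample is left out):
```python
# Rough sketch: decode selected frames with PyAV instead of decord (untested outline).
import av
import numpy as np


def read_video_pyav(file_path, indices):
    """Return the frames at `indices` (iterable of ints) as a (num_frames, H, W, 3) uint8 array."""
    container = av.open(file_path)
    frames = []
    for i, frame in enumerate(container.decode(video=0)):
        if i in indices:
            frames.append(frame.to_ndarray(format="rgb24"))
    container.close()
    return np.stack(frames)
```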
### Expected behavior
The model should be transferable to the GPU without issue.
To instruct people on how to do so, the docs should be updated to make use of pyAV instead of decord to avoid sending other users into hours if not days of debugging. | 01-11-2023 09:01:59 | 01-11-2023 09:01:59 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>cc @amyeroberts |
transformers | 21,084 | closed | Add Japanese translation to multilingual.mdx | Signed-off-by: Shogo Hida <[email protected]>
# What does this PR do?
Adds Japanese translation to multilingual.mdx
Fixes #18413
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel @sgugger
| 01-11-2023 09:00:13 | 01-11-2023 09:00:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I used the formal form to translate because Japanese is normally written in a formal way in docs.
I wrote this memo because one of the requirements was to use an informal tone. https://github.com/huggingface/transformers/issues/18413#issue-1325310941<|||||>@ArthurZucker can you have a look here? Thanks! |
transformers | 21,083 | closed | Optimize inference only mode memory if ipex is used | Signed-off-by: Wang, Yi A <[email protected]>
# What does this PR do?
Optimizes memory usage when IPEX is used for inference.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Library:
- trainer: @sgugger
| 01-11-2023 08:20:07 | 01-11-2023 08:20:07 | @sgugger @jianan-gu @yao-matrix please review<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,082 | closed | Corrected a spelling mistake in CODE_OF_CONDUCT.md |
# What does this PR do?
Corrected an English grammar mistake
The noun phrase 'representative' seems to be missing a determiner before it. Consider adding an article like 'the' or 'a'. | 01-11-2023 07:43:26 | 01-11-2023 07:43:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Ok |
transformers | 21,081 | closed | Could swin-tiny-patch4-window7-224 be traced by using torch.jit.trace? | ### System Info
torch version: 1.12.0+cu102
transformers version: 4.18.0
model: microsoft/swin-tiny-patch4-window7-224
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi team, I want to trace the Swin model.
My first try was directly tracing the pretrained model.
```
import types
import torch
from transformers import SwinModel, SwinConfig
model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model.eval()
x = torch.randn(1,3,224,224)
if not hasattr(model, 'forward_'): model.forward_ = model.forward
# change the forward to make it traceable
model.forward = types.MethodType(lambda self,x: self.forward_(x).last_hidden_state, model)
traced = torch.jit.trace(model, x)
```
Error shows
>
> ---------------------------------------------------------------------------
> TracingCheckError Traceback (most recent call last)
> <ipython-input-55-622b7b3243c5> in <module>
> 8 # change the forward to make it traceable
> 9 model.forward = types.MethodType(lambda self,x: self.forward_(x).last_hidden_state, model)
> ---> 10 traced = torch.jit.trace(model, x)
> 11 # try:
> 12 # traced = torch.jit.trace(model, x)
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/jit/_trace.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
> 748
> 749 if isinstance(func, torch.nn.Module):
> --> 750 return trace_module(
> 751 func,
> 752 {"forward": example_inputs},
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/jit/_trace.py in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
> 990 )
> 991 else:
> --> 992 _check_trace(
> 993 [inputs],
> 994 func,
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
> 25 def decorate_context(*args, **kwargs):
> ...
> - %pooler : __torch__.torch.nn.modules.pooling.___torch_mangle_6083.AdaptiveAvgPool1d = prim::GetAttr[name="pooler"](%self.1)
> ? ^ -
> + %pooler : __torch__.torch.nn.modules.pooling.___torch_mangle_6338.AdaptiveAvgPool1d = prim::GetAttr[name="pooler"](%self.1)
> ? ^^
Then, I tried to disable the pooling layer by directly declaring the raw Swin structure.
```
configuration = SwinConfig()
configuration.patch_norm = False
model = SwinModel(configuration, add_pooling_layer = False, use_mask_token = False)
model.eval()
x = torch.randn(1,3,224,224)
if not hasattr(model, 'forward_'): model.forward_ = model.forward
# change the forward to make it traceable
model.forward = types.MethodType(lambda self,x: self.forward_(x).last_hidden_state, model)
traced = torch.jit.trace(model, x)
```
Error shows
>
> ---------------------------------------------------------------------------
> TracingCheckError Traceback (most recent call last)
> <ipython-input-56-c1e3f68ee2a2> in <module>
> 13 # torch.jit.save(traced, f'model/inferentia/trial3_traced.pt')
> 14
> ---> 15 traced = torch.jit.trace(model, x)
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/jit/_trace.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
> 748
> 749 if isinstance(func, torch.nn.Module):
> --> 750 return trace_module(
> 751 func,
> 752 {"forward": example_inputs},
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/jit/_trace.py in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
> 990 )
> 991 else:
> --> 992 _check_trace(
> 993 [inputs],
> 994 func,
>
> ~/miniconda3/envs/fsic_easy/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
> 25 def decorate_context(*args, **kwargs):
> 26 with self.clone():
> ---> 27 return func(*args, **kwargs)
> ...
> - %layernorm : __torch__.torch.nn.modules.normalization.___torch_mangle_6592.LayerNorm = prim::GetAttr[name="layernorm"](%self.1)
> ? ^^^
> + %layernorm : __torch__.torch.nn.modules.normalization.___torch_mangle_6846.LayerNorm = prim::GetAttr[name="layernorm"](%self.1)
> ? ^^^
>
May I know if it is possible to trace the model?
### Expected behavior
I expect the Swin model can be traced. The traced model can then be tried on AWS Neuron. | 01-11-2023 04:07:04 | 01-11-2023 04:07:04 | cc @NielsRogge <|||||>Normally it should work, see https://github.com/huggingface/transformers/issues/17476 for details<|||||>Hi @NielsRogge, thanks. I tried the approach mentioned in https://github.com/huggingface/transformers/issues/17476. It works.
```
import types
import torch
from transformers import SwinModel, SwinConfig
model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model.eval()
if not hasattr(model, 'forward_'): model.forward_ = model.forward
model.forward = types.MethodType(lambda self,x: self.forward_(x).last_hidden_state, model)
x = torch.randn(1,3,224,224)
traced = torch.jit.trace(model, x, check_trace = False)
```
However, I then tried to convert the traced model into Neuron format and deploy it to Inferentia.
I followed the [tutorial](https://huggingface.co/docs/transformers/main/en/torchscript) and ran the code below:
```
torch.neuron.trace(model, x, strict = False)
```
It showed the error below. **_May I know if SwinT is convertible to Neuron format?_**
```
INFO:Neuron:There are 23 ops of 3 different types in the TorchScript that are not compiled by neuron-cc: aten::adaptive_avg_pool1d, aten::index, aten::roll, (For more information see https://github.com/aws/aws-neuron-sdk/blob/master/release-notes/neuron-cc-ops/neuron-cc-ops-pytorch.md)
INFO:Neuron:Number of arithmetic operators (pre-compilation) before = 1813, fused = 1135, percent fused = 62.6%
WARNING:Neuron:torch.neuron.trace failed on _NeuronGraph$2347; falling back to native python function call
ERROR:Neuron:torch.jit.trace error. The PyTorch-Neuron trace Python API uses the
torch.jit.trace function in PyTorch to generate ScriptFunction models for execution
on Inferentia. Due to this, your exisiting PyTorch model must be torch jit traceable.
```
<|||||>Hello @heylamourding,
it seems that Inferentia1 (`neuron-sdk`) is missing support for some operators of the `SWIN` model; that's not a transformers issue but rather a neuron-sdk issue.
I saw you already opened an issue in the `neuron-sdk` repository: https://github.com/aws-neuron/aws-neuron-sdk/issues/626
<|||||>Hi @philschmid, thanks for your reply! Sorry for creating inappropriate issue here. I will close the issue. |
transformers | 21,080 | closed | Batch Decoding in GPT2 with variable length sequences | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-4.18.0-372.26.1.el8_6.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.11
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hi, I am trying to batch decode using GPT2. Each batch may contain variable-length sequences. I did try specifying `left` padding and explicitly setting the `pad_token` in GPT2.
Steps to reproduce the error
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
tokenizer = AutoTokenizer.from_pretrained('gpt2')
# run this only for gpt-2 as we do not have a pad token in gpt2
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side = 'left'
model = AutoModelForCausalLM.from_pretrained('gpt2', pad_token_id = tokenizer.eos_token_id)
model.to(device)
sentence = "I went to the"
results = tokenizer(
[sentence],
add_special_tokens=True,
truncation=True,
padding=True,
return_tensors='pt',
)
print("========= With No Padding ==========")
print("Tokenizing the input sentence \"{0}\" leads to ".format(sentence) )
print(tokenizer.convert_ids_to_tokens( results['input_ids'][0] ))
with torch.no_grad():
logits = model(results['input_ids'].to(device),
attention_mask=results['attention_mask'].to(device),
).logits[:, -1, :]
index = torch.argmax(logits).item()
print( sentence + " " + tokenizer.convert_ids_to_tokens(index) )
print("\n" * 2)
max_length= 30
print("========= Using Padding of size {0} ==========".format(max_length))
results = tokenizer(
[sentence],
add_special_tokens=True,
max_length=max_length,
truncation=False,
padding='max_length',
return_tensors='pt',
)
print("Tokenizing the padded input sentence \"{0}\" leads to ".format(sentence) )
print(tokenizer.convert_ids_to_tokens( results['input_ids'][0] ))
with torch.no_grad():
logits = model(results['input_ids'].to(device),
attention_mask=results['attention_mask'].to(device),
).logits[:, -1, :]
index = torch.argmax(logits).item()
print( sentence + " " + tokenizer.convert_ids_to_tokens(index) )
print("\n" * 2)
```
Output
```
========= With No Padding ==========
Tokenizing the input sentence "I went to the" leads to
['I', 'ฤ went', 'ฤ to', 'ฤ the']
I went to the ฤ hospital
========= Using Padding of size 30 ==========
Tokenizing the padded input sentence "I went to the" leads to
['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', 'I', 'ฤ went', 'ฤ to', 'ฤ the']
I went to the ฤ the
```
Explicitly modifying the position embeddings takes care of the above problem.
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
tokenizer = AutoTokenizer.from_pretrained('gpt2')
# run this only for gpt-2 as we do not have a pad token in gpt2
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side = 'left'
model = AutoModelForCausalLM.from_pretrained('gpt2', pad_token_id = tokenizer.eos_token_id)
model.to(device)
sentence = "I went to the"
results = tokenizer(
[sentence],
add_special_tokens=True,
truncation=True,
padding=True,
return_tensors='pt',
)
position_ids = torch.zeros(results['attention_mask'].size(), dtype=torch.int32)
starting_index = 0
for index in range(results['attention_mask'][0].size(0)):
if results['attention_mask'][0][index] == 1:
position_ids[0][index] = starting_index
starting_index += 1
print("========= With No Padding ==========")
print("Tokenizing the input sentence \"{0}\" leads to ".format(sentence) )
print(tokenizer.convert_ids_to_tokens( results['input_ids'][0] ))
with torch.no_grad():
logits = model(results['input_ids'].to(device),
attention_mask=results['attention_mask'].to(device),
position_ids=position_ids.to(device),
).logits[:, -1, :]
index = torch.argmax(logits).item()
print( sentence + " " + tokenizer.convert_ids_to_tokens(index) )
print("\n" * 2)
max_length= 30
print("========= Using Padding of size {0} ==========".format(max_length))
results = tokenizer(
[sentence],
add_special_tokens=True,
max_length=max_length,
truncation=False,
padding='max_length',
return_tensors='pt',
)
print("Tokenizing the padded input sentence \"{0}\" leads to ".format(sentence) )
print(tokenizer.convert_ids_to_tokens( results['input_ids'][0] ))
position_ids = torch.zeros(results['attention_mask'].size(), dtype=torch.int32)
starting_index = 0
for index in range(results['attention_mask'][0].size(0)):
if results['attention_mask'][0][index] == 1:
position_ids[0][index] = starting_index
starting_index += 1
with torch.no_grad():
logits = model(results['input_ids'].to(device),
attention_mask=results['attention_mask'].to(device),
position_ids=position_ids.to(device),
).logits[:, -1, :]
index = torch.argmax(logits).item()
print( sentence + " " + tokenizer.convert_ids_to_tokens(index) )
print("\n" * 2)
```
The output when position embeddings are explicitly specified:
```
========= With No Padding ==========
Tokenizing the input sentence "I went to the" leads to
['I', 'ฤ went', 'ฤ to', 'ฤ the']
I went to the ฤ hospital
========= Using Padding of size 30 ==========
Tokenizing the padded input sentence "I went to the" leads to
['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '<|endoftext|>', 'I', 'ฤ went', 'ฤ to', 'ฤ the']
I went to the ฤ hospital
```
Is it possible to have documentation mentioning this?
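(As an aside, the position-id construction above can presumably be written in a vectorized form, something like:)
```python
# Sketch: vectorized equivalent of the loop above, deriving position ids from the attention mask.
# Pad positions get a dummy value, which should not matter since they are masked out.
position_ids = results["attention_mask"].long().cumsum(-1) - 1
position_ids.masked_fill_(results["attention_mask"] == 0, 1)
```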
### Expected behavior
In both scenarios, with and without left padding the model should generate `ฤ hospital` as the token with the highest probability. However, without modifying position embedding and when tokens are padded we get `ฤ the` as the next token with the highest probability | 01-11-2023 03:52:56 | 01-11-2023 03:52:56 | cc @ArthurZucker <|||||>@younesbelkada related issue that we had closed before: https://github.com/huggingface/transformers/issues/18809<|||||>Before diving a bit deeper, I don't really understand why are you using `convert_id_to_tokens` instead of juste using the `tokenizer.batch_decode` method? Did you try with it? <|||||>> Before diving a bit deeper, I don't really understand why are you using `convert_id_to_tokens` instead of juste using the `tokenizer.batch_decode` method? Did you try with it?
Hi @ArthurZucker, the issue is not with `convert_id_to_tokens`. If we replace `convert_id_to_tokens` with `tokenizer.batch_decode`, we still get the same issue.
The issue is that the `GPT2` model adds position embeddings to every token in the input sequence, including `pad_tokens`.
Consider the input `I went to the`. If we use a batch size of `1` and no padding is specified, the position id for the word `I` will be `0`. However, if I specify `max_length` as, say, `5` in the tokenizer, the tokenizer prepends the input with one pad_token. As a result, the position id for the word `I` will be `1`. This changes the model prediction.<|||||>There seems to indeed be a bug! When I use the `generate()` function, I am getting the correct output:
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('gpt2')
>>> tokenizer.pad_token = tokenizer.eos_token
>>> tokenizer.pad_token_id = tokenizer.eos_token_id
>>> tokenizer.padding_side = 'left'
>>> model = AutoModelForCausalLM.from_pretrained('gpt2', pad_token_id = tokenizer.eos_token_id)
>>> prompt_text = [ 'I went to the','we are trying to','The purpose of this workshop is to check whether we can']
>>> encodings_dict = tokenizer.batch_encode_plus(prompt_text, max_length=12, pad_to_max_length=True, return_tensors= "pt")
>>> input_ids = torch.tensor(encodings_dict['input_ids'])
>>> attn_mask = torch.tensor(encodings_dict['attention_mask'])
>>> tokenizer.batch_decode(model.generate(input_ids, attention_mask=attn_mask, max_length=12))
```
```python
['<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>I went to the hospital',
'<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>we are trying to get',
'<|endoftext|>The purpose of this workshop is to check whether we can make']
```
The issue lies with the fact that we have to pass the position ids for gpt2. In the generate function, the position ids are created on the fly if not passed, which is why we have the correct output.
```python
if attention_mask is not None and position_ids is None:
    # create position_ids on the fly for batch generation
    position_ids = attention_mask.long().cumsum(-1) - 1
    position_ids.masked_fill_(attention_mask == 0, 1)
    if past:
        position_ids = position_ids[:, -1].unsqueeze(-1)
```
cc @LysandreJik I am guessing that the original implementation does not use this? Or is there a specific reason that we are using
```
if position_ids is None:
    position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device)
    position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])
```
in the model's forward? <|||||>Thanks for the great issue @murthyrudra!
Hmmm indeed, might be a bug dating back to the original implementation of `gpt2` within `transformers` (this code dates back to Feb 2019). It's going to be a bit hard to change this within the code, but we can update the documentation/show pointers regarding how to circumvent this issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,079 | open | TokenGT | ### Model description
Adding the TokenGT graph transformer model with @Raman-Kumar (see [Graphormer issue](https://github.com/huggingface/transformers/issues/20962#issuecomment-1375361519))
@Raman-Kumar I'll create a PR with what I had ported of TokenGT at the end of the week, to give you a starting point! You'll need to read [this](https://huggingface.co/docs/transformers/add_new_model) first, to get an idea of the steps we follow when integrating a model.
Then, 1st step will be checking the code with a checkpoint, so you need to look for one and download it, to compare results with the original implementation.
Does that work for you?
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | 01-10-2023 21:48:31 | 01-10-2023 21:48:31 | @clefourrier for sure will work<|||||>Thanks for assigning.
@clefourrier I am still examining and experimenting more...<|||||>Ping me if you need help! :smile: <|||||>I'm giving up on figuring it out myself.
My level: I was not familiar with the transformer architecture, collators, etc., or with other models like BERT;
now I have studied them, along with the TokenGT model's theoretical aspects.
I have downloaded the checkpoint folder from the [drive link](https://drive.google.com/drive/folders/1mo0dV-aLxGFWbPF8xfE8phWTmOtIV1HG?usp=sharing) given in the [original repo](https://github.com/jw9730/tokengt).
Now I have to run **both** the PR with the checkpoint and the original repo.
Can you share the script you used with Graphormer?
@clefourrier <|||||>Ok so you will need to do something similar to this:
```python
import argparse
import os, sys
from pathlib import Path
import torch
from torch import nn
from torch.hub import load_state_dict_from_url
# Here, you need to import the transformers version of the TokenGT code (from the PR)
from transformers import (
AutoModel,
GraphormerConfig,
GraphormerForGraphClassification,
GraphormerModel,
# GraphormerCollator
)
from transformers.utils import logging
from transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator
# Here, you need to import the original TokenGT code instead of Graphormer
sys.path.append("path to Graphormer/")
import graphormer
import graphormer.tasks.graph_prediction
import graphormer.models.graphormer
from graphormer.evaluate.evaluate import convert_namespace_to_omegaconf, tasks, options
from fairseq import utils
from fairseq.logging import progress_bar
# You will likely have to change some of these depending on the error messages you get when loading the checkpoint to transformers format
rename_keys = [
("encoder.lm_output_learned_bias", "classifier.lm_output_learned_bias"),
("encoder.embed_out.weight", "classifier.classifier.weight"),
#("encoder.embed_out.weight", "classifier.embed_out.weight"),
#("encoder.embed_out.bias", "classifier.embed_out.bias"),
]
def remove_ignore_keys_(state_dict):
ignore_keys = [
"encoder.version",
"decoder.version",
"encoder.masked_lm_pooler.bias", # to check
"encoder.masked_lm_pooler.weight", # to check
"_float_tensor",
]
for k in ignore_keys:
state_dict.pop(k, None)
def rename_key(dct, old, new):
val = dct.pop(old)
dct[new] = val
def make_linear_from_emb(emb):
vocab_size, emb_size = emb.weight.shape
lin_layer = nn.Linear(vocab_size, emb_size, bias=False)
lin_layer.weight.data = emb.weight.data
return lin_layer
# In this section, you need to replace calls to Graphormer by calls to TokenGT models.
# Graphormer model gets replaced by the original TokenGT model
# Transformers model gets replaced by the format in Transformers
@torch.no_grad()
def convert_graphormer_checkpoint(
args, checkpoint_name, pytorch_dump_folder_path
):
pytorch_dump_folder_path = f"{pytorch_dump_folder_path}/{checkpoint_name}"
cfg = convert_namespace_to_omegaconf(args)
task = tasks.setup_task(cfg.task)
# Graphormer model
graphormer_model = task.build_model(cfg.model)
graphormer_state = torch.load(checkpoint_name)["model"]
graphormer_model.load_state_dict(graphormer_state, strict=True, model_cfg=cfg.model)
graphormer_model.upgrade_state_dict(graphormer_model.state_dict())
# Transformers model
config = GraphormerConfig(
num_labels=1,
share_input_output_embed=False,
num_layers=12,
embedding_dim=768,
ffn_embedding_dim=768,
num_attention_heads=32,
dropout=0.0,
attention_dropout=0.1,
activation_dropout=0.1,
encoder_normalize_before=True,
pre_layernorm=False,
apply_graphormer_init=True,
activation_fn="gelu",
no_token_positional_embeddings=False,
)
transformers_model = GraphormerForGraphClassification(config)
# We copy the state dictionary from the original model to our format
state_dict = graphormer_model.state_dict()
remove_ignore_keys_(state_dict)
for src, dest in rename_keys:
rename_key(state_dict, src, dest)
transformers_model.load_state_dict(state_dict)
# Check results
graphormer_model.eval()
transformers_model.eval()
split = args.split
task.load_dataset(split)
batch_iterator = task.get_batch_iterator(
dataset=task.dataset(split),
max_tokens=cfg.dataset.max_tokens_valid,
max_sentences=2, #cfg.dataset.batch_size_valid,
max_positions=utils.resolve_max_positions(
task.max_positions(),
graphormer_model.max_positions(),
),
ignore_invalid_inputs=cfg.dataset.skip_invalid_size_inputs_valid_test,
required_batch_size_multiple=cfg.dataset.required_batch_size_multiple,
seed=cfg.common.seed,
num_workers=cfg.dataset.num_workers,
epoch=0,
data_buffer_size=cfg.dataset.data_buffer_size,
disable_iterator_cache=False,
)
itr = batch_iterator.next_epoch_itr(
shuffle=False, set_dataset_epoch=False
)
progress = progress_bar.progress_bar(
itr,
log_format=cfg.common.log_format,
log_interval=cfg.common.log_interval,
default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple")
)
# Inference
collator = GraphormerDataCollator() #on_the_fly_processing=True)
ys_graphormer = []
ys_transformers = []
with torch.no_grad():
for i, sample in enumerate(progress):
y_graphormer = graphormer_model(**sample["net_input"])[:, 0, :].reshape(-1)
ys_graphormer.extend(y_graphormer.detach())
#print(sample["net_input"]["batched_data"])
transformer_sample = sample["net_input"]["batched_data"] # data is already collated - collator(sample["net_input"]["batched_data"])
transformer_sample.pop("idx")
transformer_sample["labels"] = transformer_sample.pop("y")
transformer_sample["node_input"] = transformer_sample.pop("x")
torch.set_printoptions(profile="full")
y_transformer = transformers_model(**transformer_sample)["logits"] #[:, 0, :].reshape(-1)
ys_transformers.extend(y_transformer.detach())
ys_graphormer = torch.stack(ys_graphormer)
ys_transformers = torch.stack(ys_transformers).squeeze(-1)
assert ys_graphormer.shape == ys_transformers.shape
assert (ys_graphormer == ys_transformers).all().item()
print("All good :)")
Path(pytorch_dump_folder_path).mkdir(exist_ok=True)
transformers_model.save_pretrained(pytorch_dump_folder_path)
transformers_model.push_to_hub(checkpoint_name, use_auth_token="replace by your token")
if __name__ == "__main__":
parser = options.get_training_parser()
# Required parameters
parser.add_argument(
"--checkpoint_name",
type=str,
help="name of a model to load", # path to a model.pt on local filesystem."
)
parser.add_argument(
"--pytorch_dump_folder_path",
default=None,
type=str,
help="Path to the output PyTorch model.",
)
parser.add_argument(
"--split",
type=str,
)
parser.add_argument(
"--metric",
type=str,
)
args = options.parse_args_and_arch(parser, modify_parser=None)
print(args)
#args = parser.parse_args()
convert_graphormer_checkpoint(
args,
args.checkpoint_name,
args.pytorch_dump_folder_path,
)
```<|||||>new to deep learning
I am using macbook air m1
While running command `pip install -e ".[dev]"` for transformers repo,
It shows some error for tensorflow
So, I am using `pip install -e ".[dev-torch]"`, which works fine.
what argument list do you supply when running the above script for Graphormer? @clefourrier <|||||>Hi @Raman-Kumar!
I don't think the tensorflow error is very important atm, don't worry :smile:
Here is my argument list: `--checkpoint_name Name_of_the_checkpoint_you_downloaded_for_tokenGT --pytorch_dump_folder_path tmp --user-dir "Directory where you cloned the code from the TokenGT repository" --num-workers 16 --ddp-backend=legacy_ddp --dataset-name MUTAG_0 --user-data-dir "custom_datasets" --task graph_prediction --criterion l1_loss --arch graphormer_base --num-classes 1 --batch-size 64 --pretrained-model-name pcqm4mv1_graphormer_base --load-pretrained-model-output-layer --split valid --seed 1`
From `ddp-backend` on, you will need to adapt the parameters to launch one of the available datasets in TokenGT, or you could add a `custom_datasets` loader in `tokengt/data/predict_custom`.
For the latter, I think there is a sample script, but if not you can take inspiration from this, which loads MUTAG from the hub to load it in TokenGT:
```python
from datasets import load_dataset
from tokengt.data import register_dataset
from tokengt.data.pyg_datasets.pyg_dataset import TokenGTPYGDataset
import torch
from torch_geometric.data import Data, Dataset, InMemoryDataset
import numpy as np
class TmpDataset(InMemoryDataset):
def __init__(self, root, data_list):
self.data_list = data_list
super().__init__(root, None, None, None)
@property
def raw_file_names(self):
return []
@property
def processed_file_names(self):
return ["data.pt"]
def len(self):
return len(self.data_list)
def get(self, idx):
data = self.data_list[idx]
return data
def create_customized_dataset(dataset_name, ix_xval):
graphs_dataset = load_dataset(f"graphs-datasets/{dataset_name}")
graphs_dataset = graphs_dataset.shuffle(0)
key = "full" if "full" in graphs_dataset.keys() else "train"
graphs_list = [
Data(
**{
"edge_index": torch.tensor(graph["edge_index"], dtype=torch.long),
"y": torch.tensor(graph["y"], dtype=torch.long),
"num_nodes": graph["num_nodes"],
#"x": torch.ones(graph["num_nodes"], 1, dtype=torch.long), # same embedding for all
#"edge_attr": torch.ones(len(graph["edge_index"][0]), 1, dtype=torch.long), # same embedding for all
"x": torch.tensor(graph["node_feat"], dtype=torch.long) if "node_feat" in graph.keys() else torch.ones(graph["num_nodes"], 1, dtype=torch.long), # same embedding for all
"edge_attr": torch.tensor(graph["edge_attr"], dtype=torch.long) if "edge_attr" in graph.keys() else torch.ones(len(graph["edge_index"][0]), 1, dtype=torch.long), # same embedding for all
}
)
for graph in graphs_dataset[key]
]
len_dataset = len(graphs_dataset[key])
len_xval_batch = int(len_dataset / 10)
cur_val_range_int = list(range(ix_xval * len_xval_batch, (ix_xval + 1) * len_xval_batch))
cur_val_range = np.array(cur_val_range_int, dtype=np.int64)
cur_train_range = np.array(
[ix for ix in range(len_dataset) if ix not in cur_val_range_int], dtype=np.int64
)
dataset = TmpDataset("", graphs_list)
return {
"dataset": TokenGTPYGDataset(
dataset=dataset,
seed=0,
train_idx=torch.tensor([0]), #cur_train_range),
valid_idx=torch.tensor(cur_val_range),
test_idx=torch.tensor(cur_val_range),
),
"source": "pyg",
"train_idx":torch.tensor(cur_train_range),
"valid_idx":torch.tensor(cur_val_range),
"test_idx":torch.tensor(cur_val_range),
}
@register_dataset("MUTAG_0")
def m0():
return create_customized_dataset("MUTAG", 0)
```
Tell me if anything is unclear! :hugs: <|||||>Right now I am running this script
script.py
```
import argparse
import os, sys
from pathlib import Path
import torch
from torch import nn
from torch.hub import load_state_dict_from_url
from transformers.utils import logging
import tokengt
import tokengt.tasks.graph_prediction
import tokengt.models.tokengt
from tokengt.evaluate.evaluate import convert_namespace_to_omegaconf, tasks, options
from fairseq import utils
from fairseq.logging import progress_bar
@torch.no_grad()
def convert_tokengt_checkpoint(
args, checkpoint_name, pytorch_dump_folder_path
):
pytorch_dump_folder_path = f"{pytorch_dump_folder_path}/{checkpoint_name}"
cfg = convert_namespace_to_omegaconf(args)
# task = tasks.setup_task(cfg.task)
if __name__ == "__main__":
parser = options.get_training_parser()
# Required parameters
parser.add_argument(
"--checkpoint_name",
type=str,
help="name of a model to load", # path to a model.pt on local filesystem."
)
parser.add_argument(
"--pytorch_dump_folder_path",
default=None,
type=str,
help="Path to the output PyTorch model.",
)
parser.add_argument(
"--split",
type=str,
)
parser.add_argument(
"--metric",
type=str,
)
args = options.parse_args_and_arch(parser, modify_parser=None)
print(args.pytorch_dump_folder_path)
args = parser.parse_args()
convert_tokengt_checkpoint(
args,
args.checkpoint_name,
args.pytorch_dump_folder_path,
)
```
with command
` .....script.py --checkpoint_name pcqv2-tokengt-orf64-trained --pytorch_dump_folder_path tmp --user-dir "../tokengt" --num-workers 16 --ddp-backend=legacy_ddp --dataset-name PCQM4Mv2 --user-data-dir "tokengt/data/ogb_datasets" --task graph_prediction --criterion l1_loss --arch tokengt_base --num-classes 1 --batch-size 64 --pretrained-model-name mytokengt --load-pretrained-model-output-layer --split valid --seed 1`
in `cfg = convert_namespace_to_omegaconf(args)`
I am getting this error
```
2023-02-09 13:05:21 | ERROR | fairseq.dataclass.utils | Error when composing. Overrides: ['common.no_progress_bar=False', 'common.log_interval=100', 'common.log_format=null', 'common.log_file=null', 'common.aim_repo=null', 'common.aim_run_hash=null', 'common.tensorboard_logdir=null', 'common.wandb_project=null', 'common.azureml_logging=False', 'common.seed=1', 'common.cpu=False', 'common.tpu=False', 'common.bf16=False', 'common.memory_efficient_bf16=False', 'common.fp16=False', 'common.memory_efficient_fp16=False', 'common.fp16_no_flatten_grads=False', 'common.fp16_init_scale=128', 'common.fp16_scale_window=null', 'common.fp16_scale_tolerance=0.0', 'common.on_cpu_convert_precision=False', 'common.min_loss_scale=0.0001', 'common.threshold_loss_scale=null', 'common.amp=False', 'common.amp_batch_retries=2', 'common.amp_init_scale=128', 'common.amp_scale_window=null', "common.user_dir='../tokengt'", 'common.empty_cache_freq=0', 'common.all_gather_list_size=16384', 'common.model_parallel_size=1', 'common.quantization_config_path=null', 'common.profile=False', 'common.reset_logging=False', 'common.suppress_crashes=False', 'common.use_plasma_view=False', "common.plasma_path='/tmp/plasma'", 'common_eval.path=null', 'common_eval.post_process=null', 'common_eval.quiet=False', "common_eval.model_overrides='{}'", 'common_eval.results_path=null', 'distributed_training.distributed_world_size=1', 'distributed_training.distributed_num_procs=1', 'distributed_training.distributed_rank=0', "distributed_training.distributed_backend='nccl'", 'distributed_training.distributed_init_method=null', 'distributed_training.distributed_port=-1', 'distributed_training.device_id=0', 'distributed_training.distributed_no_spawn=False', "distributed_training.ddp_backend='legacy_ddp'", "distributed_training.ddp_comm_hook='none'", 'distributed_training.bucket_cap_mb=25', 'distributed_training.fix_batches_to_gpus=False', 'distributed_training.find_unused_parameters=False', 'distributed_training.gradient_as_bucket_view=False', 'distributed_training.fast_stat_sync=False', 'distributed_training.heartbeat_timeout=-1', 'distributed_training.broadcast_buffers=False', 'distributed_training.slowmo_momentum=null', "distributed_training.slowmo_base_algorithm='localsgd'", 'distributed_training.localsgd_frequency=3', 'distributed_training.nprocs_per_node=1', 'distributed_training.pipeline_model_parallel=False', 'distributed_training.pipeline_balance=null', 'distributed_training.pipeline_devices=null', 'distributed_training.pipeline_chunks=0', 'distributed_training.pipeline_encoder_balance=null', 'distributed_training.pipeline_encoder_devices=null', 'distributed_training.pipeline_decoder_balance=null', 'distributed_training.pipeline_decoder_devices=null', "distributed_training.pipeline_checkpoint='never'", "distributed_training.zero_sharding='none'", 'distributed_training.fp16=False', 'distributed_training.memory_efficient_fp16=False', 'distributed_training.tpu=False', 'distributed_training.no_reshard_after_forward=False', 'distributed_training.fp32_reduce_scatter=False', 'distributed_training.cpu_offload=False', 'distributed_training.use_sharded_state=False', 'distributed_training.not_fsdp_flatten_parameters=False', 'dataset.num_workers=16', 'dataset.skip_invalid_size_inputs_valid_test=False', 'dataset.max_tokens=null', 'dataset.batch_size=64', 'dataset.required_batch_size_multiple=8', 'dataset.required_seq_len_multiple=1', 'dataset.dataset_impl=null', 'dataset.data_buffer_size=10', "dataset.train_subset='train'", 
"dataset.valid_subset='valid'", 'dataset.combine_valid_subsets=null', 'dataset.ignore_unused_valid_subsets=False', 'dataset.validate_interval=1', 'dataset.validate_interval_updates=0', 'dataset.validate_after_updates=0', 'dataset.fixed_validation_seed=null', 'dataset.disable_validation=False', 'dataset.max_tokens_valid=null', 'dataset.batch_size_valid=null', 'dataset.max_valid_steps=null', 'dataset.curriculum=0', "dataset.gen_subset='test'", 'dataset.num_shards=1', 'dataset.shard_id=0', 'dataset.grouped_shuffling=False', 'dataset.update_epoch_batch_itr=null', 'dataset.update_ordered_indices_seed=False', 'optimization.max_epoch=0', 'optimization.max_update=0', 'optimization.stop_time_hours=0.0', 'optimization.clip_norm=0.0', 'optimization.sentence_avg=False', 'optimization.update_freq=[1]', 'optimization.lr=[0.25]', 'optimization.stop_min_lr=-1.0', 'optimization.use_bmuf=False', 'optimization.skip_remainder_batch=False', "checkpoint.save_dir='checkpoints'", "checkpoint.restore_file='checkpoint_last.pt'", 'checkpoint.continue_once=null', 'checkpoint.finetune_from_model=null', 'checkpoint.reset_dataloader=False', 'checkpoint.reset_lr_scheduler=False', 'checkpoint.reset_meters=False', 'checkpoint.reset_optimizer=False', "checkpoint.optimizer_overrides='{}'", 'checkpoint.save_interval=1', 'checkpoint.save_interval_updates=0', 'checkpoint.keep_interval_updates=-1', 'checkpoint.keep_interval_updates_pattern=-1', 'checkpoint.keep_last_epochs=-1', 'checkpoint.keep_best_checkpoints=-1', 'checkpoint.no_save=False', 'checkpoint.no_epoch_checkpoints=False', 'checkpoint.no_last_checkpoints=False', 'checkpoint.no_save_optimizer_state=False', "checkpoint.best_checkpoint_metric='loss'", 'checkpoint.maximize_best_checkpoint_metric=False', 'checkpoint.patience=-1', "checkpoint.checkpoint_suffix=''", 'checkpoint.checkpoint_shard_count=1', 'checkpoint.load_checkpoint_on_all_dp_ranks=False', 'checkpoint.write_checkpoints_asynchronously=False', 'checkpoint.model_parallel_size=1', 'bmuf.block_lr=1.0', 'bmuf.block_momentum=0.875', 'bmuf.global_sync_iter=50', 'bmuf.warmup_iterations=500', 'bmuf.use_nbm=False', 'bmuf.average_sync=False', 'bmuf.distributed_world_size=1', 'generation.beam=5', 'generation.nbest=1', 'generation.max_len_a=0.0', 'generation.max_len_b=200', 'generation.min_len=1', 'generation.match_source_len=False', 'generation.unnormalized=False', 'generation.no_early_stop=False', 'generation.no_beamable_mm=False', 'generation.lenpen=1.0', 'generation.unkpen=0.0', 'generation.replace_unk=null', 'generation.sacrebleu=False', 'generation.score_reference=False', 'generation.prefix_size=0', 'generation.no_repeat_ngram_size=0', 'generation.sampling=False', 'generation.sampling_topk=-1', 'generation.sampling_topp=-1.0', 'generation.constraints=null', 'generation.temperature=1.0', 'generation.diverse_beam_groups=-1', 'generation.diverse_beam_strength=0.5', 'generation.diversity_rate=-1.0', 'generation.print_alignment=null', 'generation.print_step=False', 'generation.lm_path=null', 'generation.lm_weight=0.0', 'generation.iter_decode_eos_penalty=0.0', 'generation.iter_decode_max_iter=10', 'generation.iter_decode_force_max_iter=False', 'generation.iter_decode_with_beam=1', 'generation.iter_decode_with_external_reranker=False', 'generation.retain_iter_history=False', 'generation.retain_dropout=False', 'generation.retain_dropout_modules=null', 'generation.decoding_format=null', 'generation.no_seed_provided=False', 'generation.eos_token=null', 'eval_lm.output_word_probs=False', 'eval_lm.output_word_stats=False', 
'eval_lm.context_window=0', 'eval_lm.softmax_batch=9223372036854775807', 'interactive.buffer_size=0', "interactive.input='-'", 'ema.store_ema=False', 'ema.ema_decay=0.9999', 'ema.ema_start_update=0', 'ema.ema_seed_model=null', 'ema.ema_update_freq=1', 'ema.ema_fp32=False', 'task=graph_prediction', 'task._name=graph_prediction', "task.dataset_name='PCQM4Mv2'", 'task.num_classes=1', 'task.max_nodes=128', "task.dataset_source='pyg'", 'task.num_atoms=4608', 'task.num_edges=1536', 'task.num_in_degree=512', 'task.num_out_degree=512', 'task.num_spatial=512', 'task.num_edge_dis=128', 'task.multi_hop_max_dist=5', 'task.spatial_pos_max=1024', "task.edge_type='multi_hop'", 'task.seed=1', "task.pretrained_model_name='mytokengt'", 'task.load_pretrained_model_output_layer=True', 'task.train_epoch_shuffle=True', "task.user_data_dir='tokengt/data/ogb_datasets'", 'criterion=l1_loss', 'criterion._name=l1_loss', 'lr_scheduler=fixed', 'lr_scheduler._name=fixed', 'lr_scheduler.force_anneal=null', 'lr_scheduler.lr_shrink=0.1', 'lr_scheduler.warmup_updates=0', 'lr_scheduler.lr=[0.25]', 'scoring=bleu', 'scoring._name=bleu', 'scoring.pad=1', 'scoring.eos=2', 'scoring.unk=3']
Traceback (most recent call last):
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/_internal/config_loader_impl.py", line 513, in _apply_overrides_to_config
OmegaConf.update(cfg, key, value, merge=True)
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/omegaconf.py", line 613, in update
root.__setattr__(last_key, value)
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/dictconfig.py", line 285, in __setattr__
raise e
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/dictconfig.py", line 282, in __setattr__
self.__set_impl(key, value)
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/dictconfig.py", line 266, in __set_impl
self._set_item_impl(key, value)
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/basecontainer.py", line 398, in _set_item_impl
self._validate_set(key, value)
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/dictconfig.py", line 143, in _validate_set
self._validate_set_merge_impl(key, value, is_assign=True)
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/dictconfig.py", line 156, in _validate_set_merge_impl
self._format_and_raise(
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/base.py", line 95, in _format_and_raise
format_and_raise(
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/_utils.py", line 694, in format_and_raise
_raise(ex, cause)
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
omegaconf.errors.ValidationError: child 'dataset.update_epoch_batch_itr' is not Optional
full_key: dataset.update_epoch_batch_itr
reference_type=DatasetConfig
object_type=DatasetConfig
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/ramankumar/OpenSource/script.py", line 106, in <module>
convert_graphormer_checkpoint(
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/Users/ramankumar/OpenSource/script.py", line 74, in convert_graphormer_checkpoint
cfg = convert_namespace_to_omegaconf(args)
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/fairseq/dataclass/utils.py", line 399, in convert_namespace_to_omegaconf
composed_cfg = compose("config", overrides=overrides, strict=False)
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/experimental/compose.py", line 31, in compose
cfg = gh.hydra.compose_config(
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/_internal/hydra.py", line 507, in compose_config
cfg = self.config_loader.load_configuration(
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/_internal/config_loader_impl.py", line 151, in load_configuration
return self._load_configuration(
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/_internal/config_loader_impl.py", line 277, in _load_configuration
ConfigLoaderImpl._apply_overrides_to_config(config_overrides, cfg)
File "/Users/ramankumar/OpenSource/transformers/.env/lib/python3.9/site-packages/hydra/_internal/config_loader_impl.py", line 520, in _apply_overrides_to_config
raise ConfigCompositionException(
hydra.errors.ConfigCompositionException: Error merging override dataset.update_epoch_batch_itr=null
```
child 'dataset.update_epoch_batch_itr' is not Optional ??
@clefourrier <|||||>I think you read the error correctly, apparently for TokenGT+fairseq it does not seem to be.
You could try passing it as `False` (I think it's a boolean), or looking for it either in the loading scripts or config files to see how it is managed for the project.<|||||>Could you explain once again how to supply datasets as an argument?
I created a file `predict_custom.py` alongside (in the same folder as) the conversion `script.py` and pasted all the code you gave:
```
from datasets import load_dataset
....
class TmpDataset(InMemoryDataset):
....
def create_customized_dataset(dataset_name, ix_xval):
....
@register_dataset("MUTAG_0")
def m0():
return create_customized_dataset("MUTAG", 0)
```
--dataset-name --MUTAG_0 --user-data-dir "/tokengt/data/ogb_datasets"
How should I write this here? @clefourrier <|||||>The simplest would be to do what you did initially, and use one of the native datasets for TokenGT with `--dataset-name PCQM4Mv2`.
If you want to use custom datasets, your `--user-data-dir` must point to the folder containing your dataset script, if I remember correctly.<|||||>I got familiar with PyTorch Geometric and the Graph Neural Network project.
I read about parameters and datasets for Graph from [Graphormer](https://github.com/microsoft/Graphormer)/[docs](https://github.com/microsoft/Graphormer/tree/main/docs)
Here at [tokengt](https://github.com/jw9730/tokengt)/[large-scale-regression](https://github.com/jw9730/tokengt/tree/main/large-scale-regression)/[scripts](https://github.com/jw9730/tokengt/tree/main/large-scale-regression/scripts) there was a training script for tokengt using `fairseq-train` with an argument list.
Initially, I assumed that the argument list was only used with `fairseq-train`, but no, the same arguments apply to the conversion script as well (I had not tried this, so sad!!).
Now everything works fine. Yay!
<|||||>Congratulations, that's very cool! :hugs:
Do you know what your next steps are?<|||||>Next
I added some import-related code in the transformers folder, like `src/transformers/__init__.py` and other files (taking help from the Graphormer PR).
After that I was successfully able to import the HF 🤗 TokenGT in my conversion script.py:
```
from transformers import (
AutoModel,
TokenGTConfig,
TokenGTForGraphClassification,
)
```
```
tokengt_model = task.build_model(cfg.model)
tokengt_state = torch.load(checkpoint_name)["model"]
tokengt_model.load_state_dict(tokengt_state, strict=True, model_cfg=cfg.model)
tokengt_model.upgrade_state_dict(tokengt_model.state_dict())
# up to these lines it works fine, no error
# Transformers model
config = TokenGTConfig(
tasks_weights=None, # added this
num_labels=1,
share_input_output_embed=False,
num_layers=12,
embedding_dim=768,
ffn_embedding_dim=768,
num_attention_heads=32,
dropout=0.0,
attention_dropout=0.1,
activation_dropout=0.1,
encoder_normalize_before=True,
pre_layernorm=False,
apply_graphormer_init=True,
activation_fn="gelu",
no_token_positional_embeddings=False,
)
transformers_model = TokenGTForGraphClassification(config)
state_dict = tokengt_model.state_dict()
transformers_model.load_state_dict(state_dict)  # this line shows me the following error
```
```
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for TokenGTForGraphClassification:
Missing key(s) in state_dict: "decoder.lm_output_learned_bias", "decoder.embed_out.weight".
Unexpected key(s) in state_dict: "encoder.lm_output_learned_bias", "encoder.embed_out.weight", "encoder.graph_encoder.final_layer_norm.weight", "encoder.graph_encoder.final_layer_norm.bias", "encoder.graph_encoder.graph_feature.orf_encoder.weight", "encoder.graph_encoder.graph_feature.order_encoder.weight".
size mismatch for encoder.graph_encoder.graph_feature.edge_encoder.weight: copying a param with shape torch.Size([1536, 768]) from checkpoint, the shape in current model is torch.Size([2048, 768]).
```
There are two checkpoints, lap16 and orf64.
Both give the same error,
except for
"encoder.graph_encoder.graph_feature.**lap**_encoder.weight"
"encoder.graph_encoder.graph_feature.**orf**_encoder.weight"
The errors are:
Missing key(s), Unexpected key(s), size mismatch
I need help @clefourrier
edit : adding num_edges=1536 in config removed size mismatch error<|||||>I think this should be managed with the `remove_ignore_keys_` and `rename_keys` parts: you need to find what the "unexpected keys" in the original checkpoint map to in the new format, and rename them accordingly. In essence, you are going from one format (tokenGT format) to another format (transformers style) for your checkpoint, so you need to do this mapping.
Congrats on debugging the other error! :clap: <|||||>Initially, I had no idea how to map them and to what. I don't even know what they mean. So, I spent some time studying transformers and looking at code.
Suddenly I thought, let's print the models.
So, I printed both the original model and the HF 🤗 model:
```
print(transformers_model)
print(tokengt_model)
```
and compared the differences.
Accordingly, I added these arguments to the config
```
# config for lap16
config = TokenGTConfig(
...
lap_node_id=True,
lap_node_id_k=16,
id2label = {"1":"className"}, # I added a dictionary explained below why I did this
type_id=True,
prenorm=True,
...
)
```
and renamed keys
```
rename_keys = [
("encoder.embed_out.weight", "decoder.embed_out.weight"),
# I did not find lm_output_learned_bias in the models, so I checked the code and doing this made the most sense
("encoder.lm_output_learned_bias", "decoder.lm_output_learned_bias"),
]
```
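(As a side note, here is a minimal sketch of how such a rename list is typically applied to the checkpoint before loading it; the helper name is mine, not something from the PR. The call at the bottom reuses the `tokengt_model`, `transformers_model` and `rename_keys` variables from the snippets above.)
```python
def apply_rename_keys(state_dict, rename_keys):
    # Move each old key's tensor to its new name so load_state_dict can find it.
    for old_key, new_key in rename_keys:
        if old_key in state_dict:
            state_dict[new_key] = state_dict.pop(old_key)
    return state_dict

state_dict = apply_rename_keys(tokengt_model.state_dict(), rename_keys)
transformers_model.load_state_dict(state_dict)
```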
Doing this works fine, no error.
If I don't add `id2label = {"1":"className"}`,
passing the argument `num_labels = 1` to `config = TokenGTConfig(...)` has no effect,
because `num_labels` gets a default value of `2` in `PretrainedConfig` (see the code below), the superclass of `TokenGTConfig(PretrainedConfig)`,
which would give a size mismatch error.
https://github.com/huggingface/transformers/blob/9d1116e9951686f937d17697820117636bfc05a5/src/transformers/configuration_utils.py#L326-L330
<|||||>It's really great to see your motivation, good job! :sparkles:
I'll try to check the code to confirm the key renames you made, but I think they do make sense because of the naming changes between the original and new models.
For the id2label, I don't think it is such a good idea to modify things outside of the TokenGT files - normally the parent class (`PretrainedConfig`) is overwritten by the child class (`TokenGTConfig`), are you sure this modification is happening here?
I think you could also try changing the `TokenGTConfig` `num_labels` default value to 1 instead of None and see what happens.<|||||>Yes, I am sure<|||||>Hi @Raman-Kumar !
I took some time to clean the code a bit and edited some parts, it should be better now for the problems you mentioned. If problems occur in the future, fyi the Graphormer code which was integrated in the lib is quite similar to this one, so you can look at how they are managed there.
Because of a mixup on my github I had to create a new PR for this https://github.com/huggingface/transformers/pull/21745 and this is where you'll find the new code. Hoping it helps you! :hugs: <|||||>Hi, @clefourrier
I had already figured it out, but I was very sick for a few days.
In the previous PR, I made three changes, after which it printed "All good :)"
1. changing `num_labels` to `num_classes` (after that there is no need to add `id2label`, which you suggested not adding)
2. In the file models/tokengt/configuration_tokengt.py,
`import torch.nn.functional as F` was missing
3. the `decode` name was written incorrectly in the `TokenGTForGraphClassification` class's forward function
I was just going to upload the newly created config.json and pytorch_model.bin files to my Hugging Face account.
Now I will look at the new PR and will send changes with tests and docs to it.
<|||||>That sounds good, these changes sound similar to the ones in the new PR.
I hope you take rest and get better soon :hugs: <|||||>Hi, back again
Uploaded converted checkpoint and config for
lap - https://huggingface.co/raman-ai/tokengt-base-lap-pcqm4mv2
orf - https://huggingface.co/raman-ai/tokengt-base-orf-pcqm4mv2
Now, I am writing tests,
I tried to push some changes to [PR](https://github.com/huggingface/transformers/pull/21745)
But it says something like authentication failed, do not have permission, etc.
How should I push new commits to your PR? @clefourrier
You may need to add me as a collaborator to your forked repo.
in my terminal
```
$ git remote -v
github-desktop-clefourrier https://github.com/clefourrier/transformers.git (fetch)
github-desktop-clefourrier https://github.com/clefourrier/transformers.git (push)
origin https://github.com/Raman-Kumar/transformers.git (fetch)
origin https://github.com/Raman-Kumar/transformers.git (push)
upstream https://github.com/huggingface/transformers.git (fetch)
upstream https://github.com/huggingface/transformers.git (push)
```<|||||>@Raman-Kumar added you to my fork!<|||||>I created a new PR #22042 just to make a lot of commits and see where CircleCI fails, so I can correct it.
Later I will do a single commit in your PR.
I have added a new dependency, `einops`, in setup.py. In the entire repo, this is the first time it is being used, in the tokengt model.
I added a TokenGTModelIntegrationTest, and it now passes all CircleCI checks.
I have a question. @clefourrier
How do I know the shapes of the inputs `node_data, num_nodes, edge_index, edge_data, edge_num, in_degree, out_degree, lap_eigvec, lap_eigval, labels`
of TokenGT for the `ids_tensor()` function?
Like in Graphormer
```
attn_bias = ids_tensor(
[self.batch_size, self.graph_size + 1, self.graph_size + 1], self.num_atoms
) # Def not sure here
attn_edge_type = ids_tensor([self.batch_size, self.graph_size, self.graph_size, 1], self.num_edges)
spatial_pos = ids_tensor([self.batch_size, self.graph_size, self.graph_size], self.num_spatial)
in_degree = ids_tensor([self.batch_size, self.graph_size], self.num_in_degree)
out_degree = ids_tensor([self.batch_size, self.graph_size], self.num_out_degree)
input_nodes = ids_tensor([self.batch_size, self.graph_size, 1], self.num_atoms)
input_edges = ids_tensor(
[self.batch_size, self.graph_size, self.graph_size, self.multi_hop_max_dist, 1], self.num_edges
)
labels = ids_tensor([self.batch_size], self.num_classes)
```
<|||||>Ok, great for the PR, and congrats for the tests!
For einops, do you need a lot of code? It would be better to copy-paste the functions we will need (citing them, and if the license allows, of course), as we only allow new dependencies for very specific cases.
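As an illustration of the dependency-free route (the exact einops patterns used in the model are not shown here, so this particular call is an assumption), a typical `rearrange` can usually be rewritten with plain tensor ops:
```python
import torch

# Dependency-free equivalent of: rearrange(x, "b n (h d) -> b h n d", h=num_heads)
def split_heads(x: torch.Tensor, num_heads: int) -> torch.Tensor:
    b, n, dim = x.shape
    head_dim = dim // num_heads
    return x.view(b, n, num_heads, head_dim).transpose(1, 2)

x = torch.randn(2, 5, 32)
print(split_heads(x, num_heads=8).shape)  # torch.Size([2, 8, 5, 4])
```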
For TokenGT, are you talking about the shape of inputs provided to the test suite?
Most attributes will have the same shape as for Graphormer (`batch_size` in position one, then `graph_size` or linked to it for inputs which look over the whole graph, like those pertaining to edges/nodes (includes the degrees for example)). The collation function should be able to help you with the specifics, since the shape must be provided there. Last resort, to confirm your intuition, you can also print all the dimensions for the elements you want.
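Not an authoritative answer, but mirroring the Graphormer pattern above, a rough sketch could look like the following; every shape and vocabulary size here is a guess that has to be checked against the TokenGT collator:
```python
import torch

def ids_tensor(shape, vocab_size):
    # plays the same role as the helper in the transformers test utilities
    return torch.randint(low=0, high=vocab_size, size=shape, dtype=torch.long)

batch_size, graph_size, max_edges, k = 2, 10, 20, 16

node_data = ids_tensor([batch_size, graph_size, 1], 4608)        # num_atoms -- guess
edge_index = ids_tensor([batch_size, 2, max_edges], graph_size)  # source/target node ids -- guess
edge_data = ids_tensor([batch_size, max_edges, 1], 1536)         # num_edges vocabulary -- guess
num_nodes = torch.full((batch_size,), graph_size)
edge_num = torch.full((batch_size,), max_edges)
in_degree = ids_tensor([batch_size, graph_size], 512)
out_degree = ids_tensor([batch_size, graph_size], 512)
lap_eigvec = torch.randn(batch_size, graph_size, k)              # float features, not ids -- guess
lap_eigval = torch.randn(batch_size, graph_size, k)
labels = torch.randn(batch_size, 1)                              # regression target for PCQM4Mv2
```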
|
transformers | 21,078 | closed | batched generate using forced_decoder_ids | ### Feature request
Currently, forced_decoder_ids only accepts a single list of [index,token_id] to force the decoder output given an input. However, it does not support batched output forcing, where input itself is a batch. Can we have this support to have forced_decoder_ids = List(List(List([int,int]))) where 0th dimension corresponds to batch dimension?
### Motivation
This will help forcing the output for different inputs simultaneously.
### Your contribution
I could help submit a PR, wanted to first understand the feasibility of feature request and/or other implications. | 01-10-2023 18:59:52 | 01-10-2023 18:59:52 | cc @gante <|||||>Hey @ghadiaravi13 ๐
I see no major technical hurdles to implementing the feature. However, before we go that route, what is the use case you expect from that feature? (As any increase in complexity, we should make sure it is done for a good reason :) )<|||||>Hi @gante!
The expected usage is when we use generate function over a batch of inputs, and want to force the decoder output for each input in the batch. Having batch support rather than iterating manually will have computational benefits, I suppose. You may correct me though.<|||||>@ghadiaravi13 `forced_decoder_ids` should already work at batch level, assuming you want the same forced tokens for all members of the batch. Doesn't this solve your use case?
The alternative, where each member of the batch has its own `forced_decoder_ids`, requires significantly increasing the complexity of the code. As such, to include it in `transformers`, we need some demonstration that it is a valued feature :)<|||||>Yes I was referring to the latter case actually. Sure I could imagine the increase in code complexity, just wanted to check. I'll stick with manually iterating for now. Thanks for responding!<|||||>> @ghadiaravi13 `forced_decoder_ids` should already work at batch level, assuming you want the same forced tokens for all members of the batch. Doesn't this solve your use case?
>
> The alternative, where each member of the batch has its own `forced_decoder_ids`, requires significantly increasing the complexity of the code. As such, to include it in `transformers`, we need some demonstration that it is a valued feature :)
I think it is crucial for cases where you want to force a different prompt for each member of the batch, e.g. training Whisper on transcribe and translate tasks in the same dataset. In this case, some members of the batch need to have the transcribe token forced and some the translate token.
How is it possible to solve it otherwise? <|||||>`.generate()` is not used at training time, so that question doesn't apply. See our [blog post on fine-tuning Whisper](https://huggingface.co/blog/fine-tune-whisper#prepare-feature-extractor-tokenizer-and-data) for further reference.
At inference time, it is possible to build a solution to handle both tasks at once. However, the benefits are small (vs separating different tasks in different data batches) and we'd have the burden of long-term maintenance of the code. I'd still encourage you to build your own custom `LogitsProcessor` to solve the problem if it is relevant to your use case -- we've built a modular codebase precisely so anyone can easily build their custom solutions without depending on us ๐ค
Finally, I'd like to mention that [Whisper has its own `.generate()` function](https://github.com/huggingface/transformers/blob/d3046dad809b7b10019b142ae20b49fb58d21c28/src/transformers/models/whisper/modeling_whisper.py#L1232) that easily abstracts the parameterization for each task.
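For anyone landing here, a minimal sketch of what such a custom per-sample `LogitsProcessor` could look like (the interface and details are purely illustrative, not an existing transformers feature, and it assumes `num_beams=1`; with beam search each row of `scores` would have to be mapped back to its originating sample):
```python
import torch
from transformers import LogitsProcessor


class PerSampleForcedTokensProcessor(LogitsProcessor):
    """Forces possibly different tokens per batch member at given generation steps.

    `forced_ids` is one list of (step, token_id) pairs per batch member.
    """

    def __init__(self, forced_ids, prompt_length):
        self.forced_ids = [dict(pairs) for pairs in forced_ids]
        self.prompt_length = prompt_length

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        step = input_ids.shape[-1] - self.prompt_length
        for batch_idx, forced in enumerate(self.forced_ids):
            if step in forced:
                scores[batch_idx, :] = -float("inf")
                scores[batch_idx, forced[step]] = 0.0
        return scores
```
It could then be passed to `model.generate(..., logits_processor=LogitsProcessorList([...]))`.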
|
transformers | 21,077 | closed | TypeError (NumPy concatenation) in modeling_wav2vec2 at _sample_negative_indices | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.27
- Python version: 3.9.15
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes - Driver Version: 495.29.05 CUDA Version: 11.5
- Using distributed or parallel set-up in script?: no
### Who can help?
transformers library
@patrickvonplaten
### Information
- The official example scripts
- [X] My own modified scripts
### Tasks
- An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Error log
```
speechbrain.utils.checkpoints - Would load a checkpoint here, but none found yet.
speechbrain.utils.epoch_loop - Going into epoch 1
speechbrain.core - Exception:
Traceback (most recent call last):
File "recipes/CommonVoice/self-supervised-learning/wav2vec2/train_hf_wav2vec2.py", line 357, in <module>
asr_brain.fit(
File "speechbrain/core.py", line 1207, in fit
self._fit_train(train_set=train_set, epoch=epoch, enable=enable)
File "speechbrain/core.py", line 1060, in _fit_train
loss = self.fit_batch(batch)
File recipes/CommonVoice/self-supervised-learning/wav2vec2/train_hf_wav2vec2.py", line 111, in fit_batch
predictions = self.compute_forward(batch, sb.Stage.TRAIN)
File "recipes/CommonVoice/self-supervised-learning/wav2vec2/train_hf_wav2vec2.py", line 54, in compute_forward
out, mask = self.modules.wav2vec2(wavs)
File "torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "speechbrain/lobes/models/huggingface_wav2vec.py", line 434, in forward
transformers.models.wav2vec2.modeling_wav2vec2._sample_negative_indices(
File "transformers/models/wav2vec2/modeling_wav2vec2.py", line 285, in _sample_negative_indices
sampled_negative_indices[batch_idx] += batch_idx * sequence_length
TypeError: Concatenation operation is not implemented for NumPy arrays, use np.concatenate() instead. Please do not rely on this error; it may not be given on all Python implementations.
```
where
```
numpy 1.23.4
scipy 1.8.1
```
This issue came up during [reworking testing for SpeechBrain](https://github.com/speechbrain/speechbrain/pull/1600). As part of refactoring & expanding our integration of the HuggingFace transformers library, we made sure to have testing for all SpeechBrain recipes. After lifting an extra_dependency restriction, this error occurred.
What was changed?
https://github.com/speechbrain/speechbrain/blob/801b1501b0bde2a940fcb71af44b69b07eafb9f5/recipes/CommonVoice/self-supervised-learning/wav2vec2/extra_requirements.txt#L1
to
https://github.com/anautsch/speechbrain/blob/b7e1b02a8cb3be81640c40c23a99d5af646a24e5/recipes/CommonVoice/self-supervised-learning/wav2vec2/extra_requirements.txt#L1
How to reproduce?
a) either run the [recipe script](https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonVoice/self-supervised-learning/wav2vec2) from scratch (might be too hardware-intensive)
b) or use a testing tool we created that runs the recipe in a very light debug mode
To use the recipe testing, please create an environment using this SpeechBrain version (from our mentioned PR).
```
git clone https://github.com/anautsch/speechbrain.git
cd speechbrain
git checkout refactor-recipe-testing
pip install -r requirements.txt
pip install transformers==4.25.1 huggingface-hub==0.11.1 datasets==2.7.1
pip install --editable .
```
The particular recipe can then be tested using this command:
```
python -c 'from tests.utils.recipe_tests import run_recipe_tests; print("TEST FAILED!") if not(run_recipe_tests(filters_fields=["Hparam_file"], filters=[["recipes/CommonVoice/self-supervised-learning/wav2vec2/hparams/wav2vec2_base.yaml"]], do_checks=False, run_opts="--device=cuda")) else print("TEST PASSED")'
```
This will result in
```
(1/1) Running test for CommonVoice_row_18...
ERROR: Error in CommonVoice_row_18 (recipes/CommonVoice/self-supervised-learning/wav2vec2/hparams/wav2vec2_base.yaml). Check tests/tmp/CommonVoice_row_18/stderr.txt and tests/tmp/CommonVoice_row_18/stdout.txt for more info.
```
and the above stacktrace is available through either `cat`
```
cat tests/tmp/CommonVoice_row_18/std*
cat tests/tmp/CommonVoice_row_18/log.txt
```
I started an [issue on our end](https://github.com/speechbrain/speechbrain/issues/1787) as part of keeping track of all issues that surfaced during testing all SpeechBrain recipes. There, the suggestion is to reintroduce the dependency restriction `transformers==4.15`.
### Expected behavior
It would be great to lift all extra_dependency restrictions in SpeechBrain recipes and move them to the latest versions, e.g., `transformers>=4.25.2` instead of fixing it to a specific & older version (v4.15 dates back to Dec 22, 2021). | 01-10-2023 16:18:04 | 01-10-2023 16:18:04 | cc @sanchit-gandhi <|||||>Hey @anautsch! Thanks for opening this issue ๐ค Would it be possible to provide a reproducible code snippet that only uses `transformers`? This way we can pinpoint the exact issue in the library.
In an attempt to try and reproduce your issue, I tested the Wav2Vec2 pretraining script from the `transformers` library:
[run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py). However, I was not able to reproduce the error you encountered; both the script and model worked fine without the aforementioned numpy concatenation error.
In these tests, I used the ['base' Wav2Vec2 model](https://huggingface.co/patrickvonplaten/wav2vec2-base-v2/blob/main/config.json) and pre-trained on a [dummy dataset](https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy/tree/main) consisting of 73 samples from the LibriSpeech ASR corpus (~9MB of data).
You can reproduce this dummy run using the following command:
```
accelerate launch run_wav2vec2_pretraining_no_trainer.py \
--dataset_name="hf-internal-testing/librispeech_asr_dummy" \
--dataset_config_name="clean" \
--dataset_split_names validation \
--model_name_or_path="patrickvonplaten/wav2vec2-base-v2" \
--output_dir="./wav2vec2-pretrain-issue" \
--num_train_epoch="1" \
--max_duration_in_seconds="20.0" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="8" \
--validation_split_percentage="10" \
--gradient_checkpointing
```
**Print Output:**
```
Gradients have overflown - skipping update step... Updating gradient scale to 65536.0...
Gradients have overflown - skipping update step... Updating gradient scale to 32768.0...
Gradients have overflown - skipping update step... Updating gradient scale to 16384.0...
Gradients have overflown - skipping update step... Updating gradient scale to 8192.0...
Gradients have overflown - skipping update step... Updating gradient scale to 4096.0...
Gradients have overflown - skipping update step... Updating gradient scale to 2048.0...
| val_loss: 4.825e+00| val_contrastive_loss: 4.732e+00| val_diversity_loss: 9.208e-01| val_num_losses: 1.000e+00
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 7/7 [00:09<00:00, 1.08it/s]
Configuration saved in ./wav2vec2-pretrain-issue/config.json
Model weights saved in ./wav2vec2-pretrain-issue/pytorch_model.bin
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 7/7 [00:11<00:00, 1.63s/it]
```
As you can see, the script and model worked fine for me here! If you can provide a similar code snippet that demonstrates your issue that would be grand!<|||||>Hi @sanchit-gandhi thank you for providing an alternate example (I couldn't get it running right away). But thanks for nudging me towards a minimal example. After looking at the inputs, it turned out we were giving as input arguments for `(batch_size, sequence_length)` a `(2, tensor(157))` mix. It worked when using `.item()` on a previous variable, so the input argument is `(2, 157)` instead.<|||||>Hey @anautsch! Very glad to hear that - best of luck with your development! |
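For reference, the fix described above boils down to converting the tensor-valued length to a plain Python int before building the shape tuple; variable names below are illustrative and only the call pattern mirrors the SpeechBrain usage:
```python
import torch
from transformers.models.wav2vec2.modeling_wav2vec2 import _sample_negative_indices

batch_size = 2
seq_len = torch.tensor(157)          # what was being passed before: a 0-dim tensor
sequence_length = int(seq_len.item())  # the fix: a plain Python int

negatives = _sample_negative_indices(
    features_shape=(batch_size, sequence_length),
    num_negatives=100,
    mask_time_indices=None,
)
print(negatives.shape)  # (2, 157, 100)
```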
transformers | 21,076 | closed | Pushing T5ForConditionalGeneration to hub | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`trained_model.model.push_to_hub("model")`
```AttributeError Traceback (most recent call last)
[<ipython-input-21-b193d55bd628>](https://localhost:8080/#) in <module>
----> 1 trained_model.model.push_to_hub("model")
[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in __getattr__(self, name)
1263 if name in modules:
1264 return modules[name]
-> 1265 raise AttributeError("'{}' object has no attribute '{}'".format(
1266 type(self).__name__, name))
1267
AttributeError: 'T5ForConditionalGeneration' object has no attribute 'push_to_hub'
```
### Expected behavior
I want to push T5ForConditionalGeneration to the hub, it doesn't work, I don't know if you need more info | 01-10-2023 12:59:02 | 01-10-2023 12:59:02 | It will be very hard to help you without a reproducible example. `T5ForConditionalGeneration` is a `PreTrainedModel` and so does have a `push_to_hub` method.<|||||>Oh, I'm sorry, @sgugger, here is my code
https://colab.research.google.com/drive/1a59B4e8AooTFkUeMOs5Zsa0DIBWGM3k6?usp=sharing
the relevant part is at the bottom<|||||>during making a minimal example, the code started working, sorry for wasting your time, have a great day! |
transformers | 21,075 | closed | CompVis/stable-diffusion-v1-4 does not appear to have a file named tokenizer/config.json | ### System Info
When I use AutoTokenizer to load the tokenizer, I use the code below:
tokenizer = transformers.AutoTokenizer.from_pretrained(
args.pretrained_model_name_or_path,
subfolder="tokenizer",
revision=args.revision,
use_fast=False,
)
but I found it can't get the right tokenizer_config.json file. Indeed, the function tries to find a config.json file instead of the tokenizer_config.json file, so I don't know how to solve it.
Traceback (most recent call last):
  File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 564, in from_pretrained
    561            # If that did not work, let's try to use the config.
    562            if config_tokenizer_class is None:
    563                if not isinstance(config, PretrainedConfig):
  > 564                    config = AutoConfig.from_pretrained(
    565                        pretrained_model_name_or_path, trust_remote_code=trust_remote_code,
    566                    )
    567                config_tokenizer_class = config.tokenizer_class
  File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 746, in from_pretrained
    743            kwargs["_from_auto"] = True
    744            kwargs["name_or_path"] = pretrained_model_name_or_path
    745            trust_remote_code = kwargs.pop("trust_remote_code", False)
  > 746            config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_n...
    747            if "auto_map" in config_dict and "AutoConfig" in config_dict["auto_map"]:
    748                if not trust_remote_code:
    749                    raise ValueError(
  File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/configuration_utils.py", line 553, in get_config_dict
    550            """
    551            original_kwargs = copy.deepcopy(kwargs)
    552            # Get config dict associated with the base config file
  > 553            config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwar...
    554            if "_commit_hash" in config_dict:
    555                original_kwargs["_commit_hash"] = config_dict["_commit_hash"]
    556
  File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/configuration_utils.py", line 608, in _get_config_dict
    605
    606            try:
    607                # Load from local folder or from cache or download from model Hub and ca...
  > 608                resolved_config_file = cached_file(
    609                    pretrained_model_name_or_path,
    610                    configuration_file,
    611                    cache_dir=cache_dir,
  File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/utils/hub.py", line 453, in cached_file
    450                return None
    451            if revision is None:
    452                revision = "main"
  > 453            raise EnvironmentError(
    454                f"{path_or_repo_id} does not appear to have a file named {full_filename}. Ch...
    455                f"'https://huggingface.co/{path_or_repo_id}/{revision}' for available files...
    456            )
OSError: CompVis/stable-diffusion-v1-4 does not appear to have a file named tokenizer/config.json. Checkout 'https://huggingface.co/CompVis/stable-diffusion-v1-4/main' for available files.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
tokenizer = transformers.AutoTokenizer.from_pretrained(
args.pretrained_model_name_or_path,
subfolder="tokenizer",
revision=args.revision,
use_fast=False,
)
### Expected behavior
Can someone help me ? | 01-10-2023 03:43:04 | 01-10-2023 03:43:04 | The following sample runs without any issue:
```py
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"CompVis/stable-diffusion-v1-4",
subfolder="tokenizer",
use_fast=False,
)
```
Please include a code reproducer of your issue.<|||||>> The following sample runs without any issue:
>
> ```python
> from transformers import AutoTokenizer
>
> tokenizer = AutoTokenizer.from_pretrained(
> "CompVis/stable-diffusion-v1-4",
> subfolder="tokenizer",
> use_fast=False,
> )
> ```
>
> Please include a code reproducer of your issue.
`from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"CompVis/stable-diffusion-v1-4",
subfolder="tokenizer",
use_fast=False,
)`
### **I used your codes as above, but still get the same error:**
(diffusion) qll@longyuan:/data/qll/ColossalAI_2/ColossalAI/examples/images/dreambooth$ python test.py
Traceback (most recent call last):
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 239, in hf_raise_for_status
response.raise_for_status()
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/tokenizer/config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/utils/hub.py", line 408, in cached_file
resolved_file = hf_hub_download(
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1067, in hf_hub_download
metadata = get_hf_file_metadata(
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1376, in get_hf_file_metadata
hf_raise_for_status(r)
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 257, in hf_raise_for_status
raise EntryNotFoundError(message, response) from e
huggingface_hub.utils._errors.EntryNotFoundError: 404 Client Error. (Request ID: Root=1-63be11d8-73a357726aa9511757d467c4)
Entry Not Found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/tokenizer/config.json.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/qll/ColossalAI_2/ColossalAI/examples/images/dreambooth/test.py", line 3, in <module>
tokenizer = AutoTokenizer.from_pretrained(
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 564, in from_pretrained
config = AutoConfig.from_pretrained(
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 746, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/configuration_utils.py", line 553, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/configuration_utils.py", line 608, in _get_config_dict
resolved_config_file = cached_file(
File "/home/qll/anaconda3/envs/diffusion/lib/python3.10/site-packages/transformers/utils/hub.py", line 453, in cached_file
raise EnvironmentError(
OSError: CompVis/stable-diffusion-v1-4 does not appear to have a file named tokenizer/config.json. Checkout 'https://huggingface.co/CompVis/stable-diffusion-v1-4/main' for available files.
<|||||>> The following sample runs without any issue:
>
> ```python
> from transformers import AutoTokenizer
>
> tokenizer = AutoTokenizer.from_pretrained(
> "CompVis/stable-diffusion-v1-4",
> subfolder="tokenizer",
> use_fast=False,
> )
> ```
>
> Please include a code reproducer of your issue.
transformers 4.22.2
torch 1.12.1+cu113
<|||||>You should upgrade to the latest version of Transformers, this is probably why I don't have the bug on my side, it has been fixed.<|||||>> You should upgrade to the latest version of Transformers, this is probably why I don't have the bug on my side, it has been fixed.
thx, it works by upgrading to the latest version of Transformers! |
transformers | 21,074 | closed | [WIP]add transformer transducer | # What does this PR do?
#20961
This PR adds [Transformer-Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss](https://arxiv.org/abs/2002.02562)
This model is a streaming model that recognizes text from audio in real time, and there is no site where the model weights have been uploaded.
Transformer-Transducer implementation: https://github.com/jp1924/transformer-transducer
RNN-Transducer reference: [https://lorenlugosch.github.io/posts/2020/11/transducer/](https://lorenlugosch.github.io/posts/2020/11/transducer/)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Models: speech models
@sanchit-gandhi
Library: generate
@gante
Maintained examples: pytorch
@sgugger
| 01-10-2023 03:16:28 | 01-10-2023 03:16:28 | Hey @jp1924! As discussed on the issue https://github.com/huggingface/transformers/issues/20961#issuecomment-1382245091, it's not possible to add a model to `transformers` when official weights are not available (please refer to the message thread for more details).
I would advise that you focus the implementation on a Transformer-Transducer codebase where strong pre-trained weights are available and open-sourced. I'm more than happy to help find a suitable codebase + weights to port! This would be a valuable addition to the `transformers` library.<|||||>hey @sanchit-gandhi
I saw several papers (like [MS T-T](https://arxiv.org/pdf/1910.12977.pdf), [Google T-T](https://arxiv.org/pdf/2002.02562.pdf), [Facebook T-T](https://arxiv.org/pdf/2010.11395.pdf)) related to the T-T model, but there is no official GitHub code referenced in those papers, unlike BERT. So, while looking for an alternative, I found Transformer-Transducer implemented in a library called [openspeech](https://github.com/openspeech-team/openspeech).
The model weights are not disclosed, but there is code that can train a [T-T](https://github.com/openspeech-team/openspeech/tree/main/openspeech/models/transformer_transducer). So I'm thinking of using openspeech to get the weights of the T-T first and then transferring the model and code to Hugging Face, is that possible?<|||||>Hey @jp1924!
Cool that you've been able to dig so deeply into the ASR literature! Indeed, these are all fantastic research papers that highlight the strong performance of the T-T architecture. It's a real pity that neither MS, Google nor Meta released open-sourced weights for these papers, as I think a performant T-T model would be of great use to the open-source community, especially for low-latency/on-device applications.
Unfortunately, again with OpenSpeech it's an unofficial implementation without weights, so we probably can't add this to `transformers` either.
I'm really sorry you've invested time here without luck finding a performant set of open-source T-T weights. I had a look through your wandb logs, it does look as though the model is working. We can leave this PR open if you want to continue iterating and provide updates, but we won't be able to merge it to `transformers` without weights from a well established research paper (e.g. from MS, Google, Meta, etc)<|||||>Hey @sanchit-gandhi @flozi00 @fxtentacle @YooSungHyun!
Thank you so much for your interest in the T-T model! Unfortunately, it looks like we'll have to close the PR.....
I understand that this sophistication and prudence makes Transformers even better! Most of all, it was really nice to have access to Transformers' philosophy and new features!
---
Hey @sanchit-gandhi!
I have a question about the PR. What counts as official code and weights?
I think the Emformer of #17302 is an example. The Emformer paper does not have official GitHub code & weights, the way BERT does.
However, the Emformer code and weights have been uploaded to torchaudio. So my question is, can I contribute to Transformers by using the code and weights from a certified library even if it's not the code listed in the official paper?<|||||>Hey @jp1924! Great question! The short answer is: code and weights in a certified library = yes! Just code = no
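As an illustration of the torchaudio route mentioned above (API names are per torchaudio's pipelines module and worth double-checking against the installed version), the pre-trained Emformer RNN-T weights can be pulled like this:
```python
import torchaudio

# Pre-trained Emformer RNN-T bundle shipped with torchaudio, which could serve
# as the source checkpoint for a transformers port.
bundle = torchaudio.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH
decoder = bundle.get_decoder()                 # RNN-T model + decoder with pre-trained weights
feature_extractor = bundle.get_feature_extractor()
token_processor = bundle.get_token_processor()
```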
transformers | 21,073 | closed | Pre-trained tokenizer `repr` is inconsistent with attribute name | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.3 (cpu)
- Jax version: 0.4.1
- JaxLib version: 0.4.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
Hi @ArthurZucker, since this is tokeniser-related, do you mind having a look?
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This one's pretty straightforward:
1. Using a pre-trained tokeniser (`PreTrainedTokenizerFast` or `PreTrainedTokenizerBase`), print out the object.
2. The `repr`, which is defined in [`tokenization_utils_base`](https://github.com/huggingface/transformers/blob/a3c37825cc1e305dde63455b5f321586e6d29e07/src/transformers/tokenization_utils_base.py#L1573), returns something like this:
`PreTrainedTokenizerFast(name_or_path='gpt2', vocab_size=50257, model_max_len=1024, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '[PAD]'})`
3. Note the `model_max_len` attribute.
### Expected behavior
The repr should display `model_max_length=1024` instead, since that is the actual name of the attribute. Other attribute labels in the repr seem consistent with the name, which leads me to believe this is a typo.
I came across this because I printed out the object, and then tried to access that tokeniser's `model_max_len`, which of course errors out since there's no attribute with that name. | 01-10-2023 00:37:40 | 01-10-2023 00:37:40 | The `PreTrainedTokenizerFast` and `PreTrainedTokenizerBase` are abstract classes and should not really be used. The `model_max_len` is a vestige of a previous argument, opening a PR to fix this typo as indeed the attribute does not exist. |
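For reference, a quick check of the actual attribute name (only the repr is wrong, the attribute itself is spelled in full):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(tokenizer.model_max_length)  # 1024
# tokenizer.model_max_len would raise AttributeError, as described above
```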
transformers | 21,072 | closed | Fix header level | Fixes header level for the last two sections in the pipeline tutorial. | 01-10-2023 00:15:05 | 01-10-2023 00:15:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,071 | closed | Fix git model for generate with beam search. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21070
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada @gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 01-10-2023 00:04:07 | 01-10-2023 00:04:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey, there seems to be an issue with the Circle CI. Will ping @sgugger for this.
Can you add the error you are getting on your issue?
I think we should also add a test to make sure that this model is run; the fix LGTM, thanks<|||||>It seems there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>Thanks! Can you now run `make style` to fix the test failure you see on the CI?<|||||>@NielsRogge Please help adding the test. I am having limited capacity here. |
transformers | 21,070 | closed | GIT does not work with beam search | ### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help?
@gante @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Take the script from official doc (https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM)
Add `num_beams=3`. Git model will report error.
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import requests
from PIL import Image
processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50, num_beams=3)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
```
### Expected behavior
GIT model should work with beam search. | 01-10-2023 00:02:07 | 01-10-2023 00:02:07 |