Use case 4: web page question answering (inference), parse_html=True
For question answering tasks on web pages, you can provide a question to the processor. By default, the
processor will use the feature extractor to get all nodes and xpaths, and create [CLS] question tokens [SEP] word tokens [SEP].
thon |
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
html_string = """
<!DOCTYPE html>
<html>
<head>
<title>Hello world</title>
</head>
<body>
<h1>Welcome</h1>
<p>My name is Niels.</p>
</body>
</html>
"""
question = "What's his name?"
encoding = processor(html_string, questions=question, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq']) |
Use case 5: web page question answering (inference), parse_html=False
For question answering tasks (such as WebSRC), you can provide a question to the processor. If you have extracted
all nodes and xpaths yourself, you can provide them directly to the processor. Make sure to set parse_html to False.
thon |
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False
nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
question = "What's his name?"
encoding = processor(nodes=nodes, xpaths=xpaths, questions=question, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq']) |
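To turn such an encoding into an actual answer, one can run it through [MarkupLMForQuestionAnswering] and decode the predicted span. The sketch below is illustrative only: it reuses the base checkpoint, whose question-answering head is not fine-tuned, so a checkpoint fine-tuned on a dataset such as WebSRC is needed for meaningful answers.

```python
# Illustrative sketch: extract the predicted answer span from the `encoding` created above.
# Note: "microsoft/markuplm-base" has no fine-tuned QA head, so swap in a QA fine-tuned
# checkpoint before relying on the predicted span.
import torch
from transformers import MarkupLMForQuestionAnswering

model = MarkupLMForQuestionAnswering.from_pretrained("microsoft/markuplm-base")

with torch.no_grad():
    outputs = model(**encoding)

# take the most likely start/end token positions and decode the span in between
start_index = outputs.start_logits.argmax(-1).item()
end_index = outputs.end_logits.argmax(-1).item()
answer = processor.decode(encoding.input_ids[0, start_index : end_index + 1], skip_special_tokens=True)
print(answer)
```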
Resources
Demo notebooks
Text classification task guide
Token classification task guide
Question answering task guide |
MarkupLMConfig
[[autodoc]] MarkupLMConfig
- all
MarkupLMFeatureExtractor
[[autodoc]] MarkupLMFeatureExtractor
- call
MarkupLMTokenizer
[[autodoc]] MarkupLMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
MarkupLMTokenizerFast
[[autodoc]] MarkupLMTokenizerFast
- all
MarkupLMProcessor
[[autodoc]] MarkupLMProcessor
- call
MarkupLMModel
[[autodoc]] MarkupLMModel
- forward
MarkupLMForSequenceClassification
[[autodoc]] MarkupLMForSequenceClassification
- forward
MarkupLMForTokenClassification
[[autodoc]] MarkupLMForTokenClassification
- forward
MarkupLMForQuestionAnswering
[[autodoc]] MarkupLMForQuestionAnswering
- forward |
BEiT
Overview
The BEiT model was proposed in BEiT: BERT Pre-Training of Image Transformers by
Hangbo Bao, Li Dong and Furu Wei. Inspired by BERT, BEiT is the first paper that makes self-supervised pre-training of
Vision Transformers (ViTs) outperform supervised pre-training. Rather than pre-training the model to predict the class
of an image (as done in the original ViT paper), BEiT models are pre-trained to
predict visual tokens from the codebook of OpenAI's DALL-E model given masked
patches.
The abstract from the paper is the following:
We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation
from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image
modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e, image
patches (such as 16x16 pixels), and visual tokens (i.e., discrete tokens). We first "tokenize" the original image into
visual tokens. Then we randomly mask some image patches and fed them into the backbone Transformer. The pre-training
objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we
directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder.
Experimental results on image classification and semantic segmentation show that our model achieves competitive results
with previous pre-training methods. For example, base-size BEiT achieves 83.2% top-1 accuracy on ImageNet-1K,
significantly outperforming from-scratch DeiT training (81.8%) with the same setup. Moreover, large-size BEiT obtains
86.3% only using ImageNet-1K, even outperforming ViT-L with supervised pre-training on ImageNet-22K (85.2%).
This model was contributed by nielsr. The JAX/FLAX version of this model was
contributed by kamalkraj. The original code can be found here.
Usage tips |
BEiT models are regular Vision Transformers, but pre-trained in a self-supervised way rather than supervised. They
outperform both the original model (ViT) and Data-efficient Image Transformers (DeiT) when fine-tuned on ImageNet-1K and CIFAR-100. You can check out demo notebooks regarding inference as well as
fine-tuning on custom data here (you can just replace
[ViTFeatureExtractor] by [BeitImageProcessor] and
[ViTForImageClassification] by [BeitForImageClassification]).
There's also a demo notebook available which showcases how to combine DALL-E's image tokenizer with BEiT for
performing masked image modeling. You can find it here.
As the BEiT models expect each image to be of the same size (resolution), one can use
[BeitImageProcessor] to resize (or rescale) and normalize images for the model.
Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of
each checkpoint. For example, microsoft/beit-base-patch16-224 refers to a base-sized architecture with patch
resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the hub.
The available checkpoints are either (1) pre-trained on ImageNet-22k (a collection of
14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million
images and 1,000 classes).
BEiT uses relative position embeddings, inspired by the T5 model. During pre-training, the authors shared the
relative position bias among the several self-attention layers. During fine-tuning, each layer's relative position
bias is initialized with the shared relative position bias obtained after pre-training. Note that, if one wants to
pre-train a model from scratch, one needs to set either the use_relative_position_bias or the
use_shared_relative_position_bias attribute of [BeitConfig] to True in order to add
position embeddings. |
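As a quick illustration of the tips above, here is a minimal image classification sketch using the microsoft/beit-base-patch16-224 checkpoint mentioned earlier; any other fine-tuned BEiT checkpoint can be substituted.

```python
# Minimal inference sketch with a BEiT checkpoint fine-tuned on ImageNet-1k.
import requests
import torch
from PIL import Image
from transformers import BeitImageProcessor, BeitForImageClassification

processor = BeitImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
model = BeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# the image processor resizes/rescales and normalizes the image as described above
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```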
BEiT pre-training. Taken from the original paper.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BEiT.
[BeitForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide |
Semantic segmentation
- Semantic segmentation task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
BEiT specific outputs
[[autodoc]] models.beit.modeling_beit.BeitModelOutputWithPooling
[[autodoc]] models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling
BeitConfig
[[autodoc]] BeitConfig
BeitFeatureExtractor
[[autodoc]] BeitFeatureExtractor
- call
- post_process_semantic_segmentation
BeitImageProcessor
[[autodoc]] BeitImageProcessor
- preprocess
- post_process_semantic_segmentation |
BeitModel
[[autodoc]] BeitModel
- forward
BeitForMaskedImageModeling
[[autodoc]] BeitForMaskedImageModeling
- forward
BeitForImageClassification
[[autodoc]] BeitForImageClassification
- forward
BeitForSemanticSegmentation
[[autodoc]] BeitForSemanticSegmentation
- forward
FlaxBeitModel
[[autodoc]] FlaxBeitModel
- call
FlaxBeitForMaskedImageModeling
[[autodoc]] FlaxBeitForMaskedImageModeling
- call
FlaxBeitForImageClassification
[[autodoc]] FlaxBeitForImageClassification
- call |
RoCBert
Overview
The RoCBert model was proposed in RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining by Hui Su, Weiwei Shi, Xiaoyu Shen, Xiao Zhou, Tuo Ji, Jiarui Fang, Jie Zhou.
It's a pretrained Chinese language model that is robust under various forms of adversarial attacks.
The abstract from the paper is the following:
Large-scale pretrained language models have achieved SOTA results on NLP tasks. However, they have been shown
vulnerable to adversarial attacks especially for logographic languages like Chinese. In this work, we propose
ROCBERT: a pretrained Chinese Bert that is robust to various forms of adversarial attacks like word perturbation,
synonyms, typos, etc. It is pretrained with the contrastive learning objective which maximizes the label consistency
under different synthesized adversarial examples. The model takes as input multimodal information including the
semantic, phonetic and visual features. We show all these features are important to the model robustness since the
attack can be performed in all the three forms. Across 5 Chinese NLU tasks, ROCBERT outperforms strong baselines under
three blackbox adversarial algorithms without sacrificing the performance on clean testset. It also performs the best
in the toxic content detection task under human-made attacks.
This model was contributed by weiweishi.
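This section does not ship a usage example, so here is a minimal masked language modeling sketch. The checkpoint name weiweishi/roc-bert-base-zh is an assumption; the tokenizer also builds the shape and pronunciation ids that the multimodal model consumes.

```python
# A minimal masked-LM sketch (the checkpoint name is an assumption; adjust as needed).
import torch
from transformers import AutoTokenizer, RoCBertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
model = RoCBertForMaskedLM.from_pretrained("weiweishi/roc-bert-base-zh")

# "这是一个[MASK]。" -- "This is a [MASK]."
# Besides input_ids, the tokenizer also returns input_shape_ids and input_pronunciation_ids.
inputs = tokenizer("这是一个[MASK]。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# pick the most likely token at the masked position
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```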
Resources |
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide |
RoCBertConfig
[[autodoc]] RoCBertConfig
- all
RoCBertTokenizer
[[autodoc]] RoCBertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
RoCBertModel
[[autodoc]] RoCBertModel
- forward
RoCBertForPreTraining
[[autodoc]] RoCBertForPreTraining
- forward
RoCBertForCausalLM
[[autodoc]] RoCBertForCausalLM
- forward
RoCBertForMaskedLM
[[autodoc]] RoCBertForMaskedLM
- forward
RoCBertForSequenceClassification
[[autodoc]] transformers.RoCBertForSequenceClassification
- forward
RoCBertForMultipleChoice
[[autodoc]] transformers.RoCBertForMultipleChoice
- forward
RoCBertForTokenClassification
[[autodoc]] transformers.RoCBertForTokenClassification
- forward
RoCBertForQuestionAnswering
[[autodoc]] RoCBertForQuestionAnswering
- forward |
SwiftFormer
Overview
The SwiftFormer model was proposed in SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.
The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called 'SwiftFormer' is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.
The abstract from the paper is the following:
Self-attention has become a defacto choice for capturing global context in various vision applications. However, its quadratic computational complexity with respect to image resolution limits its use in real-time applications, especially for deployment on resource-constrained mobile devices. Although hybrid approaches have been proposed to combine the advantages of convolutions and self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain a bottleneck. In this work, we introduce a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations with linear element-wise multiplications. Our design shows that the key-value interaction can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient formulation of self-attention enables its usage at all stages of the network. Using our proposed efficient additive attention, we build a series of models called "SwiftFormer" which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2x faster compared to MobileViT-v2.
This model was contributed by shehan97.
The original code can be found here.
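There is no usage snippet in this section, so here is a minimal classification sketch; the checkpoint name MBZUAI/swiftformer-xs is an assumption, check the hub for the released SwiftFormer checkpoints.

```python
# Minimal inference sketch (checkpoint name assumed).
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, SwiftFormerForImageClassification

checkpoint = "MBZUAI/swiftformer-xs"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SwiftFormerForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```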
SwiftFormerConfig
[[autodoc]] SwiftFormerConfig
SwiftFormerModel
[[autodoc]] SwiftFormerModel
- forward
SwiftFormerForImageClassification
[[autodoc]] SwiftFormerForImageClassification
- forward |
SeamlessM4T-v2
Overview
The SeamlessM4T-v2 model was proposed in Seamless: Multilingual Expressive and Streaming Speech Translation by the Seamless Communication team from Meta AI.
SeamlessM4T-v2 is a collection of models designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. It is an improvement on the previous version. For more details on the differences between v1 and v2, refer to section Difference with SeamlessM4T-v1.
SeamlessM4T-v2 enables multiple tasks without relying on separate models: |
Speech-to-speech translation (S2ST)
Speech-to-text translation (S2TT)
Text-to-speech translation (T2ST)
Text-to-text translation (T2TT)
Automatic speech recognition (ASR) |
[SeamlessM4Tv2Model] can perform all the above tasks, but each task also has its own dedicated sub-model.
The abstract from the paper is the following:
Recent advancements in automatic speech translation have dramatically expanded language coverage, improved multimodal capabilities, and enabled a wide range of tasks and functionalities. That said, large-scale automatic speech translation systems today lack key features that help machine-mediated communication feel seamless when compared to human-to-human dialogue. In this work, we introduce a family of models that enable end-to-end expressive and multilingual translations in a streaming fashion. First, we contribute an improved version of the massively multilingual and multimodal SeamlessM4T model—SeamlessM4T v2. This newer model, incorporating an updated UnitY2 framework, was trained on more low-resource language data. The expanded version of SeamlessAlign adds 114,800 hours of automatically aligned data for a total of 76 languages. SeamlessM4T v2 provides the foundation on which our two newest models, SeamlessExpressive and SeamlessStreaming, are initiated. SeamlessExpressive enables translation that preserves vocal styles and prosody. Compared to previous efforts in expressive speech research, our work addresses certain underexplored aspects of prosody, such as speech rate and pauses, while also preserving the style of one’s voice. As for SeamlessStreaming, our model leverages the Efficient Monotonic Multihead Attention (EMMA) mechanism to generate low-latency target translations without waiting for complete source utterances. As the first of its kind, SeamlessStreaming enables simultaneous speech-to-speech/text translation for multiple source and target languages. To understand the performance of these models, we combined novel and modified versions of existing automatic metrics to evaluate prosody, latency, and robustness. For human evaluations, we adapted existing protocols tailored for measuring the most relevant attributes in the preservation of meaning, naturalness, and expressivity. To ensure that our models can be used safely and responsibly, we implemented the first known red-teaming effort for multimodal machine translation, a system for the detection and mitigation of added toxicity, a systematic evaluation of gender bias, and an inaudible localized watermarking mechanism designed to dampen the impact of deepfakes. Consequently, we bring major components from SeamlessExpressive and SeamlessStreaming together to form Seamless, the first publicly available system that unlocks expressive cross-lingual communication in real-time. In sum, Seamless gives us a pivotal look at the technical foundation needed to turn the Universal Speech Translator from a science fiction concept into a real-world technology. Finally, contributions in this work—including models, code, and a watermark detector—are publicly released and accessible at the link below.
Usage
In the following example, we'll load an Arabic audio sample and an English text sample and convert them into Russian speech and French text.
First, load the processor and a checkpoint of the model:
thon |
from transformers import AutoProcessor, SeamlessM4Tv2Model
processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large")
You can seamlessly use this model on text or on audio, to generate either translated text or translated audio.
Here is how to use the processor to process text and audio:
thon |
let's load an audio sample from an Arabic speech corpus
from datasets import load_dataset
dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True)
audio_sample = next(iter(dataset))["audio"]
now, process it
audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt")
now, process some English text as well
text_inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt") |
Speech
[SeamlessM4Tv2Model] can seamlessly generate text or speech with few or no changes. Let's target Russian voice translation:
thon
audio_array_from_text = model.generate(text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze()
audio_array_from_audio = model.generate(audio_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() |
With basically the same code, we have translated English text and Arabic speech into Russian speech samples.
Text
Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass generate_speech=False to [SeamlessM4Tv2Model.generate].
This time, let's translate to French.
thon |
from audio
output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False)
translated_text_from_audio = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
from text
output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)
translated_text_from_text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True) |
Tips
1. Use dedicated models
[SeamlessM4Tv2Model] is the top-level Transformers model for generating speech and text, but you can also use dedicated models that perform the task without additional components, thus reducing the memory footprint.
For example, you can replace the audio-to-audio generation snippet with the model dedicated to the S2ST task; the rest of the code is exactly the same:
thon |
from transformers import SeamlessM4Tv2ForSpeechToSpeech
model = SeamlessM4Tv2ForSpeechToSpeech.from_pretrained("facebook/seamless-m4t-v2-large")
Or you can replace the text-to-text generation snippet with the model dedicated to the T2TT task; you only have to remove generate_speech=False.
thon
from transformers import SeamlessM4Tv2ForTextToText
model = SeamlessM4Tv2ForTextToText.from_pretrained("facebook/seamless-m4t-v2-large") |
Feel free to try out [SeamlessM4Tv2ForSpeechToText] and [SeamlessM4Tv2ForTextToSpeech] as well.
2. Change the speaker identity
You can change the speaker used for speech synthesis with the speaker_id argument. Some speaker_id values work better than others for certain languages (see the sketch after these tips).
3. Change the generation strategy
You can use different generation strategies for text generation, e.g. .generate(input_ids=input_ids, text_num_beams=4, text_do_sample=True), which will perform multinomial beam-search decoding on the text model. Note that speech generation only supports greedy decoding (the default) or multinomial sampling, which can be used with e.g. .generate(..., speech_do_sample=True, speech_temperature=0.6).
4. Generate speech and text at the same time
Use return_intermediate_token_ids=True with [SeamlessM4Tv2Model] to return both speech and text!
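Putting tips 2-4 together, here is a minimal sketch that continues from the snippets above; the speaker_id value and sampling settings are arbitrary, and the output attribute names assume the standard generation output of [SeamlessM4Tv2Model].

```python
# Sketch combining the tips above: pick a speaker, beam-search the text decoder,
# sample the speech decoder, and return the intermediate text tokens as well.
outputs = model.generate(
    **text_inputs,
    tgt_lang="rus",
    speaker_id=4,                        # speaker identity used for speech synthesis (arbitrary choice)
    text_num_beams=4,                    # beam search on the text model
    speech_do_sample=True,               # multinomial sampling on the speech model
    speech_temperature=0.6,
    return_intermediate_token_ids=True,  # also return the translated text tokens
)
audio_array = outputs.waveform[0].cpu().numpy().squeeze()
translated_text = processor.decode(outputs.sequences[0].tolist(), skip_special_tokens=True)
```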
Model architecture
SeamlessM4T-v2 features a versatile architecture that smoothly handles the sequential generation of text and speech. This setup comprises two sequence-to-sequence (seq2seq) models. The first model translates the input modality into translated text, while the second model generates speech tokens, known as "unit tokens," from the translated text.
Each modality has its own dedicated encoder with a unique architecture. Additionally, for speech output, a vocoder inspired by the HiFi-GAN architecture is placed on top of the second seq2seq model.
Difference with SeamlessM4T-v1
The architecture of this new version differs from the first in a few aspects:
Improvements on the second-pass model
The second seq2seq model, the text-to-unit model, is now non-autoregressive, meaning that it computes units in a single forward pass. This is made possible by:
- the use of character-level embeddings, meaning that each character of the predicted translated text has its own embeddings, which are then used to predict the unit tokens.
- the use of an intermediate duration predictor, that predicts speech duration at the character-level on the predicted translated text.
- the use of a new text-to-unit decoder mixing convolutions and self-attention to handle longer context.
Difference in the speech encoder
The speech encoder, which is used during the first-pass generation process to predict the translated text, differs mainly from the previous speech encoder through these mechanisms:
- the use of chunked attention mask to prevent attention across chunks, ensuring that each position attends only to positions within its own chunk and a fixed number of previous chunks.
- the use of relative position embeddings, which only consider the distance between sequence elements rather than absolute positions. Please refer to Self-Attention with Relative Position Representations (Shaw et al.) for more details.
- the use of a causal depth-wise convolution instead of a non-causal one.
Generation process
Here's how the generation process works: |
Input text or speech is processed through its specific encoder.
A decoder creates text tokens in the desired language.
If speech generation is required, the second seq2seq model generates unit tokens in a non-autoregressive way.
These unit tokens are then passed through the final vocoder to produce the actual speech. |
This model was contributed by ylacombe. The original code can be found here.
SeamlessM4Tv2Model
[[autodoc]] SeamlessM4Tv2Model
- generate
SeamlessM4Tv2ForTextToSpeech
[[autodoc]] SeamlessM4Tv2ForTextToSpeech
- generate
SeamlessM4Tv2ForSpeechToSpeech
[[autodoc]] SeamlessM4Tv2ForSpeechToSpeech
- generate
SeamlessM4Tv2ForTextToText
[[autodoc]] transformers.SeamlessM4Tv2ForTextToText
- forward
- generate
SeamlessM4Tv2ForSpeechToText
[[autodoc]] transformers.SeamlessM4Tv2ForSpeechToText
- forward
- generate
SeamlessM4Tv2Config
[[autodoc]] SeamlessM4Tv2Config |
ViTMSN
Overview
The ViTMSN model was proposed in Masked Siamese Networks for Label-Efficient Learning by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes,
Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. The paper presents a joint-embedding architecture to match the prototypes
of masked patches with that of the unmasked patches. With this setup, their method yields excellent performance in the low-shot and extreme low-shot
regimes.
The abstract from the paper is the following:
We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our
approach matches the representation of an image view containing randomly masked patches to the representation of the original
unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the
unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures,
while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance,
on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy,
and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark.
MSN architecture. Taken from the original paper.
This model was contributed by sayakpaul. The original code can be found here.
Usage tips |
MSN (masked siamese networks) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training
objective is to match the prototypes assigned to the unmasked views of the images to that of the masked views of the same images.
The authors have only released pre-trained weights of the backbone (ImageNet-1k pre-training). So, to use that on your own image classification dataset,
use the [ViTMSNForImageClassification] class which is initialized from [ViTMSNModel]. Follow
this notebook for a detailed tutorial on fine-tuning.
MSN is particularly useful in the low-shot and extreme low-shot regimes. Notably, it achieves 75.7% top-1 accuracy with only 1% of ImageNet-1K
labels when fine-tuned. |
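Because only backbone weights are released, the classification head is randomly initialized. The sketch below shows how one might set up [ViTMSNForImageClassification] for fine-tuning on a custom dataset; the facebook/vit-msn-small checkpoint name and the label count are assumptions.

```python
# Minimal setup sketch: the head is freshly initialized, so fine-tune before relying on outputs.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMSNForImageClassification

checkpoint = "facebook/vit-msn-small"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = ViTMSNForImageClassification.from_pretrained(checkpoint, num_labels=10)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, 10) -- one score per (placeholder) label
```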
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT MSN.
[ViTMSNForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTMSNConfig
[[autodoc]] ViTMSNConfig
ViTMSNModel
[[autodoc]] ViTMSNModel
- forward
ViTMSNForImageClassification
[[autodoc]] ViTMSNForImageClassification
- forward |
OneFormer
Overview
The OneFormer model was proposed in OneFormer: One Transformer to Rule Universal Image Segmentation by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. OneFormer is a universal image segmentation framework that can be trained on a single panoptic dataset to perform semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference. |
The abstract from the paper is the following:
Universal Image Segmentation is not a new concept. Past attempts to unify image segmentation in the last decades include scene parsing, panoptic segmentation, and, more recently, new panoptic architectures. However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on the semantic, instance, or panoptic segmentation to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance across all three image segmentation tasks. To that end, we propose OneFormer, a universal image segmentation framework that unifies segmentation with a multi-task train-once design. We first propose a task-conditioned joint training strategy that enables training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training process. Secondly, we introduce a task token to condition our model on the task at hand, making our model task-dynamic to support multi-task training and inference. Thirdly, we propose using a query-text contrastive loss during training to establish better inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across all three segmentation tasks on ADE20k, CityScapes, and COCO, despite the latter being trained on each of the three tasks individually with three times the resources. With new ConvNeXt and DiNAT backbones, we observe even more performance improvement. We believe OneFormer is a significant step towards making image segmentation more universal and accessible.
The figure below illustrates the architecture of OneFormer. Taken from the original paper. |
This model was contributed by Jitesh Jain. The original code can be found here.
Usage tips |
OneFormer requires two inputs during inference: image and task token.
During training, OneFormer only uses panoptic annotations.
If you want to train the model in a distributed environment across multiple nodes, then one should update the
get_num_masks function inside the OneFormerLoss class of modeling_oneformer.py. When training on multiple nodes, this should be
set to the average number of target masks across all nodes, as can be seen in the original implementation here.
One can use [OneFormerProcessor] to prepare input images and task inputs, as well as optional targets, for the model. [OneFormerProcessor] wraps [OneFormerImageProcessor] and [CLIPTokenizer] into a single instance to both prepare the images and encode the task inputs.
To get the final segmentation, depending on the task, you can call [~OneFormerProcessor.post_process_semantic_segmentation], [~OneFormerImageProcessor.post_process_instance_segmentation] or [~OneFormerImageProcessor.post_process_panoptic_segmentation]. All three tasks can be solved using [OneFormerForUniversalSegmentation] output; panoptic segmentation accepts an optional label_ids_to_fuse argument to fuse instances of the target object(s) (e.g. sky) together. |
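As a minimal illustration of the tips above, the sketch below runs semantic segmentation with the shi-labs/oneformer_ade20k_swin_tiny checkpoint; the processor builds both the pixel inputs and the task token described above.

```python
# Minimal semantic-segmentation sketch with OneFormer.
import requests
import torch
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# task_inputs selects the task: "semantic", "instance" or "panoptic"
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# post-process to a per-pixel class map at the original resolution (PIL size is (W, H))
semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(semantic_map.shape)
```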
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OneFormer.
Demo notebooks regarding inference + fine-tuning on custom data can be found here. |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
OneFormer specific outputs
[[autodoc]] models.oneformer.modeling_oneformer.OneFormerModelOutput
[[autodoc]] models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput
OneFormerConfig
[[autodoc]] OneFormerConfig
OneFormerImageProcessor
[[autodoc]] OneFormerImageProcessor
- preprocess
- encode_inputs
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation
OneFormerProcessor
[[autodoc]] OneFormerProcessor
OneFormerModel
[[autodoc]] OneFormerModel
- forward
OneFormerForUniversalSegmentation
[[autodoc]] OneFormerForUniversalSegmentation
- forward |
SEW
Overview
SEW (Squeezed and Efficient Wav2Vec) was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training
for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q.
Weinberger, Yoav Artzi.
The abstract from the paper is the following:
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
(ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance
and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x
inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference
time, SEW reduces word error rate by 25-50% across different model sizes.
This model was contributed by anton-l.
Usage tips |
SEW is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
SEWForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using
[Wav2Vec2CTCTokenizer].
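As a minimal illustration of the two points above, the sketch below transcribes a short LibriSpeech sample with a CTC fine-tuned SEW checkpoint; the asapp/sew-tiny-100k-ft-ls100h name is an assumption, and any CTC fine-tuned SEW checkpoint works the same way.

```python
# Minimal ASR sketch: raw waveform in, greedy CTC decoding out.
import torch
from datasets import load_dataset
from transformers import AutoProcessor, SEWForCTC

checkpoint = "asapp/sew-tiny-100k-ft-ls100h"  # assumed CTC fine-tuned checkpoint
processor = AutoProcessor.from_pretrained(checkpoint)
model = SEWForCTC.from_pretrained(checkpoint)

# small LibriSpeech sample commonly used in the docs
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
sample = dataset[0]["audio"]

inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: argmax per frame, then the tokenizer collapses repeats and blanks
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```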
Resources
Audio classification task guide
Automatic speech recognition task guide |
SEWConfig
[[autodoc]] SEWConfig
SEWModel
[[autodoc]] SEWModel
- forward
SEWForCTC
[[autodoc]] SEWForCTC
- forward
SEWForSequenceClassification
[[autodoc]] SEWForSequenceClassification
- forward |
AltCLIP
Overview
The AltCLIP model was proposed in AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. AltCLIP
(Altering the Language Encoder in CLIP) is a neural network trained on a variety of image-text and text-text pairs. By switching CLIP's
text encoder with a pretrained multilingual text encoder XLM-R, we could obtain very close performance to CLIP on almost all tasks, and extend the original CLIP's capabilities to tasks such as multilingual understanding.
The abstract from the paper is the following:
In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model.
Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained
multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of
teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art
performances on a bunch of tasks including ImageNet-CN, Flicker30k- CN, and COCO-CN. Further, we obtain very close performances with
CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.
This model was contributed by jongjyh.
Usage tips and example
The usage of AltCLIP is very similar to that of CLIP; the difference from CLIP is the text encoder. Note that we use bidirectional attention instead of causal attention
and we take the [CLS] token in XLM-R to represent the text embedding.
AltCLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image
classification. AltCLIP uses a ViT like transformer to get visual features and a bidirectional language model to get the text
features. Both the text and visual features are then projected to a latent space with identical dimension. The dot
product between the projected image and text features is then used as a similarity score.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors
also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder.
The [CLIPImageProcessor] can be used to resize (or rescale) and normalize images for the model.
The [AltCLIPProcessor] wraps a [CLIPImageProcessor] and a [XLMRobertaTokenizer] into a single instance to both
encode the text and prepare the images. The following example shows how to get the image-text similarity scores using
[AltCLIPProcessor] and [AltCLIPModel].
thon |
from PIL import Image
import requests
from transformers import AltCLIPModel, AltCLIPProcessor
model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities |
This model is based on CLIPModel; use it like you would use the original CLIP. |
AltCLIPConfig
[[autodoc]] AltCLIPConfig
- from_text_vision_configs
AltCLIPTextConfig
[[autodoc]] AltCLIPTextConfig
AltCLIPVisionConfig
[[autodoc]] AltCLIPVisionConfig
AltCLIPProcessor
[[autodoc]] AltCLIPProcessor
AltCLIPModel
[[autodoc]] AltCLIPModel
- forward
- get_text_features
- get_image_features
AltCLIPTextModel
[[autodoc]] AltCLIPTextModel
- forward
AltCLIPVisionModel
[[autodoc]] AltCLIPVisionModel
- forward |
Encoder Decoder Models
Overview
The [EncoderDecoderModel] can be used to initialize a sequence-to-sequence model with any
pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks
was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by
Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
After such an [EncoderDecoderModel] has been trained/fine-tuned, it can be saved/loaded just like
any other model (see the examples for more information).
An application of this architecture could be to leverage two pretrained [BertModel] instances as the encoder
and decoder for a summarization model as was shown in: Text Summarization with Pretrained Encoders by Yang Liu and Mirella Lapata.
Randomly initializing EncoderDecoderModel from model configurations.
[EncoderDecoderModel] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [BertModel] configuration for the encoder and the default [BertForCausalLM] configuration for the decoder.
thon |
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel
config_encoder = BertConfig()
config_decoder = BertConfig()
config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = EncoderDecoderModel(config=config) |
Initializing EncoderDecoderModel from a pretrained encoder and a pretrained decoder.
[EncoderDecoderModel] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained auto-encoding model, e.g. BERT, can serve as the encoder, while pretrained auto-encoding models (e.g. BERT), pretrained causal language models (e.g. GPT2), and the pretrained decoder part of sequence-to-sequence models (e.g. the decoder of BART) can all be used as the decoder.
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing [EncoderDecoderModel] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post.
To do so, the EncoderDecoderModel class provides a [EncoderDecoderModel.from_encoder_decoder_pretrained] method.
thon |
from transformers import EncoderDecoderModel, BertTokenizer
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-uncased", "google-bert/bert-base-uncased") |
Loading an existing EncoderDecoderModel checkpoint and performing inference.
To load fine-tuned checkpoints of the EncoderDecoderModel class, [EncoderDecoderModel] provides the from_pretrained() method just like any other model architecture in Transformers.
To perform inference, one uses the [generate] method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy decoding, beam search and multinomial sampling.
thon |
from transformers import AutoTokenizer, EncoderDecoderModel
load a fine-tuned seq2seq model and corresponding tokenizer
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
let's perform inference on a long piece of text
ARTICLE_TO_SUMMARIZE = (
"PG&E stated it scheduled the blackouts in response to forecasts for high winds "
"amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
"scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
)
input_ids = tokenizer(ARTICLE_TO_SUMMARIZE, return_tensors="pt").input_ids
autoregressively generate summary (uses greedy decoding by default)
generated_ids = model.generate(input_ids)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
nearly 800 thousand customers were affected by the shutoffs. the aim is to reduce the risk of wildfires. nearly 800, 000 customers were expected to be affected by high winds amid dry conditions. pg & e said it scheduled the blackouts to last through at least midday tomorrow. |
Loading a PyTorch checkpoint into TFEncoderDecoderModel.
[TFEncoderDecoderModel.from_pretrained] currently doesn't support initializing the model from a
PyTorch checkpoint. Passing from_pt=True to this method will throw an exception. If there are only PyTorch
checkpoints for a particular encoder-decoder model, a workaround is:
thon |
a workaround to load from pytorch checkpoint
from transformers import EncoderDecoderModel, TFEncoderDecoderModel
_model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")
_model.encoder.save_pretrained("./encoder")
_model.decoder.save_pretrained("./decoder")
model = TFEncoderDecoderModel.from_encoder_decoder_pretrained(
"./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True
)
This is only for copying some specific attributes of this particular model.
model.config = _model.config |
Training
Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model.
As you can see, only 2 inputs are required for the model in order to compute a loss: input_ids (which are the
input_ids of the encoded input sequence) and labels (which are the input_ids of the encoded
target sequence).
thon |
from transformers import BertTokenizer, EncoderDecoderModel
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-uncased", "google-bert/bert-base-uncased")
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
input_ids = tokenizer(
"The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side.During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft).Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.",
return_tensors="pt",
).input_ids
labels = tokenizer(
"the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2 metres ( 17 ft ) and is the second tallest free - standing structure in paris.",
return_tensors="pt",
).input_ids
the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss |
Detailed colab for training.
This model was contributed by thomwolf. This model's TensorFlow and Flax versions
were contributed by ydshieh.
EncoderDecoderConfig
[[autodoc]] EncoderDecoderConfig
EncoderDecoderModel
[[autodoc]] EncoderDecoderModel
- forward
- from_encoder_decoder_pretrained
TFEncoderDecoderModel
[[autodoc]] TFEncoderDecoderModel
- call
- from_encoder_decoder_pretrained |
FlaxEncoderDecoderModel
[[autodoc]] FlaxEncoderDecoderModel
- call
- from_encoder_decoder_pretrained |
SigLIP
Overview
The SigLIP model was proposed in Sigmoid Loss for Language Image Pre-Training by Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer. SigLIP proposes to replace the loss function used in CLIP by a simple pairwise sigmoid loss. This results in better performance in terms of zero-shot classification accuracy on ImageNet.
The abstract from the paper is the following:
We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP). Unlike standard contrastive learning with softmax normalization, the sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. The sigmoid loss simultaneously allows further scaling up the batch size, while also performing better at smaller batch sizes. Combined with Locked-image Tuning, with only four TPUv4 chips, we train a SigLiT model that achieves 84.5% ImageNet zero-shot accuracy in two days. The disentanglement of the batch size from the loss further allows us to study the impact of examples vs pairs and negative to positive ratio. Finally, we push the batch size to the extreme, up to one million, and find that the benefits of growing batch size quickly diminish, with a more reasonable batch size of 32k being sufficient.
Usage tips |
Usage of SigLIP is similar to CLIP. The main difference is the training loss, which does not require a global view of all the pairwise similarities of images and texts within a batch. One needs to apply the sigmoid activation function to the logits, rather than the softmax.
Training is not yet supported. If you want to fine-tune SigLIP or train from scratch, refer to the loss function from OpenCLIP, which leverages various torch.distributed utilities.
When using the standalone [SiglipTokenizer] or [SiglipProcessor], make sure to pass padding="max_length" as that's how the model was trained. |
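Since training is not supported in the modeling code, the following is a minimal, unofficial sketch of the pairwise sigmoid loss described in the paper; it handles a single device only and omits the distributed chunking that the OpenCLIP implementation adds.

```python
# Minimal, unofficial sketch of the pairwise sigmoid loss from the SigLIP paper.
import torch
import torch.nn.functional as F

def siglip_loss(image_embeds, text_embeds, logit_scale, logit_bias):
    # embeddings are L2-normalized; logits are scaled dot products plus a learned bias
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    logits = logit_scale * image_embeds @ text_embeds.t() + logit_bias
    # labels: +1 on the diagonal (matching pairs), -1 everywhere else
    n = logits.size(0)
    labels = 2 * torch.eye(n, device=logits.device) - 1
    # -log sigmoid(labels * logits), averaged over the batch
    return -F.logsigmoid(labels * logits).sum() / n

# toy usage with random embeddings
img = torch.randn(8, 768)
txt = torch.randn(8, 768)
print(siglip_loss(img, txt, logit_scale=torch.tensor(10.0), logit_bias=torch.tensor(-10.0)))
```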
SigLIP evaluation results compared to CLIP. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage example
There are 2 main ways to use SigLIP: either use the pipeline API, which abstracts away all the complexity for you, or use the SiglipModel class yourself.
Pipeline API
The pipeline allows you to use the model in a few lines of code:
thon |
from transformers import pipeline
from PIL import Image
import requests
load pipe
image_classifier = pipeline(task="zero-shot-image-classification", model="google/siglip-base-patch16-224")
load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inference
outputs = image_classifier(image, candidate_labels=["2 cats", "a plane", "a remote"])
outputs = [{"score": round(output["score"], 4), "label": output["label"] } for output in outputs]
print(outputs)
[{'score': 0.1979, 'label': '2 cats'}, {'score': 0.0, 'label': 'a remote'}, {'score': 0.0, 'label': 'a plane'}] |
Using the model yourself
If you want to do the pre- and postprocessing yourself, here's how to do that:
thon |
from PIL import Image
import requests
from transformers import AutoProcessor, AutoModel
import torch
model = AutoModel.from_pretrained("google/siglip-base-patch16-224")
processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of 2 cats", "a photo of 2 dogs"]
important: we pass padding=max_length since the model was trained with this
inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image) # these are the probabilities
print(f"{probs[0][0]:.1%} that image 0 is '{texts[0]}'")
31.9% that image 0 is 'a photo of 2 cats' |
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SigLIP.
Zero-shot image classification task guide
Demo notebooks for SigLIP can be found here. 🌎 |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
SiglipConfig
[[autodoc]] SiglipConfig
- from_text_vision_configs
SiglipTextConfig
[[autodoc]] SiglipTextConfig
SiglipVisionConfig
[[autodoc]] SiglipVisionConfig
SiglipTokenizer
[[autodoc]] SiglipTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
SiglipImageProcessor
[[autodoc]] SiglipImageProcessor
- preprocess
SiglipProcessor
[[autodoc]] SiglipProcessor
SiglipModel
[[autodoc]] SiglipModel
- forward
- get_text_features
- get_image_features
SiglipTextModel
[[autodoc]] SiglipTextModel
- forward
SiglipVisionModel
[[autodoc]] SiglipVisionModel
- forward
SiglipForImageClassification
[[autodoc]] SiglipForImageClassification
- forward |
PLBart
Overview
The PLBART model was proposed in Unified Pre-training for Program Understanding and Generation by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
This is a BART-like model which can be used to perform code summarization, code generation, and code translation tasks. The pre-trained model plbart-base has been trained using a multilingual denoising task
on Java, Python and English.
According to the abstract:
Code summarization and generation empower conversion between programming language (PL) and natural language (NL),
while code translation avails the migration of legacy code from one PL to another. This paper introduces PLBART,
a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks.
PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding.
Experiments on code summarization in the English language, code generation, and code translation in seven programming languages
show that PLBART outperforms or rivals state-of-the-art models. Moreover, experiments on discriminative tasks, e.g., program
repair, clone detection, and vulnerable code detection, demonstrate PLBART's effectiveness in program understanding.
Furthermore, analysis reveals that PLBART learns program syntax, style (e.g., identifier naming convention), logical flow
(e.g., if block inside an else block is equivalent to else if block) that are crucial to program semantics and thus excels
even with limited annotations.
This model was contributed by gchhablani. The Authors' code can be found here.
Usage examples
PLBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for code-to-text, text-to-code, and code-to-code tasks. As the
model is multilingual, it expects the sequences in a different format. A special language id token is added in both the
source and target text. The source text format is X [eos, src_lang_code] where X is the source text. The
target text format is [tgt_lang_code] X [eos]. bos is never used.
However, for fine-tuning, the language token is sometimes omitted when only a single language is used. Please refer to the paper to learn more about this.
In cases where the language code is needed, the regular [~PLBartTokenizer.__call__] will encode source text format
when you pass texts as the first argument or with the keyword argument text, and will encode target text format if
it's passed with the text_target keyword argument.
Supervised training
thon |
from transformers import PLBartForConditionalGeneration, PLBartTokenizer
tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", src_lang="en_XX", tgt_lang="python")
example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"
expected_translation_english = "Returns the maximum value of a b c."
inputs = tokenizer(example_python_phrase, text_target=expected_translation_english, return_tensors="pt")
model(**inputs) |
Generation
While generating the target text, set the decoder_start_token_id to the target language id. The following
example shows how to translate Python to English using the uclanlp/plbart-python-en_XX model.
thon |
from transformers import PLBartForConditionalGeneration, PLBartTokenizer
tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX")
example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"
inputs = tokenizer(example_python_phrase, return_tensors="pt")
model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-python-en_XX")
translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"Returns the maximum value of a b c." |
Resources
Text classification task guide
Causal language modeling task guide
Translation task guide
Summarization task guide |
PLBartConfig
[[autodoc]] PLBartConfig
PLBartTokenizer
[[autodoc]] PLBartTokenizer
- build_inputs_with_special_tokens
PLBartModel
[[autodoc]] PLBartModel
- forward
PLBartForConditionalGeneration
[[autodoc]] PLBartForConditionalGeneration
- forward
PLBartForSequenceClassification
[[autodoc]] PLBartForSequenceClassification
- forward
PLBartForCausalLM
[[autodoc]] PLBartForCausalLM
- forward |
T5 |
Overview
The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
The abstract from the paper is the following:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream
task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning
has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of
transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a
text-to-text format. Our systematic study compares pretraining objectives, architectures, unlabeled datasets, transfer
approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration
with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering
summarization, question answering, text classification, and more. To facilitate future work on transfer learning for
NLP, we release our dataset, pre-trained models, and code.
All checkpoints can be found on the hub.
This model was contributed by thomwolf. The original code can be found here.
Usage tips |
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which
each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a
different prefix to the input corresponding to each task, e.g., for translation: translate English to German: ,
for summarization: summarize: .
The pretraining includes both supervised and self-supervised training. Supervised training is conducted on downstream tasks provided by the GLUE and SuperGLUE benchmarks (converting them into text-to-text tasks as explained above). |
Self-supervised training uses corrupted tokens, by randomly removing 15% of the tokens and replacing them with individual sentinel tokens (if several consecutive tokens are marked for removal, the whole group is replaced with a single sentinel token). The input of the encoder is the corrupted sentence, the input of the decoder is the original sentence and the target is then the dropped out tokens delimited by their sentinel tokens. |
T5 uses relative scalar embeddings. Encoder input padding can be done on the left and on the right.
See the training, inference and resources sections below for all details regarding usage.
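For a quick illustration of the prefix convention described above, here is a minimal translation sketch with the public google-t5/t5-small checkpoint (the generation settings are arbitrary):

```python
# Quick inference sketch: the task prefix tells T5 which task to perform.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")

input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# expected output (approximately): "Das Haus ist wunderbar."
```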
T5 comes in different sizes:
google-t5/t5-small
google-t5/t5-base
google-t5/t5-large
google-t5/t5-3b
google-t5/t5-11b.
Based on the original T5 model, Google has released some follow-up works: |
T5v1.1: T5v1.1 is an improved version of T5 with some architectural tweaks, and is pre-trained on C4 only without
mixing in the supervised tasks. Refer to the documentation of T5v1.1 which can be found here. |
mT5: mT5 is a multilingual T5 model. It is pre-trained on the mC4 corpus, which includes 101 languages. Refer to
the documentation of mT5 which can be found here.
byT5: byT5 is a T5 model pre-trained on byte sequences rather than SentencePiece subword token sequences. Refer
to the documentation of byT5 which can be found here.
UL2: UL2 is a T5-like model pretrained on various denoising objectives. |
Flan-T5: Flan is a pretraining method based on prompting. The Flan-T5 models are T5 models trained on the Flan collection of
datasets, which includes: taskmaster2, djaym7/wiki_dialog, deepmind/code_contests, lambada, gsm8k, aqua_rat, esnli, quasc and qed.
Flan-UL2: the UL2 model finetuned using the "Flan" prompt tuning and dataset collection. |
UMT5: UMT5 is a multilingual T5 model trained on an improved and refreshed mC4 multilingual corpus, 29 trillion characters across 107 languages, using a new sampling method, UniMax. Refer to
the documentation of UMT5 which can be found here. |
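All of these follow-up models load through the same Auto classes as T5 itself. A minimal sketch, assuming the commonly used hub checkpoint google/flan-t5-base (substitute the checkpoint of the variant you need):
thon

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Follow-up models (T5v1.1, mT5, byT5, UL2, Flan-T5, Flan-UL2, UMT5) load with the same Auto classes.
# "google/flan-t5-base" is an example checkpoint name; swap in the variant you need.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))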
Training
T5 is an encoder-decoder model and converts all NLP problems into a text-to-text format. It is trained using teacher
forcing. This means that for training, we always need an input sequence and a corresponding target sequence. The input
sequence is fed to the model using input_ids. The target sequence is shifted to the right, i.e., prepended by a
start-sequence token and fed to the decoder using the decoder_input_ids. In teacher-forcing style, the target
sequence is then appended by the EOS token and corresponds to the labels. The PAD token is hereby used as the
start-sequence token. T5 can be trained / fine-tuned both in a supervised and unsupervised fashion.
One can use [T5ForConditionalGeneration] (or the TensorFlow/Flax variant), which includes the
language modeling head on top of the decoder. |
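As an illustration of this shift, the sketch below builds decoder_input_ids from a tokenized target by hand. In practice you never need to do this yourself, since the model derives them from labels; the manual version only makes the mechanics explicit.
thon

import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")

labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

# Shift the target one position to the right: drop the last token and prepend the decoder
# start token, which for T5 is the pad token (id 0).
start = model.config.decoder_start_token_id
decoder_input_ids = torch.cat(
    [torch.full((labels.shape[0], 1), start, dtype=labels.dtype), labels[:, :-1]], dim=-1
)

# Recent versions of the model expose the same operation as a helper; passing labels to the
# forward pass performs it internally as well.
print(torch.equal(decoder_input_ids, model.prepare_decoder_input_ids_from_labels(labels)))
# True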
Unsupervised denoising training |
In this setup, spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and
the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. Each
sentinel token represents a unique mask token for this sentence and should start with <extra_id_0>,
<extra_id_1>, up to <extra_id_99>. As a default, 100 sentinel tokens are available in
[T5Tokenizer].
For instance, the sentence "The cute dog walks in the park" with the masks put on "cute dog" and "the" should be
processed as follows:
thon |
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
input_ids = tokenizer("The walks in park", return_tensors="pt").input_ids
labels = tokenizer(" cute dog the ", return_tensors="pt").input_ids
the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
loss.item()
3.7837 |
If you're interested in pre-training T5 on a new corpus, check out the run_t5_mlm_flax.py script in the Examples
directory.
Supervised training
In this setup, the input sequence and output sequence are a standard sequence-to-sequence input-output mapping.
Suppose that we want to fine-tune the model for translation for example, and we have a training example: the input
sequence "The house is wonderful." and output sequence "Das Haus ist wunderbar.", then they should be prepared for
the model as follows:
thon |
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids
# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
loss.item()
0.2542 |
As you can see, only 2 inputs are required for the model in order to compute a loss: input_ids (which are the
input_ids of the encoded input sequence) and labels (which are the input_ids of the encoded
target sequence). The model will automatically create the decoder_input_ids based on the labels, by
shifting them one position to the right and prepending the config.decoder_start_token_id, which for T5 is
equal to 0 (i.e. the id of the pad token). Also note the task prefix: we prepend the input sequence with 'translate
English to German: ' before encoding it. This helps improve performance, as the same task prefix was used
during T5's pre-training.
However, the example above only shows a single training example. In practice, one trains deep learning models in
batches. This entails that we must pad/truncate examples to the same length. For encoder-decoder models, one
typically defines a max_source_length and max_target_length, which determine the maximum length of the
input and output sequences respectively (otherwise they are truncated). These should be carefully set depending on
the task.
In addition, we must make sure that padding token ids of the labels are not taken into account by the loss
function. In PyTorch and TensorFlow, this can be done by replacing them with -100, which is the ignore_index
of the CrossEntropyLoss. In Flax, one can use the decoder_attention_mask to ignore padded tokens from
the loss (see the Flax summarization script for details). We also pass
attention_mask as additional input to the model, which makes sure that padding tokens of the inputs are
ignored. The code example below illustrates all of this.
thon |
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
# the following 2 hyperparameters are task-specific
max_source_length = 512
max_target_length = 128
# Suppose we have the following 2 training examples:
input_sequence_1 = "Welcome to NYC"
output_sequence_1 = "Bienvenue à NYC"
input_sequence_2 = "HuggingFace is a company"
output_sequence_2 = "HuggingFace est une entreprise"
# encode the inputs
task_prefix = "translate English to French: "
input_sequences = [input_sequence_1, input_sequence_2]
encoding = tokenizer(
[task_prefix + sequence for sequence in input_sequences],
padding="longest",
max_length=max_source_length,
truncation=True,
return_tensors="pt",
)
input_ids, attention_mask = encoding.input_ids, encoding.attention_mask
# encode the targets
target_encoding = tokenizer(
[output_sequence_1, output_sequence_2],
padding="longest",
max_length=max_target_length,
truncation=True,
return_tensors="pt",
)
labels = target_encoding.input_ids
# replace padding token ids of the labels with -100 so they are ignored by the loss
labels[labels == tokenizer.pad_token_id] = -100
# forward pass
loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
loss.item()
0.188 |
Additional training tips:
T5 models need a slightly higher learning rate than the default one set in the Trainer when using the AdamW
optimizer. Typically, 1e-4 and 3e-4 work well for most problems (classification, summarization, translation, question
answering, question generation). Note that T5 was pre-trained using the Adafactor optimizer (see the sketch after these tips). |
According to this forum post, task prefixes matter when
(1) doing multi-task training (2) your task is similar or related to one of the supervised tasks used in T5's
pre-training mixture (see Appendix D of the paper for the task prefixes
used).
If training on TPU, it is recommended to pad all examples of the dataset to the same length, or to make use of
pad_to_multiple_of so that only a small number of predefined bucket sizes occur and every example fits into one of them.
Dynamically padding batches to the longest example is not recommended on TPU, as it triggers a recompilation for every
batch shape encountered during training and therefore significantly slows training down. A short sketch illustrating
this tip and the learning-rate tip above follows this list.
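The snippet below sketches the learning-rate and TPU padding tips. The exact learning rate, Adafactor settings, and pad multiple are illustrative assumptions, not recommended values for any particular task.
thon

from transformers import (
    Adafactor,
    DataCollatorForSeq2Seq,
    T5ForConditionalGeneration,
    T5Tokenizer,
    TrainingArguments,
)

tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")

# Tip 1: raise the learning rate from the Trainer default (5e-5) into the 1e-4 to 3e-4 range,
# or mirror T5's pre-training optimizer by using Adafactor with a fixed learning rate.
training_args = TrainingArguments(output_dir="t5-finetuned", learning_rate=3e-4, num_train_epochs=3)
optimizer = Adafactor(
    model.parameters(), lr=1e-3, scale_parameter=False, relative_step=False, warmup_init=False
)

# TPU tip: pad everything to a multiple of a small bucket size (8 here, as an example) so only
# a few batch shapes occur and XLA does not recompile for every new shape.
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, padding="longest", pad_to_multiple_of=8)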
Inference
At inference time, it is recommended to use [~generation.GenerationMixin.generate]. This
method takes care of encoding the input and feeding the encoded hidden states via cross-attention layers to the decoder
and auto-regressively generates the decoder output. Check out this blog post to know all the details about generating text with Transformers.
There's also this blog post which explains how
generation works in general in encoder-decoder models.
thon |
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Das Haus ist wunderbar. |
Note that T5 uses the pad_token_id as the decoder_start_token_id, so when doing generation without using
[~generation.GenerationMixin.generate], make sure you start it with the pad_token_id.
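If you do write the decoding loop yourself (for example to experiment with custom decoding logic), the sketch below shows a naive greedy loop that starts from the pad token. It is only meant to illustrate the start-token requirement; it uses no KV caching or beam search and is not a replacement for generate.
thon

import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")

input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids

# Start decoding from the pad token, which T5 uses as its decoder start token.
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)

# Naive greedy loop, illustration only: no caching, fixed step budget, stop at EOS.
for _ in range(20):
    logits = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids).logits
    next_token = logits[:, -1].argmax(dim=-1, keepdim=True)
    decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
    if next_token.item() == model.config.eos_token_id:
        break

print(tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True))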
The generate example above only shows a single input sequence. You can also do batched inference, like so:
thon |
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
task_prefix = "translate English to German: "
# use different length sentences to test batching
sentences = ["The house is wonderful.", "I like to work in NYC."]
inputs = tokenizer([task_prefix + sentence for sentence in sentences], return_tensors="pt", padding=True)
output_sequences = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
do_sample=False, # disable sampling to test if batching affects output
)
print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))
['Das Haus ist wunderbar.', 'Ich arbeite gerne in NYC.'] |
Because T5 has been trained with the span-mask denoising objective,
it can be used to predict the sentinel (masked-out) tokens during inference.
The predicted tokens will then be placed between the sentinel tokens.
thon |
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
input_ids = tokenizer("The walks in park", return_tensors="pt").input_ids
sequence_ids = model.generate(input_ids)
sequences = tokenizer.batch_decode(sequence_ids)
sequences
['<pad><extra_id_0> park offers <extra_id_1> the <extra_id_2> park.</s>'] |
Performance
For faster training and inference, install NVIDIA APEX for NVIDIA GPUs or ROCm APEX for AMD GPUs; the model will then automatically use apex.normalization.FusedRMSNorm instead of T5LayerNorm. The former uses an optimized fused kernel which is several times faster than the latter.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with T5. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. |
A notebook for how to finetune T5 for classification and multiple choice.
A notebook for how to finetune T5 for sentiment span extraction. 🌎
A notebook for how to finetune T5 for named entity recognition. 🌎
A notebook for Finetuning CodeT5 for generating docstrings from Ruby code. |
A notebook to Finetune T5-base-dutch to perform Dutch abstractive summarization on a TPU.
A notebook for how to finetune T5 for summarization in PyTorch and track experiments with WandB. 🌎
A blog post on Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker.
[T5ForConditionalGeneration] is supported by this example script and notebook.
[TFT5ForConditionalGeneration] is supported by this example script and notebook.
[FlaxT5ForConditionalGeneration] is supported by this example script.
Summarization chapter of the 🤗 Hugging Face course.
Summarization task guide |
[FlaxT5ForConditionalGeneration] is supported by this example script for training T5 with a span-masked language model objective. The script also shows how to train a T5 tokenizer. [FlaxT5ForConditionalGeneration] is also supported by this notebook.
[T5ForConditionalGeneration] is supported by this example script and notebook.
[TFT5ForConditionalGeneration] is supported by this example script and notebook.
Translation task guide |