TFXLMRobertaModel
[[autodoc]] TFXLMRobertaModel
- call
TFXLMRobertaForCausalLM
[[autodoc]] TFXLMRobertaForCausalLM
- call
TFXLMRobertaForMaskedLM
[[autodoc]] TFXLMRobertaForMaskedLM
- call
TFXLMRobertaForSequenceClassification
[[autodoc]] TFXLMRobertaForSequenceClassification
- call
TFXLMRobertaForMultipleChoice
[[autodoc]] TFXLMRobertaForMultipleChoice
- call
TFXLMRobertaForTokenClassification
[[autodoc]] TFXLMRobertaForTokenClassification
- call
TFXLMRobertaForQuestionAnswering
[[autodoc]] TFXLMRobertaForQuestionAnswering
- call
FlaxXLMRobertaModel
[[autodoc]] FlaxXLMRobertaModel
- call
FlaxXLMRobertaForCausalLM
[[autodoc]] FlaxXLMRobertaForCausalLM
- call
FlaxXLMRobertaForMaskedLM
[[autodoc]] FlaxXLMRobertaForMaskedLM
- call
FlaxXLMRobertaForSequenceClassification
[[autodoc]] FlaxXLMRobertaForSequenceClassification
- call
FlaxXLMRobertaForMultipleChoice
[[autodoc]] FlaxXLMRobertaForMultipleChoice
- call
FlaxXLMRobertaForTokenClassification
[[autodoc]] FlaxXLMRobertaForTokenClassification
- call
FlaxXLMRobertaForQuestionAnswering
[[autodoc]] FlaxXLMRobertaForQuestionAnswering
- call
DiT
Overview
DiT was proposed in DiT: Self-supervised Pre-training for Document Image Transformer by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
DiT applies the self-supervised objective of BEiT (BERT pre-training of Image Transformers) to 42 million document images, allowing for state-of-the-art results on tasks including:
document image classification: the RVL-CDIP dataset (a collection of
400,000 images belonging to one of 16 classes).
document layout analysis: the PubLayNet dataset (a collection of more
than 360,000 document images constructed by automatically parsing PubMed XML files).
table detection: the ICDAR 2019 cTDaR dataset (a collection of
600 training images and 240 testing images).
The abstract from the paper is the following:
*Image Transformer has recently achieved significant progress for natural image understanding, either using supervised (ViT, DeiT, etc.) or self-supervised (BEiT, MAE, etc.) pre-training techniques. In this paper, we propose DiT, a self-supervised pre-trained Document Image Transformer model using large-scale unlabeled text images for Document AI tasks, which is essential since no supervised counterparts ever exist due to the lack of human labeled document images. We leverage DiT as the backbone network in a variety of vision-based Document AI tasks, including document image classification, document layout analysis, as well as table detection. Experiment results have illustrated that the self-supervised pre-trained DiT model achieves new state-of-the-art results on these downstream tasks, e.g. document image classification (91.11 → 92.69), document layout analysis (91.0 → 94.9) and table detection (94.23 → 96.55). *
Summary of the approach. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Usage tips
One can directly use the weights of DiT with the AutoModel API:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("microsoft/dit-base")
```
This will load the model pre-trained on masked image modeling. Note that this won't include the language modeling head on top, used to predict visual tokens.
To include the head, you can load the weights into a BeitForMaskedImageModeling model, like so:
```python
from transformers import BeitForMaskedImageModeling
model = BeitForMaskedImageModeling.from_pretrained("microsoft/dit-base")
```
You can also load a fine-tuned model from the hub, like so:
```python
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
```
This particular checkpoint was fine-tuned on RVL-CDIP, an important benchmark for document image classification.
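For document image classification, a minimal inference sketch with this checkpoint could look as follows (the local file name is a placeholder for a scan of your own):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")

# "document.png" is a placeholder for a scanned document page of your own
image = Image.open("document.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```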
A notebook that illustrates inference for document image classification can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiT.
[BeitForImageClassification] is supported by this example script and notebook.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
As DiT's architecture is equivalent to that of BEiT, one can refer to BEiT's documentation page for all tips, code examples and notebooks.
TVLT
Overview
The TVLT model was proposed in TVLT: Textless Vision-Language Transformer
by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal (the first three authors contributed equally). The Textless Vision-Language Transformer (TVLT) is a model that uses raw visual and audio inputs for vision-and-language representation learning, without using text-specific modules such as tokenization or automatic speech recognition (ASR). It can perform various audiovisual and vision-language tasks like retrieval, question answering, etc.
The abstract from the paper is the following:
In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and do not use text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text.
TVLT architecture. Taken from the original paper (https://arxiv.org/abs/2102.03334).
The original code can be found here. This model was contributed by Zineng Tang.
Usage tips
TVLT is a model that takes both pixel_values and audio_values as input. One can use [TvltProcessor] to prepare data for the model (a minimal sketch follows these tips).
This processor wraps an image processor (for the image/video modality) and an audio feature extractor (for the audio modality) into one.
TVLT is trained with images/videos and audio of various sizes: the authors resize and crop the input images/videos to 224 and limit the length of the audio spectrogram to 2048. To make batching of videos and audio possible, the authors use a pixel_mask that indicates which pixels are real/padding and an audio_mask that indicates which audio values are real/padding.
The design of TVLT is very similar to that of a standard Vision Transformer (ViT) and masked autoencoder (MAE) as in ViTMAE. The difference is that the model includes embedding layers for the audio modality.
The PyTorch version of this model is only available in torch 1.10 and higher.
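As a rough illustration, the snippet below feeds random video frames and a random waveform through the processor and the base model. The checkpoint name ZinengTang/tvlt-base is an assumption (it is the checkpoint commonly associated with this model), and recent transformers releases may have moved TVLT to the deprecated models, so treat this as a sketch rather than a guaranteed recipe:
```python
import numpy as np
from transformers import TvltProcessor, TvltModel

# Random stand-ins for real data: 8 video frames (C, H, W) and a short mono waveform
video_frames = list(np.random.randn(8, 3, 224, 224))
audio = list(np.random.randn(10000))

processor = TvltProcessor.from_pretrained("ZinengTang/tvlt-base")  # assumed checkpoint name
model = TvltModel.from_pretrained("ZinengTang/tvlt-base")

# The processor returns pixel_values/pixel_mask and audio_values/audio_mask in a single dict
inputs = processor(video_frames, audio, sampling_rate=44100, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```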
TvltConfig
[[autodoc]] TvltConfig
TvltProcessor
[[autodoc]] TvltProcessor
- call
TvltImageProcessor
[[autodoc]] TvltImageProcessor
- preprocess
TvltFeatureExtractor
[[autodoc]] TvltFeatureExtractor
- call
TvltModel
[[autodoc]] TvltModel
- forward
TvltForPreTraining
[[autodoc]] TvltForPreTraining
- forward
TvltForAudioVisualClassification
[[autodoc]] TvltForAudioVisualClassification
- forward
Time Series Transformer
Overview
The Time Series Transformer model is a vanilla encoder-decoder Transformer for time series forecasting.
This model was contributed by kashif.
Usage tips
Similar to other models in the library, [TimeSeriesTransformerModel] is the raw Transformer without any head on top, and [TimeSeriesTransformerForPrediction]
adds a distribution head on top of the former, which can be used for time-series forecasting. Note that this is a so-called probabilistic forecasting model, not a
point forecasting model. This means that the model learns a distribution, from which one can sample. The model doesn't directly output values.
[TimeSeriesTransformerForPrediction] consists of 2 blocks: an encoder, which takes a context_length of time series values as input (called past_values),
and a decoder, which predicts a prediction_length of time series values into the future (called future_values). During training, one needs to provide
pairs of (past_values and future_values) to the model.
In addition to the raw (past_values and future_values), one typically provides additional features to the model. These can be the following:
past_time_features: temporal features which the model will add to past_values. These serve as "positional encodings" for the Transformer encoder.
Examples are "day of the month", "month of the year", etc. as scalar values (and then stacked together as a vector).
e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being "day of the month", 8 being "month of the year").
future_time_features: temporal features which the model will add to future_values. These serve as "positional encodings" for the Transformer decoder.
Examples are "day of the month", "month of the year", etc. as scalar values (and then stacked together as a vector).
e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being "day of the month", 8 being "month of the year").
static_categorical_features: categorical features which are static over time (i.e., have the same value for all past_values and future_values).
An example here is the store ID or region ID that identifies a given time-series.
Note that these features need to be known for ALL data points (also those in the future).
static_real_features: real-valued features which are static over time (i.e., have the same value for all past_values and future_values).
An example here is the image representation of the product for which you have the time-series values (like the ResNet embedding of a "shoe" picture,
if your time-series is about the sales of shoes).
Note that these features need to be known for ALL data points (also those in the future).
The model is trained using "teacher-forcing", similar to how a Transformer is trained for machine translation. This means that, during training, one shifts the
future_values one position to the right as input to the decoder, prepended by the last value of past_values. At each time step, the model needs to predict the
next target. So the set-up of training is similar to a GPT model for language, except that there's no notion of decoder_start_token_id (we just use the last value
of the context as initial input for the decoder).
At inference time, we give the final value of the past_values as input to the decoder. Next, we can sample from the model to make a prediction at the next time step,
which is then fed to the decoder in order to make the next prediction (also called autoregressive generation).
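To make the input/output contract concrete, here is a minimal training sketch with random tensors. The configuration values and tensor shapes are illustrative assumptions; the only point is how past_values, the time features, and future_values fit together (note that past_values must also cover the largest lag the model looks back on):
```python
import torch
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction

# Toy configuration: all values are illustrative, not tuned
config = TimeSeriesTransformerConfig(
    prediction_length=12,
    context_length=24,
    lags_sequence=[1, 2, 3],  # past_values must cover context_length + max(lags_sequence)
    num_time_features=2,      # e.g. "day of the month" and "month of the year"
)
model = TimeSeriesTransformerForPrediction(config)

batch_size = 4
past_length = config.context_length + max(config.lags_sequence)

past_values = torch.randn(batch_size, past_length)
past_time_features = torch.randn(batch_size, past_length, config.num_time_features)
past_observed_mask = torch.ones(batch_size, past_length)

future_values = torch.randn(batch_size, config.prediction_length)
future_time_features = torch.randn(batch_size, config.prediction_length, config.num_time_features)

# Training step: providing future_values makes the model return a (negative log-likelihood) loss
outputs = model(
    past_values=past_values,
    past_time_features=past_time_features,
    past_observed_mask=past_observed_mask,
    future_values=future_values,
    future_time_features=future_time_features,
)
print(outputs.loss)
```
At inference time, one would drop future_values and instead call the model's generate method with the same past inputs plus future_time_features to sample trajectories from the learned distribution.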
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Check out the Time Series Transformer blog-post in HuggingFace blog: Probabilistic Time Series Forecasting with 🤗 Transformers
TimeSeriesTransformerConfig
[[autodoc]] TimeSeriesTransformerConfig
TimeSeriesTransformerModel
[[autodoc]] TimeSeriesTransformerModel
- forward
TimeSeriesTransformerForPrediction
[[autodoc]] TimeSeriesTransformerForPrediction
- forward
GPT Neo
Overview
The GPTNeo model was released in the EleutherAI/gpt-neo repository by Sid
Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. It is a GPT2 like causal language model trained on the
Pile dataset.
The architecture is similar to GPT2 except that GPT Neo uses local attention in every other layer with a window size of
256 tokens.
This model was contributed by valhalla.
Usage example
The generate() method can be used to generate text using the GPT Neo model.
```python
from transformers import GPTNeoForCausalLM, GPT2Tokenizer
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
prompt = (
"In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
"previously unexplored valley, in the Andes Mountains. Even more surprising to the "
"researchers was the fact that the unicorns spoke perfect English."
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
```
Combining GPT-Neo and Flash Attention 2
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature, and make sure your hardware is compatible with Flash-Attention 2. More details are available here concerning the installation.
Make sure as well to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention 2, refer to the snippet below:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
prompt = "def hello_world():"
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
"def hello_world():\n >>> run_script("hello.py")\n >>> exit(0)\n<|endoftext|>" |
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using EleutherAI/gpt-neo-2.7B checkpoint and the Flash Attention 2 version of the model.
Note that for GPT-Neo it is not possible to train or run on very long contexts, as the maximum position embeddings are limited to 2048; this applies to all GPT-Neo models and is not specific to Flash Attention 2.
Resources
Text classification task guide
Causal language modeling task guide
GPTNeoConfig
[[autodoc]] GPTNeoConfig
GPTNeoModel
[[autodoc]] GPTNeoModel
- forward
GPTNeoForCausalLM
[[autodoc]] GPTNeoForCausalLM
- forward
GPTNeoForQuestionAnswering
[[autodoc]] GPTNeoForQuestionAnswering
- forward
GPTNeoForSequenceClassification
[[autodoc]] GPTNeoForSequenceClassification
- forward
GPTNeoForTokenClassification
[[autodoc]] GPTNeoForTokenClassification
- forward
FlaxGPTNeoModel
[[autodoc]] FlaxGPTNeoModel
- call
FlaxGPTNeoForCausalLM
[[autodoc]] FlaxGPTNeoForCausalLM
- call
Hubert
Overview
Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan
Salakhutdinov, Abdelrahman Mohamed.
The abstract from the paper is the following:
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are
multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training
phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we
propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an
offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our
approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined
acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised
clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means
teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the
state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h,
10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER
reduction on the more challenging dev-other and test-other evaluation subsets.
This model was contributed by patrickvonplaten.
Usage tips
Hubert is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
Hubert model was fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded
using [Wav2Vec2CTCTokenizer].
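As an illustration, a minimal CTC transcription sketch is shown below; the fine-tuned checkpoint and the dummy dataset used here are assumptions chosen for illustration, and any CTC fine-tuned Hubert checkpoint should work the same way:
```python
import torch
from datasets import load_dataset
from transformers import HubertForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

# A tiny dummy LibriSpeech split, used here only to get a 16kHz waveform
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding with the wrapped Wav2Vec2CTCTokenizer
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```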
Resources
Audio classification task guide
Automatic speech recognition task guide
HubertConfig
[[autodoc]] HubertConfig
HubertModel
[[autodoc]] HubertModel
- forward
HubertForCTC
[[autodoc]] HubertForCTC
- forward
HubertForSequenceClassification
[[autodoc]] HubertForSequenceClassification
- forward
TFHubertModel
[[autodoc]] TFHubertModel
- call
TFHubertForCTC
[[autodoc]] TFHubertForCTC
- call
Qwen2
Overview
Qwen2 is the new model series of large language models from the Qwen team. Previously, we released the Qwen series, including Qwen-72B, Qwen-1.8B, Qwen-VL, Qwen-Audio, etc.
Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
Usage tips
Qwen2-7B-beta and Qwen2-7B-Chat-beta can be found on the Hugging Face Hub.
In the following, we demonstrate how to use Qwen2-7B-Chat-beta for inference. Note that we use the ChatML format for dialog; in this demo we show how to leverage apply_chat_template for this purpose.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B-Chat", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat")
prompt = "Give me a short introduction to large language model."
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=True)
generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
Qwen2Config
[[autodoc]] Qwen2Config
Qwen2Tokenizer
[[autodoc]] Qwen2Tokenizer
- save_vocabulary
Qwen2TokenizerFast
[[autodoc]] Qwen2TokenizerFast
Qwen2Model
[[autodoc]] Qwen2Model
- forward
Qwen2ForCausalLM
[[autodoc]] Qwen2ForCausalLM
- forward
Qwen2ForSequenceClassification
[[autodoc]] Qwen2ForSequenceClassification
- forward
LayoutLM
Overview
The LayoutLM model was proposed in the paper LayoutLM: Pre-training of Text and Layout for Document Image
Understanding by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and
Ming Zhou. It's a simple but effective pretraining method of text and layout for document image understanding and
information extraction tasks, such as form understanding and receipt understanding. It obtains state-of-the-art results
on several downstream tasks:
form understanding: the FUNSD dataset (a collection of 199 annotated
forms comprising more than 30,000 words).
receipt understanding: the SROIE dataset (a collection of 626 receipts for
training and 347 receipts for testing).
document image classification: the RVL-CDIP dataset (a collection of
400,000 images belonging to one of 16 classes).
The abstract from the paper is the following:
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the
widespread use of pretraining models for NLP applications, they almost exclusively focus on text-level manipulation,
while neglecting layout and style information that is vital for document image understanding. In this paper, we propose
the LayoutLM to jointly model interactions between text and layout information across scanned document images, which is
beneficial for a great number of real-world document image understanding tasks such as information extraction from
scanned documents. Furthermore, we also leverage image features to incorporate words' visual information into LayoutLM.
To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for
document-level pretraining. It achieves new state-of-the-art results in several downstream tasks, including form
understanding (from 70.72 to 79.27), receipt understanding (from 94.02 to 95.24) and document image classification
(from 93.07 to 94.42).
Usage tips
In addition to input_ids, [~transformers.LayoutLMModel.forward] also expects the input bbox, which are
the bounding boxes (i.e. 2D-positions) of the input tokens. These can be obtained using an external OCR engine such
as Google's Tesseract (there's a Python wrapper available). Each bounding box should be in (x0, y0, x1, y1) format, where
(x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1) represents the
position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on a 0-1000
scale. To normalize, you can use the following function:
```python
def normalize_bbox(bbox, width, height):
return [
int(1000 * (bbox[0] / width)),
int(1000 * (bbox[1] / height)),
int(1000 * (bbox[2] / width)),
int(1000 * (bbox[3] / height)),
]
```
Here, width and height correspond to the width and height of the original document in which the token
occurs. Those can be obtained using the Python Image Library (PIL) library for example, as follows:
```python
from PIL import Image
# Document can be a png, jpg, etc. PDFs must be converted to images.
image = Image.open(name_of_your_document).convert("RGB")
width, height = image.size
```
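Putting it together, here is a minimal sketch of passing normalized boxes to the model; the two words and their boxes below are made up for illustration:
```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMModel

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

words = ["Hello", "world"]
normalized_word_boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # already on the 0-1000 scale

# Repeat each word-level box for every subword token of that word
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
    word_tokens = tokenizer.tokenize(word)
    token_boxes.extend([box] * len(word_tokens))
# Add the boxes conventionally used for the [CLS] and [SEP] special tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

encoding = tokenizer(" ".join(words), return_tensors="pt")
bbox = torch.tensor([token_boxes])

outputs = model(
    input_ids=encoding["input_ids"],
    attention_mask=encoding["attention_mask"],
    token_type_ids=encoding["token_type_ids"],
    bbox=bbox,
)
print(outputs.last_hidden_state.shape)
```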
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLM. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on fine-tuning
LayoutLM for document-understanding using Keras & Hugging Face
Transformers.
A blog post on how to fine-tune LayoutLM for document-understanding using only Hugging Face Transformers.
A notebook on how to fine-tune LayoutLM on the FUNSD dataset with image embeddings.
See also: Document question answering task guide
A notebook on how to fine-tune LayoutLM for sequence classification on the RVL-CDIP dataset.
Text classification task guide
A notebook on how to fine-tune LayoutLM for token classification on the FUNSD dataset.
Token classification task guide
Other resources
- Masked language modeling task guide
🚀 Deploy
A blog post on how to Deploy LayoutLM with Hugging Face Inference Endpoints.
LayoutLMConfig
[[autodoc]] LayoutLMConfig
LayoutLMTokenizer
[[autodoc]] LayoutLMTokenizer
LayoutLMTokenizerFast
[[autodoc]] LayoutLMTokenizerFast
LayoutLMModel
[[autodoc]] LayoutLMModel
LayoutLMForMaskedLM
[[autodoc]] LayoutLMForMaskedLM
LayoutLMForSequenceClassification
[[autodoc]] LayoutLMForSequenceClassification
LayoutLMForTokenClassification
[[autodoc]] LayoutLMForTokenClassification
LayoutLMForQuestionAnswering
[[autodoc]] LayoutLMForQuestionAnswering
TFLayoutLMModel
[[autodoc]] TFLayoutLMModel
TFLayoutLMForMaskedLM
[[autodoc]] TFLayoutLMForMaskedLM
TFLayoutLMForSequenceClassification
[[autodoc]] TFLayoutLMForSequenceClassification
TFLayoutLMForTokenClassification
[[autodoc]] TFLayoutLMForTokenClassification
TFLayoutLMForQuestionAnswering
[[autodoc]] TFLayoutLMForQuestionAnswering
Table Transformer
Overview
The Table Transformer model was proposed in PubTables-1M: Towards comprehensive table extraction from unstructured documents by
Brandon Smock, Rohith Pesala, Robin Abraham. The authors introduce a new dataset, PubTables-1M, to benchmark progress in table extraction from unstructured documents,
as well as table structure recognition and functional analysis. The authors train 2 DETR models, one for table detection and one for table structure recognition, dubbed Table Transformers.
The abstract from the paper is the following:
Recently, significant progress has been made applying machine learning to the problem of table structure inference and extraction from unstructured documents.
However, one of the greatest challenges remains the creation of datasets with complete, unambiguous ground truth at scale. To address this, we develop a new, more
comprehensive dataset for table extraction, called PubTables-1M. PubTables-1M contains nearly one million tables from scientific articles, supports multiple input
modalities, and contains detailed header and location information for table structures, making it useful for a wide variety of modeling approaches. It also addresses a significant
source of ground truth inconsistency observed in prior datasets called oversegmentation, using a novel canonicalization procedure. We demonstrate that these improvements lead to a
significant increase in training performance and a more reliable estimate of model performance at evaluation for table structure recognition. Further, we show that transformer-based
object detection models trained on PubTables-1M produce excellent results for all three tasks of detection, structure recognition, and functional analysis without the need for any
special customization for these tasks.
Table detection and table structure recognition clarified. Taken from the original paper.
The authors released 2 models, one for table detection in
documents, one for table structure recognition
(the task of recognizing the individual rows, columns etc. in a table).
This model was contributed by nielsr. The original code can be
found here.
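A minimal table-detection sketch is shown below. The example image is assumed to be a document page you provide yourself; the checkpoint name is the table detection model released by the authors on the Hub:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

image_processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-detection")
model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection")

# "page.png" is a placeholder for an image of a document page containing one or more tables
image = Image.open("page.png").convert("RGB")
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Rescale the predicted boxes back to the original image size and keep confident detections
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 2) for c in box.tolist()])
```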
Resources
A demo notebook for the Table Transformer can be found here.
It turns out padding of images is quite important for detection. An interesting Github thread with replies from the authors can be found here.
TableTransformerConfig
[[autodoc]] TableTransformerConfig
TableTransformerModel
[[autodoc]] TableTransformerModel
- forward
TableTransformerForObjectDetection
[[autodoc]] TableTransformerForObjectDetection
- forward
LiLT
Overview
The LiLT model was proposed in LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding by Jiapeng Wang, Lianwen Jin, Kai Ding.
LiLT allows combining any pre-trained RoBERTa text encoder with a lightweight Layout Transformer to enable LayoutLM-like document understanding for many languages.
The abstract from the paper is the following:
Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing. However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited. To address this issue, we propose a simple yet effective Language-independent Layout Transformer (LiLT) for structured document understanding. LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models. Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure.
LiLT architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage tips
To combine the Language-Independent Layout Transformer with a new RoBERTa checkpoint from the hub, refer to this guide.
The script will result in config.json and pytorch_model.bin files being stored locally. After doing this, one can do the following (assuming you're logged in with your HuggingFace account):
```python
from transformers import LiltModel
model = LiltModel.from_pretrained("path_to_your_files")
model.push_to_hub("name_of_repo_on_the_hub")
```
When preparing data for the model, make sure to use the token vocabulary that corresponds to the RoBERTa checkpoint you combined with the Layout Transformer.
As lilt-roberta-en-base uses the same vocabulary as LayoutLMv3, one can use [LayoutLMv3TokenizerFast] to prepare data for the model.
The same is true for lilt-xlm-roberta-base: one can use [LayoutXLMTokenizerFast] for that model.
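For example, preparing a couple of words and normalized boxes and running them through the base model could look like the sketch below; the full repo id (including the organization prefix) is an assumption, and the words/boxes are made up for illustration:
```python
import torch
from transformers import AutoTokenizer, LiltModel

# The "SCUT-DLVCLab/" prefix is an assumption; the text above only mentions "lilt-roberta-en-base"
tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
model = LiltModel.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")

# Hypothetical OCR output: words plus their bounding boxes, normalized to a 0-1000 scale
words = ["Hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]

encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)
print(outputs.last_hidden_state.shape)
```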
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LiLT.
Demo notebooks for LiLT can be found here.
Documentation resources
- Text classification task guide
- Token classification task guide
- Question answering task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
LiltConfig
[[autodoc]] LiltConfig
LiltModel
[[autodoc]] LiltModel
- forward
LiltForSequenceClassification
[[autodoc]] LiltForSequenceClassification
- forward
LiltForTokenClassification
[[autodoc]] LiltForTokenClassification
- forward
LiltForQuestionAnswering
[[autodoc]] LiltForQuestionAnswering
- forward
M2M100
Overview
The M2M100 model was proposed in Beyond English-Centric Multilingual Machine Translation by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky,
Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy
Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
The abstract from the paper is the following:
Existing work in translation demonstrated the potential of massively multilingual machine translation by training a
single model able to translate between any pair of languages. However, much of this work is English-Centric by training
only on data which was translated from or to English. While this is supported by large sources of training data, it
does not reflect translation needs worldwide. In this work, we create a true Many-to-Many multilingual translation
model that can translate directly between any pair of 100 languages. We build and open source a training dataset that
covers thousands of language directions with supervised data, created through large-scale mining. Then, we explore how
to effectively increase model capacity through a combination of dense scaling and language-specific sparse parameters
to create high quality models. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly
translating between non-English directions while performing competitively to the best single systems of WMT. We
open-source our scripts so that others may reproduce the data, evaluation, and final M2M-100 model.
This model was contributed by valhalla.
Usage tips and examples
M2M100 is a multilingual encoder-decoder (seq-to-seq) model primarily intended for translation tasks. As the model is
multilingual it expects the sequences in a certain format: A special language id token is used as prefix in both the
source and target text. The source text format is [lang_code] X [eos], where lang_code is source language
id for source text and target language id for target text, with X being the source or target text.
The [M2M100Tokenizer] depends on sentencepiece so be sure to install it before running the
examples. To install sentencepiece run pip install sentencepiece.
Supervised Training
```python
from transformers import M2M100Config, M2M100ForConditionalGeneration, M2M100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="fr")
src_text = "Life is like a box of chocolates."
tgt_text = "La vie est comme une boîte de chocolat."
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
loss = model(**model_inputs).loss  # forward pass
```
Generation
M2M100 uses the eos_token_id as the decoder_start_token_id for generation with the target language id
being forced as the first generated token. To force the target language id as the first generated token, pass the
forced_bos_token_id parameter to the generate method. The following example shows how to translate between
Hindi to French and Chinese to English using the facebook/m2m100_418M checkpoint.
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Life is like a box of chocolate." |
Resources
Translation task guide
Summarization task guide
M2M100Config
[[autodoc]] M2M100Config
M2M100Tokenizer
[[autodoc]] M2M100Tokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
M2M100Model
[[autodoc]] M2M100Model
- forward
M2M100ForConditionalGeneration
[[autodoc]] M2M100ForConditionalGeneration
- forward
OWLv2
Overview
OWLv2 was proposed in Scaling Open-Vocabulary Object Detection by Matthias Minderer, Alexey Gritsenko, Neil Houlsby. OWLv2 scales up OWL-ViT using self-training, which uses an existing detector to generate pseudo-box annotations on image-text pairs. This results in large gains over the previous state-of-the-art for zero-shot object detection.
The abstract from the paper is the following:
Open-vocabulary object detection has benefited greatly from pretrained vision-language models, but is still limited by the amount of available detection training data. While detection training data can be expanded by using Web image-text pairs as weak supervision, this has not been done at scales comparable to image-level pretraining. Here, we scale up detection data with self-training, which uses an existing detector to generate pseudo-box annotations on image-text pairs. Major challenges in scaling self-training are the choice of label space, pseudo-annotation filtering, and training efficiency. We present the OWLv2 model and OWL-ST self-training recipe, which address these challenges. OWLv2 surpasses the performance of previous state-of-the-art open-vocabulary detectors already at comparable training scales (~10M examples). However, with OWL-ST, we can scale to over 1B examples, yielding further large improvement: With an L/14 architecture, OWL-ST improves AP on LVIS rare classes, for which the model has seen no human box annotations, from 31.2% to 44.6% (43% relative improvement). OWL-ST unlocks Web-scale training for open-world localization, similar to what has been seen for image classification and language modelling.
OWLv2 high-level overview. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage example
OWLv2 is, just like its predecessor OWL-ViT, a zero-shot text-conditioned object detection model. OWL-ViT uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection.
[Owlv2ImageProcessor] can be used to resize (or rescale) and normalize images for the model and [CLIPTokenizer] is used to encode the text. [Owlv2Processor] wraps [Owlv2ImageProcessor] and [CLIPTokenizer] into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using [Owlv2Processor] and [Owlv2ForObjectDetection].
```python
import requests
from PIL import Image
import torch
from transformers import Owlv2Processor, Owlv2ForObjectDetection
processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)
# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to Pascal VOC format (xmin, ymin, xmax, ymax)
results = processor.post_process_object_detection(outputs=outputs, target_sizes=target_sizes, threshold=0.1)
i = 0 # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]
for box, score, label in zip(boxes, scores, labels):
box = [round(i, 2) for i in box.tolist()]
print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")
# Detected a photo of a cat with confidence 0.614 at location [341.67, 17.54, 642.32, 278.51]
# Detected a photo of a cat with confidence 0.665 at location [6.75, 38.97, 326.62, 354.85]
```
Resources
A demo notebook on using OWLv2 for zero- and one-shot (image-guided) object detection can be found here.
Zero-shot object detection task guide
The architecture of OWLv2 is identical to OWL-ViT, however the object detection head now also includes an objectness classifier, which predicts the (query-agnostic) likelihood that a predicted box contains an object (as opposed to background). The objectness score can be used to rank or filter predictions independently of text queries.
Usage of OWLv2 is identical to OWL-ViT with a new, updated image processor ([Owlv2ImageProcessor]).
Owlv2Config
[[autodoc]] Owlv2Config
- from_text_vision_configs
Owlv2TextConfig
[[autodoc]] Owlv2TextConfig
Owlv2VisionConfig
[[autodoc]] Owlv2VisionConfig
Owlv2ImageProcessor
[[autodoc]] Owlv2ImageProcessor
- preprocess
- post_process_object_detection
- post_process_image_guided_detection
Owlv2Processor
[[autodoc]] Owlv2Processor
Owlv2Model
[[autodoc]] Owlv2Model
- forward
- get_text_features
- get_image_features
Owlv2TextModel
[[autodoc]] Owlv2TextModel
- forward
Owlv2VisionModel
[[autodoc]] Owlv2VisionModel
- forward
Owlv2ForObjectDetection
[[autodoc]] Owlv2ForObjectDetection
- forward
- image_guided_detection
Funnel Transformer
Overview
The Funnel Transformer model was proposed in the paper Funnel-Transformer: Filtering out Sequential Redundancy for
Efficient Language Processing. It is a bidirectional transformer model, like
BERT, but with a pooling operation after each block of layers, a bit like in traditional convolutional neural networks
(CNN) in computer vision.
The abstract from the paper is the following:
With the success of language pretraining, it is highly desirable to develop more efficient architectures of good
scalability that can exploit the abundant unlabeled data at a lower cost. To improve the efficiency, we examine the
much-overlooked redundancy in maintaining a full-length token-level presentation, especially for tasks that only
require a single-vector presentation of the sequence. With this intuition, we propose Funnel-Transformer which
gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More
importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further
improve the model capacity. In addition, to perform token-level predictions as required by common pretraining
objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence
via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on
a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading
comprehension.
This model was contributed by sgugger. The original code can be found here.
Usage tips
Since Funnel Transformer uses pooling, the sequence length of the hidden states changes after each block of layers. This way, their length is divided by 2, which speeds up the computation of the next hidden states.
The base model therefore has a final sequence length that is a quarter of the original one. This model can be used
directly for tasks that just require a sentence summary (like sequence classification or multiple choice). For other
tasks, the full model is used; this full model has a decoder that upsamples the final hidden states to the same
sequence length as the input.
For tasks such as classification, this is not a problem, but for tasks like masked language modeling or token classification, we need a hidden state with the same sequence length as the original input. In those cases, the final hidden states are upsampled to the input sequence length and go through two additional layers. That's why there are two versions of each checkpoint. The version suffixed with “-base” contains only the three blocks, while the version without that suffix contains the three blocks and the upsampling head with its additional layers.
The Funnel Transformer checkpoints are all available with a full version and a base version. The first ones should be
used for [FunnelModel], [FunnelForPreTraining],
[FunnelForMaskedLM], [FunnelForTokenClassification] and
[FunnelForQuestionAnswering]. The second ones should be used for
[FunnelBaseModel], [FunnelForSequenceClassification] and
[FunnelForMultipleChoice].
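The distinction between the two checkpoint flavors can be seen directly from the output shapes; in the sketch below, "funnel-transformer/small" and "funnel-transformer/small-base" are assumed to be the public checkpoints on the Hub:
```python
import torch
from transformers import FunnelTokenizer, FunnelBaseModel, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

# The "-base" checkpoint keeps only the compressing blocks: its output is shorter than the input
base_model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
# The full checkpoint adds the upsampling decoder: its output has the same length as the input
full_model = FunnelModel.from_pretrained("funnel-transformer/small")

with torch.no_grad():
    print(base_model(**inputs).last_hidden_state.shape)  # reduced sequence length
    print(full_model(**inputs).last_hidden_state.shape)  # same sequence length as the input
```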
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
FunnelConfig
[[autodoc]] FunnelConfig
FunnelTokenizer
[[autodoc]] FunnelTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
FunnelTokenizerFast
[[autodoc]] FunnelTokenizerFast
Funnel specific outputs
[[autodoc]] models.funnel.modeling_funnel.FunnelForPreTrainingOutput
[[autodoc]] models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput
FunnelBaseModel
[[autodoc]] FunnelBaseModel
- forward
FunnelModel
[[autodoc]] FunnelModel
- forward
FunnelForPreTraining
[[autodoc]] FunnelForPreTraining
- forward
FunnelForMaskedLM
[[autodoc]] FunnelForMaskedLM
- forward
FunnelForSequenceClassification
[[autodoc]] FunnelForSequenceClassification
- forward
FunnelForMultipleChoice
[[autodoc]] FunnelForMultipleChoice
- forward
FunnelForTokenClassification
[[autodoc]] FunnelForTokenClassification
- forward
FunnelForQuestionAnswering
[[autodoc]] FunnelForQuestionAnswering
- forward
TFFunnelBaseModel
[[autodoc]] TFFunnelBaseModel
- call
TFFunnelModel
[[autodoc]] TFFunnelModel
- call
TFFunnelForPreTraining
[[autodoc]] TFFunnelForPreTraining
- call
TFFunnelForMaskedLM
[[autodoc]] TFFunnelForMaskedLM
- call
TFFunnelForSequenceClassification
[[autodoc]] TFFunnelForSequenceClassification
- call
TFFunnelForMultipleChoice
[[autodoc]] TFFunnelForMultipleChoice
- call
TFFunnelForTokenClassification
[[autodoc]] TFFunnelForTokenClassification
- call
TFFunnelForQuestionAnswering
[[autodoc]] TFFunnelForQuestionAnswering
- call
Llama2
Overview
The Llama2 model was proposed in Llama 2: Open Foundation and Fine-Tuned Chat Models by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom. It is a collection of foundation language models ranging from 7B to 70B parameters, with checkpoints finetuned for chat applications!
The abstract from the paper is the following:
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
Check out all Llama2 model checkpoints here.
This model was contributed by Arthur Zucker with contributions from Lysandre Debut. The code of the implementation in Hugging Face is based on GPT-NeoX here. The original code of the authors can be found here.
Usage tips
The Llama2 models were trained using bfloat16, but the original inference uses float16. The checkpoints uploaded on the Hub use torch_dtype = 'float16', which will be
used by the AutoModel API to cast the checkpoints from torch.float32 to torch.float16.
The dtype of the online weights is mostly irrelevant unless you are using torch_dtype="auto" when initializing a model with model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto"). The reason is that the model will first be downloaded (using the dtype of the checkpoints online), then cast to the default dtype of torch (torch.float32), and finally, if a torch_dtype is provided in the config, it will be used.
Training the model in float16 is not recommended and is known to produce nan; as such, the model should be trained in bfloat16.
Tips:
Weights for the Llama2 models can be obtained by filling out this form
The architecture is very similar to the first Llama, with the addition of Grouped Query Attention (GQA) following this paper
Setting config.pretraining_tp to a value different than 1 will activate the more accurate but slower computation of the linear layers, which should better match the original logits.
The original model uses pad_id = -1, which means that there is no padding token. We can't use the same logic here; make sure to add a padding token using tokenizer.add_special_tokens({"pad_token":"<pad>"}) and resize the token embeddings accordingly. You should also set model.config.pad_token_id. The embed_tokens layer of the model is initialized with self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx), which makes sure that encoding the padding token outputs zeros, so passing it when initializing is recommended.
After filling out the form and gaining access to the model checkpoints, you should be able to use the already converted checkpoints. Otherwise, if you are converting your own model, feel free to use the conversion script. The script can be called with the following (example) command:
```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```
After conversion, the model and tokenizer can be loaded via:
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
```
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even if the biggest versions
come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM). For the 70B model, this means roughly 140GB of RAM.
The LLaMA tokenizer is a BPE model based on sentencepiece. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
When using Flash Attention 2 via attn_implementation="flash_attention_2", don't pass torch_dtype to the from_pretrained class method and use Automatic Mixed-Precision training. When using Trainer, simply set either fp16 or bf16 to True. Otherwise, make sure you are using torch.autocast. This is required because Flash Attention only supports the fp16 and bf16 data types.
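Once you have access to the checkpoints, generation works like any other causal LM in the library. The gated meta-llama/Llama-2-7b-chat-hf repo id below is an assumption used for illustration; any converted Llama 2 checkpoint works the same way:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Gated checkpoint: requires having accepted the license on the Hub
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Tell me about gravity."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```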
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaMA2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Llama 2 is here - get it on Hugging Face, a blog post about Llama 2 and how to use it with 🤗 Transformers and 🤗 PEFT.
LLaMA 2 - Every Resource you need, a compilation of relevant resources to learn about LLaMA 2 and how to get started quickly.
A notebook on how to fine-tune Llama 2 in Google Colab using QLoRA and 4-bit precision. 🌎
A notebook on how to fine-tune the "Llama-v2-7b-guanaco" model with 4-bit QLoRA and generate Q&A datasets from PDFs. 🌎 |
A notebook on how to fine-tune the Llama 2 model with QLoRA, TRL, and a Korean text classification dataset. 🌎🇰🇷
⚗️ Optimization
- Fine-tune Llama 2 with DPO, a guide to using the TRL library's DPO method to fine tune Llama 2 on a specific dataset.
- Extended Guide: Instruction-tune Llama 2, a guide to training Llama 2 to generate instructions from inputs, transforming the model from instruction-following to instruction-giving.
- A notebook on how to fine-tune the Llama 2 model on a personal computer using QLoRa and TRL. 🌎
⚡️ Inference
- A notebook on how to quantize the Llama 2 model using GPTQ from the AutoGPTQ library. 🌎
- A notebook on how to run the Llama 2 Chat Model with 4-bit quantization on a local computer or Google Colab. 🌎
🚀 Deploy
- Fine-tune LLaMA 2 (7-70B) on Amazon SageMaker, a complete guide from setup to QLoRA fine-tuning and deployment on Amazon SageMaker.
- Deploy Llama 2 7B/13B/70B on Amazon SageMaker, a guide on using Hugging Face's LLM DLC container for secure and scalable deployment.
LlamaConfig
[[autodoc]] LlamaConfig
LlamaTokenizer
[[autodoc]] LlamaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
LlamaTokenizerFast
[[autodoc]] LlamaTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary
LlamaModel
[[autodoc]] LlamaModel
- forward
LlamaForCausalLM
[[autodoc]] LlamaForCausalLM
- forward
LlamaForSequenceClassification
[[autodoc]] LlamaForSequenceClassification
- forward
M-CTC-T
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The M-CTC-T model was proposed in Pseudo-Labeling For Massively Multilingual Speech Recognition by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16kHz audio signal.
The abstract from the paper is the following:
Semi-supervised learning through pseudo-labeling has become a staple of state-of-the-art monolingual
speech recognition systems. In this work, we extend pseudo-labeling to massively multilingual speech
recognition with 60 languages. We propose a simple pseudo-labeling recipe that works well even
with low-resource languages: train a supervised multilingual model, fine-tune it with semi-supervised
learning on a target language, generate pseudo-labels for that language, and train a final model using
pseudo-labels for all languages, either from scratch or by fine-tuning. Experiments on the labeled
Common Voice and unlabeled VoxPopuli datasets show that our recipe can yield a model with better
performance for many languages that also transfers well to LibriSpeech.
This model was contributed by cwkeam. The original code can be found here.
Usage tips
The PyTorch version of this model is only available in torch 1.9 and higher.
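A minimal transcription sketch is shown below (remember that this requires the pinned transformers==4.30.0 mentioned above; the speechbrain/m-ctc-t-large repo id and the dummy dataset are assumptions chosen for illustration):
```python
import torch
from datasets import load_dataset
from transformers import MCTCTForCTC, MCTCTProcessor

processor = MCTCTProcessor.from_pretrained("speechbrain/m-ctc-t-large")
model = MCTCTForCTC.from_pretrained("speechbrain/m-ctc-t-large")

# A tiny dummy LibriSpeech split, used here only to get a 16kHz waveform
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```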
Resources
Automatic speech recognition task guide
MCTCTConfig
[[autodoc]] MCTCTConfig
MCTCTFeatureExtractor
[[autodoc]] MCTCTFeatureExtractor
- call
MCTCTProcessor
[[autodoc]] MCTCTProcessor
- call
- from_pretrained
- save_pretrained
- batch_decode
- decode
MCTCTModel
[[autodoc]] MCTCTModel
- forward
MCTCTForCTC
[[autodoc]] MCTCTForCTC
- forward |
Blenderbot Small
Note that [BlenderbotSmallModel] and
[BlenderbotSmallForConditionalGeneration] are only used in combination with the checkpoint
facebook/blenderbot-90M. Larger Blenderbot checkpoints should
instead be used with [BlenderbotModel] and
[BlenderbotForConditionalGeneration].
Overview
The Blender chatbot model was proposed in Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu,
Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020.
The abstract of the paper is the following:
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
scaling neural models in the number of parameters and the size of the data they are trained on gives improved results,
we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent
persona. We show that large scale models can learn these skills when given appropriate training data and choice of
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn
dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing
failure cases of our models.
This model was contributed by patrickvonplaten. The authors' code can be
found here.
Usage tips
Blenderbot Small is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than
on the left.
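A minimal conversational sketch with the facebook/blenderbot-90M checkpoint referenced above could look as follows; the example utterance and default generation settings are just illustrative.
thon
from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

checkpoint = "facebook/blenderbot-90M"
tokenizer = BlenderbotSmallTokenizer.from_pretrained(checkpoint)
model = BlenderbotSmallForConditionalGeneration.from_pretrained(checkpoint)

utterance = "My friends are cool but they eat too many carbs."
# right-padding is the tokenizer default, matching the tip above about absolute position embeddings
inputs = tokenizer([utterance], return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))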
Resources |
Causal language modeling task guide
Translation task guide
Summarization task guide
BlenderbotSmallConfig
[[autodoc]] BlenderbotSmallConfig
BlenderbotSmallTokenizer
[[autodoc]] BlenderbotSmallTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
BlenderbotSmallTokenizerFast
[[autodoc]] BlenderbotSmallTokenizerFast |
BlenderbotSmallModel
[[autodoc]] BlenderbotSmallModel
- forward
BlenderbotSmallForConditionalGeneration
[[autodoc]] BlenderbotSmallForConditionalGeneration
- forward
BlenderbotSmallForCausalLM
[[autodoc]] BlenderbotSmallForCausalLM
- forward
TFBlenderbotSmallModel
[[autodoc]] TFBlenderbotSmallModel
- call
TFBlenderbotSmallForConditionalGeneration
[[autodoc]] TFBlenderbotSmallForConditionalGeneration
- call |
FlaxBlenderbotSmallModel
[[autodoc]] FlaxBlenderbotSmallModel
- call
- encode
- decode
FlaxBlenderbotSmallForConditionalGeneration
[[autodoc]] FlaxBlenderbotSmallForConditionalGeneration
- call
- encode
- decode |
ViTMatte
Overview
The ViTMatte model was proposed in Boosting Image Matting with Pretrained Plain Vision Transformers by Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang.
ViTMatte leverages plain Vision Transformers for the task of image matting, which is the process of accurately estimating the foreground object in images and videos.
The abstract from the paper is the following:
Recently, plain vision Transformers (ViTs) have shown impressive performance on various computer vision tasks, thanks to their strong modeling capacity and large-scale pretraining. However, they have not yet conquered the problem of image matting. We hypothesize that image matting could also be boosted by ViTs and present a new efficient and robust ViT-based matting system, named ViTMatte. Our method utilizes (i) a hybrid attention mechanism combined with a convolution neck to help ViTs achieve an excellent performance-computation trade-off in matting tasks. (ii) Additionally, we introduce the detail capture module, which just consists of simple lightweight convolutions to complement the detailed information required by matting. To the best of our knowledge, ViTMatte is the first work to unleash the potential of ViT on image matting with concise adaptation. It inherits many superior properties from ViT to matting, including various pretraining strategies, concise architecture design, and flexible inference strategies. We evaluate ViTMatte on Composition-1k and Distinctions-646, the most commonly used benchmark for image matting, our method achieves state-of-the-art performance and outperforms prior matting works by a large margin.
This model was contributed by nielsr.
The original code can be found here. |
ViTMatte high-level overview. Taken from the original paper.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTMatte.
A demo notebook regarding inference with [VitMatteForImageMatting], including background replacement, can be found here.
The model expects both the image and trimap (concatenated) as input. Use [VitMatteImageProcessor] for this purpose (a minimal sketch follows below).
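Here is a minimal inference sketch. The hustvl/vitmatte-small-composition-1k checkpoint name and the local image/trimap file paths are assumptions; substitute your own.
thon
import torch
from PIL import Image
from transformers import VitMatteImageProcessor, VitMatteForImageMatting

checkpoint = "hustvl/vitmatte-small-composition-1k"  # assumed checkpoint name
processor = VitMatteImageProcessor.from_pretrained(checkpoint)
model = VitMatteForImageMatting.from_pretrained(checkpoint)

# the trimap marks known foreground, known background and unknown regions of the image
image = Image.open("image.png").convert("RGB")   # assumed local file
trimap = Image.open("trimap.png").convert("L")   # assumed local file

# the processor concatenates the image and trimap along the channel dimension
inputs = processor(images=image, trimaps=trimap, return_tensors="pt")
with torch.no_grad():
    alphas = model(**inputs).alphas  # predicted alpha matte, shape (batch, 1, height, width)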
VitMatteConfig
[[autodoc]] VitMatteConfig
VitMatteImageProcessor
[[autodoc]] VitMatteImageProcessor
- preprocess
VitMatteForImageMatting
[[autodoc]] VitMatteForImageMatting
- forward |
BLOOM
Overview
The BLOOM model has been proposed with its various versions through the BigScience Workshop. BigScience is inspired by other open science initiatives where researchers have pooled their time and resources to collectively achieve a higher impact.
The architecture of BLOOM is essentially similar to GPT3 (auto-regressive model for next token prediction), but has been trained on 46 different languages and 13 programming languages.
Several smaller versions of the model have been trained on the same dataset. BLOOM is available in the following versions:
bloom-560m
bloom-1b1
bloom-1b7
bloom-3b
bloom-7b1
bloom (176B parameters)
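For a quick start with the smallest checkpoint listed above, a minimal generation sketch could look like this; the prompt and generation settings are purely illustrative.
thon
from transformers import AutoTokenizer, BloomForCausalLM

checkpoint = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = BloomForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))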
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLOOM. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. |
[BloomForCausalLM] is supported by this causal language modeling example script and notebook. |
See also:
- Causal language modeling task guide
- Text classification task guide
- Token classification task guide
- Question answering task guide
⚡️ Inference
- A blog on Optimization story: Bloom inference.
- A blog on Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate.
⚙️ Training
- A blog on The Technology Behind BLOOM Training.
BloomConfig
[[autodoc]] BloomConfig
- all
BloomTokenizerFast
[[autodoc]] BloomTokenizerFast
- all |
BloomModel
[[autodoc]] BloomModel
- forward
BloomForCausalLM
[[autodoc]] BloomForCausalLM
- forward
BloomForSequenceClassification
[[autodoc]] BloomForSequenceClassification
- forward
BloomForTokenClassification
[[autodoc]] BloomForTokenClassification
- forward
BloomForQuestionAnswering
[[autodoc]] BloomForQuestionAnswering
- forward
FlaxBloomModel
[[autodoc]] FlaxBloomModel
- call
FlaxBloomForCausalLM
[[autodoc]] FlaxBloomForCausalLM
- call |
Speech2Text2
Overview
The Speech2Text2 model is used together with Wav2Vec2 for Speech Translation models proposed in
Large-Scale Self- and Semi-Supervised Learning for Speech Translation by
Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
Speech2Text2 is a decoder-only transformer model that can be used with any speech encoder-only model, such as
Wav2Vec2 or HuBERT, for Speech-to-Text tasks. Please refer to the
SpeechEncoderDecoder class for how to combine Speech2Text2 with any speech encoder-only
model.
This model was contributed by Patrick von Platen.
The original code can be found here.
Usage tips |
Speech2Text2 achieves state-of-the-art results on the CoVoST Speech Translation dataset. For more information, see
the official models.
Speech2Text2 is always used within the SpeechEncoderDecoder framework.
Speech2Text2's tokenizer is based on fastBPE. |
Inference
Speech2Text2's [SpeechEncoderDecoderModel] model accepts raw waveform input values from speech and
makes use of [~generation.GenerationMixin.generate] to translate the input speech
autoregressively to the target language.
The [Wav2Vec2FeatureExtractor] class is responsible for preprocessing the input speech and
[Speech2Text2Tokenizer] decodes the generated target tokens to the target string. The
[Speech2Text2Processor] wraps [Wav2Vec2FeatureExtractor] and
[Speech2Text2Tokenizer] into a single instance to both extract the input features and decode the
predicted token ids. |
Step-by-step Speech Translation
thon |
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf

# Wav2Vec2 encoder + Speech2Text2 decoder, fine-tuned for English-to-German speech translation
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")

def map_to_array(batch):
    # read the raw waveform from disk so the feature extractor can process it
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)

# extract input features from the 16 kHz waveform
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
# translate autoregressively, then decode the generated token ids to the target string
generated_ids = model.generate(inputs=inputs["input_values"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)