# coding=utf-8 # Copyright 2024 The Fairseq Authors and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Wav2Vec2Bert model configuration""" from ...configuration_utils import PretrainedConfig from ...utils import logging logger = logging.get_logger(__name__) class Wav2Vec2BertConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`Wav2Vec2BertModel`]. It is used to instantiate a Wav2Vec2Bert model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2Bert [facebook/wav2vec2-bert-rel-pos-large](https://huggingface.co/facebook/wav2vec2-bert-rel-pos-large) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*): Vocabulary size of the Wav2Vec2Bert model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`Wav2Vec2BertModel`]. hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. feature_projection_input_dim (`int`, *optional*, defaults to 160): Input dimension of this model, i.e. the dimension after processing input audio with [`SeamlessM4TFeatureExtractor`] or [`Wav2Vec2BertProcessor`]. hidden_act (`str` or `function`, *optional*, defaults to `"swish"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported. hidden_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. feat_proj_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for the feature projection. final_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for the final projection layer of [`Wav2Vec2BertForCTC`]. layerdrop (`float`, *optional*, defaults to 0.1): The LayerDrop probability. 
See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. apply_spec_augment (`bool`, *optional*, defaults to `True`): Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779). mask_time_prob (`float`, *optional*, defaults to 0.05): Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_time_length (`int`, *optional*, defaults to 10): Length of vector span along the time axis. mask_time_min_masks (`int`, *optional*, defaults to 2): The minimum number of masks of length `mask_time_length` generated along the time axis, each time step, irrespective of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length < mask_time_min_masks`. mask_feature_prob (`float`, *optional*, defaults to 0.0): Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_feature_length (`int`, *optional*, defaults to 10): Length of vector span along the feature axis. mask_feature_min_masks (`int`, *optional*, defaults to 0): The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time step, irrespective of `mask_feature_prob`. Only relevant if `mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`. ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`): Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of [`Wav2Vec2BertForCTC`]. ctc_zero_infinity (`bool`, *optional*, defaults to `False`): Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [`Wav2Vec2BertForCTC`]. use_weighted_layer_sum (`bool`, *optional*, defaults to `False`): Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [`Wav2Vec2BertForSequenceClassification`]. classifier_proj_size (`int`, *optional*, defaults to 768): Dimensionality of the projection before token mean-pooling for classification. 
tdnn_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`): A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN* module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers. tdnn_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)`): A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the *XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*. tdnn_dilation (`Tuple[int]` or `List[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`): A tuple of integers defining the dilation factor of each 1D convolutional layer in the *TDNN* module of the *XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*. xvector_output_dim (`int`, *optional*, defaults to 512): Dimensionality of the *XVector* embedding vectors. pad_token_id (`int`, *optional*, defaults to 0): The id of the _padding_ token. bos_token_id (`int`, *optional*, defaults to 1): The id of the _beginning-of-stream_ token. eos_token_id (`int`, *optional*, defaults to 2): The id of the _end-of-stream_ token. add_adapter (`bool`, *optional*, defaults to `False`): Whether a convolutional attention network should be stacked on top of the Wav2Vec2Bert Encoder. Can be very useful for warm-starting Wav2Vec2Bert for SpeechEncoderDecoder models. adapter_kernel_size (`int`, *optional*, defaults to 3): Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`. adapter_stride (`int`, *optional*, defaults to 2): Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`. num_adapter_layers (`int`, *optional*, defaults to 1): Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is True`. adapter_act (`str` or `function`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the adapter layers. If string, `"gelu"`, `"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported. use_intermediate_ffn_before_adapter (`bool`, *optional*, defaults to `False`): Whether an intermediate feed-forward block should be stacked on top of the Wav2Vec2Bert Encoder and before the adapter network. Only relevant if `add_adapter is True`. output_hidden_size (`int`, *optional*): Dimensionality of the encoder output layer. If not defined, this defaults to *hidden_size*. Only relevant if `add_adapter is True`. position_embeddings_type (`str`, *optional*, defaults to `"relative_key"`): Can be specified to: - `rotary`, for rotary position embeddings. - `relative`, for relative position embeddings. - `relative_key`, for relative position embeddings as defined by Shaw in [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). If left as `None`, no relative position embeddings are applied. rotary_embedding_base (`int`, *optional*, defaults to 10000): If `"rotary"` position embeddings are used, defines the size of the embedding base. max_source_positions (`int`, *optional*, defaults to 5000): If `"relative"` position embeddings are used, defines the maximum source input positions. left_max_position_embeddings (`int`, *optional*, defaults to 64): If `"relative_key"` (aka Shaw) position embeddings are used, defines the left clipping value for relative positions. 
right_max_position_embeddings (`int`, *optional*, defaults to 8): If `"relative_key"` (aka Shaw) position embeddings are used, defines the right clipping value for relative positions. conv_depthwise_kernel_size (`int`, *optional*, defaults to 31): Kernel size of convolutional depthwise 1D layer in Conformer blocks. conformer_conv_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all convolutional layers in Conformer blocks. Example: ```python >>> from transformers import Wav2Vec2BertConfig, Wav2Vec2BertModel >>> # Initializing a Wav2Vec2Bert facebook/wav2vec2-bert-rel-pos-large style configuration >>> configuration = Wav2Vec2BertConfig() >>> # Initializing a model (with random weights) from the facebook/wav2vec2-bert-rel-pos-large style configuration >>> model = Wav2Vec2BertModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" model_type = "wav2vec2-bert" def __init__( self, vocab_size=None, hidden_size=1024, num_hidden_layers=24, num_attention_heads=16, intermediate_size=4096, feature_projection_input_dim=160, hidden_act="swish", hidden_dropout=0.0, activation_dropout=0.0, attention_dropout=0.0, feat_proj_dropout=0.0, final_dropout=0.1, layerdrop=0.1, initializer_range=0.02, layer_norm_eps=1e-5, apply_spec_augment=True, mask_time_prob=0.05, mask_time_length=10, mask_time_min_masks=2, mask_feature_prob=0.0, mask_feature_length=10, mask_feature_min_masks=0, ctc_loss_reduction="sum", ctc_zero_infinity=False, use_weighted_layer_sum=False, classifier_proj_size=768, tdnn_dim=(512, 512, 512, 512, 1500), tdnn_kernel=(5, 3, 3, 1, 1), tdnn_dilation=(1, 2, 3, 1, 1), xvector_output_dim=512, pad_token_id=0, bos_token_id=1, eos_token_id=2, add_adapter=False, adapter_kernel_size=3, adapter_stride=2, num_adapter_layers=1, adapter_act="relu", use_intermediate_ffn_before_adapter=False, output_hidden_size=None, position_embeddings_type="relative_key", rotary_embedding_base=10000, max_source_positions=5000, left_max_position_embeddings=64, right_max_position_embeddings=8, conv_depthwise_kernel_size=31, conformer_conv_dropout=0.1, **kwargs, ): super().__init__(**kwargs, pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id) self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.num_attention_heads = num_attention_heads self.feature_projection_input_dim = feature_projection_input_dim self.hidden_dropout = hidden_dropout self.attention_dropout = attention_dropout self.activation_dropout = activation_dropout self.feat_proj_dropout = feat_proj_dropout self.final_dropout = final_dropout self.layerdrop = layerdrop self.layer_norm_eps = layer_norm_eps self.initializer_range = initializer_range self.vocab_size = vocab_size self.use_weighted_layer_sum = use_weighted_layer_sum self.max_source_positions = max_source_positions if position_embeddings_type is not None and position_embeddings_type not in [ "rotary", "relative", "relative_key", ]: raise ValueError( """ `position_embeddings_type` is not valid. It must be one of the following values: `["rotary", "relative", "relative_key"]` or left as `None`. 
""" ) self.position_embeddings_type = position_embeddings_type self.rotary_embedding_base = rotary_embedding_base self.left_max_position_embeddings = left_max_position_embeddings self.right_max_position_embeddings = right_max_position_embeddings # Conformer-block related self.conv_depthwise_kernel_size = conv_depthwise_kernel_size self.conformer_conv_dropout = conformer_conv_dropout # fine-tuning config parameters for SpecAugment: https://arxiv.org/abs/1904.08779 self.apply_spec_augment = apply_spec_augment self.mask_time_prob = mask_time_prob self.mask_time_length = mask_time_length self.mask_time_min_masks = mask_time_min_masks self.mask_feature_prob = mask_feature_prob self.mask_feature_length = mask_feature_length self.mask_feature_min_masks = mask_feature_min_masks # ctc loss self.ctc_loss_reduction = ctc_loss_reduction self.ctc_zero_infinity = ctc_zero_infinity # adapter self.add_adapter = add_adapter self.adapter_kernel_size = adapter_kernel_size self.adapter_stride = adapter_stride self.num_adapter_layers = num_adapter_layers self.adapter_act = adapter_act self.output_hidden_size = output_hidden_size if output_hidden_size is not None else hidden_size if use_intermediate_ffn_before_adapter and not add_adapter: raise ValueError("`use_intermediate_ffn_before_adapter` is `True` but `add_adapter` is `False`.") self.use_intermediate_ffn_before_adapter = use_intermediate_ffn_before_adapter # SequenceClassification-specific parameter. Feel free to ignore for other classes. self.classifier_proj_size = classifier_proj_size # XVector-specific parameters. Feel free to ignore for other classes. self.tdnn_dim = list(tdnn_dim) self.tdnn_kernel = list(tdnn_kernel) self.tdnn_dilation = list(tdnn_dilation) self.xvector_output_dim = xvector_output_dim @property def inputs_to_logits_ratio(self): ratio = self.feature_projection_input_dim * 2 if self.add_adapter: ratio = ratio * (self.adapter_stride**self.num_adapter_layers) return ratio __all__ = ["Wav2Vec2BertConfig"]
transformers/src/transformers/models/wav2vec2_bert/configuration_wav2vec2_bert.py/0
{ "file_path": "transformers/src/transformers/models/wav2vec2_bert/configuration_wav2vec2_bert.py", "repo_id": "transformers", "token_count": 7080 }
# coding=utf-8 # Copyright 2021 The Fairseq Authors, Microsoft Research, and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """PyTorch WavLM model.""" import math import warnings from typing import Optional, Tuple, Union import numpy as np import torch import torch.nn.functional as F import torch.utils.checkpoint from torch import nn from torch.nn import CrossEntropyLoss from ...activations import ACT2FN from ...integrations.deepspeed import is_deepspeed_zero3_enabled from ...integrations.fsdp import is_fsdp_managed_module from ...modeling_outputs import ( BaseModelOutput, CausalLMOutput, SequenceClassifierOutput, TokenClassifierOutput, Wav2Vec2BaseModelOutput, XVectorOutput, ) from ...modeling_utils import PreTrainedModel from ...utils import ( add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, is_peft_available, logging, ) from .configuration_wavlm import WavLMConfig logger = logging.get_logger(__name__) _HIDDEN_STATES_START_POSITION = 2 # General docstring _CONFIG_FOR_DOC = "WavLMConfig" # Base docstring _CHECKPOINT_FOR_DOC = "patrickvonplaten/wavlm-libri-clean-100h-base-plus" _EXPECTED_OUTPUT_SHAPE = [1, 292, 768] # CTC docstring _CTC_EXPECTED_OUTPUT = "'mister quilter is the aposle of the middle classes and we are glad to welcome his gospel'" _CTC_EXPECTED_LOSS = 12.51 # Frame class docstring _FRAME_CLASS_CHECKPOINT = "microsoft/wavlm-base-plus-sd" _FRAME_EXPECTED_OUTPUT = [0, 0] # Speaker Verification docstring _XVECTOR_CHECKPOINT = "microsoft/wavlm-base-plus-sv" _XVECTOR_EXPECTED_OUTPUT = 0.97 # Copied from transformers.models.wav2vec2.modeling_wav2vec2._compute_mask_indices def _compute_mask_indices( shape: Tuple[int, int], mask_prob: float, mask_length: int, attention_mask: Optional[torch.LongTensor] = None, min_masks: int = 0, ) -> np.ndarray: """ Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on CPU as part of the preprocessing during training. Args: shape: The shape for which to compute masks. This should be of a tuple of size 2 where the first element is the batch size and the second element is the length of the axis to span. mask_prob: The percentage of the whole axis (between 0 and 1) which will be masked. The number of independently generated mask spans of length `mask_length` is computed by `mask_prob*shape[1]/mask_length`. Note that due to overlaps, `mask_prob` is an upper bound and the actual percentage will be smaller. mask_length: size of the mask min_masks: minimum number of masked spans attention_mask: A (right-padded) attention mask which independently shortens the feature axis of each batch dimension. 
""" batch_size, sequence_length = shape if mask_length < 1: raise ValueError("`mask_length` has to be bigger than 0.") if mask_length > sequence_length: raise ValueError( f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length}" f" and `sequence_length`: {sequence_length}`" ) # epsilon is used for probabilistic rounding epsilon = np.random.rand(1).item() def compute_num_masked_span(input_length): """Given input length, compute how many spans should be masked""" num_masked_span = int(mask_prob * input_length / mask_length + epsilon) num_masked_span = max(num_masked_span, min_masks) # make sure num masked span <= sequence_length if num_masked_span * mask_length > sequence_length: num_masked_span = sequence_length // mask_length # make sure num_masked span is also <= input_length - (mask_length - 1) if input_length - (mask_length - 1) < num_masked_span: num_masked_span = max(input_length - (mask_length - 1), 0) return num_masked_span # compute number of masked spans in batch input_lengths = ( attention_mask.sum(-1).detach().tolist() if attention_mask is not None else [sequence_length for _ in range(batch_size)] ) # SpecAugment mask to fill spec_aug_mask = np.zeros((batch_size, sequence_length), dtype=bool) spec_aug_mask_idxs = [] max_num_masked_span = compute_num_masked_span(sequence_length) if max_num_masked_span == 0: return spec_aug_mask for input_length in input_lengths: # compute num of masked spans for this input num_masked_span = compute_num_masked_span(input_length) # get random indices to mask spec_aug_mask_idx = np.random.choice( np.arange(input_length - (mask_length - 1)), num_masked_span, replace=False ) # pick first sampled index that will serve as a dummy index to pad vector # to ensure same dimension for all batches due to probabilistic rounding # Picking first sample just pads those vectors twice. 
if len(spec_aug_mask_idx) == 0: # this case can only happen if `input_length` is strictly smaller then # `sequence_length` in which case the last token has to be a padding # token which we can use as a dummy mask id dummy_mask_idx = sequence_length - 1 else: dummy_mask_idx = spec_aug_mask_idx[0] spec_aug_mask_idx = np.concatenate( [spec_aug_mask_idx, np.ones(max_num_masked_span - num_masked_span, dtype=np.int32) * dummy_mask_idx] ) spec_aug_mask_idxs.append(spec_aug_mask_idx) spec_aug_mask_idxs = np.array(spec_aug_mask_idxs) # expand masked indices to masked spans spec_aug_mask_idxs = np.broadcast_to( spec_aug_mask_idxs[:, :, None], (batch_size, max_num_masked_span, mask_length) ) spec_aug_mask_idxs = spec_aug_mask_idxs.reshape(batch_size, max_num_masked_span * mask_length) # add offset to the starting indexes so that indexes now create a span offsets = np.arange(mask_length)[None, None, :] offsets = np.broadcast_to(offsets, (batch_size, max_num_masked_span, mask_length)).reshape( batch_size, max_num_masked_span * mask_length ) spec_aug_mask_idxs = spec_aug_mask_idxs + offsets # ensure that we cannot have indices larger than sequence_length if spec_aug_mask_idxs.max() > sequence_length - 1: spec_aug_mask_idxs[spec_aug_mask_idxs > sequence_length - 1] = sequence_length - 1 # scatter indices to mask np.put_along_axis(spec_aug_mask, spec_aug_mask_idxs, 1, -1) return spec_aug_mask # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2NoLayerNormConvLayer with Wav2Vec2->WavLM class WavLMNoLayerNormConvLayer(nn.Module): def __init__(self, config, layer_id=0): super().__init__() self.in_conv_dim = config.conv_dim[layer_id - 1] if layer_id > 0 else 1 self.out_conv_dim = config.conv_dim[layer_id] self.conv = nn.Conv1d( self.in_conv_dim, self.out_conv_dim, kernel_size=config.conv_kernel[layer_id], stride=config.conv_stride[layer_id], bias=config.conv_bias, ) self.activation = ACT2FN[config.feat_extract_activation] def forward(self, hidden_states): hidden_states = self.conv(hidden_states) hidden_states = self.activation(hidden_states) return hidden_states # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2LayerNormConvLayer with Wav2Vec2->WavLM class WavLMLayerNormConvLayer(nn.Module): def __init__(self, config, layer_id=0): super().__init__() self.in_conv_dim = config.conv_dim[layer_id - 1] if layer_id > 0 else 1 self.out_conv_dim = config.conv_dim[layer_id] self.conv = nn.Conv1d( self.in_conv_dim, self.out_conv_dim, kernel_size=config.conv_kernel[layer_id], stride=config.conv_stride[layer_id], bias=config.conv_bias, ) self.layer_norm = nn.LayerNorm(self.out_conv_dim, elementwise_affine=True) self.activation = ACT2FN[config.feat_extract_activation] def forward(self, hidden_states): hidden_states = self.conv(hidden_states) hidden_states = hidden_states.transpose(-2, -1) hidden_states = self.layer_norm(hidden_states) hidden_states = hidden_states.transpose(-2, -1) hidden_states = self.activation(hidden_states) return hidden_states # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2GroupNormConvLayer with Wav2Vec2->WavLM class WavLMGroupNormConvLayer(nn.Module): def __init__(self, config, layer_id=0): super().__init__() self.in_conv_dim = config.conv_dim[layer_id - 1] if layer_id > 0 else 1 self.out_conv_dim = config.conv_dim[layer_id] self.conv = nn.Conv1d( self.in_conv_dim, self.out_conv_dim, kernel_size=config.conv_kernel[layer_id], stride=config.conv_stride[layer_id], bias=config.conv_bias, ) self.activation = ACT2FN[config.feat_extract_activation] 
self.layer_norm = nn.GroupNorm(num_groups=self.out_conv_dim, num_channels=self.out_conv_dim, affine=True) def forward(self, hidden_states): hidden_states = self.conv(hidden_states) hidden_states = self.layer_norm(hidden_states) hidden_states = self.activation(hidden_states) return hidden_states # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2PositionalConvEmbedding with Wav2Vec2->WavLM class WavLMPositionalConvEmbedding(nn.Module): def __init__(self, config): super().__init__() self.conv = nn.Conv1d( config.hidden_size, config.hidden_size, kernel_size=config.num_conv_pos_embeddings, padding=config.num_conv_pos_embeddings // 2, groups=config.num_conv_pos_embedding_groups, ) weight_norm = nn.utils.weight_norm if hasattr(nn.utils.parametrizations, "weight_norm"): weight_norm = nn.utils.parametrizations.weight_norm if is_deepspeed_zero3_enabled(): import deepspeed with deepspeed.zero.GatheredParameters(self.conv.weight, modifier_rank=0): self.conv = weight_norm(self.conv, name="weight", dim=2) if hasattr(self.conv, "parametrizations"): weight_g = self.conv.parametrizations.weight.original0 weight_v = self.conv.parametrizations.weight.original1 else: weight_g = self.conv.weight_g weight_v = self.conv.weight_v deepspeed.zero.register_external_parameter(self, weight_v) deepspeed.zero.register_external_parameter(self, weight_g) else: self.conv = weight_norm(self.conv, name="weight", dim=2) self.padding = WavLMSamePadLayer(config.num_conv_pos_embeddings) self.activation = ACT2FN[config.feat_extract_activation] def forward(self, hidden_states): hidden_states = hidden_states.transpose(1, 2) hidden_states = self.conv(hidden_states) hidden_states = self.padding(hidden_states) hidden_states = self.activation(hidden_states) hidden_states = hidden_states.transpose(1, 2) return hidden_states # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2SamePadLayer with Wav2Vec2->WavLM class WavLMSamePadLayer(nn.Module): def __init__(self, num_conv_pos_embeddings): super().__init__() self.num_pad_remove = 1 if num_conv_pos_embeddings % 2 == 0 else 0 def forward(self, hidden_states): if self.num_pad_remove > 0: hidden_states = hidden_states[:, :, : -self.num_pad_remove] return hidden_states # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2FeatureEncoder with Wav2Vec2->WavLM class WavLMFeatureEncoder(nn.Module): """Construct the features from raw audio waveform""" def __init__(self, config): super().__init__() if config.feat_extract_norm == "group": conv_layers = [WavLMGroupNormConvLayer(config, layer_id=0)] + [ WavLMNoLayerNormConvLayer(config, layer_id=i + 1) for i in range(config.num_feat_extract_layers - 1) ] elif config.feat_extract_norm == "layer": conv_layers = [WavLMLayerNormConvLayer(config, layer_id=i) for i in range(config.num_feat_extract_layers)] else: raise ValueError( f"`config.feat_extract_norm` is {config.feat_extract_norm}, but has to be one of ['group', 'layer']" ) self.conv_layers = nn.ModuleList(conv_layers) self.gradient_checkpointing = False self._requires_grad = True def _freeze_parameters(self): for param in self.parameters(): param.requires_grad = False self._requires_grad = False def forward(self, input_values): hidden_states = input_values[:, None] # make sure hidden_states require grad for gradient_checkpointing if self._requires_grad and self.training: hidden_states.requires_grad = True for conv_layer in self.conv_layers: if self._requires_grad and self.gradient_checkpointing and self.training: hidden_states = 
self._gradient_checkpointing_func( conv_layer.__call__, hidden_states, ) else: hidden_states = conv_layer(hidden_states) return hidden_states class WavLMFeatureExtractor(WavLMFeatureEncoder): def __init__(self, config): super().__init__(config) warnings.warn( f"The class `{self.__class__.__name__}` has been deprecated " "and will be removed in Transformers v5. " f"Use `{self.__class__.__bases__[0].__name__}` instead.", FutureWarning, ) # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2FeatureProjection with Wav2Vec2->WavLM class WavLMFeatureProjection(nn.Module): def __init__(self, config): super().__init__() self.layer_norm = nn.LayerNorm(config.conv_dim[-1], eps=config.layer_norm_eps) self.projection = nn.Linear(config.conv_dim[-1], config.hidden_size) self.dropout = nn.Dropout(config.feat_proj_dropout) def forward(self, hidden_states): # non-projected hidden states are needed for quantization norm_hidden_states = self.layer_norm(hidden_states) hidden_states = self.projection(norm_hidden_states) hidden_states = self.dropout(hidden_states) return hidden_states, norm_hidden_states class WavLMAttention(nn.Module): """Multi-headed attention from 'Attention Is All You Need' paper""" def __init__( self, embed_dim: int, num_heads: int, dropout: float = 0.0, num_buckets: int = 320, max_distance: int = 800, has_relative_position_bias: bool = True, ): super().__init__() self.embed_dim = embed_dim self.num_heads = num_heads self.dropout = dropout self.head_dim = embed_dim // num_heads if (self.head_dim * num_heads) != self.embed_dim: raise ValueError( f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}" f" and `num_heads`: {num_heads})." ) self.scaling = self.head_dim**-0.5 self.k_proj = nn.Linear(embed_dim, embed_dim) self.v_proj = nn.Linear(embed_dim, embed_dim) self.q_proj = nn.Linear(embed_dim, embed_dim) self.out_proj = nn.Linear(embed_dim, embed_dim) self.num_buckets = num_buckets self.max_distance = max_distance self.gru_rel_pos_const = nn.Parameter(torch.ones(1, self.num_heads, 1, 1)) self.gru_rel_pos_linear = nn.Linear(self.head_dim, 8) if has_relative_position_bias: self.rel_attn_embed = nn.Embedding(self.num_buckets, self.num_heads) def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, position_bias: Optional[torch.Tensor] = None, output_attentions: bool = False, index=0, ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: """Attention layer with relative attention""" bsz, tgt_len, _ = hidden_states.size() # first pass of attention layer creates position bias if position_bias is None: position_bias = self.compute_bias(tgt_len, tgt_len) position_bias = ( position_bias.unsqueeze(0).repeat(bsz, 1, 1, 1).view(bsz * self.num_heads, tgt_len, tgt_len) ) # Compute relative position bias: # 1) reshape hidden_states gated_hidden_states = hidden_states.view(hidden_states.shape[:-1] + (self.num_heads, -1)) gated_hidden_states = gated_hidden_states.permute(0, 2, 1, 3) # 2) project hidden states relative_position_proj = self.gru_rel_pos_linear(gated_hidden_states) relative_position_proj = relative_position_proj.view(gated_hidden_states.shape[:-1] + (2, 4)).sum(-1) # 3) compute gate for position bias from projected hidden states gate_a, gate_b = torch.sigmoid(relative_position_proj).chunk(2, dim=-1) gate_output = gate_a * (gate_b * self.gru_rel_pos_const - 1.0) + 2.0 # 4) apply gate to position bias to compute gated position_bias gated_position_bias = gate_output.view(bsz * 
self.num_heads, -1, 1) * position_bias gated_position_bias = gated_position_bias.view((-1, tgt_len, tgt_len)) attn_output, attn_weights = self.torch_multi_head_self_attention( hidden_states, attention_mask, gated_position_bias, output_attentions ) return attn_output, attn_weights, position_bias def torch_multi_head_self_attention( self, hidden_states: torch.FloatTensor, attention_mask: Union[torch.LongTensor, torch.BoolTensor], gated_position_bias: torch.FloatTensor, output_attentions: bool, ) -> (torch.FloatTensor, torch.FloatTensor): """simple wrapper around torch's multi_head_attention_forward function""" # self-attention assumes q = k = v query = key = value = hidden_states.transpose(0, 1) key_padding_mask = attention_mask.ne(1) if attention_mask is not None else None # disable bias and add_zero_attn bias_k = bias_v = None add_zero_attn = False # PyTorch 1.3.0 has F.multi_head_attention_forward defined # so no problem with backwards compatibility attn_output, attn_weights = F.multi_head_attention_forward( query, key, value, self.embed_dim, self.num_heads, torch.empty([0]), torch.cat((self.q_proj.bias, self.k_proj.bias, self.v_proj.bias)), bias_k, bias_v, add_zero_attn, self.dropout, self.out_proj.weight, self.out_proj.bias, self.training, key_padding_mask, output_attentions, gated_position_bias, use_separate_proj_weight=True, q_proj_weight=self.q_proj.weight, k_proj_weight=self.k_proj.weight, v_proj_weight=self.v_proj.weight, ) # [Seq_Len, Batch Size, ...] -> [Batch Size, Seq_Len, ...] attn_output = attn_output.transpose(0, 1) if attn_weights is not None: # IMPORTANT: Attention weights are averaged weights # here which should not be the case. This is an open issue # on PyTorch: https://github.com/pytorch/pytorch/issues/32590 attn_weights = attn_weights[:, None].broadcast_to( attn_weights.shape[:1] + (self.num_heads,) + attn_weights.shape[1:] ) return attn_output, attn_weights def compute_bias(self, query_length: int, key_length: int) -> torch.FloatTensor: context_position = torch.arange(query_length, dtype=torch.long)[:, None] memory_position = torch.arange(key_length, dtype=torch.long)[None, :] relative_position = memory_position - context_position relative_position_bucket = self._relative_positions_bucket(relative_position) relative_position_bucket = relative_position_bucket.to(self.rel_attn_embed.weight.device) values = self.rel_attn_embed(relative_position_bucket) values = values.permute([2, 0, 1]) return values def _relative_positions_bucket(self, relative_positions: torch.FloatTensor) -> torch.FloatTensor: num_buckets = self.num_buckets // 2 relative_buckets = (relative_positions > 0).to(torch.long) * num_buckets relative_positions = torch.abs(relative_positions) max_exact = num_buckets // 2 is_small = relative_positions < max_exact relative_positions_if_large = torch.log(relative_positions.float() / max_exact) relative_positions_if_large = relative_positions_if_large / math.log(self.max_distance / max_exact) relative_positions_if_large = relative_positions_if_large * (num_buckets - max_exact) relative_position_if_large = (max_exact + relative_positions_if_large).to(torch.long) relative_position_if_large = torch.min( relative_position_if_large, torch.full_like(relative_position_if_large, num_buckets - 1) ) relative_buckets += torch.where(is_small, relative_positions, relative_position_if_large) return relative_buckets # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2FeedForward with Wav2Vec2->WavLM class WavLMFeedForward(nn.Module): def __init__(self, config): 
super().__init__() self.intermediate_dropout = nn.Dropout(config.activation_dropout) self.intermediate_dense = nn.Linear(config.hidden_size, config.intermediate_size) if isinstance(config.hidden_act, str): self.intermediate_act_fn = ACT2FN[config.hidden_act] else: self.intermediate_act_fn = config.hidden_act self.output_dense = nn.Linear(config.intermediate_size, config.hidden_size) self.output_dropout = nn.Dropout(config.hidden_dropout) def forward(self, hidden_states): hidden_states = self.intermediate_dense(hidden_states) hidden_states = self.intermediate_act_fn(hidden_states) hidden_states = self.intermediate_dropout(hidden_states) hidden_states = self.output_dense(hidden_states) hidden_states = self.output_dropout(hidden_states) return hidden_states class WavLMEncoderLayer(nn.Module): def __init__(self, config: WavLMConfig, has_relative_position_bias: bool = True): super().__init__() self.attention = WavLMAttention( embed_dim=config.hidden_size, num_heads=config.num_attention_heads, dropout=config.attention_dropout, num_buckets=config.num_buckets, max_distance=config.max_bucket_distance, has_relative_position_bias=has_relative_position_bias, ) self.dropout = nn.Dropout(config.hidden_dropout) self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.feed_forward = WavLMFeedForward(config) self.final_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) def forward(self, hidden_states, attention_mask=None, position_bias=None, output_attentions=False, index=0): attn_residual = hidden_states hidden_states, attn_weights, position_bias = self.attention( hidden_states, attention_mask=attention_mask, position_bias=position_bias, output_attentions=output_attentions, index=index, ) hidden_states = self.dropout(hidden_states) hidden_states = attn_residual + hidden_states hidden_states = self.layer_norm(hidden_states) hidden_states = hidden_states + self.feed_forward(hidden_states) hidden_states = self.final_layer_norm(hidden_states) outputs = (hidden_states, position_bias) if output_attentions: outputs += (attn_weights,) return outputs class WavLMEncoderLayerStableLayerNorm(nn.Module): def __init__(self, config: WavLMConfig, has_relative_position_bias: bool = True): super().__init__() self.attention = WavLMAttention( embed_dim=config.hidden_size, num_heads=config.num_attention_heads, dropout=config.attention_dropout, num_buckets=config.num_buckets, max_distance=config.max_bucket_distance, has_relative_position_bias=has_relative_position_bias, ) self.dropout = nn.Dropout(config.hidden_dropout) self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.feed_forward = WavLMFeedForward(config) self.final_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) def forward(self, hidden_states, attention_mask=None, position_bias=None, output_attentions=False): attn_residual = hidden_states hidden_states = self.layer_norm(hidden_states) hidden_states, attn_weights, position_bias = self.attention( hidden_states, attention_mask=attention_mask, position_bias=position_bias, output_attentions=output_attentions, ) hidden_states = self.dropout(hidden_states) hidden_states = attn_residual + hidden_states hidden_states = hidden_states + self.feed_forward(self.final_layer_norm(hidden_states)) outputs = (hidden_states, position_bias) if output_attentions: outputs += (attn_weights,) return outputs class WavLMEncoder(nn.Module): def __init__(self, config): super().__init__() self.config = config self.pos_conv_embed = 
WavLMPositionalConvEmbedding(config) self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.dropout = nn.Dropout(config.hidden_dropout) self.layers = nn.ModuleList( [WavLMEncoderLayer(config, has_relative_position_bias=(i == 0)) for i in range(config.num_hidden_layers)] ) self.gradient_checkpointing = False def forward( self, hidden_states, attention_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, ): all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None if attention_mask is not None: # make sure padded tokens output 0 expand_attention_mask = attention_mask.unsqueeze(-1).repeat(1, 1, hidden_states.shape[2]) hidden_states[~expand_attention_mask] = 0 position_embeddings = self.pos_conv_embed(hidden_states) hidden_states = hidden_states + position_embeddings hidden_states = self.layer_norm(hidden_states) hidden_states = self.dropout(hidden_states) synced_gpus = is_deepspeed_zero3_enabled() or is_fsdp_managed_module(self) position_bias = None for i, layer in enumerate(self.layers): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) dropout_probability = torch.rand([]) skip_the_layer = self.training and i > 0 and (dropout_probability < self.config.layerdrop) if not skip_the_layer or synced_gpus: # under fsdp or deepspeed zero3 all gpus must run in sync if self.gradient_checkpointing and self.training: layer_outputs = self._gradient_checkpointing_func( layer.__call__, hidden_states, attention_mask, position_bias, output_attentions, ) else: layer_outputs = layer( hidden_states, attention_mask=attention_mask, position_bias=position_bias, output_attentions=output_attentions, index=i, ) hidden_states, position_bias = layer_outputs[:2] if skip_the_layer: layer_outputs = (None, None, None) if output_attentions: all_self_attentions = all_self_attentions + (layer_outputs[2],) if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, all_hidden_states, all_self_attentions] if v is not None) return BaseModelOutput( last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_self_attentions, ) class WavLMEncoderStableLayerNorm(nn.Module): def __init__(self, config): super().__init__() self.config = config self.pos_conv_embed = WavLMPositionalConvEmbedding(config) self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.dropout = nn.Dropout(config.hidden_dropout) self.layers = nn.ModuleList( [ WavLMEncoderLayerStableLayerNorm(config, has_relative_position_bias=(i == 0)) for i in range(config.num_hidden_layers) ] ) self.gradient_checkpointing = False def forward( self, hidden_states, attention_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, ): all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None if attention_mask is not None: # make sure padded tokens are not attended to expand_attention_mask = attention_mask.unsqueeze(-1).repeat(1, 1, hidden_states.shape[2]) hidden_states[~expand_attention_mask] = 0 position_embeddings = self.pos_conv_embed(hidden_states) hidden_states = hidden_states + position_embeddings hidden_states = self.dropout(hidden_states) synced_gpus = is_deepspeed_zero3_enabled() or is_fsdp_managed_module(self) position_bias = None for i, layer in 
enumerate(self.layers): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) dropout_probability = torch.rand([]) skip_the_layer = self.training and i > 0 and (dropout_probability < self.config.layerdrop) if not skip_the_layer or synced_gpus: # under fsdp or deepspeed zero3 all gpus must run in sync # XXX: could optimize this like synced_gpus in generate_utils but not sure if it's worth the code complication if self.gradient_checkpointing and self.training: layer_outputs = self._gradient_checkpointing_func( layer.__call__, hidden_states, attention_mask, position_bias, output_attentions, ) else: layer_outputs = layer( hidden_states, attention_mask=attention_mask, output_attentions=output_attentions, position_bias=position_bias, ) hidden_states, position_bias = layer_outputs[:2] if skip_the_layer: layer_outputs = (None, None, None) if output_attentions: all_self_attentions = all_self_attentions + (layer_outputs[2],) hidden_states = self.layer_norm(hidden_states) if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, all_hidden_states, all_self_attentions] if v is not None) return BaseModelOutput( last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_self_attentions ) class WavLMGumbelVectorQuantizer(nn.Module): """ Vector quantization using gumbel softmax. See [CATEGORICAL REPARAMETERIZATION WITH GUMBEL-SOFTMAX](https://arxiv.org/pdf/1611.01144.pdf) for more information. """ def __init__(self, config): super().__init__() self.num_groups = config.num_codevector_groups self.num_vars = config.num_codevectors_per_group if config.codevector_dim % self.num_groups != 0: raise ValueError( f"`config.codevector_dim {config.codevector_dim} must be divisible" f" by `config.num_codevector_groups` {self.num_groups} " "for concatenation." 
) # storage for codebook variables (codewords) self.codevectors = nn.Parameter( torch.FloatTensor(1, self.num_groups * self.num_vars, config.codevector_dim // self.num_groups) ) self.weight_proj = nn.Linear(config.conv_dim[-1], self.num_groups * self.num_vars) # can be decayed for training self.temperature = 2 @staticmethod def _compute_perplexity(probs): marginal_probs = probs.mean(dim=0) perplexity = torch.exp(-torch.sum(marginal_probs * torch.log(marginal_probs + 1e-7), dim=-1)).sum() return perplexity def forward(self, hidden_states): batch_size, sequence_length, hidden_size = hidden_states.shape # project to codevector dim hidden_states = self.weight_proj(hidden_states) hidden_states = hidden_states.view(batch_size * sequence_length * self.num_groups, -1) if self.training: # sample code vector probs via gumbel in differentiateable way codevector_probs = nn.functional.gumbel_softmax(hidden_states.float(), tau=self.temperature, hard=True) codevector_probs = codevector_probs.type_as(hidden_states) # compute perplexity codevector_soft_dist = torch.softmax( hidden_states.view(batch_size * sequence_length, self.num_groups, -1).float(), dim=-1 ) perplexity = self._compute_perplexity(codevector_soft_dist) else: # take argmax in non-differentiable way # comptute hard codevector distribution (one hot) codevector_idx = hidden_states.argmax(dim=-1) codevector_probs = hidden_states.new_zeros(*hidden_states.shape).scatter_( -1, codevector_idx.view(-1, 1), 1.0 ) codevector_probs = codevector_probs.view(batch_size * sequence_length, self.num_groups, -1) perplexity = self._compute_perplexity(codevector_probs) codevector_probs = codevector_probs.view(batch_size * sequence_length, -1) # use probs to retrieve codevectors codevectors_per_group = codevector_probs.unsqueeze(-1) * self.codevectors codevectors = codevectors_per_group.view(batch_size * sequence_length, self.num_groups, self.num_vars, -1) codevectors = codevectors.sum(-2).view(batch_size, sequence_length, -1) return codevectors, perplexity # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2Adapter with Wav2Vec2->WavLM class WavLMAdapter(nn.Module): def __init__(self, config): super().__init__() # feature dim might need to be down-projected if config.output_hidden_size != config.hidden_size: self.proj = nn.Linear(config.hidden_size, config.output_hidden_size) self.proj_layer_norm = nn.LayerNorm(config.output_hidden_size) else: self.proj = self.proj_layer_norm = None self.layers = nn.ModuleList(WavLMAdapterLayer(config) for _ in range(config.num_adapter_layers)) self.layerdrop = config.layerdrop def forward(self, hidden_states): # down project hidden_states if necessary if self.proj is not None and self.proj_layer_norm is not None: hidden_states = self.proj(hidden_states) hidden_states = self.proj_layer_norm(hidden_states) hidden_states = hidden_states.transpose(1, 2) for layer in self.layers: layerdrop_prob = np.random.random() if not self.training or (layerdrop_prob > self.layerdrop): hidden_states = layer(hidden_states) hidden_states = hidden_states.transpose(1, 2) return hidden_states # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2AdapterLayer with Wav2Vec2->WavLM class WavLMAdapterLayer(nn.Module): def __init__(self, config): super().__init__() self.conv = nn.Conv1d( config.output_hidden_size, 2 * config.output_hidden_size, config.adapter_kernel_size, stride=config.adapter_stride, padding=1, ) def forward(self, hidden_states): hidden_states = self.conv(hidden_states) hidden_states = 
nn.functional.glu(hidden_states, dim=1) return hidden_states class WavLMPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = WavLMConfig base_model_prefix = "wavlm" main_input_name = "input_values" supports_gradient_checkpointing = True def _init_weights(self, module): """Initialize the weights""" # gumbel softmax requires special init if isinstance(module, WavLMGumbelVectorQuantizer): module.weight_proj.weight.data.normal_(mean=0.0, std=1) module.weight_proj.bias.data.zero_() nn.init.uniform_(module.codevectors) elif isinstance(module, WavLMPositionalConvEmbedding): nn.init.normal_( module.conv.weight, mean=0, std=2 * math.sqrt(1 / (module.conv.kernel_size[0] * module.conv.in_channels)), ) nn.init.constant_(module.conv.bias, 0) elif isinstance(module, WavLMFeatureProjection): k = math.sqrt(1 / module.projection.in_features) nn.init.uniform_(module.projection.weight, a=-k, b=k) nn.init.uniform_(module.projection.bias, a=-k, b=k) elif isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, (nn.LayerNorm, nn.GroupNorm)): module.bias.data.zero_() module.weight.data.fill_(1.0) elif isinstance(module, nn.Conv1d): nn.init.kaiming_normal_(module.weight) if module.bias is not None: k = math.sqrt(module.groups / (module.in_channels * module.kernel_size[0])) nn.init.uniform_(module.bias, a=-k, b=k) def _get_feat_extract_output_lengths( self, input_lengths: Union[torch.LongTensor, int], add_adapter: Optional[bool] = None ): """ Computes the output length of the convolutional layers """ add_adapter = self.config.add_adapter if add_adapter is None else add_adapter def _conv_out_length(input_length, kernel_size, stride): # 1D convolutional layer output length formula taken # from https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html return torch.div(input_length - kernel_size, stride, rounding_mode="floor") + 1 for kernel_size, stride in zip(self.config.conv_kernel, self.config.conv_stride): input_lengths = _conv_out_length(input_lengths, kernel_size, stride) if add_adapter: for _ in range(self.config.num_adapter_layers): input_lengths = _conv_out_length(input_lengths, 1, self.config.adapter_stride) return input_lengths def _get_feature_vector_attention_mask( self, feature_vector_length: int, attention_mask: torch.LongTensor, add_adapter=None ): # Effectively attention_mask.sum(-1), but not inplace to be able to run # on inference mode. 
non_padded_lengths = attention_mask.cumsum(dim=-1)[:, -1] output_lengths = self._get_feat_extract_output_lengths(non_padded_lengths, add_adapter=add_adapter) output_lengths = output_lengths.to(torch.long) batch_size = attention_mask.shape[0] attention_mask = torch.zeros( (batch_size, feature_vector_length), dtype=attention_mask.dtype, device=attention_mask.device ) # these two operations makes sure that all values before the output lengths idxs are attended to attention_mask[(torch.arange(attention_mask.shape[0], device=attention_mask.device), output_lengths - 1)] = 1 attention_mask = attention_mask.flip([-1]).cumsum(-1).flip([-1]).bool() return attention_mask WAVLM_START_DOCSTRING = r""" WavLM was proposed in [WavLM: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`WavLMConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. """ WAVLM_INPUTS_DOCSTRING = r""" Args: input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [`AutoProcessor`] should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [`Wav2Vec2Processor.__call__`] for details. attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) <Tip warning={true}> `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask == True`. For all models whose processor has `config.return_attention_mask == False`, `attention_mask` should **not** be passed to avoid degraded performance when doing batched inference. For such models `input_values` should simply be padded with 0 and passed without `attention_mask`. Be aware that these models also yield slightly different results depending on whether `input_values` is padded or not. </Tip> output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. 
return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. """ @add_start_docstrings( "The bare WavLM Model transformer outputting raw hidden-states without any specific head on top.", WAVLM_START_DOCSTRING, ) # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2Model with Wav2Vec2->WavLM, wav2vec2->wavlm, WAV_2_VEC_2->WAVLM, WavLMBaseModelOutput->Wav2Vec2BaseModelOutput class WavLMModel(WavLMPreTrainedModel): def __init__(self, config: WavLMConfig): super().__init__(config) self.config = config self.feature_extractor = WavLMFeatureEncoder(config) self.feature_projection = WavLMFeatureProjection(config) # model only needs masking vector if mask prob is > 0.0 if config.mask_time_prob > 0.0 or config.mask_feature_prob > 0.0: self.masked_spec_embed = nn.Parameter(torch.Tensor(config.hidden_size).uniform_()) if config.do_stable_layer_norm: self.encoder = WavLMEncoderStableLayerNorm(config) else: self.encoder = WavLMEncoder(config) self.adapter = WavLMAdapter(config) if config.add_adapter else None # Initialize weights and apply final processing self.post_init() def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. " "Please use the equivalent `freeze_feature_encoder` method instead.", FutureWarning, ) self.freeze_feature_encoder() def freeze_feature_encoder(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameter will not be updated during training. """ self.feature_extractor._freeze_parameters() def _mask_hidden_states( self, hidden_states: torch.FloatTensor, mask_time_indices: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.LongTensor] = None, ): """ Masks extracted features along time axis and/or along feature axis according to [SpecAugment](https://arxiv.org/abs/1904.08779). 
""" # `config.apply_spec_augment` can set masking to False if not getattr(self.config, "apply_spec_augment", True): return hidden_states # generate indices & apply SpecAugment along time axis batch_size, sequence_length, hidden_size = hidden_states.size() if mask_time_indices is not None: # apply SpecAugment along time axis with given mask_time_indices hidden_states[mask_time_indices] = self.masked_spec_embed.to(hidden_states.dtype) elif self.config.mask_time_prob > 0 and self.training: mask_time_indices = _compute_mask_indices( (batch_size, sequence_length), mask_prob=self.config.mask_time_prob, mask_length=self.config.mask_time_length, attention_mask=attention_mask, min_masks=self.config.mask_time_min_masks, ) mask_time_indices = torch.tensor(mask_time_indices, device=hidden_states.device, dtype=torch.bool) hidden_states[mask_time_indices] = self.masked_spec_embed.to(hidden_states.dtype) if self.config.mask_feature_prob > 0 and self.training: # generate indices & apply SpecAugment along feature axis mask_feature_indices = _compute_mask_indices( (batch_size, hidden_size), mask_prob=self.config.mask_feature_prob, mask_length=self.config.mask_feature_length, min_masks=self.config.mask_feature_min_masks, ) mask_feature_indices = torch.tensor(mask_feature_indices, device=hidden_states.device, dtype=torch.bool) mask_feature_indices = mask_feature_indices[:, None].expand(-1, sequence_length, -1) hidden_states[mask_feature_indices] = 0 return hidden_states @add_start_docstrings_to_model_forward(WAVLM_INPUTS_DOCSTRING) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=Wav2Vec2BaseModelOutput, config_class=_CONFIG_FOR_DOC, modality="audio", expected_output=_EXPECTED_OUTPUT_SHAPE, ) def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, mask_time_indices: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, Wav2Vec2BaseModelOutput]: output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict extract_features = self.feature_extractor(input_values) extract_features = extract_features.transpose(1, 2) if attention_mask is not None: # compute reduced attention_mask corresponding to feature vectors attention_mask = self._get_feature_vector_attention_mask( extract_features.shape[1], attention_mask, add_adapter=False ) hidden_states, extract_features = self.feature_projection(extract_features) hidden_states = self._mask_hidden_states( hidden_states, mask_time_indices=mask_time_indices, attention_mask=attention_mask ) encoder_outputs = self.encoder( hidden_states, attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = encoder_outputs[0] if self.adapter is not None: hidden_states = self.adapter(hidden_states) if not return_dict: return (hidden_states, extract_features) + encoder_outputs[1:] return Wav2Vec2BaseModelOutput( last_hidden_state=hidden_states, extract_features=extract_features, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) @add_start_docstrings( """WavLM Model with a `language modeling` head on top for Connectionist 
Temporal Classification (CTC).""", WAVLM_START_DOCSTRING, ) # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC with Wav2Vec2->WavLM, wav2vec2->wavlm, WAV_2_VEC_2->WAVLM class WavLMForCTC(WavLMPreTrainedModel): def __init__(self, config, target_lang: Optional[str] = None): super().__init__(config) self.wavlm = WavLMModel(config) self.dropout = nn.Dropout(config.final_dropout) self.target_lang = target_lang if config.vocab_size is None: raise ValueError( f"You are trying to instantiate {self.__class__} with a configuration that " "does not define the vocabulary size of the language model head. Please " "instantiate the model as follows: `WavLMForCTC.from_pretrained(..., vocab_size=vocab_size)`. " "or define `vocab_size` of your model's configuration." ) output_hidden_size = ( config.output_hidden_size if hasattr(config, "add_adapter") and config.add_adapter else config.hidden_size ) self.lm_head = nn.Linear(output_hidden_size, config.vocab_size) # Initialize weights and apply final processing self.post_init() def tie_weights(self): """ This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when passing `target_lang=...` to `from_pretrained(...)`. This method is **not** supposed to be called by the user and is prone to be changed in the future. """ # Note that `tie_weights` is usually used to tie input and output embedding weights. The method is re-purposed to # correctly load adapter layers for WavLM so that we do not have to introduce a new API to # [`PreTrainedModel`]. While slightly hacky, WavLM never has to tie input and output embeddings, so that it is # ok to repurpose this function here. target_lang = self.target_lang if target_lang is not None and getattr(self.config, "adapter_attn_dim", None) is None: raise ValueError(f"Cannot pass `target_lang`: {target_lang} if `config.adapter_attn_dim` is not defined.") elif target_lang is None and getattr(self.config, "adapter_attn_dim", None) is not None: logger.info("By default `target_lang` is set to 'eng'.") elif target_lang is not None: self.load_adapter(target_lang, force_load=True) def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameter will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. " "Please use the equivalent `freeze_feature_encoder` method instead.", FutureWarning, ) self.freeze_feature_encoder() def freeze_feature_encoder(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameter will not be updated during training. """ self.wavlm.feature_extractor._freeze_parameters() def freeze_base_model(self): """ Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated. 
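        Example (an illustrative sketch; the checkpoint name is only an example and the CTC head of a
        base checkpoint is randomly initialized):

        ```python
        >>> from transformers import WavLMForCTC

        >>> model = WavLMForCTC.from_pretrained("microsoft/wavlm-base")
        >>> model.freeze_base_model()

        >>> # only the `lm_head` parameters remain trainable
        >>> all(not p.requires_grad for p in model.wavlm.parameters())
        True
        >>> all(p.requires_grad for p in model.lm_head.parameters())
        True
        ```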
""" for param in self.wavlm.parameters(): param.requires_grad = False @add_start_docstrings_to_model_forward(WAVLM_INPUTS_DOCSTRING) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=CausalLMOutput, config_class=_CONFIG_FOR_DOC, expected_output=_CTC_EXPECTED_OUTPUT, expected_loss=_CTC_EXPECTED_LOSS, ) def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: Optional[torch.Tensor] = None, ) -> Union[Tuple, CausalLMOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*): Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size - 1]`. """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict if labels is not None and labels.max() >= self.config.vocab_size: raise ValueError(f"Label values must be <= vocab_size: {self.config.vocab_size}") outputs = self.wavlm( input_values, attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = outputs[0] hidden_states = self.dropout(hidden_states) logits = self.lm_head(hidden_states) loss = None if labels is not None: # retrieve loss input_lengths from attention_mask attention_mask = ( attention_mask if attention_mask is not None else torch.ones_like(input_values, dtype=torch.long) ) input_lengths = self._get_feat_extract_output_lengths(attention_mask.sum(-1)).to(torch.long) # assuming that padded tokens are filled with -100 # when not being attended to labels_mask = labels >= 0 target_lengths = labels_mask.sum(-1) flattened_targets = labels.masked_select(labels_mask) # ctc_loss doesn't support fp16 log_probs = nn.functional.log_softmax(logits, dim=-1, dtype=torch.float32).transpose(0, 1) with torch.backends.cudnn.flags(enabled=False): loss = nn.functional.ctc_loss( log_probs, flattened_targets, input_lengths, target_lengths, blank=self.config.pad_token_id, reduction=self.config.ctc_loss_reduction, zero_infinity=self.config.ctc_zero_infinity, ) if not return_dict: output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:] return ((loss,) + output) if loss is not None else output return CausalLMOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions ) @add_start_docstrings( """ WavLM Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. 
""", WAVLM_START_DOCSTRING, ) class WavLMForSequenceClassification(WavLMPreTrainedModel): def __init__(self, config): super().__init__(config) if hasattr(config, "add_adapter") and config.add_adapter: raise ValueError( "Sequence classification does not support the use of WavLM adapters (config.add_adapter=True)" ) self.wavlm = WavLMModel(config) num_layers = config.num_hidden_layers + 1 # transformer layers + input embeddings if config.use_weighted_layer_sum: self.layer_weights = nn.Parameter(torch.ones(num_layers) / num_layers) self.projector = nn.Linear(config.hidden_size, config.classifier_proj_size) self.classifier = nn.Linear(config.classifier_proj_size, config.num_labels) # Initialize weights and apply final processing self.post_init() # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForSequenceClassification.freeze_feature_extractor def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. " "Please use the equivalent `freeze_feature_encoder` method instead.", FutureWarning, ) self.freeze_feature_encoder() # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForSequenceClassification.freeze_feature_encoder with wav2vec2->wavlm def freeze_feature_encoder(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameter will not be updated during training. """ self.wavlm.feature_extractor._freeze_parameters() # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForSequenceClassification.freeze_base_model with wav2vec2->wavlm def freeze_base_model(self): """ Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated. """ for param in self.wavlm.parameters(): param.requires_grad = False @add_start_docstrings_to_model_forward(WAVLM_INPUTS_DOCSTRING) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=SequenceClassifierOutput, config_class=_CONFIG_FOR_DOC, modality="audio", ) # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForSequenceClassification.forward with Wav2Vec2->WavLM, wav2vec2->wavlm def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: Optional[torch.Tensor] = None, ) -> Union[Tuple, SequenceClassifierOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict output_hidden_states = True if self.config.use_weighted_layer_sum else output_hidden_states outputs = self.wavlm( input_values, attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) if self.config.use_weighted_layer_sum: hidden_states = outputs[_HIDDEN_STATES_START_POSITION] hidden_states = torch.stack(hidden_states, dim=1) norm_weights = nn.functional.softmax(self.layer_weights, dim=-1) hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1) else: hidden_states = outputs[0] hidden_states = self.projector(hidden_states) if attention_mask is None: pooled_output = hidden_states.mean(dim=1) else: padding_mask = self._get_feature_vector_attention_mask(hidden_states.shape[1], attention_mask) expand_padding_mask = padding_mask.unsqueeze(-1).repeat(1, 1, hidden_states.shape[2]) hidden_states[~expand_padding_mask] = 0.0 pooled_output = hidden_states.sum(dim=1) / padding_mask.sum(dim=1).view(-1, 1) logits = self.classifier(pooled_output) loss = None if labels is not None: loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1)) if not return_dict: output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:] return ((loss,) + output) if loss is not None else output return SequenceClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ WavLM Model with a frame classification head on top for tasks like Speaker Diarization. """, WAVLM_START_DOCSTRING, ) # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForAudioFrameClassification with Wav2Vec2->WavLM, wav2vec2->wavlm, WAV_2_VEC_2->WAVLM class WavLMForAudioFrameClassification(WavLMPreTrainedModel): def __init__(self, config): super().__init__(config) if hasattr(config, "add_adapter") and config.add_adapter: raise ValueError( "Audio frame classification does not support the use of WavLM adapters (config.add_adapter=True)" ) self.wavlm = WavLMModel(config) num_layers = config.num_hidden_layers + 1 # transformer layers + input embeddings if config.use_weighted_layer_sum: self.layer_weights = nn.Parameter(torch.ones(num_layers) / num_layers) self.classifier = nn.Linear(config.hidden_size, config.num_labels) self.num_labels = config.num_labels self.init_weights() def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameter will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. " "Please use the equivalent `freeze_feature_encoder` method instead.", FutureWarning, ) self.freeze_feature_encoder() def freeze_feature_encoder(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameter will not be updated during training. """ self.wavlm.feature_extractor._freeze_parameters() def freeze_base_model(self): """ Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated. 
""" for param in self.wavlm.parameters(): param.requires_grad = False @add_start_docstrings_to_model_forward(WAVLM_INPUTS_DOCSTRING) @add_code_sample_docstrings( checkpoint=_FRAME_CLASS_CHECKPOINT, output_type=TokenClassifierOutput, config_class=_CONFIG_FOR_DOC, modality="audio", expected_output=_FRAME_EXPECTED_OUTPUT, ) def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, TokenClassifierOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict output_hidden_states = True if self.config.use_weighted_layer_sum else output_hidden_states outputs = self.wavlm( input_values, attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) if self.config.use_weighted_layer_sum: hidden_states = outputs[_HIDDEN_STATES_START_POSITION] hidden_states = torch.stack(hidden_states, dim=1) norm_weights = nn.functional.softmax(self.layer_weights, dim=-1) hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1) else: hidden_states = outputs[0] logits = self.classifier(hidden_states) loss = None if labels is not None: loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), torch.argmax(labels.view(-1, self.num_labels), axis=1)) if not return_dict: output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:] return output return TokenClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) # Copied from transformers.models.wav2vec2.modeling_wav2vec2.AMSoftmaxLoss class AMSoftmaxLoss(nn.Module): def __init__(self, input_dim, num_labels, scale=30.0, margin=0.4): super(AMSoftmaxLoss, self).__init__() self.scale = scale self.margin = margin self.num_labels = num_labels self.weight = nn.Parameter(torch.randn(input_dim, num_labels), requires_grad=True) self.loss = nn.CrossEntropyLoss() def forward(self, hidden_states, labels): labels = labels.flatten() weight = nn.functional.normalize(self.weight, dim=0) hidden_states = nn.functional.normalize(hidden_states, dim=1) cos_theta = torch.mm(hidden_states, weight) psi = cos_theta - self.margin onehot = nn.functional.one_hot(labels, self.num_labels) logits = self.scale * torch.where(onehot.bool(), psi, cos_theta) loss = self.loss(logits, labels) return loss # Copied from transformers.models.wav2vec2.modeling_wav2vec2.TDNNLayer class TDNNLayer(nn.Module): def __init__(self, config, layer_id=0): super().__init__() self.in_conv_dim = config.tdnn_dim[layer_id - 1] if layer_id > 0 else config.tdnn_dim[layer_id] self.out_conv_dim = config.tdnn_dim[layer_id] self.kernel_size = config.tdnn_kernel[layer_id] self.dilation = config.tdnn_dilation[layer_id] self.kernel = nn.Linear(self.in_conv_dim * self.kernel_size, self.out_conv_dim) self.activation = nn.ReLU() def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: if is_peft_available(): from peft.tuners.lora import LoraLayer if 
isinstance(self.kernel, LoraLayer): warnings.warn( "Detected LoRA on TDNNLayer. LoRA weights won't be applied due to optimization. " "You should exclude TDNNLayer from LoRA's target modules.", ) # for backward compatibility, we keep nn.Linear but call F.conv1d for speed up hidden_states = hidden_states.transpose(1, 2) weight = self.kernel.weight.view(self.out_conv_dim, self.kernel_size, self.in_conv_dim).transpose(1, 2) hidden_states = nn.functional.conv1d(hidden_states, weight, self.kernel.bias, dilation=self.dilation) hidden_states = hidden_states.transpose(1, 2) hidden_states = self.activation(hidden_states) return hidden_states @add_start_docstrings( """ WavLM Model with an XVector feature extraction head on top for tasks like Speaker Verification. """, WAVLM_START_DOCSTRING, ) # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForXVector with Wav2Vec2->WavLM, wav2vec2->wavlm, WAV_2_VEC_2->WAVLM class WavLMForXVector(WavLMPreTrainedModel): def __init__(self, config): super().__init__(config) self.wavlm = WavLMModel(config) num_layers = config.num_hidden_layers + 1 # transformer layers + input embeddings if config.use_weighted_layer_sum: self.layer_weights = nn.Parameter(torch.ones(num_layers) / num_layers) self.projector = nn.Linear(config.hidden_size, config.tdnn_dim[0]) tdnn_layers = [TDNNLayer(config, i) for i in range(len(config.tdnn_dim))] self.tdnn = nn.ModuleList(tdnn_layers) self.feature_extractor = nn.Linear(config.tdnn_dim[-1] * 2, config.xvector_output_dim) self.classifier = nn.Linear(config.xvector_output_dim, config.xvector_output_dim) self.objective = AMSoftmaxLoss(config.xvector_output_dim, config.num_labels) self.init_weights() def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameter will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. " "Please use the equivalent `freeze_feature_encoder` method instead.", FutureWarning, ) self.freeze_feature_encoder() def freeze_feature_encoder(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameter will not be updated during training. """ self.wavlm.feature_extractor._freeze_parameters() def freeze_base_model(self): """ Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated. 
""" for param in self.wavlm.parameters(): param.requires_grad = False def _get_tdnn_output_lengths(self, input_lengths: Union[torch.LongTensor, int]): """ Computes the output length of the TDNN layers """ def _conv_out_length(input_length, kernel_size, stride): # 1D convolutional layer output length formula taken # from https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html return (input_length - kernel_size) // stride + 1 for kernel_size in self.config.tdnn_kernel: input_lengths = _conv_out_length(input_lengths, kernel_size, 1) return input_lengths @add_start_docstrings_to_model_forward(WAVLM_INPUTS_DOCSTRING) @add_code_sample_docstrings( checkpoint=_XVECTOR_CHECKPOINT, output_type=XVectorOutput, config_class=_CONFIG_FOR_DOC, modality="audio", expected_output=_XVECTOR_EXPECTED_OUTPUT, ) def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: Optional[torch.Tensor] = None, ) -> Union[Tuple, XVectorOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict output_hidden_states = True if self.config.use_weighted_layer_sum else output_hidden_states outputs = self.wavlm( input_values, attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) if self.config.use_weighted_layer_sum: hidden_states = outputs[_HIDDEN_STATES_START_POSITION] hidden_states = torch.stack(hidden_states, dim=1) norm_weights = nn.functional.softmax(self.layer_weights, dim=-1) hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1) else: hidden_states = outputs[0] hidden_states = self.projector(hidden_states) for tdnn_layer in self.tdnn: hidden_states = tdnn_layer(hidden_states) # Statistic Pooling if attention_mask is None: mean_features = hidden_states.mean(dim=1) std_features = hidden_states.std(dim=1) else: feat_extract_output_lengths = self._get_feat_extract_output_lengths(attention_mask.sum(dim=1)) tdnn_output_lengths = self._get_tdnn_output_lengths(feat_extract_output_lengths) mean_features = [] std_features = [] for i, length in enumerate(tdnn_output_lengths): mean_features.append(hidden_states[i, :length].mean(dim=0)) std_features.append(hidden_states[i, :length].std(dim=0)) mean_features = torch.stack(mean_features) std_features = torch.stack(std_features) statistic_pooling = torch.cat([mean_features, std_features], dim=-1) output_embeddings = self.feature_extractor(statistic_pooling) logits = self.classifier(output_embeddings) loss = None if labels is not None: loss = self.objective(logits, labels) if not return_dict: output = (logits, output_embeddings) + outputs[_HIDDEN_STATES_START_POSITION:] return ((loss,) + output) if loss is not None else output return XVectorOutput( loss=loss, logits=logits, embeddings=output_embeddings, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) __all__ = [ "WavLMForAudioFrameClassification", "WavLMForCTC", "WavLMForSequenceClassification", "WavLMForXVector", "WavLMModel", "WavLMPreTrainedModel", ]
transformers/src/transformers/models/wavlm/modeling_wavlm.py/0
{ "file_path": "transformers/src/transformers/models/wavlm/modeling_wavlm.py", "repo_id": "transformers", "token_count": 34204 }
# coding=utf-8 # Copyright 2022 Microsoft Research and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """PyTorch X-CLIP model.""" from copy import copy from dataclasses import dataclass from typing import Any, Optional, Tuple, Union import torch import torch.utils.checkpoint from torch import nn from ...activations import ACT2FN from ...modeling_attn_mask_utils import _create_4d_causal_attention_mask, _prepare_4d_attention_mask from ...modeling_outputs import BaseModelOutput, BaseModelOutputWithPooling from ...modeling_utils import PreTrainedModel from ...utils import ( ModelOutput, add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings, torch_int, ) from .configuration_x_clip import XCLIPConfig, XCLIPTextConfig, XCLIPVisionConfig logger = logging.get_logger(__name__) _CHECKPOINT_FOR_DOC = "microsoft/xclip-base-patch32" # contrastive loss function, adapted from # https://sachinruk.github.io/blog/pytorch/pytorch%20lightning/loss%20function/gpu/2021/03/07/CLIP.html def contrastive_loss(logits: torch.Tensor) -> torch.Tensor: return nn.functional.cross_entropy(logits, torch.arange(len(logits), device=logits.device)) # Copied from transformers.models.clip.modeling_clip.clip_loss with clip->x_clip def x_clip_loss(similarity: torch.Tensor) -> torch.Tensor: caption_loss = contrastive_loss(similarity) image_loss = contrastive_loss(similarity.t()) return (caption_loss + image_loss) / 2.0 @dataclass class XCLIPOutput(ModelOutput): """ Args: loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `return_loss` is `True`): Contrastive loss for video-text similarity. logits_per_video (`torch.FloatTensor` of shape `(video_batch_size, text_batch_size)`): The scaled dot product scores between `video_embeds` and `text_embeds`. This represents the video-text similarity scores. logits_per_text (`torch.FloatTensor` of shape `(text_batch_size, video_batch_size)`): The scaled dot product scores between `text_embeds` and `video_embeds`. This represents the text-video similarity scores. text_embeds(`torch.FloatTensor` of shape `(batch_size, output_dim`): The text embeddings obtained by applying the projection layer to the pooled output of [`XCLIPTextModel`]. video_embeds(`torch.FloatTensor` of shape `(batch_size, output_dim`): The video embeddings obtained by applying the projection layer to the pooled output of [`XCLIPVisionModel`]. text_model_output (`BaseModelOutputWithPooling`): The output of the [`XCLIPTextModel`]. vision_model_output (`BaseModelOutputWithPooling`): The output of the [`XCLIPVisionModel`]. mit_output (`BaseModelOutputWithPooling`): The output of `XCLIPMultiframeIntegrationTransformer` (MIT for short). 
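    Example (illustrative; `outputs` is assumed to be the return value of a [`XCLIPModel`] forward pass):

    ```python
    >>> # probabilities of each candidate text for every video in the batch
    >>> probs = outputs.logits_per_video.softmax(dim=1)
    ```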
""" loss: Optional[torch.FloatTensor] = None logits_per_video: torch.FloatTensor = None logits_per_text: torch.FloatTensor = None text_embeds: torch.FloatTensor = None video_embeds: torch.FloatTensor = None text_model_output: BaseModelOutputWithPooling = None vision_model_output: BaseModelOutputWithPooling = None mit_output: BaseModelOutputWithPooling = None def to_tuple(self) -> Tuple[Any]: return tuple( self[k] if k not in ["text_model_output", "vision_model_output", "mit_output"] else getattr(self, k).to_tuple() for k in self.keys() ) # Copied from transformers.models.clip.modeling_clip.CLIPVisionEmbeddings with CLIP->XCLIP class XCLIPVisionEmbeddings(nn.Module): def __init__(self, config: XCLIPVisionConfig): super().__init__() self.config = config self.embed_dim = config.hidden_size self.image_size = config.image_size self.patch_size = config.patch_size self.class_embedding = nn.Parameter(torch.randn(self.embed_dim)) self.patch_embedding = nn.Conv2d( in_channels=config.num_channels, out_channels=self.embed_dim, kernel_size=self.patch_size, stride=self.patch_size, bias=False, ) self.num_patches = (self.image_size // self.patch_size) ** 2 self.num_positions = self.num_patches + 1 self.position_embedding = nn.Embedding(self.num_positions, self.embed_dim) self.register_buffer("position_ids", torch.arange(self.num_positions).expand((1, -1)), persistent=False) def interpolate_pos_encoding(self, embeddings: torch.Tensor, height: int, width: int) -> torch.Tensor: """ This method allows to interpolate the pre-trained position encodings, to be able to use the model on higher resolution images. This method is also adapted to support torch.jit tracing. Adapted from: - https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362a/vision_transformer.py#L174-L194, and - https://github.com/facebookresearch/dinov2/blob/e1277af2ba9496fbadf7aec6eba56e8d882d1e35/dinov2/models/vision_transformer.py#L179-L211 """ num_patches = embeddings.shape[1] - 1 position_embedding = self.position_embedding.weight.unsqueeze(0) num_positions = position_embedding.shape[1] - 1 # always interpolate when tracing to ensure the exported model works for dynamic input shapes if not torch.jit.is_tracing() and num_patches == num_positions and height == width: return self.position_embedding(self.position_ids) class_pos_embed = position_embedding[:, :1] patch_pos_embed = position_embedding[:, 1:] dim = embeddings.shape[-1] new_height = height // self.patch_size new_width = width // self.patch_size sqrt_num_positions = torch_int(num_positions**0.5) patch_pos_embed = patch_pos_embed.reshape(1, sqrt_num_positions, sqrt_num_positions, dim) patch_pos_embed = patch_pos_embed.permute(0, 3, 1, 2) patch_pos_embed = nn.functional.interpolate( patch_pos_embed, size=(new_height, new_width), mode="bicubic", align_corners=False, ) patch_pos_embed = patch_pos_embed.permute(0, 2, 3, 1).view(1, -1, dim) return torch.cat((class_pos_embed, patch_pos_embed), dim=1) def forward(self, pixel_values: torch.FloatTensor, interpolate_pos_encoding=False) -> torch.Tensor: batch_size, _, height, width = pixel_values.shape if not interpolate_pos_encoding and (height != self.image_size or width != self.image_size): raise ValueError( f"Input image size ({height}*{width}) doesn't match model" f" ({self.image_size}*{self.image_size})." 
) target_dtype = self.patch_embedding.weight.dtype patch_embeds = self.patch_embedding(pixel_values.to(dtype=target_dtype)) # shape = [*, width, grid, grid] patch_embeds = patch_embeds.flatten(2).transpose(1, 2) class_embeds = self.class_embedding.expand(batch_size, 1, -1) embeddings = torch.cat([class_embeds, patch_embeds], dim=1) if interpolate_pos_encoding: embeddings = embeddings + self.interpolate_pos_encoding(embeddings, height, width) else: embeddings = embeddings + self.position_embedding(self.position_ids) return embeddings # Copied from transformers.models.clip.modeling_clip.CLIPTextEmbeddings with CLIP->XCLIP class XCLIPTextEmbeddings(nn.Module): def __init__(self, config: XCLIPTextConfig): super().__init__() embed_dim = config.hidden_size self.token_embedding = nn.Embedding(config.vocab_size, embed_dim) self.position_embedding = nn.Embedding(config.max_position_embeddings, embed_dim) # position_ids (1, len position emb) is contiguous in memory and exported when serialized self.register_buffer( "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)), persistent=False ) def forward( self, input_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, ) -> torch.Tensor: seq_length = input_ids.shape[-1] if input_ids is not None else inputs_embeds.shape[-2] max_position_embedding = self.position_embedding.weight.shape[0] if seq_length > max_position_embedding: raise ValueError( f"Sequence length must be less than max_position_embeddings (got `sequence length`: " f"{seq_length} and max_position_embeddings: {max_position_embedding}" ) if position_ids is None: position_ids = self.position_ids[:, :seq_length] if inputs_embeds is None: inputs_embeds = self.token_embedding(input_ids) position_embeddings = self.position_embedding(position_ids) embeddings = inputs_embeds + position_embeddings return embeddings # Copied from transformers.models.clip.modeling_clip.CLIPAttention with CLIP->XCLIP class XCLIPAttention(nn.Module): """Multi-headed attention from 'Attention Is All You Need' paper""" def __init__(self, config): super().__init__() self.config = config self.embed_dim = config.hidden_size self.num_heads = config.num_attention_heads self.head_dim = self.embed_dim // self.num_heads if self.head_dim * self.num_heads != self.embed_dim: raise ValueError( f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:" f" {self.num_heads})." 
) self.scale = self.head_dim**-0.5 self.dropout = config.attention_dropout self.k_proj = nn.Linear(self.embed_dim, self.embed_dim) self.v_proj = nn.Linear(self.embed_dim, self.embed_dim) self.q_proj = nn.Linear(self.embed_dim, self.embed_dim) self.out_proj = nn.Linear(self.embed_dim, self.embed_dim) def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, causal_attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]: """Input shape: Batch x Time x Channel""" bsz, tgt_len, embed_dim = hidden_states.size() # get query proj query_states = self.q_proj(hidden_states) * self.scale key_states = self._shape(self.k_proj(hidden_states), -1, bsz) value_states = self._shape(self.v_proj(hidden_states), -1, bsz) proj_shape = (bsz * self.num_heads, -1, self.head_dim) query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape) key_states = key_states.view(*proj_shape) value_states = value_states.view(*proj_shape) src_len = key_states.size(1) attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): raise ValueError( f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is" f" {attn_weights.size()}" ) # apply the causal_attention_mask first if causal_attention_mask is not None: if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len): raise ValueError( f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is" f" {causal_attention_mask.size()}" ) attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + causal_attention_mask attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) if attention_mask is not None: if attention_mask.size() != (bsz, 1, tgt_len, src_len): raise ValueError( f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}" ) attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) attn_weights = nn.functional.softmax(attn_weights, dim=-1) if output_attentions: # this operation is a bit akward, but it's required to # make sure that attn_weights keeps its gradient. 
# In order to do so, attn_weights have to reshaped # twice and have to be reused in the following attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len) else: attn_weights_reshaped = None attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) attn_output = torch.bmm(attn_probs, value_states) if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): raise ValueError( f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is" f" {attn_output.size()}" ) attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim) attn_output = attn_output.transpose(1, 2) attn_output = attn_output.reshape(bsz, tgt_len, embed_dim) attn_output = self.out_proj(attn_output) return attn_output, attn_weights_reshaped # Copied from transformers.models.clip.modeling_clip.CLIPMLP with CLIP->XCLIP class XCLIPMLP(nn.Module): def __init__(self, config): super().__init__() self.config = config self.activation_fn = ACT2FN[config.hidden_act] self.fc1 = nn.Linear(config.hidden_size, config.intermediate_size) self.fc2 = nn.Linear(config.intermediate_size, config.hidden_size) def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states = self.fc1(hidden_states) hidden_states = self.activation_fn(hidden_states) hidden_states = self.fc2(hidden_states) return hidden_states # Copied from transformers.models.altclip.modeling_altclip.AltCLIPEncoderLayer with AltCLIP->XCLIP class XCLIPEncoderLayer(nn.Module): def __init__(self, config: XCLIPConfig): super().__init__() self.embed_dim = config.hidden_size self.self_attn = XCLIPAttention(config) self.layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) self.mlp = XCLIPMLP(config) self.layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, causal_attention_mask: torch.Tensor, output_attentions: Optional[bool] = False, ) -> Tuple[torch.FloatTensor]: """ Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. `(config.encoder_attention_heads,)`. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. """ residual = hidden_states hidden_states = self.layer_norm1(hidden_states) hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, causal_attention_mask=causal_attention_mask, output_attentions=output_attentions, ) hidden_states = residual + hidden_states residual = hidden_states hidden_states = self.layer_norm2(hidden_states) hidden_states = self.mlp(hidden_states) hidden_states = residual + hidden_states outputs = (hidden_states,) if output_attentions: outputs += (attn_weights,) return outputs # Copied from transformers.models.beit.modeling_beit.drop_path def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor: """ Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). 
Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks, however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the argument. """ if drop_prob == 0.0 or not training: return input keep_prob = 1 - drop_prob shape = (input.shape[0],) + (1,) * (input.ndim - 1) # work with diff dim tensors, not just 2D ConvNets random_tensor = keep_prob + torch.rand(shape, dtype=input.dtype, device=input.device) random_tensor.floor_() # binarize output = input.div(keep_prob) * random_tensor return output # Copied from transformers.models.beit.modeling_beit.BeitDropPath with Beit->XCLIP class XCLIPDropPath(nn.Module): """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" def __init__(self, drop_prob: Optional[float] = None) -> None: super().__init__() self.drop_prob = drop_prob def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: return drop_path(hidden_states, self.drop_prob, self.training) def extra_repr(self) -> str: return "p={}".format(self.drop_prob) class XCLIPVisionEncoderLayer(nn.Module): """ This corresponds to the `CrossFramelAttentionBlock` class in the original implementation. """ def __init__(self, config: XCLIPConfig): super().__init__() self.num_frames = config.num_frames self.embed_dim = config.hidden_size self.message_fc = nn.Linear(self.embed_dim, self.embed_dim) self.message_ln = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) self.message_attn = XCLIPAttention(config) self.drop_path = XCLIPDropPath(config.drop_path_rate) if config.drop_path_rate > 0.0 else nn.Identity() self.self_attn = XCLIPAttention(config) self.layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) self.mlp = XCLIPMLP(config) self.layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, causal_attention_mask: torch.Tensor, output_attentions: Optional[bool] = False, ) -> Tuple[torch.FloatTensor]: """ Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. `(config.encoder_attention_heads,)`. causal_attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Causal mask for the text model. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
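        Example of the expected input layout (purely illustrative shapes; instantiating the layer itself
        requires a built `XCLIPVisionConfig`):

        ```python
        >>> import torch

        >>> # hidden states arrive flattened over the frame dimension:
        >>> # (batch_size * num_frames, num_patches + 1, hidden_size), with the class token at index 0
        >>> batch_size, num_frames, num_patches, hidden_size = 2, 8, 49, 768
        >>> hidden_states = torch.randn(batch_size * num_frames, num_patches + 1, hidden_size)
        ```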
""" batch_time, seq_length, hidden_size = hidden_states.size() batch_size = batch_time // self.num_frames msg_token = self.message_fc(hidden_states[:, 0, :]) msg_token = msg_token.view(batch_size, self.num_frames, hidden_size) msg_token = msg_token + self.drop_path(self.message_attn(self.message_ln(msg_token))[0]) # add dummy sequence dimension msg_token = msg_token.view(-1, 1, hidden_size) hidden_states = torch.cat([hidden_states, msg_token], dim=1) residual = hidden_states hidden_states = self.layer_norm1(hidden_states) hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, causal_attention_mask=causal_attention_mask, output_attentions=output_attentions, ) hidden_states = residual + hidden_states hidden_states = hidden_states[:, :seq_length, :] residual = hidden_states hidden_states = self.layer_norm2(hidden_states) hidden_states = self.mlp(hidden_states) hidden_states = residual + hidden_states outputs = (hidden_states,) if output_attentions: outputs += (attn_weights,) return outputs class XCLIPPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = XCLIPConfig base_model_prefix = "x_clip" supports_gradient_checkpointing = True def _init_weights(self, module): """Initialize the weights""" factor = self.config.initializer_factor if isinstance(module, XCLIPTextEmbeddings): module.token_embedding.weight.data.normal_(mean=0.0, std=factor * 0.02) module.position_embedding.weight.data.normal_(mean=0.0, std=factor * 0.02) elif isinstance(module, XCLIPVisionEmbeddings): factor = self.config.initializer_factor nn.init.normal_(module.class_embedding, mean=0.0, std=module.embed_dim**-0.5 * factor) nn.init.normal_(module.patch_embedding.weight, std=module.config.initializer_range * factor) nn.init.normal_(module.position_embedding.weight, std=module.config.initializer_range * factor) elif isinstance(module, XCLIPAttention): factor = self.config.initializer_factor in_proj_std = (module.embed_dim**-0.5) * ((2 * module.config.num_hidden_layers) ** -0.5) * factor out_proj_std = (module.embed_dim**-0.5) * factor nn.init.normal_(module.q_proj.weight, std=in_proj_std) nn.init.normal_(module.k_proj.weight, std=in_proj_std) nn.init.normal_(module.v_proj.weight, std=in_proj_std) nn.init.normal_(module.out_proj.weight, std=out_proj_std) elif isinstance(module, XCLIPMLP): factor = self.config.initializer_factor in_proj_std = (module.config.hidden_size**-0.5) * ((2 * module.config.num_hidden_layers) ** -0.5) * factor fc_std = (2 * module.config.hidden_size) ** -0.5 * factor nn.init.normal_(module.fc1.weight, std=fc_std) nn.init.normal_(module.fc2.weight, std=in_proj_std) elif isinstance(module, XCLIPModel): factor = self.config.initializer_factor nn.init.normal_( module.text_projection.weight, std=module.text_embed_dim**-0.5 * factor, ) nn.init.normal_( module.visual_projection.weight, std=module.vision_embed_dim**-0.5 * factor, ) nn.init.normal_(module.prompts_visual_projection, mean=0.0, std=module.vision_embed_dim**-0.5 * factor) elif isinstance(module, XCLIPMultiframeIntegrationTransformer): nn.init.normal_(module.position_embedding, std=self.config.initializer_factor) if isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) if isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_factor) if module.bias is not None: module.bias.data.zero_() 
X_CLIP_START_DOCSTRING = r""" This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`XCLIPConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. """ X_CLIP_TEXT_INPUTS_DOCSTRING = r""" Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. """ X_CLIP_VISION_INPUTS_DOCSTRING = r""" Args: pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [`AutoImageProcessor`]. See [`CLIPImageProcessor.__call__`] for details. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. interpolate_pos_encoding (`bool`, *optional*, defaults `False`): Whether to interpolate the pre-trained position encodings. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. """ X_CLIP_INPUTS_DOCSTRING = r""" Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. 
[What are attention masks?](../glossary#attention-mask) position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [`AutoImageProcessor`]. See [`CLIPImageProcessor.__call__`] for details. return_loss (`bool`, *optional*): Whether or not to return the contrastive loss. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. interpolate_pos_encoding (`bool`, *optional*, defaults `False`): Whether to interpolate the pre-trained position encodings. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. """ # Copied from transformers.models.altclip.modeling_altclip.AltCLIPEncoder with AltCLIP->XCLIP class XCLIPEncoder(nn.Module): """ Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a [`XCLIPEncoderLayer`]. Args: config: XCLIPConfig """ def __init__(self, config: XCLIPConfig): super().__init__() self.config = config self.layers = nn.ModuleList([XCLIPEncoderLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False def forward( self, inputs_embeds, attention_mask: Optional[torch.Tensor] = None, causal_attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutput]: r""" Args: inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) causal_attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Causal mask for the text model. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
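        Example (illustrative; gradient checkpointing is enabled on the top-level pretrained model,
        which in turn sets `gradient_checkpointing` on this encoder):

        ```python
        >>> model.gradient_checkpointing_enable()  # trade extra compute for lower memory during training
        ```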
""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None hidden_states = inputs_embeds for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) if self.gradient_checkpointing and self.training: layer_outputs = self._gradient_checkpointing_func( encoder_layer.__call__, hidden_states, attention_mask, causal_attention_mask, output_attentions, ) else: layer_outputs = encoder_layer( hidden_states, attention_mask, causal_attention_mask, output_attentions=output_attentions, ) hidden_states = layer_outputs[0] if output_attentions: all_attentions = all_attentions + (layer_outputs[1],) if output_hidden_states: encoder_states = encoder_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None) return BaseModelOutput( last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions ) class XCLIPTextTransformer(nn.Module): def __init__(self, config: XCLIPTextConfig): super().__init__() self.config = config embed_dim = config.hidden_size self.embeddings = XCLIPTextEmbeddings(config) self.encoder = XCLIPEncoder(config) self.final_layer_norm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) @add_start_docstrings_to_model_forward(X_CLIP_TEXT_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=XCLIPTextConfig) def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutputWithPooling]: r""" Returns: """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if input_ids is None: raise ValueError("You have to specify either input_ids") input_shape = input_ids.size() input_ids = input_ids.view(-1, input_shape[-1]) hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids) # X_CLIP's text model uses causal mask, prepare it here. 
# https://github.com/openai/CLIP/blob/cfcffb90e69f37bf2ff1e988237a0fbe41f33c04/clip/model.py#L324 causal_attention_mask = _create_4d_causal_attention_mask( input_shape, hidden_states.dtype, device=hidden_states.device ) # expand attention_mask if attention_mask is not None: # [batch_size, seq_len] -> [batch_size, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask(attention_mask, hidden_states.dtype) encoder_outputs = self.encoder( inputs_embeds=hidden_states, attention_mask=attention_mask, causal_attention_mask=causal_attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden_state = encoder_outputs[0] last_hidden_state = self.final_layer_norm(last_hidden_state) # text_embeds.shape = [batch_size, sequence_length, transformer.width] # take features from the eot embedding (eot_token is the highest number in each sequence) pooled_output = last_hidden_state[torch.arange(last_hidden_state.shape[0]), input_ids.argmax(dim=-1)] if not return_dict: return (last_hidden_state, pooled_output) + encoder_outputs[1:] return BaseModelOutputWithPooling( last_hidden_state=last_hidden_state, pooler_output=pooled_output, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) class XCLIPTextModel(XCLIPPreTrainedModel): config_class = XCLIPTextConfig def __init__(self, config: XCLIPTextConfig): super().__init__(config) self.text_model = XCLIPTextTransformer(config) # Initialize weights and apply final processing self.post_init() def get_input_embeddings(self) -> nn.Module: return self.text_model.embeddings.token_embedding def set_input_embeddings(self, value): self.text_model.embeddings.token_embedding = value @add_start_docstrings_to_model_forward(X_CLIP_TEXT_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=XCLIPTextConfig) def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutputWithPooling]: r""" Returns: Examples: ```python >>> from transformers import AutoTokenizer, XCLIPTextModel >>> model = XCLIPTextModel.from_pretrained("microsoft/xclip-base-patch32") >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/xclip-base-patch32") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output # pooled (EOS token) states ```""" return self.text_model( input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) class XCLIPVisionEncoder(nn.Module): """ Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a [`XCLIPVisionEncoderLayer`]. 
Args: config: XCLIPConfig """ def __init__(self, config: XCLIPConfig): super().__init__() self.config = config self.layers = nn.ModuleList([XCLIPVisionEncoderLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False def forward( self, inputs_embeds, attention_mask: Optional[torch.Tensor] = None, causal_attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutput]: r""" Args: inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) causal_attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Causal mask for the text model. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None hidden_states = inputs_embeds for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) if self.gradient_checkpointing and self.training: layer_outputs = self._gradient_checkpointing_func( encoder_layer.__call__, hidden_states, attention_mask, causal_attention_mask, output_attentions, ) else: layer_outputs = encoder_layer( hidden_states, attention_mask, causal_attention_mask, output_attentions=output_attentions, ) hidden_states = layer_outputs[0] if output_attentions: all_attentions = all_attentions + (layer_outputs[1],) if output_hidden_states: encoder_states = encoder_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None) return BaseModelOutput( last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions ) class XCLIPVisionTransformer(nn.Module): """ This corresponds to the `CrossFrameCommunicationTransformer` class in the original implementation. 
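    Video inputs are expected to be flattened over the frame dimension before reaching this module,
    i.e. `pixel_values` of shape `(batch_size * num_frames, num_channels, height, width)`, for example:

    ```python
    >>> pixel_values = pixel_values.reshape(-1, num_channels, height, width)
    ```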
""" def __init__(self, config: XCLIPVisionConfig): super().__init__() self.config = config embed_dim = config.hidden_size self.embeddings = XCLIPVisionEmbeddings(config) self.pre_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) self.encoder = XCLIPVisionEncoder(config) self.post_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) @add_start_docstrings_to_model_forward(X_CLIP_VISION_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=XCLIPVisionConfig) def forward( self, pixel_values: torch.FloatTensor, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutputWithPooling]: r""" Returns: """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict hidden_states = self.embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding) hidden_states = self.pre_layernorm(hidden_states) encoder_outputs = self.encoder( inputs_embeds=hidden_states, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden_state = encoder_outputs[0] pooled_output = last_hidden_state[:, 0, :] pooled_output = self.post_layernorm(pooled_output) if not return_dict: return (last_hidden_state, pooled_output) + encoder_outputs[1:] return BaseModelOutputWithPooling( last_hidden_state=last_hidden_state, pooler_output=pooled_output, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) class XCLIPVisionModel(XCLIPPreTrainedModel): config_class = XCLIPVisionConfig main_input_name = "pixel_values" def __init__(self, config: XCLIPVisionConfig): super().__init__(config) self.vision_model = XCLIPVisionTransformer(config) # Initialize weights and apply final processing self.post_init() def get_input_embeddings(self) -> nn.Module: return self.vision_model.embeddings.patch_embedding @add_start_docstrings_to_model_forward(X_CLIP_VISION_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=XCLIPVisionConfig) def forward( self, pixel_values: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutputWithPooling]: r""" Returns: Examples: ```python >>> import av >>> import torch >>> import numpy as np >>> from transformers import AutoProcessor, XCLIPVisionModel >>> from huggingface_hub import hf_hub_download >>> np.random.seed(0) >>> def read_video_pyav(container, indices): ... ''' ... Decode the video with PyAV decoder. ... Args: ... container (`av.container.input.InputContainer`): PyAV container. ... indices (`List[int]`): List of frame indices to decode. ... Returns: ... result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3). ... ''' ... frames = [] ... container.seek(0) ... start_index = indices[0] ... end_index = indices[-1] ... for i, frame in enumerate(container.decode(video=0)): ... if i > end_index: ... break ... if i >= start_index and i in indices: ... frames.append(frame) ... 
return np.stack([x.to_ndarray(format="rgb24") for x in frames]) >>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len): ... ''' ... Sample a given number of frame indices from the video. ... Args: ... clip_len (`int`): Total number of frames to sample. ... frame_sample_rate (`int`): Sample every n-th frame. ... seg_len (`int`): Maximum allowed index of sample's last frame. ... Returns: ... indices (`List[int]`): List of sampled frame indices ... ''' ... converted_len = int(clip_len * frame_sample_rate) ... end_idx = np.random.randint(converted_len, seg_len) ... start_idx = end_idx - converted_len ... indices = np.linspace(start_idx, end_idx, num=clip_len) ... indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64) ... return indices >>> # video clip consists of 300 frames (10 seconds at 30 FPS) >>> file_path = hf_hub_download( ... repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset" ... ) >>> container = av.open(file_path) >>> # sample 8 frames >>> indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames) >>> video = read_video_pyav(container, indices) >>> processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32") >>> model = XCLIPVisionModel.from_pretrained("microsoft/xclip-base-patch32") >>> pixel_values = processor(videos=list(video), return_tensors="pt").pixel_values >>> batch_size, num_frames, num_channels, height, width = pixel_values.shape >>> pixel_values = pixel_values.reshape(-1, num_channels, height, width) >>> outputs = model(pixel_values) >>> last_hidden_state = outputs.last_hidden_state ```""" return self.vision_model( pixel_values=pixel_values, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) class XCLIPMultiframeIntegrationTransformer(nn.Module): """ This corresponds to the `MultiframeIntegrationTransformer` class in the original implementation.
""" def __init__(self, config: XCLIPVisionConfig): super().__init__() self.position_embedding = nn.Parameter(torch.empty(1, config.num_frames, config.hidden_size)) self.encoder = XCLIPEncoder(config) def forward( self, hidden_states, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutput]: residual = hidden_states # add position embeddings hidden_states = hidden_states + self.position_embedding encoder_outputs = self.encoder( inputs_embeds=hidden_states, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden_state = encoder_outputs[0] last_hidden_state = last_hidden_state.type(hidden_states.dtype) + residual pooled_output = last_hidden_state.mean(dim=1, keepdim=False) if not return_dict: return (last_hidden_state, pooled_output) + encoder_outputs[1:] return BaseModelOutputWithPooling( last_hidden_state=last_hidden_state, pooler_output=pooled_output, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) class XCLIPCrossAttention(nn.Module): """Multi-headed attention from 'Attention Is All You Need' paper""" def __init__(self, config): super().__init__() self.num_heads = config.prompt_num_attention_heads dim = config.projection_dim head_dim = dim // self.num_heads self.scale = head_dim**-0.5 self.q_proj = nn.Linear(dim, dim, bias=False) self.k_proj = nn.Linear(dim, dim, bias=False) self.v_proj = nn.Linear(dim, dim, bias=False) self.attn_drop = nn.Dropout(config.prompt_attention_dropout) self.proj = nn.Linear(dim, dim) self.proj_drop = nn.Dropout(config.prompt_projection_dropout) def _shape(self, tensor: torch.Tensor, seq_len: int, batch_size: int): return tensor.view(batch_size, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() def forward(self, queries, keys, values): """Input shape: Batch x Time x Channel""" batch_size, query_seq_len, hidden_size = queries.shape batch_size, key_seq_len, hidden_size = keys.shape queries = ( self.q_proj(queries) .reshape(batch_size, query_seq_len, self.num_heads, hidden_size // self.num_heads) .permute(0, 2, 1, 3) ) keys = ( self.k_proj(keys) .reshape(batch_size, key_seq_len, self.num_heads, hidden_size // self.num_heads) .permute(0, 2, 1, 3) ) values = ( self.v_proj(values) .reshape(batch_size, key_seq_len, self.num_heads, hidden_size // self.num_heads) .permute(0, 2, 1, 3) ) attn = (queries @ keys.transpose(-2, -1)) * self.scale attn = attn.softmax(dim=-1) attn = self.attn_drop(attn) x = (attn @ values).transpose(1, 2).reshape(batch_size, query_seq_len, hidden_size) x = self.proj(x) x = self.proj_drop(x) return x class PromptGeneratorLayer(nn.Module): def __init__(self, config): super().__init__() embed_dim = config.projection_dim self.cross_attn = XCLIPCrossAttention(config) self.norm1 = nn.LayerNorm(embed_dim, eps=config.text_config.layer_norm_eps) self.norm3 = nn.LayerNorm(embed_dim, eps=config.text_config.layer_norm_eps) self.mlp = nn.Sequential( nn.Linear(embed_dim, embed_dim * 4), ACT2FN[config.prompt_hidden_act], nn.Dropout(config.prompt_attention_dropout), nn.Linear(embed_dim * 4, embed_dim), ) def forward(self, x, visual): x = x + self.cross_attn(self.norm1(x), visual, visual) x = x + self.mlp(self.norm3(x)) return x class XCLIPPromptGenerator(nn.Module): """This corresponds to the `VideoSpecificPrompt` class in the original implementation.""" def __init__(self, config): super().__init__() embed_dim = config.projection_dim 
self.layernorm = nn.LayerNorm(embed_dim, eps=config.vision_config.layer_norm_eps) self.decoder = nn.ModuleList([PromptGeneratorLayer(config) for _ in range(config.prompt_layers)]) self.alpha = nn.Parameter(torch.ones(embed_dim) * config.prompt_alpha) def forward(self, text, visual): visual = self.layernorm(visual) for layer in self.decoder: text = layer(text, visual) return self.alpha * text @add_start_docstrings(X_CLIP_START_DOCSTRING) class XCLIPModel(XCLIPPreTrainedModel): config_class = XCLIPConfig def __init__(self, config: XCLIPConfig): super().__init__(config) if not isinstance(config.text_config, XCLIPTextConfig): raise TypeError( "config.text_config is expected to be of type XCLIPTextConfig but is of type" f" {type(config.text_config)}." ) if not isinstance(config.vision_config, XCLIPVisionConfig): raise TypeError( "config.vision_config is expected to be of type XCLIPVisionConfig but is of type" f" {type(config.vision_config)}." ) text_config = config.text_config vision_config = config.vision_config self.projection_dim = config.projection_dim self.text_embed_dim = text_config.hidden_size self.vision_embed_dim = vision_config.hidden_size self.text_model = XCLIPTextTransformer(text_config) self.vision_model = XCLIPVisionTransformer(vision_config) self.visual_projection = nn.Linear(self.vision_embed_dim, self.projection_dim, bias=False) self.text_projection = nn.Linear(self.text_embed_dim, self.projection_dim, bias=False) self.logit_scale = nn.Parameter(torch.tensor(self.config.logit_scale_init_value)) self.prompts_visual_layernorm = nn.LayerNorm(self.vision_embed_dim, eps=config.vision_config.layer_norm_eps) self.prompts_visual_projection = nn.Parameter(torch.randn(self.vision_embed_dim, self.projection_dim)) mit_config = copy(vision_config) mit_config.hidden_size = vision_config.mit_hidden_size mit_config.intermediate_size = vision_config.mit_intermediate_size mit_config.num_hidden_layers = vision_config.mit_num_hidden_layers mit_config.num_attention_heads = vision_config.mit_num_attention_heads self.mit = XCLIPMultiframeIntegrationTransformer(mit_config) self.prompts_generator = XCLIPPromptGenerator(config) # Initialize weights and apply final processing self.post_init() @add_start_docstrings_to_model_forward(X_CLIP_TEXT_INPUTS_DOCSTRING) def get_text_features( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> torch.FloatTensor: r""" Returns: text_features (`torch.FloatTensor` of shape `(batch_size, output_dim`): The text embeddings obtained by applying the projection layer to the pooled output of [`XCLIPTextModel`]. Examples: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/xclip-base-patch32") >>> model = AutoModel.from_pretrained("microsoft/xclip-base-patch32") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") >>> text_features = model.get_text_features(**inputs) ```""" # Use X_CLIP model's config for some fields (if specified) instead of those of vision & text components. 
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict text_outputs = self.text_model( input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) text_embeds = text_outputs[1] text_embeds = self.text_projection(text_embeds) return text_embeds @add_start_docstrings_to_model_forward(X_CLIP_VISION_INPUTS_DOCSTRING) def get_video_features( self, pixel_values: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> torch.FloatTensor: r""" Returns: video_features (`torch.FloatTensor` of shape `(batch_size, output_dim`): The video embeddings obtained by applying the projection layer to the pooled output of [`XCLIPVisionModel`] and [`XCLIPMultiframeIntegrationTransformer`]. Examples: ```python >>> import av >>> import torch >>> import numpy as np >>> from transformers import AutoProcessor, AutoModel >>> from huggingface_hub import hf_hub_download >>> np.random.seed(0) >>> def read_video_pyav(container, indices): ... ''' ... Decode the video with PyAV decoder. ... Args: ... container (`av.container.input.InputContainer`): PyAV container. ... indices (`List[int]`): List of frame indices to decode. ... Returns: ... result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3). ... ''' ... frames = [] ... container.seek(0) ... start_index = indices[0] ... end_index = indices[-1] ... for i, frame in enumerate(container.decode(video=0)): ... if i > end_index: ... break ... if i >= start_index and i in indices: ... frames.append(frame) ... return np.stack([x.to_ndarray(format="rgb24") for x in frames]) >>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len): ... ''' ... Sample a given number of frame indices from the video. ... Args: ... clip_len (`int`): Total number of frames to sample. ... frame_sample_rate (`int`): Sample every n-th frame. ... seg_len (`int`): Maximum allowed index of sample's last frame. ... Returns: ... indices (`List[int]`): List of sampled frame indices ... ''' ... converted_len = int(clip_len * frame_sample_rate) ... end_idx = np.random.randint(converted_len, seg_len) ... start_idx = end_idx - converted_len ... indices = np.linspace(start_idx, end_idx, num=clip_len) ... indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64) ... return indices >>> # video clip consists of 300 frames (10 seconds at 30 FPS) >>> file_path = hf_hub_download( ... repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset" ... ) >>> container = av.open(file_path) >>> # sample 8 frames >>> indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames) >>> video = read_video_pyav(container, indices) >>> processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32") >>> model = AutoModel.from_pretrained("microsoft/xclip-base-patch32") >>> inputs = processor(videos=list(video), return_tensors="pt") >>> video_features = model.get_video_features(**inputs) ```""" # Use X_CLIP model's config for some fields (if specified) instead of those of vision & text components. 
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict batch_size, num_frames, num_channels, height, width = pixel_values.shape pixel_values = pixel_values.reshape(-1, num_channels, height, width) vision_outputs = self.vision_model( pixel_values=pixel_values, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) video_embeds = vision_outputs[1] video_embeds = self.visual_projection(video_embeds) cls_features = video_embeds.view(batch_size, num_frames, -1) mit_outputs = self.mit( cls_features, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) video_embeds = mit_outputs[1] return video_embeds @add_start_docstrings_to_model_forward(X_CLIP_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=XCLIPOutput, config_class=XCLIPConfig) def forward( self, input_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, return_loss: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, return_dict: Optional[bool] = None, ) -> Union[Tuple, XCLIPOutput]: r""" Returns: Examples: ```python >>> import av >>> import torch >>> import numpy as np >>> from transformers import AutoProcessor, AutoModel >>> from huggingface_hub import hf_hub_download >>> np.random.seed(0) >>> def read_video_pyav(container, indices): ... ''' ... Decode the video with PyAV decoder. ... Args: ... container (`av.container.input.InputContainer`): PyAV container. ... indices (`List[int]`): List of frame indices to decode. ... Returns: ... result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3). ... ''' ... frames = [] ... container.seek(0) ... start_index = indices[0] ... end_index = indices[-1] ... for i, frame in enumerate(container.decode(video=0)): ... if i > end_index: ... break ... if i >= start_index and i in indices: ... frames.append(frame) ... return np.stack([x.to_ndarray(format="rgb24") for x in frames]) >>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len): ... ''' ... Sample a given number of frame indices from the video. ... Args: ... clip_len (`int`): Total number of frames to sample. ... frame_sample_rate (`int`): Sample every n-th frame. ... seg_len (`int`): Maximum allowed index of sample's last frame. ... Returns: ... indices (`List[int]`): List of sampled frame indices ... ''' ... converted_len = int(clip_len * frame_sample_rate) ... end_idx = np.random.randint(converted_len, seg_len) ... start_idx = end_idx - converted_len ... indices = np.linspace(start_idx, end_idx, num=clip_len) ... indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64) ... return indices >>> # video clip consists of 300 frames (10 seconds at 30 FPS) >>> file_path = hf_hub_download( ... repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset" ... 
) >>> container = av.open(file_path) >>> # sample 8 frames >>> indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames) >>> video = read_video_pyav(container, indices) >>> processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32") >>> model = AutoModel.from_pretrained("microsoft/xclip-base-patch32") >>> inputs = processor( ... text=["playing sports", "eating spaghetti", "go shopping"], ... videos=list(video), ... return_tensors="pt", ... padding=True, ... ) >>> # forward pass >>> with torch.no_grad(): ... outputs = model(**inputs) >>> logits_per_video = outputs.logits_per_video # this is the video-text similarity score >>> probs = logits_per_video.softmax(dim=1) # we can take the softmax to get the label probabilities >>> print(probs) tensor([[1.9496e-04, 9.9960e-01, 2.0825e-04]]) ```""" # Use X_CLIP model's config for some fields (if specified) instead of those of vision & text components. output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict batch_size, num_frames, num_channels, height, width = pixel_values.shape pixel_values = pixel_values.reshape(-1, num_channels, height, width) vision_outputs = self.vision_model( pixel_values=pixel_values, output_attentions=output_attentions, output_hidden_states=output_hidden_states, interpolate_pos_encoding=interpolate_pos_encoding, return_dict=return_dict, ) video_embeds = vision_outputs[1] video_embeds = self.visual_projection(video_embeds) cls_features = video_embeds.view(batch_size, num_frames, -1) mit_outputs = self.mit( cls_features, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) video_embeds = mit_outputs[1] img_features = vision_outputs[0][:, 1:, :] img_features = self.prompts_visual_layernorm(img_features) img_features = img_features @ self.prompts_visual_projection img_features = img_features.view(batch_size, num_frames, -1, video_embeds.shape[-1]) img_features = img_features.mean(dim=1, keepdim=False) text_outputs = self.text_model( input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) text_embeds = text_outputs[1] text_embeds = self.text_projection(text_embeds) text_embeds = text_embeds.unsqueeze(0).expand(batch_size, -1, -1) text_embeds = text_embeds + self.prompts_generator(text_embeds, img_features) # normalized features video_embeds = video_embeds / video_embeds.norm(p=2, dim=-1, keepdim=True) text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True) # cosine similarity as logits logit_scale = self.logit_scale.exp() logits_per_video = torch.einsum("bd,bkd->bk", video_embeds, logit_scale * text_embeds) logits_per_text = logits_per_video.T loss = None if return_loss: loss = x_clip_loss(logits_per_text) if not return_dict: output = (logits_per_video, logits_per_text, text_embeds, video_embeds, text_outputs, vision_outputs) return ((loss,) + output) if loss is not None else output return XCLIPOutput( loss=loss, logits_per_video=logits_per_video, logits_per_text=logits_per_text, text_embeds=text_embeds, video_embeds=video_embeds, text_model_output=text_outputs, vision_model_output=vision_outputs, 
mit_output=mit_outputs, ) __all__ = ["XCLIPModel", "XCLIPPreTrainedModel", "XCLIPTextModel", "XCLIPVisionModel"]
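# --- Editor's note (illustrative addition, not part of the original module) ---
# The forward pass above fuses per-frame CLS features through the multiframe
# integration transformer and then conditions the text embeddings on visual
# prompts. The helper below is a minimal sketch of the simpler path that skips
# the prompt conditioning: it projects text and video separately via
# `get_text_features` / `get_video_features` and compares them with a plain
# cosine similarity. The checkpoint name is an assumption, and `video` is
# assumed to be a list of RGB frames (np.ndarray) prepared as in the docstring
# examples above.
def _xclip_similarity_sketch(video, texts=("playing sports", "eating spaghetti")):
    import torch
    from transformers import AutoModel, AutoProcessor

    processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")  # assumed checkpoint
    model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")

    text_inputs = processor(text=list(texts), padding=True, return_tensors="pt")
    video_inputs = processor(videos=list(video), return_tensors="pt")

    with torch.no_grad():
        text_embeds = model.get_text_features(**text_inputs)      # (num_texts, projection_dim)
        video_embeds = model.get_video_features(**video_inputs)   # (num_videos, projection_dim)

    # Normalize and compare, mirroring the normalization performed in XCLIPModel.forward;
    # note the full forward pass additionally conditions text on visual prompts.
    text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True)
    video_embeds = video_embeds / video_embeds.norm(p=2, dim=-1, keepdim=True)
    return video_embeds @ text_embeds.T  # (num_videos, num_texts) cosine similarities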
transformers/src/transformers/models/x_clip/modeling_x_clip.py/0
{ "file_path": "transformers/src/transformers/models/x_clip/modeling_x_clip.py", "repo_id": "transformers", "token_count": 31639 }
import os from functools import partial, reduce from typing import TYPE_CHECKING, Callable, Dict, Optional, Tuple, Type, Union import transformers from .. import PretrainedConfig, is_tf_available, is_torch_available from ..utils import TF2_WEIGHTS_NAME, WEIGHTS_NAME, logging from .config import OnnxConfig if TYPE_CHECKING: from transformers import PreTrainedModel, TFPreTrainedModel logger = logging.get_logger(__name__) # pylint: disable=invalid-name if is_torch_available(): from transformers.models.auto import ( AutoModel, AutoModelForCausalLM, AutoModelForImageClassification, AutoModelForImageSegmentation, AutoModelForMaskedImageModeling, AutoModelForMaskedLM, AutoModelForMultipleChoice, AutoModelForObjectDetection, AutoModelForQuestionAnswering, AutoModelForSemanticSegmentation, AutoModelForSeq2SeqLM, AutoModelForSequenceClassification, AutoModelForSpeechSeq2Seq, AutoModelForTokenClassification, AutoModelForVision2Seq, ) if is_tf_available(): from transformers.models.auto import ( TFAutoModel, TFAutoModelForCausalLM, TFAutoModelForMaskedLM, TFAutoModelForMultipleChoice, TFAutoModelForQuestionAnswering, TFAutoModelForSemanticSegmentation, TFAutoModelForSeq2SeqLM, TFAutoModelForSequenceClassification, TFAutoModelForTokenClassification, ) if not is_torch_available() and not is_tf_available(): logger.warning( "The ONNX export features are only supported for PyTorch or TensorFlow. You will not be able to export models" " without one of these libraries installed." ) def supported_features_mapping( *supported_features: str, onnx_config_cls: str = None ) -> Dict[str, Callable[[PretrainedConfig], OnnxConfig]]: """ Generate the mapping between supported the features and their corresponding OnnxConfig for a given model. Args: *supported_features: The names of the supported features. onnx_config_cls: The OnnxConfig full name corresponding to the model. Returns: The dictionary mapping a feature to an OnnxConfig constructor. 
""" if onnx_config_cls is None: raise ValueError("A OnnxConfig class must be provided") config_cls = transformers for attr_name in onnx_config_cls.split("."): config_cls = getattr(config_cls, attr_name) mapping = {} for feature in supported_features: if "-with-past" in feature: task = feature.replace("-with-past", "") mapping[feature] = partial(config_cls.with_past, task=task) else: mapping[feature] = partial(config_cls.from_model_config, task=feature) return mapping class FeaturesManager: _TASKS_TO_AUTOMODELS = {} _TASKS_TO_TF_AUTOMODELS = {} if is_torch_available(): _TASKS_TO_AUTOMODELS = { "default": AutoModel, "masked-lm": AutoModelForMaskedLM, "causal-lm": AutoModelForCausalLM, "seq2seq-lm": AutoModelForSeq2SeqLM, "sequence-classification": AutoModelForSequenceClassification, "token-classification": AutoModelForTokenClassification, "multiple-choice": AutoModelForMultipleChoice, "object-detection": AutoModelForObjectDetection, "question-answering": AutoModelForQuestionAnswering, "image-classification": AutoModelForImageClassification, "image-segmentation": AutoModelForImageSegmentation, "masked-im": AutoModelForMaskedImageModeling, "semantic-segmentation": AutoModelForSemanticSegmentation, "vision2seq-lm": AutoModelForVision2Seq, "speech2seq-lm": AutoModelForSpeechSeq2Seq, } if is_tf_available(): _TASKS_TO_TF_AUTOMODELS = { "default": TFAutoModel, "masked-lm": TFAutoModelForMaskedLM, "causal-lm": TFAutoModelForCausalLM, "seq2seq-lm": TFAutoModelForSeq2SeqLM, "sequence-classification": TFAutoModelForSequenceClassification, "token-classification": TFAutoModelForTokenClassification, "multiple-choice": TFAutoModelForMultipleChoice, "question-answering": TFAutoModelForQuestionAnswering, "semantic-segmentation": TFAutoModelForSemanticSegmentation, } # Set of model topologies we support associated to the features supported by each topology and the factory _SUPPORTED_MODEL_TYPE = { "albert": supported_features_mapping( "default", "masked-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.albert.AlbertOnnxConfig", ), "bart": supported_features_mapping( "default", "default-with-past", "causal-lm", "causal-lm-with-past", "seq2seq-lm", "seq2seq-lm-with-past", "sequence-classification", "question-answering", onnx_config_cls="models.bart.BartOnnxConfig", ), # BEiT cannot be used with the masked image modeling autoclass, so this feature is excluded here "beit": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.beit.BeitOnnxConfig" ), "bert": supported_features_mapping( "default", "masked-lm", "causal-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.bert.BertOnnxConfig", ), "big-bird": supported_features_mapping( "default", "masked-lm", "causal-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.big_bird.BigBirdOnnxConfig", ), "bigbird-pegasus": supported_features_mapping( "default", "default-with-past", "causal-lm", "causal-lm-with-past", "seq2seq-lm", "seq2seq-lm-with-past", "sequence-classification", "question-answering", onnx_config_cls="models.bigbird_pegasus.BigBirdPegasusOnnxConfig", ), "blenderbot": supported_features_mapping( "default", "default-with-past", "causal-lm", "causal-lm-with-past", "seq2seq-lm", "seq2seq-lm-with-past", onnx_config_cls="models.blenderbot.BlenderbotOnnxConfig", ), "blenderbot-small": supported_features_mapping( "default", 
"default-with-past", "causal-lm", "causal-lm-with-past", "seq2seq-lm", "seq2seq-lm-with-past", onnx_config_cls="models.blenderbot_small.BlenderbotSmallOnnxConfig", ), "bloom": supported_features_mapping( "default", "default-with-past", "causal-lm", "causal-lm-with-past", "sequence-classification", "token-classification", onnx_config_cls="models.bloom.BloomOnnxConfig", ), "camembert": supported_features_mapping( "default", "masked-lm", "causal-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.camembert.CamembertOnnxConfig", ), "clip": supported_features_mapping( "default", onnx_config_cls="models.clip.CLIPOnnxConfig", ), "codegen": supported_features_mapping( "default", "causal-lm", onnx_config_cls="models.codegen.CodeGenOnnxConfig", ), "convbert": supported_features_mapping( "default", "masked-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.convbert.ConvBertOnnxConfig", ), "convnext": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.convnext.ConvNextOnnxConfig", ), "data2vec-text": supported_features_mapping( "default", "masked-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.data2vec.Data2VecTextOnnxConfig", ), "data2vec-vision": supported_features_mapping( "default", "image-classification", # ONNX doesn't support `adaptive_avg_pool2d` yet # "semantic-segmentation", onnx_config_cls="models.data2vec.Data2VecVisionOnnxConfig", ), "deberta": supported_features_mapping( "default", "masked-lm", "sequence-classification", "token-classification", "question-answering", onnx_config_cls="models.deberta.DebertaOnnxConfig", ), "deberta-v2": supported_features_mapping( "default", "masked-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.deberta_v2.DebertaV2OnnxConfig", ), "deit": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.deit.DeiTOnnxConfig" ), "detr": supported_features_mapping( "default", "object-detection", "image-segmentation", onnx_config_cls="models.detr.DetrOnnxConfig", ), "distilbert": supported_features_mapping( "default", "masked-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.distilbert.DistilBertOnnxConfig", ), "electra": supported_features_mapping( "default", "masked-lm", "causal-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.electra.ElectraOnnxConfig", ), "flaubert": supported_features_mapping( "default", "masked-lm", "causal-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.flaubert.FlaubertOnnxConfig", ), "gpt2": supported_features_mapping( "default", "default-with-past", "causal-lm", "causal-lm-with-past", "sequence-classification", "token-classification", onnx_config_cls="models.gpt2.GPT2OnnxConfig", ), "gptj": supported_features_mapping( "default", "default-with-past", "causal-lm", "causal-lm-with-past", "question-answering", "sequence-classification", onnx_config_cls="models.gptj.GPTJOnnxConfig", ), "gpt-neo": supported_features_mapping( "default", "default-with-past", "causal-lm", "causal-lm-with-past", "sequence-classification", onnx_config_cls="models.gpt_neo.GPTNeoOnnxConfig", ), "groupvit": 
supported_features_mapping( "default", onnx_config_cls="models.groupvit.GroupViTOnnxConfig", ), "ibert": supported_features_mapping( "default", "masked-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.ibert.IBertOnnxConfig", ), "imagegpt": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.imagegpt.ImageGPTOnnxConfig" ), "layoutlm": supported_features_mapping( "default", "masked-lm", "sequence-classification", "token-classification", onnx_config_cls="models.layoutlm.LayoutLMOnnxConfig", ), "layoutlmv3": supported_features_mapping( "default", "question-answering", "sequence-classification", "token-classification", onnx_config_cls="models.layoutlmv3.LayoutLMv3OnnxConfig", ), "levit": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.levit.LevitOnnxConfig" ), "longt5": supported_features_mapping( "default", "default-with-past", "seq2seq-lm", "seq2seq-lm-with-past", onnx_config_cls="models.longt5.LongT5OnnxConfig", ), "longformer": supported_features_mapping( "default", "masked-lm", "multiple-choice", "question-answering", "sequence-classification", "token-classification", onnx_config_cls="models.longformer.LongformerOnnxConfig", ), "marian": supported_features_mapping( "default", "default-with-past", "seq2seq-lm", "seq2seq-lm-with-past", "causal-lm", "causal-lm-with-past", onnx_config_cls="models.marian.MarianOnnxConfig", ), "mbart": supported_features_mapping( "default", "default-with-past", "causal-lm", "causal-lm-with-past", "seq2seq-lm", "seq2seq-lm-with-past", "sequence-classification", "question-answering", onnx_config_cls="models.mbart.MBartOnnxConfig", ), "mobilebert": supported_features_mapping( "default", "masked-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.mobilebert.MobileBertOnnxConfig", ), "mobilenet-v1": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.mobilenet_v1.MobileNetV1OnnxConfig", ), "mobilenet-v2": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.mobilenet_v2.MobileNetV2OnnxConfig", ), "mobilevit": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.mobilevit.MobileViTOnnxConfig", ), "mt5": supported_features_mapping( "default", "default-with-past", "seq2seq-lm", "seq2seq-lm-with-past", onnx_config_cls="models.mt5.MT5OnnxConfig", ), "m2m-100": supported_features_mapping( "default", "default-with-past", "seq2seq-lm", "seq2seq-lm-with-past", onnx_config_cls="models.m2m_100.M2M100OnnxConfig", ), "owlvit": supported_features_mapping( "default", onnx_config_cls="models.owlvit.OwlViTOnnxConfig", ), "perceiver": supported_features_mapping( "image-classification", "masked-lm", "sequence-classification", onnx_config_cls="models.perceiver.PerceiverOnnxConfig", ), "poolformer": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.poolformer.PoolFormerOnnxConfig" ), "rembert": supported_features_mapping( "default", "masked-lm", "causal-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.rembert.RemBertOnnxConfig", ), "resnet": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.resnet.ResNetOnnxConfig", ), "roberta": supported_features_mapping( "default", "masked-lm", "causal-lm", "sequence-classification", 
"multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.roberta.RobertaOnnxConfig", ), "roformer": supported_features_mapping( "default", "masked-lm", "causal-lm", "sequence-classification", "token-classification", "multiple-choice", "question-answering", "token-classification", onnx_config_cls="models.roformer.RoFormerOnnxConfig", ), "segformer": supported_features_mapping( "default", "image-classification", "semantic-segmentation", onnx_config_cls="models.segformer.SegformerOnnxConfig", ), "squeezebert": supported_features_mapping( "default", "masked-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.squeezebert.SqueezeBertOnnxConfig", ), "swin": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.swin.SwinOnnxConfig" ), "t5": supported_features_mapping( "default", "default-with-past", "seq2seq-lm", "seq2seq-lm-with-past", onnx_config_cls="models.t5.T5OnnxConfig", ), "vision-encoder-decoder": supported_features_mapping( "vision2seq-lm", onnx_config_cls="models.vision_encoder_decoder.VisionEncoderDecoderOnnxConfig" ), "vit": supported_features_mapping( "default", "image-classification", onnx_config_cls="models.vit.ViTOnnxConfig" ), "whisper": supported_features_mapping( "default", "default-with-past", "speech2seq-lm", "speech2seq-lm-with-past", onnx_config_cls="models.whisper.WhisperOnnxConfig", ), "xlm": supported_features_mapping( "default", "masked-lm", "causal-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.xlm.XLMOnnxConfig", ), "xlm-roberta": supported_features_mapping( "default", "masked-lm", "causal-lm", "sequence-classification", "multiple-choice", "token-classification", "question-answering", onnx_config_cls="models.xlm_roberta.XLMRobertaOnnxConfig", ), "yolos": supported_features_mapping( "default", "object-detection", onnx_config_cls="models.yolos.YolosOnnxConfig", ), } AVAILABLE_FEATURES = sorted(reduce(lambda s1, s2: s1 | s2, (v.keys() for v in _SUPPORTED_MODEL_TYPE.values()))) @staticmethod def get_supported_features_for_model_type( model_type: str, model_name: Optional[str] = None ) -> Dict[str, Callable[[PretrainedConfig], OnnxConfig]]: """ Tries to retrieve the feature -> OnnxConfig constructor map from the model type. Args: model_type (`str`): The model type to retrieve the supported features for. model_name (`str`, *optional*): The name attribute of the model object, only used for the exception message. Returns: The dictionary mapping each feature to a corresponding OnnxConfig constructor. """ model_type = model_type.lower() if model_type not in FeaturesManager._SUPPORTED_MODEL_TYPE: model_type_and_model_name = f"{model_type} ({model_name})" if model_name else model_type raise KeyError( f"{model_type_and_model_name} is not supported yet. " f"Only {list(FeaturesManager._SUPPORTED_MODEL_TYPE.keys())} are supported. " f"If you want to support {model_type} please propose a PR or open up an issue." ) return FeaturesManager._SUPPORTED_MODEL_TYPE[model_type] @staticmethod def feature_to_task(feature: str) -> str: return feature.replace("-with-past", "") @staticmethod def _validate_framework_choice(framework: str): """ Validates if the framework requested for the export is both correct and available, otherwise throws an exception. 
""" if framework not in ["pt", "tf"]: raise ValueError( f"Only two frameworks are supported for ONNX export: pt or tf, but {framework} was provided." ) elif framework == "pt" and not is_torch_available(): raise RuntimeError("Cannot export model to ONNX using PyTorch because no PyTorch package was found.") elif framework == "tf" and not is_tf_available(): raise RuntimeError("Cannot export model to ONNX using TensorFlow because no TensorFlow package was found.") @staticmethod def get_model_class_for_feature(feature: str, framework: str = "pt") -> Type: """ Attempts to retrieve an AutoModel class from a feature name. Args: feature (`str`): The feature required. framework (`str`, *optional*, defaults to `"pt"`): The framework to use for the export. Returns: The AutoModel class corresponding to the feature. """ task = FeaturesManager.feature_to_task(feature) FeaturesManager._validate_framework_choice(framework) if framework == "pt": task_to_automodel = FeaturesManager._TASKS_TO_AUTOMODELS else: task_to_automodel = FeaturesManager._TASKS_TO_TF_AUTOMODELS if task not in task_to_automodel: raise KeyError( f"Unknown task: {feature}. Possible values are {list(FeaturesManager._TASKS_TO_AUTOMODELS.values())}" ) return task_to_automodel[task] @staticmethod def determine_framework(model: str, framework: str = None) -> str: """ Determines the framework to use for the export. The priority is in the following order: 1. User input via `framework`. 2. If local checkpoint is provided, use the same framework as the checkpoint. 3. Available framework in environment, with priority given to PyTorch Args: model (`str`): The name of the model to export. framework (`str`, *optional*, defaults to `None`): The framework to use for the export. See above for priority if none provided. Returns: The framework to use for the export. """ if framework is not None: return framework framework_map = {"pt": "PyTorch", "tf": "TensorFlow"} exporter_map = {"pt": "torch", "tf": "tf2onnx"} if os.path.isdir(model): if os.path.isfile(os.path.join(model, WEIGHTS_NAME)): framework = "pt" elif os.path.isfile(os.path.join(model, TF2_WEIGHTS_NAME)): framework = "tf" else: raise FileNotFoundError( "Cannot determine framework from given checkpoint location." f" There should be a {WEIGHTS_NAME} for PyTorch" f" or {TF2_WEIGHTS_NAME} for TensorFlow." ) logger.info(f"Local {framework_map[framework]} model found.") else: if is_torch_available(): framework = "pt" elif is_tf_available(): framework = "tf" else: raise EnvironmentError("Neither PyTorch nor TensorFlow found in environment. Cannot export to ONNX.") logger.info(f"Framework not requested. Using {exporter_map[framework]} to export to ONNX.") return framework @staticmethod def get_model_from_feature( feature: str, model: str, framework: str = None, cache_dir: str = None ) -> Union["PreTrainedModel", "TFPreTrainedModel"]: """ Attempts to retrieve a model from a model's name and the feature to be enabled. Args: feature (`str`): The feature required. model (`str`): The name of the model to export. framework (`str`, *optional*, defaults to `None`): The framework to use for the export. See `FeaturesManager.determine_framework` for the priority should none be provided. Returns: The instance of the model. 
""" framework = FeaturesManager.determine_framework(model, framework) model_class = FeaturesManager.get_model_class_for_feature(feature, framework) try: model = model_class.from_pretrained(model, cache_dir=cache_dir) except OSError: if framework == "pt": logger.info("Loading TensorFlow model in PyTorch before exporting to ONNX.") model = model_class.from_pretrained(model, from_tf=True, cache_dir=cache_dir) else: logger.info("Loading PyTorch model in TensorFlow before exporting to ONNX.") model = model_class.from_pretrained(model, from_pt=True, cache_dir=cache_dir) return model @staticmethod def check_supported_model_or_raise( model: Union["PreTrainedModel", "TFPreTrainedModel"], feature: str = "default" ) -> Tuple[str, Callable]: """ Check whether or not the model has the requested features. Args: model: The model to export. feature: The name of the feature to check if it is available. Returns: (str) The type of the model (OnnxConfig) The OnnxConfig instance holding the model export properties. """ model_type = model.config.model_type.replace("_", "-") model_name = getattr(model, "name", "") model_features = FeaturesManager.get_supported_features_for_model_type(model_type, model_name=model_name) if feature not in model_features: raise ValueError( f"{model.config.model_type} doesn't support feature {feature}. Supported values are: {model_features}" ) return model.config.model_type, FeaturesManager._SUPPORTED_MODEL_TYPE[model_type][feature] def get_config(model_type: str, feature: str) -> OnnxConfig: """ Gets the OnnxConfig for a model_type and feature combination. Args: model_type (`str`): The model type to retrieve the config for. feature (`str`): The feature to retrieve the config for. Returns: `OnnxConfig`: config for the combination """ return FeaturesManager._SUPPORTED_MODEL_TYPE[model_type][feature]
transformers/src/transformers/onnx/features.py/0
{ "file_path": "transformers/src/transformers/onnx/features.py", "repo_id": "transformers", "token_count": 13911 }
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import enum from typing import Dict, List, Optional, Union from ..processing_utils import ProcessingKwargs, Unpack from ..utils import ( add_end_docstrings, is_torch_available, is_vision_available, logging, requires_backends, ) from .base import Pipeline, build_pipeline_init_args if is_vision_available(): from PIL import Image from ..image_utils import load_images, valid_images if is_torch_available(): from ..models.auto.modeling_auto import MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES from .pt_utils import KeyDataset logger = logging.get_logger(__name__) IMAGE_TOKEN = "<image>" class ReturnType(enum.Enum): TENSORS = 0 NEW_TEXT = 1 FULL_TEXT = 2 class Chat: """This class is intended to just be used internally in this pipeline and not exposed to users. We convert chats to this format because the rest of the pipeline code tends to assume that lists of messages are actually a batch of samples rather than messages in the same conversation.""" def __init__(self, messages: Dict, images: Union[str, List[str], "Image.Image", List["Image.Image"]]): for message in messages: if not ("role" in message and "content" in message): raise ValueError("When passing chat dicts as input, each dict must have a 'role' and 'content' key.") images = retrieve_images_in_messages(messages, images) self.messages = messages self.images = images def retrieve_images_in_messages( messages: dict, images: Optional[Union[str, List[str], "Image.Image", List["Image.Image"]]] ): """ Retrieve and combine images from the chat and the images passed as input. """ if images is None: images = [] idx_images = 0 retrieved_images = [] for message in messages: for content in message["content"]: if isinstance(content, dict): if content.get("type") == "image": for key in ["image", "url", "path", "base64"]: if key in content: retrieved_images.append(content[key]) break else: if idx_images < len(images): retrieved_images.append(images[idx_images]) idx_images += 1 else: raise ValueError( "The number of images in the chat messages should be the same as the number of images passed to the pipeline." ) # Add support for OpenAI/TGI chat format elif content.get("type") == "image_url": if isinstance(content.get("image_url"), dict) and "url" in content["image_url"]: retrieved_images.append(content["image_url"]["url"]) # Rewrite content to be in the Transformers chat format content["type"] = "image" content["image"] = content["image_url"]["url"] del content["image_url"] else: raise ValueError( "Wrong format for 'image_url' content type. The content should have an 'image_url' dict with a 'url' key." ) # The number of images passed should be consistent with the number of images in the chat without an image key if idx_images != len(images): raise ValueError( "The number of images in the chat messages should be the same as the number of images passed to the pipeline." 
) return retrieved_images @add_end_docstrings(build_pipeline_init_args(has_processor=True)) class ImageTextToTextPipeline(Pipeline): """ Image-text-to-text pipeline using an `AutoModelForImageTextToText`. This pipeline generates text given an image and text. When the underlying model is a conversational model, it can also accept one or more chats, in which case the pipeline will operate in chat mode and will continue the chat(s) by adding its response(s). Each chat takes the form of a list of dicts, where each dict contains "role" and "content" keys. Example: ```python >>> from transformers import pipeline >>> pipe = pipeline(task="image-text-to-text", model="Salesforce/blip-image-captioning-base") >>> pipe("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", text="A photo of") [{'generated_text': 'a photo of two birds'}] ``` ```python >>> from transformers import pipeline >>> pipe = pipeline("image-text-to-text", model="llava-hf/llava-interleave-qwen-0.5b-hf") >>> messages = [ >>> { >>> "role": "user", >>> "content": [ >>> { >>> "type": "image", >>> "url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", >>> }, >>> {"type": "text", "text": "Describe this image."}, >>> ], >>> }, >>> { >>> "role": "assistant", >>> "content": [ >>> {"type": "text", "text": "There is a dog and"}, >>> ], >>> }, >>> ] >>> pipe(text=messages, max_new_tokens=20, return_full_text=False) [{'input_text': [{'role': 'user', 'content': [{'type': 'image', 'url': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'}, {'type': 'text', 'text': 'Describe this image.'}]}, {'role': 'assistant', 'content': [{'type': 'text', 'text': 'There is a dog and'}]}], 'generated_text': ' a person in the image. The dog is sitting on the sand, and the person is sitting on'}] ``` Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial) This image-text to text pipeline can currently be loaded from pipeline() using the following task identifier: "image-text-to-text". See the list of available models on [huggingface.co/models](https://huggingface.co/models?pipeline_tag=image-text-to-text). 
""" _load_processor = True _load_image_processor = False _load_feature_extractor = False _load_tokenizer = False def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) requires_backends(self, "vision") self.check_model_type(MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES) def _sanitize_parameters( self, max_new_tokens=None, generate_kwargs=None, timeout=None, return_full_text=None, return_tensors=None, return_type=None, continue_final_message=None, **kwargs: Unpack[ProcessingKwargs], ): forward_kwargs = {} preprocess_params = {} postprocess_params = {} preprocess_params["processing_kwargs"] = kwargs if timeout is not None: preprocess_params["timeout"] = timeout if continue_final_message is not None: preprocess_params["continue_final_message"] = continue_final_message if generate_kwargs is not None: forward_kwargs["generate_kwargs"] = generate_kwargs if max_new_tokens is not None: if "generate_kwargs" not in forward_kwargs: forward_kwargs["generate_kwargs"] = {} if "max_new_tokens" in forward_kwargs["generate_kwargs"]: raise ValueError( "'max_new_tokens' is defined twice, once in 'generate_kwargs' and once as a direct parameter," " please use only one" ) forward_kwargs["generate_kwargs"]["max_new_tokens"] = max_new_tokens if return_full_text is not None and return_type is None: if return_tensors is not None: raise ValueError("`return_full_text` is mutually exclusive with `return_tensors`") return_type = ReturnType.FULL_TEXT if return_full_text else ReturnType.NEW_TEXT if return_tensors is not None and return_type is None: return_type = ReturnType.TENSORS if return_type is not None: postprocess_params["return_type"] = return_type if continue_final_message is not None: postprocess_params["continue_final_message"] = continue_final_message return preprocess_params, forward_kwargs, postprocess_params def __call__( self, images: Optional[ Union[str, List[str], List[List[str]], "Image.Image", List["Image.Image"], List[List["Image.Image"]]] ] = None, text: Optional[Union[str, List[str], List[dict]]] = None, **kwargs, ): """ Generate a text given text and the image(s) passed as inputs. Args: images (`str`, `List[str]`, `PIL.Image or `List[PIL.Image]`): The pipeline handles three types of images: - A string containing a HTTP(s) link pointing to an image - A string containing a local path to an image - An image loaded in PIL directly The pipeline accepts either a single image or a batch of images. text (str, List[str], `List[Dict[str, Union[str, PIL.Image]]]`): The text to be used for generation. If a list of strings is passed, the length of the list should be the same as the number of images. Text can also follow the chat format: a list of dictionaries where each dictionary represents a message in a conversation. Each dictionary should have two keys: 'role' and 'content'. 'role' should be one of 'user', 'system' or 'assistant'. 'content' should be a list of dictionary containing the text of the message and the type of the message. The type of the message can be either 'text' or 'image'. If the type is 'image', no text is needed. return_tensors (`bool`, *optional*, defaults to `False`): Returns the tensors of predictions (as token indices) in the outputs. If set to `True`, the decoded text is not returned. return_text (`bool`, *optional*): Returns the decoded texts in the outputs. return_full_text (`bool`, *optional*, defaults to `True`): If set to `False` only added text is returned, otherwise the full text is returned. Cannot be specified at the same time as `return_text`. 
continue_final_message( `bool`, *optional*): This indicates that you want the model to continue the last message in the input chat rather than starting a new one, allowing you to "prefill" its response. By default this is `True` when the final message in the input chat has the `assistant` role and `False` otherwise, but you can manually override that behaviour by setting this flag. Return: A list or a list of list of `dict`: Each result comes as a dictionary with the following key (cannot return a combination of both `generated_text` and `generated_token_ids`): - **generated_text** (`str`, present when `return_text=True`) -- The generated text. - **generated_token_ids** (`torch.Tensor`, present when `return_tensors=True`) -- The token ids of the generated text. - **input_text** (`str`) -- The input text. """ if images is None and text is None: raise ValueError("You must at least provide either text or images.") if images is not None and text is None and not valid_images(images): """ Supports the following format - {"image": image, "text": text} - [{"image": image, "text": text}] - Generator and datasets This is a common pattern in other multimodal pipelines, so we support it here as well. """ return super().__call__(images, **kwargs) if isinstance(text, (list, tuple, KeyDataset)) and isinstance(text[0], (list, tuple, dict)): # We have one or more prompts in list-of-dicts format, so this is chat mode if isinstance(text[0], dict): return super().__call__(Chat(text, images), **kwargs) else: if images is None: images = [None] * len(text) chats = [Chat(chat, image) for chat, image in zip(text, images)] # 🐈 🐈 🐈 return super().__call__(chats, **kwargs) # encourage the user to use the chat format if supported if getattr(self.processor, "chat_template", None) is not None: logger.warning_once( "The input data was not formatted as a chat with dicts containing 'role' and 'content' keys, even though this model supports chat. " "Consider using the chat format for better results. 
For more information, see https://huggingface.co/docs/transformers/en/chat_templating" ) # support text only generation if images is None: return super().__call__(text, **kwargs) if text is None: raise ValueError("You must provide text for this pipeline.") return super().__call__({"images": images, "text": text}, **kwargs) def preprocess(self, inputs=None, timeout=None, continue_final_message=None, processing_kwargs=None): # In case we only have text inputs if isinstance(inputs, (list, tuple, str)): images = None text = inputs inputs_text = inputs else: if isinstance(inputs, Chat): # If the user passes a chat that ends in an assistant message, we treat it as a prefill by default # because very few models support multiple separate, consecutive assistant messages if continue_final_message is None: continue_final_message = inputs.messages[-1]["role"] == "assistant" text = self.processor.apply_chat_template( inputs.messages, add_generation_prompt=not continue_final_message, continue_final_message=continue_final_message, return_tensors=self.framework, ) inputs_text = inputs images = inputs.images else: text = inputs["text"] inputs_text = inputs["text"] images = inputs["images"] images = load_images(images) # if batched text inputs, we set padding to True unless specified otherwise if isinstance(text, (list, tuple)) and len(text) > 1: processing_kwargs.setdefault("padding", True) model_inputs = self.processor( images=images, text=text, return_tensors=self.framework, legacy=False, **processing_kwargs ).to(dtype=self.torch_dtype) model_inputs["text"] = inputs_text return model_inputs def _forward(self, model_inputs, generate_kwargs=None): generate_kwargs = {} if generate_kwargs is None else generate_kwargs prompt_text = model_inputs.pop("text") input_ids = ( model_inputs["input_ids"] if "input_ids" in model_inputs else model_inputs["decoder_input_ids"] ) # for decoder-only models generated_sequence = self.model.generate(**model_inputs, **generate_kwargs) return {"generated_sequence": generated_sequence, "prompt_text": prompt_text, "input_ids": input_ids} def postprocess(self, model_outputs, return_type=ReturnType.FULL_TEXT, continue_final_message=None): input_texts = model_outputs["prompt_text"] input_texts = [input_texts] if isinstance(input_texts, (str, Chat)) else input_texts generated_sequence = model_outputs["generated_sequence"] input_ids = model_outputs["input_ids"] if return_type == ReturnType.TENSORS: return [ {"input_text": input_texts[i], "generated_token_ids": generated_sequence[i]} for i in range(len(input_texts)) ] # Decode inputs and outputs the same way to remove input text from generated text if present generated_texts = self.processor.post_process_image_text_to_text(generated_sequence) decoded_inputs = self.processor.post_process_image_text_to_text(input_ids) # Force consistent behavior for including the input text in the output if return_type in {ReturnType.NEW_TEXT, ReturnType.FULL_TEXT}: # Remove the input text from the generated text if the generated text starts with the input text # (accounting for the possibility of a space between the input and generated text) new_generated_texts = [] for text_generated, decoded_input in zip(generated_texts, decoded_inputs): # There can be added characters before the input text, so we need to find the beginning of the input text in the generated text index_input_text = text_generated.find(decoded_input) # Limit the search to 2 residual characters, like spaces or new lines, to avoid removing a large part of the answer if 0 <= 
index_input_text <= 2: # If the input text is found, we remove it new_generated_texts.append(text_generated[index_input_text + len(decoded_input) :]) else: new_generated_texts.append(text_generated) generated_texts = new_generated_texts if return_type == ReturnType.FULL_TEXT: full_texts = [] for prompt_text, generated_text in zip(input_texts, generated_texts): if isinstance(prompt_text, str): generated_text = prompt_text + generated_text elif isinstance(prompt_text, Chat): if continue_final_message is None: # If the user passes a chat ending in an assistant message, we treat it as a prefill by # default because very few models support multiple separate, consecutive assistant messages continue_final_message = prompt_text.messages[-1]["role"] == "assistant" if continue_final_message: # With assistant prefill, concat onto the end of the last message new_text = dict(prompt_text.messages[-1]["content"][-1].items()) new_text["text"] += generated_text generated_text = list(prompt_text.messages)[:-1] + [ { "role": prompt_text.messages[-1]["role"], "content": prompt_text.messages[-1]["content"][:-1] + [new_text], } ] else: # When we're not starting from a prefill, the output is a new assistant message generated_text = list(prompt_text.messages) + [ {"role": "assistant", "content": generated_text} ] full_texts.append(generated_text) generated_texts = full_texts records = [ { "input_text": input_text.messages if isinstance(input_text, Chat) else input_text, "generated_text": generated_text, } for input_text, generated_text in zip(input_texts, generated_texts) ] return records
transformers/src/transformers/pipelines/image_text_to_text.py/0
{ "file_path": "transformers/src/transformers/pipelines/image_text_to_text.py", "repo_id": "transformers", "token_count": 8727 }
import inspect from typing import List, Union import numpy as np from ..tokenization_utils import TruncationStrategy from ..utils import add_end_docstrings, logging from .base import ArgumentHandler, ChunkPipeline, build_pipeline_init_args logger = logging.get_logger(__name__) class ZeroShotClassificationArgumentHandler(ArgumentHandler): """ Handles arguments for zero-shot for text classification by turning each possible label into an NLI premise/hypothesis pair. """ def _parse_labels(self, labels): if isinstance(labels, str): labels = [label.strip() for label in labels.split(",") if label.strip()] return labels def __call__(self, sequences, labels, hypothesis_template): if len(labels) == 0 or len(sequences) == 0: raise ValueError("You must include at least one label and at least one sequence.") if hypothesis_template.format(labels[0]) == hypothesis_template: raise ValueError( ( 'The provided hypothesis_template "{}" was not able to be formatted with the target labels. ' "Make sure the passed template includes formatting syntax such as {{}} where the label should go." ).format(hypothesis_template) ) if isinstance(sequences, str): sequences = [sequences] sequence_pairs = [] for sequence in sequences: sequence_pairs.extend([[sequence, hypothesis_template.format(label)] for label in labels]) return sequence_pairs, sequences @add_end_docstrings(build_pipeline_init_args(has_tokenizer=True)) class ZeroShotClassificationPipeline(ChunkPipeline): """ NLI-based zero-shot classification pipeline using a `ModelForSequenceClassification` trained on NLI (natural language inference) tasks. Equivalent of `text-classification` pipelines, but these models don't require a hardcoded number of potential classes, they can be chosen at runtime. It usually means it's slower but it is **much** more flexible. Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis pair and passed to the pretrained model. Then, the logit for *entailment* is taken as the logit for the candidate label being valid. Any NLI model can be used, but the id of the *entailment* label must be included in the model config's :attr:*~transformers.PretrainedConfig.label2id*. Example: ```python >>> from transformers import pipeline >>> oracle = pipeline(model="facebook/bart-large-mnli") >>> oracle( ... "I have a problem with my iphone that needs to be resolved asap!!", ... candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"], ... ) {'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]} >>> oracle( ... "I have a problem with my iphone that needs to be resolved asap!!", ... candidate_labels=["english", "german"], ... ) {'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['english', 'german'], 'scores': [0.814, 0.186]} ``` Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial) This NLI pipeline can currently be loaded from [`pipeline`] using the following task identifier: `"zero-shot-classification"`. The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list of available models on [huggingface.co/models](https://huggingface.co/models?search=nli). 
""" def __init__(self, args_parser=ZeroShotClassificationArgumentHandler(), *args, **kwargs): self._args_parser = args_parser super().__init__(*args, **kwargs) if self.entailment_id == -1: logger.warning( "Failed to determine 'entailment' label id from the label2id mapping in the model config. Setting to " "-1. Define a descriptive label2id mapping in the model config to ensure correct outputs." ) @property def entailment_id(self): for label, ind in self.model.config.label2id.items(): if label.lower().startswith("entail"): return ind return -1 def _parse_and_tokenize( self, sequence_pairs, padding=True, add_special_tokens=True, truncation=TruncationStrategy.ONLY_FIRST, **kwargs ): """ Parse arguments and tokenize only_first so that hypothesis (label) is not truncated """ return_tensors = self.framework if self.tokenizer.pad_token is None: # Override for tokenizers not supporting padding logger.error( "Tokenizer was not supporting padding necessary for zero-shot, attempting to use " " `pad_token=eos_token`" ) self.tokenizer.pad_token = self.tokenizer.eos_token try: inputs = self.tokenizer( sequence_pairs, add_special_tokens=add_special_tokens, return_tensors=return_tensors, padding=padding, truncation=truncation, ) except Exception as e: if "too short" in str(e): # tokenizers might yell that we want to truncate # to a value that is not even reached by the input. # In that case we don't want to truncate. # It seems there's not a really better way to catch that # exception. inputs = self.tokenizer( sequence_pairs, add_special_tokens=add_special_tokens, return_tensors=return_tensors, padding=padding, truncation=TruncationStrategy.DO_NOT_TRUNCATE, ) else: raise e return inputs def _sanitize_parameters(self, **kwargs): if kwargs.get("multi_class", None) is not None: kwargs["multi_label"] = kwargs["multi_class"] logger.warning( "The `multi_class` argument has been deprecated and renamed to `multi_label`. " "`multi_class` will be removed in a future version of Transformers." ) preprocess_params = {} if "candidate_labels" in kwargs: preprocess_params["candidate_labels"] = self._args_parser._parse_labels(kwargs["candidate_labels"]) if "hypothesis_template" in kwargs: preprocess_params["hypothesis_template"] = kwargs["hypothesis_template"] postprocess_params = {} if "multi_label" in kwargs: postprocess_params["multi_label"] = kwargs["multi_label"] return preprocess_params, {}, postprocess_params def __call__( self, sequences: Union[str, List[str]], *args, **kwargs, ): """ Classify the sequence(s) given as inputs. See the [`ZeroShotClassificationPipeline`] documentation for more information. Args: sequences (`str` or `List[str]`): The sequence(s) to classify, will be truncated if the model input is too large. candidate_labels (`str` or `List[str]`): The set of possible class labels to classify each sequence into. Can be a single label, a string of comma-separated labels, or a list of labels. hypothesis_template (`str`, *optional*, defaults to `"This example is {}."`): The template used to turn each label into an NLI-style hypothesis. This template must include a {} or similar syntax for the candidate label to be inserted into the template. For example, the default template is `"This example is {}."` With the candidate label `"sports"`, this would be fed into the model like `"<cls> sequence to classify <sep> This example is sports . <sep>"`. The default template works well in many cases, but it may be worthwhile to experiment with different templates depending on the task setting. 
multi_label (`bool`, *optional*, defaults to `False`): Whether or not multiple candidate labels can be true. If `False`, the scores are normalized such that the sum of the label likelihoods for each sequence is 1. If `True`, the labels are considered independent and probabilities are normalized for each candidate by doing a softmax of the entailment score vs. the contradiction score. Return: A `dict` or a list of `dict`: Each result comes as a dictionary with the following keys: - **sequence** (`str`) -- The sequence for which this is the output. - **labels** (`List[str]`) -- The labels sorted by order of likelihood. - **scores** (`List[float]`) -- The probabilities for each of the labels. """ if len(args) == 0: pass elif len(args) == 1 and "candidate_labels" not in kwargs: kwargs["candidate_labels"] = args[0] else: raise ValueError(f"Unable to understand extra arguments {args}") return super().__call__(sequences, **kwargs) def preprocess(self, inputs, candidate_labels=None, hypothesis_template="This example is {}."): sequence_pairs, sequences = self._args_parser(inputs, candidate_labels, hypothesis_template) for i, (candidate_label, sequence_pair) in enumerate(zip(candidate_labels, sequence_pairs)): model_input = self._parse_and_tokenize([sequence_pair]) yield { "candidate_label": candidate_label, "sequence": sequences[0], "is_last": i == len(candidate_labels) - 1, **model_input, } def _forward(self, inputs): candidate_label = inputs["candidate_label"] sequence = inputs["sequence"] model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names} # `XXXForSequenceClassification` models should not use `use_cache=True` even if it's supported model_forward = self.model.forward if self.framework == "pt" else self.model.call if "use_cache" in inspect.signature(model_forward).parameters.keys(): model_inputs["use_cache"] = False outputs = self.model(**model_inputs) model_outputs = { "candidate_label": candidate_label, "sequence": sequence, "is_last": inputs["is_last"], **outputs, } return model_outputs def postprocess(self, model_outputs, multi_label=False): candidate_labels = [outputs["candidate_label"] for outputs in model_outputs] sequences = [outputs["sequence"] for outputs in model_outputs] if self.framework == "pt": logits = np.concatenate([output["logits"].float().numpy() for output in model_outputs]) else: logits = np.concatenate([output["logits"].numpy() for output in model_outputs]) N = logits.shape[0] n = len(candidate_labels) num_sequences = N // n reshaped_outputs = logits.reshape((num_sequences, n, -1)) if multi_label or len(candidate_labels) == 1: # softmax over the entailment vs. contradiction dim for each label independently entailment_id = self.entailment_id contradiction_id = -1 if entailment_id == 0 else 0 entail_contr_logits = reshaped_outputs[..., [contradiction_id, entailment_id]] scores = np.exp(entail_contr_logits) / np.exp(entail_contr_logits).sum(-1, keepdims=True) scores = scores[..., 1] else: # softmax the "entailment" logits over all candidate labels entail_logits = reshaped_outputs[..., self.entailment_id] scores = np.exp(entail_logits) / np.exp(entail_logits).sum(-1, keepdims=True) top_inds = list(reversed(scores[0].argsort())) return { "sequence": sequences[0], "labels": [candidate_labels[i] for i in top_inds], "scores": scores[0, top_inds].tolist(), }
transformers/src/transformers/pipelines/zero_shot_classification.py/0
{ "file_path": "transformers/src/transformers/pipelines/zero_shot_classification.py", "repo_id": "transformers", "token_count": 5122 }
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import importlib from typing import TYPE_CHECKING, Optional from packaging import version from .base import HfQuantizer if TYPE_CHECKING: from ..modeling_utils import PreTrainedModel from ..utils import is_auto_gptq_available, is_gptqmodel_available, is_optimum_available, is_torch_available, logging from ..utils.quantization_config import GPTQConfig, QuantizationConfigMixin if is_torch_available(): import torch logger = logging.get_logger(__name__) class GptqHfQuantizer(HfQuantizer): """ Quantizer of the GPTQ method - for GPTQ the quantizer support calibration of the model through `auto_gptq` or `gptqmodel` package. Quantization is done under the hood for users if they load a non-prequantized model. """ requires_calibration = False required_packages = ["optimum", "auto_gptq", "gptqmodel"] optimum_quantizer = None def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs): super().__init__(quantization_config, **kwargs) if not is_optimum_available(): raise ImportError("Loading a GPTQ quantized model requires optimum (`pip install optimum`)") from optimum.gptq import GPTQQuantizer self.optimum_quantizer = GPTQQuantizer.from_dict(self.quantization_config.to_dict_optimum()) def validate_environment(self, *args, **kwargs): if not is_optimum_available(): raise ImportError("Loading a GPTQ quantized model requires optimum (`pip install optimum`)") if is_auto_gptq_available() and is_gptqmodel_available(): logger.warning("Detected gptqmodel and auto-gptq, will use gptqmodel") gptq_supports_cpu = ( is_auto_gptq_available() and version.parse(importlib.metadata.version("auto-gptq")) > version.parse("0.4.2") ) or is_gptqmodel_available() if not gptq_supports_cpu and not torch.cuda.is_available(): raise RuntimeError("GPU is required to quantize or run quantize model.") elif not (is_auto_gptq_available() or is_gptqmodel_available()): raise ImportError( "Loading a GPTQ quantized model requires gptqmodel (`pip install gptqmodel`) or auto-gptq (`pip install auto-gptq`) library. " ) elif is_auto_gptq_available() and version.parse(importlib.metadata.version("auto_gptq")) < version.parse( "0.4.2" ): raise ImportError( "You need a version of auto_gptq >= 0.4.2 to use GPTQ: `pip install --upgrade auto-gptq` or use gptqmodel by `pip install gptqmodel>=1.4.3`." ) elif is_gptqmodel_available() and ( version.parse(importlib.metadata.version("gptqmodel")) < version.parse("1.4.3") or version.parse(importlib.metadata.version("optimum")) < version.parse("1.23.99") ): raise ImportError("The gptqmodel version should be >= 1.4.3, optimum version should >= 1.24.0") def update_torch_dtype(self, torch_dtype: "torch.dtype") -> "torch.dtype": if torch_dtype is None: torch_dtype = torch.float16 logger.info("Loading the model in `torch.float16`. 
To overwrite it, set `torch_dtype` manually.")
        elif torch_dtype != torch.float16:
            logger.info("We suggest you to set `torch_dtype=torch.float16` for better efficiency with GPTQ.")
        return torch_dtype

    def update_device_map(self, device_map):
        if device_map is None:
            device_map = {"": torch.device("cpu")}
        # Only auto-gptq lacks CPU support, so when gptqmodel is unavailable we move the model to the first GPU.
        if not is_gptqmodel_available() and device_map in ("cpu", {"": torch.device("cpu")}):
            device_map = {"": 0}
        return device_map

    def _process_model_before_weight_loading(self, model: "PreTrainedModel", **kwargs):
        if model.__class__.main_input_name != "input_ids":
            raise RuntimeError("We can only quantize pure text model.")

        if self.pre_quantized:
            # compat: latest optimum has gptqmodel refactor
            if version.parse(importlib.metadata.version("optimum")) <= version.parse("1.23.99"):
                model = self.optimum_quantizer.convert_model(model)
            else:
                model = self.optimum_quantizer.convert_model(model, **kwargs)

    def _process_model_after_weight_loading(self, model: "PreTrainedModel", **kwargs):
        if self.pre_quantized:
            model = self.optimum_quantizer.post_init_model(model)
        else:
            if self.quantization_config.tokenizer is None:
                self.quantization_config.tokenizer = model.name_or_path

            self.optimum_quantizer.quantize_model(model, self.quantization_config.tokenizer)
            model.config.quantization_config = GPTQConfig.from_dict(self.optimum_quantizer.to_dict())

    @property
    def is_trainable(self, model: Optional["PreTrainedModel"] = None):
        return True

    def is_serializable(self, safe_serialization=None):
        return True
transformers/src/transformers/quantizers/quantizer_gptq.py/0
{ "file_path": "transformers/src/transformers/quantizers/quantizer_gptq.py", "repo_id": "transformers", "token_count": 2224 }
# coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Tokenization classes for fast tokenizers (provided by HuggingFace's tokenizers library). For slow (python) tokenizers see tokenization_utils.py """ import copy import json import os from collections import defaultdict from typing import Any, Dict, Iterable, List, Optional, Tuple, Union import tokenizers.pre_tokenizers as pre_tokenizers_fast from tokenizers import Encoding as EncodingFast from tokenizers import Tokenizer as TokenizerFast from tokenizers.decoders import Decoder as DecoderFast from tokenizers.trainers import BpeTrainer, UnigramTrainer, WordLevelTrainer, WordPieceTrainer from .convert_slow_tokenizer import convert_slow_tokenizer from .integrations.ggml import convert_gguf_tokenizer from .modeling_gguf_pytorch_utils import load_gguf_checkpoint from .tokenization_utils import PreTrainedTokenizer from .tokenization_utils_base import ( INIT_TOKENIZER_DOCSTRING, AddedToken, BatchEncoding, PreTokenizedInput, PreTokenizedInputPair, PreTrainedTokenizerBase, SpecialTokensMixin, TextInput, TextInputPair, TruncationStrategy, ) from .utils import PaddingStrategy, add_end_docstrings, logging logger = logging.get_logger(__name__) # Fast tokenizers (provided by HuggingFace tokenizer's library) can be saved in a single file TOKENIZER_FILE = "tokenizer.json" SPECIAL_TOKENS_MAP_FILE = "special_tokens_map.json" TOKENIZER_CONFIG_FILE = "tokenizer_config.json" TIKTOKEN_VOCAB_FILE = "tokenizer.model" # Slow tokenizers have an additional added tokens files ADDED_TOKENS_FILE = "added_tokens.json" INIT_TOKENIZER_DOCSTRING += """ tokenizer_object ([`tokenizers.Tokenizer`]): A [`tokenizers.Tokenizer`] object from 🤗 tokenizers to instantiate from. See [Using tokenizers from 🤗 tokenizers](../fast_tokenizers) for more information. tokenizer_file ([`str`]): A path to a local JSON file representing a previously serialized [`tokenizers.Tokenizer`] object from 🤗 tokenizers. """ MODEL_TO_TRAINER_MAPPING = { "BPE": BpeTrainer, "Unigram": UnigramTrainer, "WordLevel": WordLevelTrainer, "WordPiece": WordPieceTrainer, } VOCAB_FILES_NAMES = {"tokenizer_file": TOKENIZER_FILE, "vocab_file": TIKTOKEN_VOCAB_FILE} @add_end_docstrings(INIT_TOKENIZER_DOCSTRING) class PreTrainedTokenizerFast(PreTrainedTokenizerBase): """ Base class for all fast tokenizers (wrapping HuggingFace tokenizers library). Inherits from [`~tokenization_utils_base.PreTrainedTokenizerBase`]. Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers, as well as adding tokens to the vocabulary. This class also contains the added tokens in a unified way on top of all tokenizers so we don't have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece...). 
""" vocab_files_names = VOCAB_FILES_NAMES slow_tokenizer_class: PreTrainedTokenizer = None def __init__(self, *args, **kwargs): tokenizer_object = kwargs.pop("tokenizer_object", None) slow_tokenizer = kwargs.pop("__slow_tokenizer", None) gguf_file = kwargs.pop("gguf_file", None) fast_tokenizer_file = kwargs.pop("tokenizer_file", None) from_slow = kwargs.pop("from_slow", False) added_tokens_decoder = kwargs.pop("added_tokens_decoder", {}) self.add_prefix_space = kwargs.get("add_prefix_space", False) if from_slow and slow_tokenizer is None and self.slow_tokenizer_class is None: raise ValueError( "Cannot instantiate this tokenizer from a slow version. If it's based on sentencepiece, make sure you " "have sentencepiece installed." ) if tokenizer_object is not None: fast_tokenizer = copy.deepcopy(tokenizer_object) elif fast_tokenizer_file is not None and not from_slow: # We have a serialization from tokenizers which let us directly build the backend fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file) elif slow_tokenizer: # We need to convert a slow tokenizer to build the backend fast_tokenizer = convert_slow_tokenizer(slow_tokenizer) elif gguf_file is not None: # We need to convert a slow tokenizer to build the backend gguf_param = load_gguf_checkpoint(kwargs.get("vocab_file")) architecture = gguf_param["config"]["model_type"] tokenizer_dict = gguf_param["tokenizer"] tokenizer_config = gguf_param["tokenizer_config"] fast_tokenizer, additional_kwargs = convert_gguf_tokenizer(architecture, tokenizer_dict) kwargs.update(tokenizer_config) if len(additional_kwargs) > 0: kwargs.update(additional_kwargs) elif self.slow_tokenizer_class is not None and slow_tokenizer is not False: # We need to create and convert a slow tokenizer to build the backend slow_tokenizer = self.slow_tokenizer_class(*args, **kwargs) fast_tokenizer = convert_slow_tokenizer(slow_tokenizer) elif not slow_tokenizer: # We tried loading a slow_tokenizer with spm and failed, try to load with tiktoken self.vocab_file = kwargs.get("vocab_file", None) self.additional_special_tokens = kwargs.get("additional_special_tokens", []) fast_tokenizer = convert_slow_tokenizer(self, from_tiktoken=True) slow_tokenizer = None else: raise ValueError( "Couldn't instantiate the backend tokenizer from one of: \n" "(1) a `tokenizers` library serialization file, \n" "(2) a slow tokenizer instance to convert or \n" "(3) an equivalent slow tokenizer class to instantiate and convert. \n" "You need to have sentencepiece or tiktoken installed to convert a slow tokenizer to a fast one." 
) self._tokenizer = fast_tokenizer if slow_tokenizer is not None: kwargs.update(slow_tokenizer.init_kwargs) self._decode_use_source_tokenizer = False _truncation = self._tokenizer.truncation if _truncation is not None: self._tokenizer.enable_truncation(**_truncation) kwargs.setdefault("max_length", _truncation["max_length"]) kwargs.setdefault("truncation_side", _truncation["direction"]) kwargs.setdefault("stride", _truncation["stride"]) kwargs.setdefault("truncation_strategy", _truncation["strategy"]) else: self._tokenizer.no_truncation() _padding = self._tokenizer.padding if _padding is not None: self._tokenizer.enable_padding(**_padding) kwargs.setdefault("pad_token", _padding["pad_token"]) kwargs.setdefault("pad_token_type_id", _padding["pad_type_id"]) kwargs.setdefault("padding_side", _padding["direction"]) kwargs.setdefault("max_length", _padding["length"]) kwargs.setdefault("pad_to_multiple_of", _padding["pad_to_multiple_of"]) # We call this after having initialized the backend tokenizer because we update it. super().__init__(**kwargs) self._tokenizer.encode_special_tokens = self.split_special_tokens added_tokens_decoder_hash = {hash(repr(token)) for token in self.added_tokens_decoder} tokens_to_add = [ token for index, token in sorted(added_tokens_decoder.items(), key=lambda x: x[0]) if hash(repr(token)) not in added_tokens_decoder_hash ] encoder = list(self.added_tokens_encoder.keys()) + [str(token) for token in tokens_to_add] # if some of the special tokens are strings, we check if we don't already have a token tokens_to_add += [ token for token in self.all_special_tokens_extended if token not in encoder and token not in tokens_to_add ] if len(tokens_to_add) > 0: tokens = [] special_tokens = self.all_special_tokens for token in tokens_to_add: is_special = ( (token.special or str(token) in special_tokens) if isinstance(token, AddedToken) else str(token) in special_tokens ) if isinstance(token, str): token = AddedToken(token, special=is_special) else: token.special = is_special tokens.append(token) if tokens: self.add_tokens(tokens) try: pre_tok_state = json.loads(self.backend_tokenizer.pre_tokenizer.__getstate__()) if pre_tok_state.get("add_prefix_space", self.add_prefix_space) != self.add_prefix_space: pre_tok_class = getattr(pre_tokenizers_fast, pre_tok_state.pop("type")) pre_tok_state["add_prefix_space"] = self.add_prefix_space self.backend_tokenizer.pre_tokenizer = pre_tok_class(**pre_tok_state) except Exception: # We'll get an error if there is no pre_tokenizer, or if it's a custom pre_tokenizer that can # not be serialized. In those cases, we just ignore the error as there's no pre_tokenizer # for which we need to update the `add_prefix_space` attribute. pass @property def is_fast(self) -> bool: return True @property def can_save_slow_tokenizer(self) -> bool: """ `bool`: Whether or not the slow tokenizer can be saved. Usually for sentencepiece based slow tokenizer, this can only be `True` if the original `"sentencepiece.model"` was not deleted. """ return True @property def vocab_size(self) -> int: """ `int`: Size of the base vocabulary (without the added tokens). """ return self._tokenizer.get_vocab_size(with_added_tokens=False) def get_vocab(self) -> Dict[str, int]: return self._tokenizer.get_vocab(with_added_tokens=True) @property def vocab(self) -> Dict[str, int]: return self.get_vocab() @property def added_tokens_encoder(self) -> Dict[str, int]: """ Returns the sorted mapping from string to index. 
The added tokens encoder is cached for performance optimisation in `self._added_tokens_encoder` for the slow tokenizers. """ return {k.content: v for v, k in sorted(self.added_tokens_decoder.items(), key=lambda item: item[0])} @property def added_tokens_decoder(self) -> Dict[int, AddedToken]: """ Returns the added tokens in the vocabulary as a dictionary of index to AddedToken. Returns: `Dict[str, int]`: The added tokens. """ return self._tokenizer.get_added_tokens_decoder() def get_added_vocab(self) -> Dict[str, int]: """ Returns the added tokens in the vocabulary as a dictionary of token to index. Returns: `Dict[str, int]`: The added tokens. """ return {k.content: v for v, k in sorted(self.added_tokens_decoder.items(), key=lambda item: item[0])} def __len__(self) -> int: """ Size of the full vocabulary with the added tokens. """ return self._tokenizer.get_vocab_size(with_added_tokens=True) @property def backend_tokenizer(self) -> TokenizerFast: """ `tokenizers.implementations.BaseTokenizer`: The Rust tokenizer used as a backend. """ return self._tokenizer @property def decoder(self) -> DecoderFast: """ `tokenizers.decoders.Decoder`: The Rust decoder for this tokenizer. """ return self._tokenizer.decoder def _convert_encoding( self, encoding: EncodingFast, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, ) -> Tuple[Dict[str, Any], List[EncodingFast]]: """ Convert the encoding representation (from low-level HuggingFace tokenizer output) to a python Dict and a list of encodings, take care of building a batch from overflowing tokens. Overflowing tokens are converted to additional examples (like batches) so the output values of the dict are lists (overflows) of lists (tokens). Output shape: (overflows, sequence length) """ if return_token_type_ids is None: return_token_type_ids = "token_type_ids" in self.model_input_names if return_attention_mask is None: return_attention_mask = "attention_mask" in self.model_input_names if return_overflowing_tokens and encoding.overflowing is not None: encodings = [encoding] + encoding.overflowing else: encodings = [encoding] encoding_dict = defaultdict(list) for e in encodings: encoding_dict["input_ids"].append(e.ids) if return_token_type_ids: encoding_dict["token_type_ids"].append(e.type_ids) if return_attention_mask: encoding_dict["attention_mask"].append(e.attention_mask) if return_special_tokens_mask: encoding_dict["special_tokens_mask"].append(e.special_tokens_mask) if return_offsets_mapping: encoding_dict["offset_mapping"].append(e.offsets) if return_length: encoding_dict["length"].append(len(e.ids)) return encoding_dict, encodings def convert_tokens_to_ids(self, tokens: Union[str, Iterable[str]]) -> Union[int, List[int]]: """ Converts a token string (or a sequence of tokens) in a single integer id (or a Iterable of ids), using the vocabulary. Args: tokens (`str` or `Iterable[str]`): One or several token(s) to convert to token id(s). Returns: `int` or `List[int]`: The token id or list of token ids. 
""" if isinstance(tokens, str): return self._convert_token_to_id_with_added_voc(tokens) return [self._convert_token_to_id_with_added_voc(token) for token in tokens] def _convert_token_to_id_with_added_voc(self, token: str) -> int: index = self._tokenizer.token_to_id(token) if index is None: return self.unk_token_id return index def _convert_id_to_token(self, index: int) -> Optional[str]: return self._tokenizer.id_to_token(int(index)) def _add_tokens(self, new_tokens: List[Union[str, AddedToken]], special_tokens=False) -> int: if special_tokens: return self._tokenizer.add_special_tokens(new_tokens) return self._tokenizer.add_tokens(new_tokens) def num_special_tokens_to_add(self, pair: bool = False) -> int: """ Returns the number of added tokens when encoding a sequence with special tokens. <Tip> This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop. </Tip> Args: pair (`bool`, *optional*, defaults to `False`): Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence. Returns: `int`: Number of special tokens added to sequences. """ return self._tokenizer.num_special_tokens_to_add(pair) def convert_ids_to_tokens( self, ids: Union[int, List[int]], skip_special_tokens: bool = False ) -> Union[str, List[str]]: """ Converts a single index or a sequence of indices in a token or a sequence of tokens, using the vocabulary and added tokens. Args: ids (`int` or `List[int]`): The token id (or token ids) to convert to tokens. skip_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not to remove special tokens in the decoding. Returns: `str` or `List[str]`: The decoded token(s). """ if isinstance(ids, int): return self._tokenizer.id_to_token(ids) tokens = [] for index in ids: index = int(index) if skip_special_tokens and index in self.all_special_ids: continue tokens.append(self._tokenizer.id_to_token(index)) return tokens def tokenize(self, text: str, pair: Optional[str] = None, add_special_tokens: bool = False, **kwargs) -> List[str]: return self.encode_plus(text=text, text_pair=pair, add_special_tokens=add_special_tokens, **kwargs).tokens() def set_truncation_and_padding( self, padding_strategy: PaddingStrategy, truncation_strategy: TruncationStrategy, max_length: int, stride: int, pad_to_multiple_of: Optional[int], padding_side: Optional[bool], ): """ Define the truncation and the padding strategies for fast tokenizers (provided by HuggingFace tokenizers library) and restore the tokenizer settings afterwards. The provided tokenizer has no padding / truncation strategy before the managed section. If your tokenizer set a padding / truncation strategy before, then it will be reset to no padding / truncation when exiting the managed section. Args: padding_strategy ([`~utils.PaddingStrategy`]): The kind of padding that will be applied to the input truncation_strategy ([`~tokenization_utils_base.TruncationStrategy`]): The kind of truncation that will be applied to the input max_length (`int`): The maximum size of a sequence. stride (`int`): The stride to use when handling overflow. pad_to_multiple_of (`int`, *optional*): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta). padding_side (`str`, *optional*): The side on which the model should have padding applied. Should be selected between ['right', 'left']. 
Default value is picked from the class attribute of the same name. """ _truncation = self._tokenizer.truncation _padding = self._tokenizer.padding # Set truncation and padding on the backend tokenizer if truncation_strategy == TruncationStrategy.DO_NOT_TRUNCATE: if _truncation is not None: self._tokenizer.no_truncation() else: target = { "max_length": max_length, "stride": stride, "strategy": truncation_strategy.value, "direction": self.truncation_side, } # _truncation might contain more keys that the target `transformers` # supports. Use only the target keys to trigger `enable_truncation`. # This should enable this code to works on various `tokenizers` # targets. if _truncation is None: current = None else: current = {k: _truncation.get(k, None) for k in target} if current != target: self._tokenizer.enable_truncation(**target) if padding_strategy == PaddingStrategy.DO_NOT_PAD: if _padding is not None: self._tokenizer.no_padding() else: length = max_length if padding_strategy == PaddingStrategy.MAX_LENGTH else None target = { "length": length, "direction": padding_side if padding_side is not None else self.padding_side, "pad_id": self.pad_token_id, "pad_token": self.pad_token, "pad_type_id": self.pad_token_type_id, "pad_to_multiple_of": pad_to_multiple_of, } if _padding != target: self._tokenizer.enable_padding(**target) def _batch_encode_plus( self, batch_text_or_text_pairs: Union[ List[TextInput], List[TextInputPair], List[PreTokenizedInput], List[PreTokenizedInputPair] ], add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, padding_side: Optional[bool] = None, return_tensors: Optional[str] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, split_special_tokens: bool = False, ) -> BatchEncoding: if not isinstance(batch_text_or_text_pairs, (tuple, list)): raise TypeError( f"batch_text_or_text_pairs has to be a list or a tuple (got {type(batch_text_or_text_pairs)})" ) # Set the truncation and padding strategy and restore the initial configuration self.set_truncation_and_padding( padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, ) if self._tokenizer.encode_special_tokens != split_special_tokens: self._tokenizer.encode_special_tokens = split_special_tokens encodings = self._tokenizer.encode_batch( batch_text_or_text_pairs, add_special_tokens=add_special_tokens, is_pretokenized=is_split_into_words, ) # Convert encoding to dict # `Tokens` has type: Tuple[ # List[Dict[str, List[List[int]]]] or List[Dict[str, 2D-Tensor]], # List[EncodingFast] # ] # with nested dimensions corresponding to batch, overflows, sequence length tokens_and_encodings = [ self._convert_encoding( encoding=encoding, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, ) for encoding in encodings ] # 
Convert the output to have dict[list] from list[dict] and remove the additional overflows dimension # From (variable) shape (batch, overflows, sequence length) to ~ (batch * overflows, sequence length) # (we say ~ because the number of overflow varies with the example in the batch) # # To match each overflowing sample with the original sample in the batch # we add an overflow_to_sample_mapping array (see below) sanitized_tokens = {} for key in tokens_and_encodings[0][0].keys(): stack = [e for item, _ in tokens_and_encodings for e in item[key]] sanitized_tokens[key] = stack sanitized_encodings = [e for _, item in tokens_and_encodings for e in item] # If returning overflowing tokens, we need to return a mapping # from the batch idx to the original sample if return_overflowing_tokens: overflow_to_sample_mapping = [] for i, (toks, _) in enumerate(tokens_and_encodings): overflow_to_sample_mapping += [i] * len(toks["input_ids"]) sanitized_tokens["overflow_to_sample_mapping"] = overflow_to_sample_mapping for input_ids in sanitized_tokens["input_ids"]: self._eventual_warn_about_too_long_sequence(input_ids, max_length, verbose) return BatchEncoding(sanitized_tokens, sanitized_encodings, tensor_type=return_tensors) def _encode_plus( self, text: Union[TextInput, PreTokenizedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput]] = None, add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, padding_side: Optional[bool] = None, return_tensors: Optional[bool] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, split_special_tokens: bool = False, **kwargs, ) -> BatchEncoding: batched_input = [(text, text_pair)] if text_pair else [text] batched_output = self._batch_encode_plus( batched_input, is_split_into_words=is_split_into_words, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, split_special_tokens=split_special_tokens, **kwargs, ) # Return tensor is None, then we can remove the leading batch axis # Overflowing tokens are returned as a batch of output so we keep them in this case if return_tensors is None and not return_overflowing_tokens: batched_output = BatchEncoding( { key: (value[0] if len(value) > 0 and isinstance(value[0], list) else value) for key, value in batched_output.items() }, batched_output.encodings, ) self._eventual_warn_about_too_long_sequence(batched_output["input_ids"], max_length, verbose) return batched_output def convert_tokens_to_string(self, tokens: List[str]) -> str: return ( self.backend_tokenizer.decoder.decode(tokens) if self.backend_tokenizer.decoder is not None else " ".join(tokens) ) def _decode( self, token_ids: 
Union[int, List[int]], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, **kwargs, ) -> str: self._decode_use_source_tokenizer = kwargs.pop("use_source_tokenizer", False) if isinstance(token_ids, int): token_ids = [token_ids] text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) clean_up_tokenization_spaces = ( clean_up_tokenization_spaces if clean_up_tokenization_spaces is not None else self.clean_up_tokenization_spaces ) if clean_up_tokenization_spaces: clean_text = self.clean_up_tokenization(text) return clean_text else: return text def _save_pretrained( self, save_directory: Union[str, os.PathLike], file_names: Tuple[str], legacy_format: Optional[bool] = None, filename_prefix: Optional[str] = None, ) -> Tuple[str]: """ Save a tokenizer using the slow-tokenizer/legacy format: vocabulary + added tokens as well as in a unique JSON file containing {config + vocab + added-tokens}. """ save_directory = str(save_directory) if self.slow_tokenizer_class is None and legacy_format is True: raise ValueError( "Your tokenizer does not have a legacy version defined and therefore cannot register this version. You" " might consider leaving the legacy_format at `None` or setting it to `False`." ) save_slow = ( (legacy_format is None or legacy_format is True) and self.slow_tokenizer_class is not None and self.can_save_slow_tokenizer ) save_fast = legacy_format is None or legacy_format is False if save_slow: added_tokens_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + ADDED_TOKENS_FILE ) # make sure to be foward compatible added_vocab = {tok: index for tok, index in self.added_tokens_encoder.items() if index >= self.vocab_size} if added_vocab: with open(added_tokens_file, "w", encoding="utf-8") as f: out_str = json.dumps(added_vocab, indent=2, sort_keys=True, ensure_ascii=False) + "\n" f.write(out_str) vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix) file_names = file_names + vocab_files + (added_tokens_file,) if save_fast: tokenizer_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + TOKENIZER_FILE ) self.backend_tokenizer.save(tokenizer_file) file_names = file_names + (tokenizer_file,) return file_names def train_new_from_iterator( self, text_iterator, vocab_size, length=None, new_special_tokens=None, special_tokens_map=None, **kwargs, ): """ Trains a tokenizer on a new corpus with the same defaults (in terms of special tokens or tokenization pipeline) as the current one. Args: text_iterator (generator of `List[str]`): The training corpus. Should be a generator of batches of texts, for instance a list of lists of texts if you have everything in memory. vocab_size (`int`): The size of the vocabulary you want for your tokenizer. length (`int`, *optional*): The total number of sequences in the iterator. This is used to provide meaningful progress tracking new_special_tokens (list of `str` or `AddedToken`, *optional*): A list of new special tokens to add to the tokenizer you are training. special_tokens_map (`Dict[str, str]`, *optional*): If you want to rename some of the special tokens this tokenizer uses, pass along a mapping old special token name to new special token name in this argument. kwargs (`Dict[str, Any]`, *optional*): Additional keyword arguments passed along to the trainer from the 🤗 Tokenizers library. Returns: [`PreTrainedTokenizerFast`]: A new tokenizer of the same type as the original one, trained on `text_iterator`. 
""" tokenizer_json = json.loads(self._tokenizer.to_str()) # Remove added tokens for now (uses IDs of tokens) added_tokens = tokenizer_json.pop("added_tokens") # Remove post processor for now (uses IDs of tokens) post_processor = tokenizer_json.pop("post_processor") unk_token = None # Remove vocab if tokenizer_json["model"]["type"] == "BPE": tokenizer_json["model"]["vocab"] = {} tokenizer_json["model"]["merges"] = [] elif tokenizer_json["model"]["type"] == "Unigram": if tokenizer_json["model"]["unk_id"] is not None: unk_id = tokenizer_json["model"]["unk_id"] unk_token = tokenizer_json["model"]["vocab"][unk_id][0] if special_tokens_map is not None and unk_token in special_tokens_map: unk_token = special_tokens_map[unk_token] tokenizer_json["model"]["unk_id"] = 0 tokenizer_json["model"]["vocab"] = [[unk_token, 0.0]] elif tokenizer_json["model"]["type"] in ["WordLevel", "WordPiece"]: tokenizer_json["model"]["vocab"] = {} else: raise ValueError( f"This method does not support this type of tokenizer (found {tokenizer_json['model']['type']}) " "only BPE, Unigram, WordLevel and WordPiece." ) if ( special_tokens_map is not None and "unk_token" in tokenizer_json["model"] and tokenizer_json["model"]["unk_token"] in special_tokens_map ): tokenizer_json["model"]["unk_token"] = special_tokens_map[tokenizer_json["model"]["unk_token"]] tokenizer = TokenizerFast.from_str(json.dumps(tokenizer_json)) # Get the special tokens from the current tokenizer if none are specified. special_tokens = [] for added_token in added_tokens: special = added_token.pop("special", None) _ = added_token.pop("id", None) if tokenizer_json["model"]["type"] != "Unigram" and not special: continue if special_tokens_map is not None and added_token["content"] in special_tokens_map: added_token["content"] = special_tokens_map[added_token["content"]] special_tokens.append(AddedToken(**added_token)) if new_special_tokens is not None: special_tokens.extend(new_special_tokens) # Trainer needs to know the end of word / continuing subword thingies in BPE if ( tokenizer_json["model"]["type"] == "BPE" and "continuing_subword_prefix" not in kwargs and tokenizer_json["model"]["continuing_subword_prefix"] is not None ): kwargs["continuing_subword_prefix"] = tokenizer_json["model"]["continuing_subword_prefix"] if ( tokenizer_json["model"]["type"] == "BPE" and "end_of_word_suffix" not in kwargs and tokenizer_json["model"]["end_of_word_suffix"] is not None ): kwargs["end_of_word_suffix"] = tokenizer_json["model"]["end_of_word_suffix"] if tokenizer_json["model"]["type"] == "Unigram" and unk_token is not None: kwargs["unk_token"] = unk_token if tokenizer_json["pre_tokenizer"] is not None: if ( tokenizer_json["pre_tokenizer"]["type"] == "ByteLevel" or tokenizer_json["pre_tokenizer"]["type"] == "Sequence" and "pretokenizers" in tokenizer_json["pre_tokenizer"] and any( pretokenizer["type"] == "ByteLevel" for pretokenizer in tokenizer_json["pre_tokenizer"]["pretokenizers"] ) ): kwargs["initial_alphabet"] = pre_tokenizers_fast.ByteLevel.alphabet() trainer_class = MODEL_TO_TRAINER_MAPPING[tokenizer_json["model"]["type"]] trainer = trainer_class(vocab_size=vocab_size, special_tokens=special_tokens, **kwargs) tokenizer.train_from_iterator(text_iterator, length=length, trainer=trainer) if post_processor is not None: trained_tokenizer_json = json.loads(tokenizer.to_str()) # Almost done, we just have to adjust the token IDs in the post processor if "special_tokens" in post_processor: for key in post_processor["special_tokens"]: tokens = 
post_processor["special_tokens"][key]["tokens"] if special_tokens_map is not None: tokens = [special_tokens_map.get(token, token) for token in tokens] post_processor["special_tokens"][key]["tokens"] = tokens for token in tokens: token_id = tokenizer.token_to_id(token) if token_id is None: raise ValueError( "Attempted to set a token in the post processor that does not exist in the mapping" ) post_processor["special_tokens"][key]["ids"] = [tokenizer.token_to_id(token) for token in tokens] for special_token in ["cls", "sep"]: if special_token in post_processor: token, _ = post_processor[special_token] if special_tokens_map is not None and token in special_tokens_map: token = special_tokens_map[token] token_id = tokenizer.token_to_id(token) if token_id is None: raise ValueError( "Attempted to set a token in the post processor that does not exist in the mapping" ) post_processor[special_token] = [token, token_id] trained_tokenizer_json["post_processor"] = post_processor tokenizer = TokenizerFast.from_str(json.dumps(trained_tokenizer_json)) kwargs = self.init_kwargs.copy() # Map pad/cls/mask token at the Transformers level special_tokens_list = SpecialTokensMixin.SPECIAL_TOKENS_ATTRIBUTES.copy() special_tokens_list.remove("additional_special_tokens") for token in special_tokens_list: if getattr(self, token) is not None: special_token = getattr(self, token) if special_tokens_map is not None and special_token in special_tokens_map: special_token = special_tokens_map[special_token] special_token_full = self._special_tokens_map.get(token, None) if isinstance(special_token_full, AddedToken): # Create an added token with the same parameters except the content kwargs[token] = AddedToken( special_token, single_word=special_token_full.single_word, lstrip=special_token_full.lstrip, rstrip=special_token_full.rstrip, normalized=special_token_full.normalized, special=True, ) else: kwargs[token] = special_token additional_special_tokens = self.additional_special_tokens if new_special_tokens is not None: additional_special_tokens.extend(new_special_tokens) if len(additional_special_tokens) > 0: kwargs["additional_special_tokens"] = additional_special_tokens return self.__class__(tokenizer_object=tokenizer, **kwargs)
transformers/src/transformers/tokenization_utils_fast.py/0
{ "file_path": "transformers/src/transformers/tokenization_utils_fast.py", "repo_id": "transformers", "token_count": 18099 }
# This file is autogenerated by the command `make fix-copies`, do not edit. from ..utils import requires_backends class LayoutLMv2Model: def __init__(self, *args, **kwargs): requires_backends(self, ["detectron2"]) @classmethod def from_pretrained(cls, *args, **kwargs): requires_backends(cls, ["detectron2"])
transformers/src/transformers/utils/dummy_detectron2_objects.py/0
{ "file_path": "transformers/src/transformers/utils/dummy_detectron2_objects.py", "repo_id": "transformers", "token_count": 131 }
# coding=utf-8 # Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import builtins import collections import contextlib import functools import inspect import math import operator import os import random import sys import warnings from typing import Any, Callable, Dict, List, Literal, Optional, Tuple, Type, Union import torch import torch.utils._pytree as pytree from torch import nn from torch.fx import Graph, GraphModule, Node, Proxy, Tracer from torch.fx._compatibility import compatibility from torch.fx._symbolic_trace import is_fx_tracing from torch.fx.proxy import ParameterProxy from .. import logging from ..cache_utils import Cache, DynamicCache, SinkCache, StaticCache from ..modeling_utils import PretrainedConfig, PreTrainedModel from ..models.auto import get_values from ..models.auto.modeling_auto import ( MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES, MODEL_FOR_BACKBONE_MAPPING_NAMES, MODEL_FOR_CAUSAL_LM_MAPPING_NAMES, MODEL_FOR_CTC_MAPPING_NAMES, MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES, MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES, MODEL_FOR_IMAGE_MAPPING_NAMES, MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING_NAMES, MODEL_FOR_MASKED_LM_MAPPING_NAMES, MODEL_FOR_MULTIPLE_CHOICE_MAPPING_NAMES, MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING_NAMES, MODEL_FOR_PRETRAINING_MAPPING_NAMES, MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES, MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING_NAMES, MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES, MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES, MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES, MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES, MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES, MODEL_MAPPING_NAMES, ) from .import_utils import ( ENV_VARS_TRUE_VALUES, TORCH_FX_REQUIRED_VERSION, get_torch_version, is_peft_available, is_torch_fx_available, ) if is_peft_available(): from peft import PeftModel logger = logging.get_logger(__name__) _IS_IN_DEBUG_MODE = os.environ.get("FX_DEBUG_MODE", "").upper() in ENV_VARS_TRUE_VALUES def _generate_supported_model_class_names( model_name: Type[PretrainedConfig], supported_tasks: Optional[Union[str, List[str]]] = None, ) -> List[str]: task_mapping = { "default": MODEL_MAPPING_NAMES, "pretraining": MODEL_FOR_PRETRAINING_MAPPING_NAMES, "next-sentence-prediction": MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING_NAMES, "masked-lm": MODEL_FOR_MASKED_LM_MAPPING_NAMES, "causal-lm": MODEL_FOR_CAUSAL_LM_MAPPING_NAMES, "seq2seq-lm": MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES, "speech-seq2seq": MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES, "multiple-choice": MODEL_FOR_MULTIPLE_CHOICE_MAPPING_NAMES, "document-question-answering": MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES, "question-answering": MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES, "sequence-classification": MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES, "token-classification": MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES, "masked-image-modeling": MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING_NAMES, "image-classification": 
MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES, "zero-shot-image-classification": MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES, "ctc": MODEL_FOR_CTC_MAPPING_NAMES, "audio-classification": MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES, "semantic-segmentation": MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING_NAMES, "backbone": MODEL_FOR_BACKBONE_MAPPING_NAMES, "image-feature-extraction": MODEL_FOR_IMAGE_MAPPING_NAMES, } if supported_tasks is None: supported_tasks = task_mapping.keys() if isinstance(supported_tasks, str): supported_tasks = [supported_tasks] model_class_names = [] for task in supported_tasks: class_name = task_mapping[task].get(model_name, None) if class_name: model_class_names.append(class_name) return model_class_names _REGULAR_SUPPORTED_MODEL_NAMES_AND_TASKS = [ "altclip", "albert", "bart", "bert", "blenderbot", "blenderbot-small", "bloom", "clip", "convnext", "deberta", "deberta-v2", "dinov2", "distilbert", "donut-swin", "electra", "gpt2", "gpt_neo", "gptj", "hiera", "hubert", "ijepa", "layoutlm", "llama", "cohere", "lxmert", "m2m_100", "marian", "mbart", "megatron-bert", "mistral", "mixtral", "mobilebert", "mt5", "nezha", "opt", "pegasus", "plbart", "qwen2", "qwen2_moe", "resnet", "roberta", "segformer", "speech_to_text", "speech_to_text_2", "swin", "t5", "trocr", "vit", "xglm", "wav2vec2", # "xlnet", ] _FX_SUPPORTED_MODELS_WITH_KV_CACHE = ["llama", "opt"] _REGULAR_SUPPORTED_MODELS = [] for item in _REGULAR_SUPPORTED_MODEL_NAMES_AND_TASKS: if isinstance(item, dict): _REGULAR_SUPPORTED_MODELS.extend(_generate_supported_model_class_names(**item)) else: _REGULAR_SUPPORTED_MODELS.extend(_generate_supported_model_class_names(item)) _SPECIAL_SUPPORTED_MODELS = [ "CLIPTextModel", "CLIPTextModelWithProjection", "CLIPVisionModel", "CLIPVisionModelWithProjection", "AltCLIPTextModel", "AltCLIPVisionModel", "GitVisionModel", "GPT2DoubleHeadsModel", "Speech2Text2Decoder", "TrOCRDecoder", "PeftModelForCausalLM", "PeftModelForSeq2SeqLM", # TODO: add support for them as it should be quite easy to do so (small blocking issues). 
    # XLNetForQuestionAnswering,
]

_SUPPORTED_MODELS = tuple(sorted(set(_REGULAR_SUPPORTED_MODELS + _SPECIAL_SUPPORTED_MODELS)))

_CURRENT_TRACER = None


def torch_nn_embedding(self, input):
    return torch.empty(*input.shape, self.weight.shape[-1], device="meta", dtype=self.weight.dtype)


def torch_nn_functional_embedding(
    input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False
):
    return torch.empty(*input.shape, weight.shape[-1], device="meta", dtype=weight.dtype)


def torch_nn_layernorm(self, input):
    return input


def torch_nn_groupnorm(self, input):
    return input


def torch_nn_linear(self, input):
    return torch.empty(input.shape[:-1] + (self.out_features,), device="meta")


def torch_relu(x):
    return x


def torch_nn_relu(self, x):
    return x


def torch_nn_functional_relu(x, inplace=False):
    if inplace:
        raise ValueError("Don't support in-place functional.relu for MetaTensor analysis")
    return x


def torch_where(condition, x, y):
    # torch.where returns the broadcasted tensor of condition, x, and y,
    # so hack it by using addition
    return condition.to(device="meta") + x.to(device="meta") + y.to(device="meta")


def torch_abs(input, *, out=None):
    if out is not None:
        raise ValueError("Don't support in-place abs for MetaTensor analysis")
    return input


def torch_arange(*args, **kwargs):
    n = len(args)
    step = 1
    if n == 1:
        start = 0
        end = args[0]
    elif n == 2:
        start, end = args
    else:
        start, end, step = args
    if isinstance(start, float):
        start = int(start)
    if isinstance(end, float):
        end = int(end)
    if isinstance(step, float):
        step = int(step)
    step = kwargs.get("step", step)
    dtype = kwargs.get("dtype")
    return torch.empty((end - start) // step, dtype=dtype, device="meta")


def torch_full(*args, **kwargs):
    args = list(args)
    # We set the fill value to 1 as its value is not important as long as it's not a tensor on the `meta` device.
if len(args) > 1: args[1] = 1 else: kwargs["fill_value"] = 1 kwargs_without_device = dict(kwargs) kwargs_without_device.pop("device", None) return torch.full(*args, **kwargs_without_device, device="meta") def torch_cat(tensors, dim=None, axis=None, *, out=None): if dim is None and axis is None: dim = 0 if dim is None and axis is not None: dim = axis if dim < 0: dim = tensors[0].dim() + dim shapes = [t.shape for t in tensors] shape = list(shapes[0]) concatenated_dim = sum(shape[dim] for shape in shapes) final_shape = shape[:dim] + [concatenated_dim] + shape[dim + 1 :] return torch.empty(final_shape, device="meta") def torch_stack(tensors, dim=None, axis=None, *, out=None): if dim is None and axis is None: dim = 0 if dim is None and axis is not None: dim = axis if dim < 0: dim = tensors[0].dim() + 1 + dim shape = list(tensors[0].shape) shape.insert(dim, len(tensors)) return torch.empty(shape, device="meta") def torch_add(input, other, *, alpha=1, out=None): if not isinstance(input, torch.Tensor): return torch.empty_like(other, device="meta") if not isinstance(other, torch.Tensor): return torch.empty_like(input, device="meta") max_length = max(input.dim(), other.dim()) input_shape = list(input.shape) + [1] * (max_length - input.dim()) other_shape = list(other.shape) + [1] * (max_length - other.dim()) shape = [] for i in range(max_length): shape.append(max(input_shape[i], other_shape[i])) return torch.empty(shape, device="meta") def torch_mul(input, other, *, out=None): return torch_add(input, other, out=out) def torch_tensor_mul(self, other): return torch_mul(self, other) def torch_matmul(input, other, *, out=None): d1 = input.dim() d2 = other.dim() shape = None if d1 == 1 and d2 == 1: shape = None elif d1 == 2 and d2 == 2: shape = (input.size(0), other.size(1)) elif d1 == 1 and d2 == 2: shape = (other.size(1),) elif d1 == 2 and d1 == 1: shape = (input.size(0),) else: max_length = max(input.dim(), other.dim()) shape1 = list(input.shape) shape2 = list(other.shape) if d1 == 1: shape1 = [1] + shape1 if d2 == 1: shape2.append(1) shape1 = [-1] * (max_length - d1) + list(input.shape) shape2 = [-1] * (max_length - d2) + list(other.shape) shape = [] for i in range(max_length): shape.append(max(shape1[i], shape2[i])) shape[-2] = shape1[-2] shape[-1] = shape2[-1] if d1 == 1: shape.pop(-2) if d2 == 1: shape.pop(-1) if shape is None: return torch.tensor(0.0, device="meta") return torch.empty(*shape, device="meta") def torch_bmm(input, mat2, *, out=None): if out is not None: raise ValueError("Don't support in-place bmm for MetaTensor analysis") batch_size, n, m = input.shape _, _, p = mat2.shape return torch.empty(batch_size, n, p, device="meta") def torch_baddbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None): if out is not None: raise ValueError("Don't support in-place baddbmm for MetaTensor analysis") return torch_bmm(batch1, batch2) def torch_tensor_baddbmm(self, batch1, batch2, *, beta=1, alpha=1, out=None): return torch_baddbmm(self, batch1, batch2, beta=beta, alpha=alpha, out=out) def torch_einsum(equation, *operands): # TODO: infer shape without performing the computation, this might be quite hard. 
concrete_operands = (torch.empty_like(operand, device="cpu") for operand in operands) return torch.einsum(equation, *concrete_operands).to("meta") def torch_tensor_repeat(self, *sizes): shape = list(self.shape) for i, x in enumerate(sizes): shape[i] *= x return torch.empty(shape, device="meta") def torch_repeat_interleave(*args, dim=None, output_size=None): num_args = len(args) if num_args == 1: shape = [output_size if output_size is not None else args[0].sum()] else: shape = list(args[0].shape) if dim is None: if num_args > 2: dim = args[2] else: shape = [sum(shape)] dim = 0 repeats = args[1] if isinstance(repeats, int) or torch.numel(repeats) == 1: shape[dim] *= int(repeats) else: shape[dim] = output_size if output_size is not None else repeats.sum() return torch.empty(*shape, device="meta") def torch_index_select(input, dim, index, *, out=None): shape = list(input.shape) shape[dim] = len(index) return torch.empty(*shape, device="meta") def torch_tensor_index_select(self, dim, index): return torch_index_select(self, dim, index) def torch_gather(input, dim, index, *, sparse_grad=False, out=None): shape = list(input.shape) shape[dim] = index.shape[dim] return torch.empty(*shape, device="meta") def torch_tensor_gather(self, dim, index): return torch_gather(self, dim, index) def torch_roll(input, shifts, dims=None): return input def torch_flip(input, dims): return input def torch_tensor_flip(self, dims): return self def torch_nn_conv1d(self, input): l_in = input.shape[-1] shape = None padding = self.padding if padding == "valid": padding = (0, 0) if padding == "same": shape = list(input.shape) if shape is None: shape = list(input.shape) l_out = math.floor( (l_in + 2 * padding[0] - self.dilation[0] * (self.kernel_size[0] - 1) - 1) / self.stride[0] + 1 ) shape[-1] = l_out shape[-2] = self.out_channels return torch.empty(shape, device="meta") def torch_nn_conv2d(self, input): h_in, w_in = input.shape[-2:] shape = None padding = self.padding if padding == "valid": padding = (0, 0) if padding == "same": shape = list(input.shape) if shape is None: shape = list(input.shape) h_out = math.floor( (h_in + 2 * padding[0] - self.dilation[0] * (self.kernel_size[0] - 1) - 1) / self.stride[0] + 1 ) w_out = math.floor( (w_in + 2 * padding[1] - self.dilation[1] * (self.kernel_size[1] - 1) - 1) / self.stride[1] + 1 ) shape[-2:] = [h_out, w_out] shape[-3] = self.out_channels return torch.empty(shape, device="meta") def torch_squeeze(input, dim=None): shape = list(input.shape) if dim is not None: if dim < 0: dim = input.dim() + dim if shape[dim] == 1: shape.pop(dim) else: new_shape = [] for dim_value in shape: if dim_value == 1: continue new_shape.append(dim_value) shape = new_shape return torch.empty(shape, device="meta") def torch_tensor_squeeze(self, dim=None): return torch_squeeze(self, dim) def torch_unsqueeze(input, dim): shape = list(input.shape) if dim < 0: dim = input.dim() + 1 + dim shape.insert(dim, 1) return torch.empty(shape, device="meta") def torch_tensor_unsqueeze(self, dim): return torch_unsqueeze(self, dim) def torch_unique_consecutive(input, **kwargs): output = torch.unique_consecutive(torch.zeros_like(input, device="cpu"), **kwargs) if isinstance(output, torch.Tensor): return output.to("meta") else: return tuple(map(output, lambda x: x.to("meta"))) def torch_nn_functional_one_hot(tensor, num_classes=-1): if num_classes < 0: raise ValueError("Don't support automatic num_classes inference for MetaTensor analysis") shape = list(tensor.shape) + [num_classes] return torch.empty(shape, 
device="meta") def torch_nn_functional_scaled_dot_product_attention( query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False, scale=None ): target_length = query.shape[-2] head_dim = value.shape[-1] return torch.empty((*query.shape[:-2], target_length, head_dim), device="meta") def torch_nn_mseloss(self, input, target): if self.reduction == "none": shape = target.shape else: shape = (1,) return torch.empty(shape, device="meta") def torch_nn_crossentropyloss(self, input, target): if self.reduction == "none": shape = target.shape else: shape = (1,) return torch.empty(shape, device="meta") def torch_nn_bcewithlogitsloss(self, input, target): if self.reduction == "none": shape = target.shape else: shape = (1,) return torch.empty(shape, device="meta") def operator_getitem(a, b): def to_concrete(t): if isinstance(t, torch.Tensor): concrete = torch.ones_like(t, device="cpu") if concrete.dtype in [torch.float16, torch.float32, torch.float64, torch.int32]: concrete = concrete.to(torch.int64) return concrete return t if isinstance(a, torch.Tensor): # TODO: infer shape without performing the computation. if isinstance(b, tuple): b = tuple(map(to_concrete, b)) else: b = to_concrete(b) return operator.getitem(torch.empty_like(a, device="cpu"), b).to("meta") return operator.getitem(a, b) _MANUAL_META_OVERRIDES: Dict[Callable, Callable] = { torch.nn.Embedding: torch_nn_embedding, torch.nn.functional.embedding: torch_nn_functional_embedding, torch.nn.LayerNorm: torch_nn_layernorm, torch.nn.GroupNorm: torch_nn_groupnorm, torch.nn.Linear: torch_nn_linear, torch.relu: torch_relu, torch.nn.functional.relu: torch_nn_functional_relu, torch.nn.ReLU: torch_nn_relu, torch.where: torch_where, torch.abs: torch_abs, torch.arange: torch_arange, torch.full: torch_full, torch.cat: torch_cat, torch.stack: torch_stack, torch.add: torch_add, torch.mul: torch_mul, torch.Tensor.mul: torch_tensor_mul, torch.matmul: torch_matmul, torch.bmm: torch_bmm, torch.baddbmm: torch_baddbmm, torch.Tensor.baddbmm: torch_tensor_baddbmm, torch.einsum: torch_einsum, torch.Tensor.repeat: torch_tensor_repeat, torch.repeat_interleave: torch_repeat_interleave, torch.roll: torch_roll, torch.flip: torch_flip, torch.Tensor.flip: torch_tensor_flip, torch.index_select: torch_index_select, torch.Tensor.index_select: torch_tensor_index_select, torch.gather: torch_gather, torch.Tensor.gather: torch_tensor_gather, torch.nn.Conv1d: torch_nn_conv1d, torch.nn.Conv2d: torch_nn_conv2d, torch.squeeze: torch_squeeze, torch.Tensor.squeeze: torch_tensor_squeeze, torch.unsqueeze: torch_unsqueeze, torch.Tensor.unsqueeze: torch_tensor_unsqueeze, torch.unique_consecutive: torch_unique_consecutive, torch.nn.functional.one_hot: torch_nn_functional_one_hot, torch.nn.MSELoss: torch_nn_mseloss, torch.nn.CrossEntropyLoss: torch_nn_crossentropyloss, torch.nn.BCEWithLogitsLoss: torch_nn_bcewithlogitsloss, operator.getitem: operator_getitem, } _MANUAL_META_OVERRIDES[torch.nn.functional.scaled_dot_product_attention] = ( torch_nn_functional_scaled_dot_product_attention ) class HFProxy(Proxy): """ Proxy that uses metadata to handle data-dependent control-flow. """ def install_metadata(self, metadata): self._metadata = metadata @property def shape(self): return self.tracer.create_proxy("call_method", "size", (self,), {}) @property def device(self): # Hack so we can track when devices are used. 
During meta-tensor propagation, # replace these values with a constant 'meta' return MetaDeviceAttribute(self, "device") def __len__(self): if hasattr(self, "_metadata") and self._metadata is not None: return len(self._metadata) return super().__len__() def __bool__(self): if hasattr(self, "_metadata") and self._metadata is not None: return self._metadata return super().__bool__() def __getattr__(self, k): if k == "_metadata": return self.__getattribute__(k) # note: not added to the graph yet, if this is a method call # we peephole optimize to the method invocation return HFAttribute(self, k) def __setitem__(self, indices, values): return self.tracer.create_proxy("call_function", operator.setitem, (self, indices, values), {}) def __contains__(self, key): if hasattr(self, "_metadata") and self._metadata is not None: return key in self._metadata return super().__contains__(key) class HFAttribute(HFProxy): def __init__(self, root, attr: str): self.root = root self.attr = attr self.tracer = root.tracer self._node = None if hasattr(self.root, "_metadata"): self.install_metadata(getattr(self.root._metadata, attr)) @property def node(self): # the node for attributes is added lazily, since most will just be method calls # which do not rely on the getitem call if self._node is None: self._node = self.tracer.create_proxy("call_function", builtins.getattr, (self.root, self.attr), {}).node return self._node def __call__(self, *args, **kwargs): return self.tracer.create_proxy("call_method", self.attr, (self.root,) + args, kwargs) class MetaDeviceAttribute(HFAttribute): pass class HFCacheProxy(HFProxy): """ Proxy that represents an instance of `transformers.cache_utils.Cache`. """ def install_orig_cache_cls(self, orig_cache_cls: Type[Cache]): self._orig_cache_cls = orig_cache_cls @property def __class__(self): if not hasattr(self, "_orig_cache_cls"): raise RuntimeError("The original Cache class must be installed to the HFCacheProxy.") return self.tracer._CLASSES_TO_PATCH[self._orig_cache_cls] def create_wrapper( function: Callable, op_type: Union[Literal["call_function"], Literal["call_method"], Literal["get_attr"]], proxy_factory_fn: Optional[Callable[[Node], Proxy]] = None, ) -> Callable: @functools.wraps(function) def wrapper(*args, **kwargs): if not is_fx_tracing(): return function(*args, **kwargs) found_proxies = [] def check_proxy(a): if isinstance(a, Proxy): found_proxies.append(a) torch.fx.node.map_aggregate(args, check_proxy) torch.fx.node.map_aggregate(kwargs, check_proxy) if len(found_proxies) > 0: tracer = found_proxies[0].tracer if op_type == "call_function": target = function elif op_type == "call_method": target = function.__name__ elif op_type == "get_attr": target = function.__name__ else: raise ValueError(f"op_type {op_type} not supported.") return tracer.create_proxy(op_type, target, args, kwargs, proxy_factory_fn=proxy_factory_fn) else: return function(*args, **kwargs) return wrapper class HFProxyableClassMeta(type): """ Metaclass that creates a class with its main methods wrapped to be proxyable. 
""" def __new__( cls, name: str, bases: Tuple[Type, ...], attrs: Dict[str, Any], proxy_factory_fn: Optional[Callable[[Node], Proxy]] = None, ): cls = super().__new__(cls, name, bases, attrs) for attr_name in dir(cls): attr = getattr(cls, attr_name, None) if attr is None: continue if attr_name == "__init__": op_type = "call_function" elif attr_name.startswith("__"): op_type = None elif inspect.ismethod(attr): op_type = "call_function" elif inspect.isfunction(attr): op_type = "call_method" else: op_type = None if op_type is not None: setattr(cls, attr_name, create_wrapper(attr, op_type, proxy_factory_fn=proxy_factory_fn)) return cls def gen_constructor_wrapper(target: Callable) -> Tuple[Callable, Callable]: """ Wraps `target` to be proxyable. Used for tensor creators like `torch.ones`, `torch.arange` and so on. """ wrapper = create_wrapper(target, "call_function") return wrapper, target def _proxies_to_metas(v): """Returns the underlying metadata for HFProxies, and behaves like the identity for the others.""" if isinstance(v, MetaDeviceAttribute): return "meta" if isinstance(v, torch.fx.Proxy): if not (isinstance(v, HFProxy) and hasattr(v, "_metadata")): raise RuntimeError(f"No metadata was found for {v}") return v._metadata return v def create_cache_proxy_factory_fn(orig_cache_cls: Type[Cache]) -> Callable[[Node], HFCacheProxy]: def cache_proxy_factory_fn(n: Node) -> HFCacheProxy: global _CURRENT_TRACER if not isinstance(_CURRENT_TRACER, HFTracer): raise RuntimeError("Cannot create HFCacheProxy because there is no HFTracer currently tracing.") cache_proxy = HFCacheProxy(n, _CURRENT_TRACER) cache_proxy.install_orig_cache_cls(orig_cache_cls) return cache_proxy return cache_proxy_factory_fn # Proxyable equivalent of the cache classes defined in `transformers.cache_utils`. ProxyableCache = HFProxyableClassMeta( "ProxyableCache", (Cache,), {}, proxy_factory_fn=create_cache_proxy_factory_fn(Cache) ) ProxyableDynamicCache = HFProxyableClassMeta( "ProxyableDynamicCache", (DynamicCache,), {}, proxy_factory_fn=create_cache_proxy_factory_fn(DynamicCache), ) ProxyableSinkCache = HFProxyableClassMeta( "ProxyableSinkCache", (SinkCache,), {}, proxy_factory_fn=create_cache_proxy_factory_fn(SinkCache), ) ProxyableStaticCache = HFProxyableClassMeta( "ProxyableStaticCache", (StaticCache,), {}, proxy_factory_fn=create_cache_proxy_factory_fn(StaticCache), ) def _generate_random_int(low: int = 10, high: int = 20, forbidden_values: Optional[List[int]] = None): if forbidden_values is None: forbidden_values = [] value = random.randint(low, high) while value in forbidden_values: value = random.randint(low, high) return value class HFTracer(Tracer): """ Tracer that is able to symbolically trace models from the library. To do that, it uses the HFProxy instead of the regular PyTorch torch.fx.Proxy. 
""" # Feature flag for proxying accesses to buffer values proxy_buffer_attributes: bool = True allow_insert_stateless_mods: bool = True _TORCH_METHODS_TO_PATCH = [ "arange", "zeros", "ones", "full", "full_like", "eye", "empty", "tensor", "clamp", "finfo", "tril", ] _CLASSES_TO_PATCH = { Cache: ProxyableCache, DynamicCache: ProxyableDynamicCache, SinkCache: ProxyableSinkCache, StaticCache: ProxyableStaticCache, } supported_archs = (PreTrainedModel,) if not is_peft_available() else (PreTrainedModel, PeftModel) def __init__(self, autowrap_modules=(math,), autowrap_functions=()): super().__init__(autowrap_modules=autowrap_modules, autowrap_functions=autowrap_functions) if not is_torch_fx_available(): raise ImportError( f"Found an incompatible version of torch. Found version {get_torch_version()}, but only version " f"{TORCH_FX_REQUIRED_VERSION} is supported." ) def _generate_dummy_input( self, model: "PreTrainedModel", input_name: str, shape: List[int], input_names: List[str] ) -> Dict[str, torch.Tensor]: """Generates dummy input for model inference recording.""" # Retrieving the model class, either from the "class_for_deserialization" attribute if the model was restored # from pickle, or from the "__class__" attribute in the general case. model_class_name = getattr(model, "class_for_deserialization", model.__class__).__name__ device = model.device inputs_dict = {} # when tracing a model with KV cache, we simply need to unsure that the KV cache length is larger than one to # rightfully pass certain controlflows (Example: https://github.com/huggingface/transformers/blob/5c8d941d66734811d2ef6f57f15b44f7fb7a98c4/src/transformers/modeling_attn_mask_utils.py#L162). # After tracing, the model can then still be used with arbitrary lengths different than the one used during tracing. kv_cache_length = 5 if input_name in ["labels", "start_positions", "end_positions"]: batch_size = shape[0] if model_class_name in [ *get_values(MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING_NAMES), *get_values(MODEL_FOR_MULTIPLE_CHOICE_MAPPING_NAMES), *get_values(MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES), *get_values(MODEL_FOR_BACKBONE_MAPPING_NAMES), *get_values(MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES), ]: inputs_dict["labels"] = torch.zeros(batch_size, dtype=torch.long, device=device) elif model_class_name in [ *get_values(MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES), *get_values(MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES), "XLNetForQuestionAnswering", ]: inputs_dict["start_positions"] = torch.zeros(batch_size, dtype=torch.long, device=device) inputs_dict["end_positions"] = torch.zeros(batch_size, dtype=torch.long, device=device) elif model_class_name in get_values(MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES): if not hasattr(model.config, "problem_type") or model.config.problem_type is None: raise ValueError( "Could not retrieve the problem type for the sequence classification task, please set " 'model.config.problem_type to one of the following values: "regression", ' '"single_label_classification", or "multi_label_classification".' 
) if model.config.problem_type == "regression": labels_shape = (batch_size, model.config.num_labels) labels_dtype = torch.float32 elif model.config.problem_type == "single_label_classification": labels_shape = (batch_size,) labels_dtype = torch.long elif model.config.problem_type == "multi_label_classification": labels_shape = (batch_size, model.config.num_labels) labels_dtype = torch.float32 else: raise ValueError( 'Expected model.config.problem_type to be either: "regression", "single_label_classification"' f', or "multi_label_classification", but "{model.config.problem_type}" was provided.' ) inputs_dict["labels"] = torch.zeros(*labels_shape, dtype=labels_dtype, device=device) elif model_class_name in [ *get_values(MODEL_FOR_PRETRAINING_MAPPING_NAMES), *get_values(MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES), *get_values(MODEL_FOR_CAUSAL_LM_MAPPING_NAMES), *get_values(MODEL_FOR_MASKED_LM_MAPPING_NAMES), *get_values(MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES), *get_values(MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING_NAMES), "GPT2DoubleHeadsModel", "PeftModelForCausalLM", "PeftModelForSeq2SeqLM", ]: inputs_dict["labels"] = torch.zeros(shape, dtype=torch.long, device=device) elif model_class_name in [*get_values(MODEL_FOR_CTC_MAPPING_NAMES)]: inputs_dict["labels"] = torch.zeros(shape, dtype=torch.float32, device=device) else: raise NotImplementedError( f"Generating the dummy input named {input_name} for {model_class_name} is not supported yet." ) elif "pixel_values" in input_name: batch_size = shape[0] image_size = getattr(model.config, "image_size", None) if image_size is None: if hasattr(model.config, "vision_config"): image_size = model.config.vision_config.image_size elif hasattr(model.config, "encoder"): image_size = model.config.encoder.image_size else: image_size = (_generate_random_int(), _generate_random_int()) # If no num_channels is in the config, use some arbitrary value. 
num_channels = getattr(model.config, "num_channels", 3) if not isinstance(image_size, collections.abc.Iterable): image_size = (image_size, image_size) height, width = image_size inputs_dict[input_name] = torch.zeros( batch_size, num_channels, height, width, dtype=torch.float32, device=device ) elif "bbox" in input_name: inputs_dict[input_name] = torch.zeros(*shape, 4, dtype=torch.float, device=device) elif "input_features" in input_name: inputs_dict[input_name] = torch.zeros( *shape, model.config.input_feat_per_channel, dtype=torch.float, device=device ) elif "inputs_embeds" in input_name: batch_size = shape[0] if ( getattr(model.config, "embedding_size", None) is not None and model.config.model_type != "megatron-bert" ): embedding_size = model.config.embedding_size else: embedding_size = model.config.hidden_size if len(shape) == 3: # (batch_size, num_choices, sequence_length, embedding_size) embedding_shape = (batch_size, shape[1], shape[2], embedding_size) else: # (batch_size, sequence_length, embedding_size) embedding_shape = (batch_size, shape[1], embedding_size) inputs_dict[input_name] = torch.zeros(embedding_shape, dtype=torch.float, device=device) elif "visual_feats" in input_name: inputs_dict[input_name] = torch.zeros( shape + [ model.config.visual_feat_dim, ], dtype=torch.float, device=device, ) elif "visual_pos" in input_name: inputs_dict[input_name] = torch.zeros( shape + [ model.config.visual_pos_dim, ], dtype=torch.float, device=device, ) elif "inputs" in input_name: inputs_dict[input_name] = torch.zeros(*shape, dtype=torch.float, device=device) elif "input_values" in input_name: batch_size, _ = shape # Generating big sequence length for audio inputs. seq_length = _generate_random_int(low=10000, high=20000) inputs_dict[input_name] = torch.zeros(batch_size, seq_length, dtype=torch.float, device=device) elif "mask" in input_name: if "past_key_values" in input_names: mask_shape = [shape[0], shape[1] + kv_cache_length] else: mask_shape = shape inputs_dict[input_name] = torch.zeros(mask_shape, dtype=torch.long, device=device) elif "ids" in input_name: inputs_dict[input_name] = torch.zeros(shape, dtype=torch.long, device=device) elif "past_key_values" in input_name: if model.config.model_type not in _FX_SUPPORTED_MODELS_WITH_KV_CACHE: raise NotImplementedError( f"Symbolic trace with past_key_values input is not supported yet for the model {model.config.model_type}. Please open an issue or a PR in Transformers repository if you would like to see the support added." ) num_heads = model.config.num_attention_heads head_dim = model.config.hidden_size // model.config.num_attention_heads cache_shape = (shape[0], num_heads, kv_cache_length, head_dim) pkv = tuple( ( torch.rand(cache_shape, dtype=torch.float, device=device), torch.rand(cache_shape, dtype=torch.float, device=device), ) for i in range(model.config.num_hidden_layers) ) inputs_dict[input_name] = pkv else: shape_with_hidden_size = shape + [model.config.hidden_size] inputs_dict[input_name] = torch.zeros(shape_with_hidden_size, dtype=torch.float, device=device) return inputs_dict def create_proxy(self, kind, target, args, kwargs, name=None, type_expr=None, proxy_factory_fn=None): rv = super().create_proxy(kind, target, args, kwargs, name, type_expr, proxy_factory_fn) if kind == "placeholder" and target in self.meta_args: rv.install_metadata(self.meta_args[target]) return rv if target in self.orig_fns: # NOTE: tensor constructors in PyTorch define the `device` argument as # *kwargs-only*. That is why this works. 
If you add methods to # _TORCH_METHODS_TO_PATCH that do not define `device` as kwarg-only, # this will break and you will likely see issues where we cannot infer # the size of the output. if "device" in kwargs: kwargs["device"] = "meta" try: args_metas = torch.fx.node.map_aggregate(args, _proxies_to_metas) kwargs_metas = torch.fx.node.map_aggregate(kwargs, _proxies_to_metas) should_install_metadata = True self._disable_module_getattr = True self._disable_call_module = True if kind == "call_function": meta_target = _MANUAL_META_OVERRIDES.get(target, target) meta_out = meta_target(*args_metas, **kwargs_metas) if isinstance(meta_out, torch.Tensor): meta_out = meta_out.to(device="meta") elif kind == "call_method": method = getattr(args_metas[0].__class__, target) meta_target = _MANUAL_META_OVERRIDES.get(method, method) meta_out = meta_target(*args_metas, **kwargs_metas) elif kind == "call_module": if not hasattr(self, "orig_forward"): raise AttributeError(f"{self} does not have an attribute called orig_forward") mod = self.root.get_submodule(target) mod_type = type(mod) if mod_type in _MANUAL_META_OVERRIDES: meta_out = _MANUAL_META_OVERRIDES[mod_type](mod, *args_metas, **kwargs_metas) else: meta_out = self.orig_forward(*args_metas, **kwargs_metas) elif kind == "get_attr": attr_itr = self.root atoms = target.split(".") for atom in atoms: attr_itr = getattr(attr_itr, atom) if isinstance(attr_itr, torch.Tensor): meta_out = attr_itr.to(device="meta") else: meta_out = attr_itr else: should_install_metadata = False if should_install_metadata: if not isinstance(rv, Proxy): raise ValueError("Don't support composite output yet") rv.install_metadata(meta_out) except Exception as e: if _IS_IN_DEBUG_MODE: warnings.warn(f"Could not compute metadata for {kind} target {target}: {e}") self._disable_module_getattr = False self._disable_call_module = False return rv # Replaced by .getattr from PyTorch 1.13 def _module_getattr(self, attr, attr_val, parameter_proxy_cache): if getattr(self, "_disable_module_getattr", False): return attr_val else: def maybe_get_proxy_for_attr(attr_val, collection_to_search, parameter_proxy_cache): for n, p in collection_to_search: if attr_val is p: if n not in parameter_proxy_cache: kwargs = {} if "proxy_factory_fn" in inspect.signature(self.create_proxy).parameters: kwargs["proxy_factory_fn"] = ( None if not self.param_shapes_constant else lambda node: ParameterProxy(self, node, n, attr_val) ) val_proxy = self.create_proxy("get_attr", n, (), {}, **kwargs) # type: ignore[arg-type] parameter_proxy_cache[n] = val_proxy return parameter_proxy_cache[n] return None if isinstance(attr_val, torch.nn.Parameter): maybe_parameter_proxy = maybe_get_proxy_for_attr( attr_val, self.root.named_parameters(), parameter_proxy_cache ) if maybe_parameter_proxy is not None: return maybe_parameter_proxy if self.proxy_buffer_attributes and isinstance(attr_val, torch.Tensor): maybe_buffer_proxy = maybe_get_proxy_for_attr( attr_val, self.root.named_buffers(), parameter_proxy_cache ) if maybe_buffer_proxy is not None: return maybe_buffer_proxy return attr_val # Needed for PyTorch 1.13+ def getattr(self, attr: str, attr_val: Any, parameter_proxy_cache: Dict[str, Any]): return self._module_getattr(attr, attr_val, parameter_proxy_cache) def call_module(self, m, forward, args, kwargs): if getattr(self, "_disable_call_module", False): return forward(*args, **kwargs) self.orig_forward = forward return super().call_module(m, forward, args, kwargs) def proxy(self, node): return HFProxy(node, self) 
@contextlib.contextmanager def patch_for_tracing(self, root: Union[torch.nn.Module, Callable[..., Any]]): # Patching torch functions self.patched_torch_methods = { target: gen_constructor_wrapper(getattr(torch, target)) for target in self._TORCH_METHODS_TO_PATCH } self.orig_fns = set() for name, (wrapper, orig) in self.patched_torch_methods.items(): setattr(torch, name, wrapper) self.orig_fns.add(orig) # Patching classes patched = [] module_of_model = inspect.getmodule(root) for name, mod in sys.modules.items(): if module_of_model is not None and mod is not module_of_model: continue if not name.startswith("transformers"): continue for orig_cls, patched_cls in self._CLASSES_TO_PATCH.items(): for attr_name, attr in mod.__dict__.items(): if attr is orig_cls: patched.append((mod, attr_name, orig_cls)) setattr(mod, attr_name, patched_cls) yield # Restoring patched functions and classes. for name, (_, orig) in self.patched_torch_methods.items(): setattr(torch, name, orig) self.patched_torch_methods = {} self.orig_fns = set() for mod, attr_name, orig_cls in patched: setattr(mod, attr_name, orig_cls) def trace( self, root: Union[torch.nn.Module, Callable[..., Any]], concrete_args: Optional[Dict[str, Any]] = None, dummy_inputs: Optional[Dict[str, Any]] = None, complete_concrete_args_with_inputs_not_in_dummy_inputs: bool = True, ) -> Graph: """ Traces `root` and returns the corresponding FX `torch.fx.Graph` representation. `root` can either be a `torch.nn.Module` instance or a Python callable. Note that after this call, `self.root` may be different from the `root` passed in here. For example, when a free function is passed to `trace()`, we will create a `torch.nn.Module` instance to use as the root and add embedded constants to. Args: root (`torch.nn.Module` or `Callable`): Either a `torch.nn.Module`` or a function to be traced through. If root is not a [`~transformers.PreTrainedModel`], then `dummy_inputs` must be passed, otherwise tracing will fail. concrete_args (`Dict[str, Any], *optional*): Concrete arguments that should not be treated as Proxies dummy_inputs (`Dict[str, Any]`, *optional*): The dummy inputs needed to handle data-dependent control-flow if `root` is not a [`~transformers.PreTrainedModel`]. It can also be used when `root` is a [`~transformers.PreTrainedModel`] to specify custom dummy inputs for a subset or all the model inputs. complete_concrete_args_with_inputs_not_in_dummy_inputs (`bool`, *optional*, defaults to `True`): If `True`, and `dummy_inputs` is specified, every argument that `root` can take that is not in `dummy_inputs` and not in `concrete_args` will be added to `concrete_args`, otherwise does nothing. Returns: `torch.fx.Graph`: A FX `torch.fx.Graph` representing the semantics of the passed-in `root`. """ sig = inspect.signature(root.forward if isinstance(root, torch.nn.Module) else root) if concrete_args is None: concrete_args = {} if dummy_inputs is not None and complete_concrete_args_with_inputs_not_in_dummy_inputs: for param in sig.parameters.values(): if param.name in dummy_inputs: continue if param.default is inspect.Parameter.empty: raise ValueError(f"You need to specify a default value for the parameter {param.name}.") concrete_args.update( { p.name: p.default for p in sig.parameters.values() if (p.name not in dummy_inputs and p.name not in concrete_args) } ) input_names = sig.parameters.keys() - concrete_args.keys() # Creating a random input shape to generate dummy inputs. 
batch_size = _generate_random_int() sequence_length = _generate_random_int() shape = [batch_size, sequence_length] if root.__class__.__name__ in get_values(MODEL_FOR_MULTIPLE_CHOICE_MAPPING_NAMES): num_choices = _generate_random_int(low=2, high=5) shape.insert(1, num_choices) inputs = dict(dummy_inputs) if dummy_inputs is not None else {} for input_name in input_names: if input_name in inputs: continue # We enforce that root must either be a PreTrainedModel or deserialized from a serialized traced model to # be able to use HFTracer._generate_dummy_input. if isinstance(root, self.supported_archs) or type(root).__qualname__.startswith( ("_deserialize_graph_module", "_CodeOnlyModule") ): inputs.update(self._generate_dummy_input(root, input_name, shape, input_names=input_names)) else: raise RuntimeError( f"Could not generate input named {input_name} for because root is not a" " transformers.PreTrainedModel." ) def to_meta(value): if isinstance(value, torch.Tensor): return value.to("meta") return value concrete_metas = pytree.tree_map(to_meta, inputs) for param in sig.parameters.values(): if param.kind == inspect.Parameter.VAR_KEYWORD and param.name not in input_names: concrete_metas[f"**{param.name}"] = {} self.meta_args = concrete_metas global _CURRENT_TRACER _CURRENT_TRACER = self with self.patch_for_tracing(root): try: self.graph = super().trace(root, concrete_args=concrete_args) finally: _CURRENT_TRACER = None # This is necessary because concrete args are added as input to the traced module since # https://github.com/pytorch/pytorch/pull/55888. for node in self.graph.nodes: if node.op == "placeholder": # Removing default values for inputs as the forward pass will fail with them. if node.target in input_names: node.args = () # Without this, torch.jit.script fails because the inputs type is Optional[torch.Tensor]. # It cannot infer on the attributes and methods the input should have, and fails. node.type = torch.Tensor # It is a concrete arg so it is not used and should be removed. else: to_visit = [node] to_delete = collections.OrderedDict() while to_visit: n = to_visit.pop(0) to_delete[n] = None to_visit += list(n.users.keys()) for user in reversed(to_delete.keys()): self.graph.erase_node(user) # TODO: solves GraphModule creation. # Without this, return type annotation "Tuple" is causing code execution failure. if node.op == "output": node.type = None return self.graph def _stateless_mod_instanciation_depends_on_proxies(self, mod: nn.Module) -> bool: """ Whether the module was instantiated with Proxies. If that is the case, such module cannot be a leaf module because its attributes are input-dependent. """ return any(isinstance(attr, Proxy) for attr in mod.__dict__.values()) def _insert_module_as_submodule(self, mod: nn.Module) -> str: """ Helper method which tries to insert a module that was not declared as submodule. """ # If one of the module attributes is a Proxy, it means that its instantiation is input-dependent. # It is not possible to insert such modules, those should be traced through. if self._stateless_mod_instanciation_depends_on_proxies(mod): return "" idx = 0 mod_name = mod.__class__.__name__.lower() path = f"{mod_name}_{idx}" already_inserted = False while hasattr(self.root, path): if getattr(self.root, path) is mod: already_inserted = True break path = f"{mod_name}_{idx}" idx += 1 # No need to add multiple instances of the same module. 
if not already_inserted: self.root.add_module(path, mod) return path def path_of_module(self, mod: nn.Module) -> str: """ Helper method to find the qualified name of `mod` in the Module hierarchy of `root`. For example, if `root` has a submodule named `foo`, which has a submodule named `bar`, passing `bar` into this function will return the string "foo.bar". Args: mod (str): The `Module` to retrieve the qualified name for. """ try: return super().path_of_module(mod) except NameError as e: if self.allow_insert_stateless_mods and len(list(mod.parameters())) == 0 and len(list(mod.buffers())) == 0: path = self._insert_module_as_submodule(mod) return path raise e def is_leaf_module(self, m: torch.nn.Module, module_qualified_name: str) -> bool: return (not self._stateless_mod_instanciation_depends_on_proxies(m)) and super().is_leaf_module( m, module_qualified_name ) @compatibility(is_backward_compatible=True) def keys(self, obj: "Proxy") -> Any: """Called when a proxy object is has the keys() method called. This is what happens when ** is called on a proxy. This should return an iterator if ** is supposed to work in your custom tracer. """ attribute = HFAttribute(obj, "keys")() if obj.node.target.startswith("**"): return attribute._metadata return attribute def get_concrete_args(model: nn.Module, input_names: List[str]): sig = inspect.signature(model.forward) if not (set(input_names) <= set(sig.parameters.keys())): formatted_input_names = input_names[0] if len(input_names) == 1 else ", ".join(input_names) formatted_allowed_input_names = ", ".join(sig.parameters.keys()) raise ValueError( f"The model does not have input(s) named: {formatted_input_names}, expected a subset of the following:" f" {formatted_allowed_input_names}" ) return {p.name: p.default for p in sig.parameters.values() if p.name not in input_names} def is_model_supported(model: "PreTrainedModel"): return model.__class__.__name__ in _SUPPORTED_MODELS def check_if_model_is_supported(model: "PreTrainedModel"): if not is_model_supported(model): supported_model_names = ", ".join(_SUPPORTED_MODELS) raise NotImplementedError( f"Model {model.__class__.__name__} is not supported yet, supported models: {supported_model_names}" ) def symbolic_trace( model: "PreTrainedModel", input_names: Optional[List[str]] = None, disable_check: bool = False, tracer_cls: Type[HFTracer] = HFTracer, ) -> GraphModule: """ Performs symbolic tracing on the model. Args: model ([`PretrainedModel`]): The model to trace. input_names (`List[str]`, *optional*): The names of the inputs of the traced model. If unset, model.dummy_inputs.keys() are used instead. disable_check (`bool`, *optional*, defaults to `False`): If `True`, no check is done before trying to trace the model, this is mostly usesul for debugging purposes. tracer_cls (`Type[HFTracer]`, *optional*, defaults to `HFTracer`): The tracer class to use for instantiating the tracer. If unset, `HFTracer` is used instead. Returns: `torch.fx.GraphModule`: A GraphModule constructed by recording operations seen while tracing the model. 
Example: ```python from transformers.utils.fx import symbolic_trace traced_model = symbolic_trace(model, input_names=["input_ids", "attention_mask", "token_type_ids"]) ``` """ if input_names is None: input_names = model.dummy_inputs.keys() input_names = list(input_names) concrete_args = get_concrete_args(model, input_names) if not disable_check: check_if_model_is_supported(model) if "past_key_values" in input_names and not getattr(model.config, "use_cache", False): logger.warning( "`past_key_values` were specified as input names, but model.config.use_cache = False, this might lead to " "unexpected behavior." ) if "past_key_values" not in input_names and getattr(model.config, "use_cache", False): logger.warning( "`past_key_values` were not specified as input names, but model.config.use_cache = True. Setting " "model.config.use_cache = False." ) model.config.use_cache = False # Tracing. tracer = tracer_cls() traced_graph = tracer.trace(model, concrete_args=concrete_args) traced = torch.fx.GraphModule(model, traced_graph) traced.config = model.config # The model class must be stored as an attribute to allow model deserialization, which uses trace, and thus # _generate_dummy_input, where the model class is needed. traced.class_for_deserialization = model.__class__ traced.device = model.device return traced
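Building on the docstring example just above, here is a minimal, self-contained sketch of tracing a supported architecture and then executing the resulting `GraphModule`. The checkpoint name and the inspection calls are illustrative assumptions rather than part of `fx.py`; any architecture listed in `_REGULAR_SUPPORTED_MODEL_NAMES_AND_TASKS` should behave the same way.

```python
import torch

from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers.utils.fx import symbolic_trace

checkpoint = "google-bert/bert-base-uncased"  # example checkpoint; BERT is in the supported list
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# The listed inputs become graph placeholders; other forward arguments are folded in as concrete values.
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask", "token_type_ids"])

# The result is a regular torch.fx.GraphModule: it can be inspected...
print(traced.graph)

# ...and executed with real tensors, using the same keyword arguments that were traced.
batch = tokenizer("Symbolic tracing with HFTracer", return_tensors="pt")
with torch.no_grad():
    outputs = traced(**batch)
print(type(outputs))
```

The traced module keeps the `config`, `device`, and `class_for_deserialization` attributes set at the end of `symbolic_trace`, which is what allows a serialized trace to be rebuilt later.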
transformers/src/transformers/utils/fx.py/0
{ "file_path": "transformers/src/transformers/utils/fx.py", "repo_id": "transformers", "token_count": 25868 }
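As a side note on the mechanism behind the `_MANUAL_META_OVERRIDES` table in the file above: shape propagation relies on PyTorch's `meta` device, whose tensors carry shape and dtype but no storage. A tiny standalone illustration, assuming only a recent PyTorch install:

```python
import torch

# Meta tensors allocate no data, so "running" an op only computes the output's shape and dtype.
a = torch.empty(4, 8, device="meta")
b = torch.empty(8, 16, device="meta")
c = torch.matmul(a, b)
print(c.shape, c.dtype, c.device)  # torch.Size([4, 16]) torch.float32 meta
```

The manual overrides exist for the operations where this default behavior is unavailable or too limited during proxy tracing.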
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# How to add a new example script in 🤗 Transformers

This folder provides a template for adding a new example script implementing a training or inference task with the models in the 🤗 Transformers library. To use it, you will need to install cookiecutter:

```bash
pip install cookiecutter
```

or refer to the installation page of the [cookiecutter documentation](https://cookiecutter.readthedocs.io/).

You can then run the following command inside the `examples` folder of the transformers repo:

```bash
cookiecutter ../templates/adding_a_new_example_script/
```

and answer the questions asked, which will generate a new folder where you will find a pre-filled template for your example, following the best practices we recommend for example scripts.

Adjust the way the data is preprocessed, the model is loaded, or the Trainer is instantiated. Then, when you're happy, add a `README.md` in the folder (or complete the existing one if you added a script to an existing folder) telling a user how to run your script.

Make a PR to the 🤗 Transformers repo. Don't forget to tweet about your new example with a carbon screenshot of how to run it and tag @huggingface!
transformers/templates/adding_a_new_example_script/README.md/0
{ "file_path": "transformers/templates/adding_a_new_example_script/README.md", "repo_id": "transformers", "token_count": 444 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse from typing import Any, Callable from transformers import is_torch_available from transformers.testing_utils import ( TestCasePlus, execute_subprocess_async, get_torch_dist_unique_port, require_torch_multi_gpu, ) if is_torch_available(): import functools import torch import torch.distributed from torch.distributed._composable.fsdp import fully_shard, register_fsdp_forward_method from torch.distributed.device_mesh import init_device_mesh from torch.distributed.fsdp import FullyShardedDataParallel from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.models.gpt2.modeling_gpt2 import GPT2Block data = 4 * [ "Hello world!", "The quick brown fox jumps over the lazy dog.", ] def manage_process_group(func: Callable[..., Any]) -> Callable[..., Any]: """Manage the creation and destruction of the distributed process group for the wrapped function.""" def wrapped(*args: Any, **kwargs: Any) -> Any: torch.distributed.init_process_group(world_size=torch.cuda.device_count()) try: return func(*args, **kwargs) finally: torch.distributed.destroy_process_group() return wrapped @manage_process_group def fsdp_generate(): torch.cuda.set_device(device := torch.device(rank := torch.distributed.get_rank())) model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gpt2").to(device) fsdp_model = FullyShardedDataParallel( model, auto_wrap_policy=functools.partial(transformer_auto_wrap_policy, transformer_layer_cls={GPT2Block}), limit_all_gathers=True, use_orig_params=True, ) tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gpt2") batch = tokenizer(data[rank], return_tensors="pt", return_attention_mask=True).to(device) with FullyShardedDataParallel.summon_full_params(fsdp_model): _ = fsdp_model.module.generate( input_ids=batch["input_ids"], attention_mask=batch["attention_mask"], max_length=30, ) @manage_process_group def fsdp2_generate(): torch.cuda.set_device(device := torch.device(rank := torch.distributed.get_rank())) model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gpt2").to(device) mesh = init_device_mesh("cuda", (torch.distributed.get_world_size(),)) for submodule in model.modules(): if isinstance(submodule, GPT2Block): fully_shard(submodule, mesh=mesh) fully_shard(model, mesh=mesh) register_fsdp_forward_method(model, "generate") tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gpt2") batch = tokenizer(data[rank], return_tensors="pt", return_attention_mask=True).to(device) _ = model.generate( input_ids=batch["input_ids"], attention_mask=batch["attention_mask"], max_length=30, ) class TestFSDPGeneration(TestCasePlus): @require_torch_multi_gpu def test_fsdp_generate(self): distributed_args = f"""--nproc_per_node={torch.cuda.device_count()} --master_port={get_torch_dist_unique_port()} 
{self.test_file_dir}/test_fsdp.py """.split() args = "--fsdp".split() cmd = ["torchrun"] + distributed_args + args execute_subprocess_async(cmd, env=self.get_env()) # successful return here == success - any errors would have caused an error in the sub-call @require_torch_multi_gpu def test_fsdp2_generate(self): distributed_args = f"""--nproc_per_node={torch.cuda.device_count()} --master_port={get_torch_dist_unique_port()} {self.test_file_dir}/test_fsdp.py """.split() args = "--fsdp2".split() cmd = ["torchrun"] + distributed_args + args execute_subprocess_async(cmd, env=self.get_env()) # successful return here == success - any errors would have caused an error in the sub-call if __name__ == "__main__": # The script below is meant to be run under torch.distributed, on a machine with multiple GPUs: # # PYTHONPATH="src" python -m torch.distributed.run --nproc_per_node 2 --output_dir output_dir ./tests/generation/test_fsdp.py --fsdp class CLIArgs(argparse.Namespace): fsdp: bool fsdp2: bool parser = argparse.ArgumentParser() group = parser.add_mutually_exclusive_group() group.add_argument("--fsdp", action="store_true") group.add_argument("--fsdp2", action="store_true") args = parser.parse_args(namespace=CLIArgs()) if args.fsdp: fsdp_generate() elif args.fsdp2: fsdp2_generate() else: raise ValueError("Missing test selection")
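For reference, the test case above launches this worker script through `execute_subprocess_async`. A hedged sketch of starting the same worker by hand, assuming a machine with two GPUs, the repository root as the working directory, and an arbitrary free port:

```python
import subprocess

cmd = [
    "torchrun",
    "--nproc_per_node=2",
    "--master_port=29555",            # any free port; the test picks one via get_torch_dist_unique_port()
    "tests/generation/test_fsdp.py",
    "--fsdp2",                        # or "--fsdp" for the FullyShardedDataParallel (FSDP1) path
]
subprocess.run(cmd, check=True)
```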
transformers/tests/generation/test_fsdp.py/0
{ "file_path": "transformers/tests/generation/test_fsdp.py", "repo_id": "transformers", "token_count": 2267 }
# coding=utf-8 # Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import annotations import copy import tempfile import unittest from transformers import CONFIG_MAPPING, AutoConfig, BertConfig, GPT2Config, T5Config, TapasConfig, is_tf_available from transformers.testing_utils import ( DUMMY_UNKNOWN_IDENTIFIER, SMALL_MODEL_IDENTIFIER, RequestCounter, require_tensorflow_probability, require_tf, slow, ) from ..bert.test_modeling_bert import BertModelTester if is_tf_available(): from transformers import ( TFAutoModel, TFAutoModelForCausalLM, TFAutoModelForMaskedLM, TFAutoModelForPreTraining, TFAutoModelForQuestionAnswering, TFAutoModelForSeq2SeqLM, TFAutoModelForSequenceClassification, TFAutoModelForTableQuestionAnswering, TFAutoModelForTokenClassification, TFAutoModelWithLMHead, TFBertForMaskedLM, TFBertForPreTraining, TFBertForQuestionAnswering, TFBertForSequenceClassification, TFBertModel, TFFunnelBaseModel, TFFunnelModel, TFGPT2LMHeadModel, TFRobertaForMaskedLM, TFT5ForConditionalGeneration, TFTapasForQuestionAnswering, ) from transformers.models.auto.modeling_tf_auto import ( TF_MODEL_FOR_CAUSAL_LM_MAPPING, TF_MODEL_FOR_MASKED_LM_MAPPING, TF_MODEL_FOR_PRETRAINING_MAPPING, TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING, TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING, TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, TF_MODEL_MAPPING, ) class NewModelConfig(BertConfig): model_type = "new-model" if is_tf_available(): class TFNewModel(TFBertModel): config_class = NewModelConfig @require_tf class TFAutoModelTest(unittest.TestCase): @slow def test_model_from_pretrained(self): model_name = "google-bert/bert-base-cased" config = AutoConfig.from_pretrained(model_name) self.assertIsNotNone(config) self.assertIsInstance(config, BertConfig) model = TFAutoModel.from_pretrained(model_name) self.assertIsNotNone(model) self.assertIsInstance(model, TFBertModel) @slow def test_model_for_pretraining_from_pretrained(self): model_name = "google-bert/bert-base-cased" config = AutoConfig.from_pretrained(model_name) self.assertIsNotNone(config) self.assertIsInstance(config, BertConfig) model = TFAutoModelForPreTraining.from_pretrained(model_name) self.assertIsNotNone(model) self.assertIsInstance(model, TFBertForPreTraining) @slow def test_model_for_causal_lm(self): model_name = "openai-community/gpt2" config = AutoConfig.from_pretrained(model_name) self.assertIsNotNone(config) self.assertIsInstance(config, GPT2Config) model = TFAutoModelForCausalLM.from_pretrained(model_name) model, loading_info = TFAutoModelForCausalLM.from_pretrained(model_name, output_loading_info=True) self.assertIsNotNone(model) self.assertIsInstance(model, TFGPT2LMHeadModel) @slow def test_lmhead_model_from_pretrained(self): model_name = "openai-community/gpt2" config = AutoConfig.from_pretrained(model_name) self.assertIsNotNone(config) self.assertIsInstance(config, GPT2Config) model = TFAutoModelWithLMHead.from_pretrained(model_name) self.assertIsNotNone(model) self.assertIsInstance(model, 
TFGPT2LMHeadModel) @slow def test_model_for_masked_lm(self): model_name = "google-bert/bert-base-uncased" config = AutoConfig.from_pretrained(model_name) self.assertIsNotNone(config) self.assertIsInstance(config, BertConfig) model = TFAutoModelForMaskedLM.from_pretrained(model_name) model, loading_info = TFAutoModelForMaskedLM.from_pretrained(model_name, output_loading_info=True) self.assertIsNotNone(model) self.assertIsInstance(model, TFBertForMaskedLM) @slow def test_model_for_encoder_decoder_lm(self): model_name = "google-t5/t5-base" config = AutoConfig.from_pretrained(model_name) self.assertIsNotNone(config) self.assertIsInstance(config, T5Config) model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name) model, loading_info = TFAutoModelForSeq2SeqLM.from_pretrained(model_name, output_loading_info=True) self.assertIsNotNone(model) self.assertIsInstance(model, TFT5ForConditionalGeneration) @slow def test_sequence_classification_model_from_pretrained(self): # model_name = 'openai-community/gpt2' for model_name in ["google-bert/bert-base-uncased"]: config = AutoConfig.from_pretrained(model_name) self.assertIsNotNone(config) self.assertIsInstance(config, BertConfig) model = TFAutoModelForSequenceClassification.from_pretrained(model_name) self.assertIsNotNone(model) self.assertIsInstance(model, TFBertForSequenceClassification) @slow def test_question_answering_model_from_pretrained(self): # model_name = 'openai-community/gpt2' for model_name in ["google-bert/bert-base-uncased"]: config = AutoConfig.from_pretrained(model_name) self.assertIsNotNone(config) self.assertIsInstance(config, BertConfig) model = TFAutoModelForQuestionAnswering.from_pretrained(model_name) self.assertIsNotNone(model) self.assertIsInstance(model, TFBertForQuestionAnswering) @slow @require_tensorflow_probability def test_table_question_answering_model_from_pretrained(self): model_name = "google/tapas-base" config = AutoConfig.from_pretrained(model_name) self.assertIsNotNone(config) self.assertIsInstance(config, TapasConfig) model = TFAutoModelForTableQuestionAnswering.from_pretrained(model_name) model, loading_info = TFAutoModelForTableQuestionAnswering.from_pretrained( model_name, output_loading_info=True ) self.assertIsNotNone(model) self.assertIsInstance(model, TFTapasForQuestionAnswering) def test_from_pretrained_identifier(self): model = TFAutoModelWithLMHead.from_pretrained(SMALL_MODEL_IDENTIFIER) self.assertIsInstance(model, TFBertForMaskedLM) self.assertEqual(model.num_parameters(), 14410) self.assertEqual(model.num_parameters(only_trainable=True), 14410) def test_from_identifier_from_model_type(self): model = TFAutoModelWithLMHead.from_pretrained(DUMMY_UNKNOWN_IDENTIFIER) self.assertIsInstance(model, TFRobertaForMaskedLM) self.assertEqual(model.num_parameters(), 14410) self.assertEqual(model.num_parameters(only_trainable=True), 14410) def test_from_pretrained_with_tuple_values(self): # For the auto model mapping, FunnelConfig has two models: FunnelModel and FunnelBaseModel model = TFAutoModel.from_pretrained("sgugger/funnel-random-tiny") self.assertIsInstance(model, TFFunnelModel) config = copy.deepcopy(model.config) config.architectures = ["FunnelBaseModel"] model = TFAutoModel.from_config(config) model.build_in_name_scope() self.assertIsInstance(model, TFFunnelBaseModel) with tempfile.TemporaryDirectory() as tmp_dir: model.save_pretrained(tmp_dir) model = TFAutoModel.from_pretrained(tmp_dir) self.assertIsInstance(model, TFFunnelBaseModel) def test_new_model_registration(self): try: 
AutoConfig.register("new-model", NewModelConfig) auto_classes = [ TFAutoModel, TFAutoModelForCausalLM, TFAutoModelForMaskedLM, TFAutoModelForPreTraining, TFAutoModelForQuestionAnswering, TFAutoModelForSequenceClassification, TFAutoModelForTokenClassification, ] for auto_class in auto_classes: with self.subTest(auto_class.__name__): # Wrong config class will raise an error with self.assertRaises(ValueError): auto_class.register(BertConfig, TFNewModel) auto_class.register(NewModelConfig, TFNewModel) # Trying to register something existing in the Transformers library will raise an error with self.assertRaises(ValueError): auto_class.register(BertConfig, TFBertModel) # Now that the config is registered, it can be used as any other config with the auto-API tiny_config = BertModelTester(self).get_config() config = NewModelConfig(**tiny_config.to_dict()) model = auto_class.from_config(config) model.build_in_name_scope() self.assertIsInstance(model, TFNewModel) with tempfile.TemporaryDirectory() as tmp_dir: model.save_pretrained(tmp_dir) new_model = auto_class.from_pretrained(tmp_dir) self.assertIsInstance(new_model, TFNewModel) finally: if "new-model" in CONFIG_MAPPING._extra_content: del CONFIG_MAPPING._extra_content["new-model"] for mapping in ( TF_MODEL_MAPPING, TF_MODEL_FOR_PRETRAINING_MAPPING, TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING, TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING, TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, TF_MODEL_FOR_CAUSAL_LM_MAPPING, TF_MODEL_FOR_MASKED_LM_MAPPING, ): if NewModelConfig in mapping._extra_content: del mapping._extra_content[NewModelConfig] def test_repo_not_found(self): with self.assertRaisesRegex( EnvironmentError, "bert-base is not a local folder and is not a valid model identifier" ): _ = TFAutoModel.from_pretrained("bert-base") def test_revision_not_found(self): with self.assertRaisesRegex( EnvironmentError, r"aaaaaa is not a valid git identifier \(branch name, tag name or commit id\)" ): _ = TFAutoModel.from_pretrained(DUMMY_UNKNOWN_IDENTIFIER, revision="aaaaaa") def test_model_file_not_found(self): with self.assertRaisesRegex( EnvironmentError, "hf-internal-testing/config-no-model does not appear to have a file named pytorch_model.bin", ): _ = TFAutoModel.from_pretrained("hf-internal-testing/config-no-model") def test_model_from_pt_suggestion(self): with self.assertRaisesRegex(EnvironmentError, "Use `from_pt=True` to load this model"): _ = TFAutoModel.from_pretrained("hf-internal-testing/tiny-bert-pt-only") def test_cached_model_has_minimum_calls_to_head(self): # Make sure we have cached the model. _ = TFAutoModel.from_pretrained("hf-internal-testing/tiny-random-bert") with RequestCounter() as counter: _ = TFAutoModel.from_pretrained("hf-internal-testing/tiny-random-bert") self.assertEqual(counter["GET"], 0) self.assertEqual(counter["HEAD"], 1) self.assertEqual(counter.total_calls, 1) # With a sharded checkpoint _ = TFAutoModel.from_pretrained("ArthurZ/tiny-random-bert-sharded") with RequestCounter() as counter: _ = TFAutoModel.from_pretrained("ArthurZ/tiny-random-bert-sharded") self.assertEqual(counter["GET"], 0) self.assertEqual(counter["HEAD"], 1) self.assertEqual(counter.total_calls, 1)
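The registration flow exercised by `test_new_model_registration` can be distilled into the sketch below. It mirrors the toy `NewModelConfig`/`TFNewModel` pair defined at the top of this test file, assumes TensorFlow is installed, and uses arbitrary tiny configuration values.

```python
from transformers import AutoConfig, BertConfig, TFAutoModel, TFBertModel


class NewModelConfig(BertConfig):
    model_type = "new-model"


class TFNewModel(TFBertModel):
    config_class = NewModelConfig


# Register the new architecture with the auto API (the config first, then the model class).
AutoConfig.register("new-model", NewModelConfig)
TFAutoModel.register(NewModelConfig, TFNewModel)

# From now on the auto classes resolve "new-model" like any built-in model type.
config = NewModelConfig(hidden_size=32, num_hidden_layers=2, num_attention_heads=2, intermediate_size=64)
model = TFAutoModel.from_config(config)
model.build_in_name_scope()
assert isinstance(model, TFNewModel)
```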
transformers/tests/models/auto/test_modeling_tf_auto.py/0
{ "file_path": "transformers/tests/models/auto/test_modeling_tf_auto.py", "repo_id": "transformers", "token_count": 5501 }
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import shutil import tempfile import unittest import pytest from transformers.testing_utils import require_torch, require_vision from transformers.utils import is_vision_available from ...test_processing_common import ProcessorTesterMixin if is_vision_available(): from transformers import AutoProcessor, BertTokenizer, BlipImageProcessor, BlipProcessor, PreTrainedTokenizerFast @require_vision class BlipProcessorTest(ProcessorTesterMixin, unittest.TestCase): processor_class = BlipProcessor def setUp(self): self.tmpdirname = tempfile.mkdtemp() image_processor = BlipImageProcessor() tokenizer = BertTokenizer.from_pretrained("hf-internal-testing/tiny-random-BertModel") processor = BlipProcessor(image_processor, tokenizer) processor.save_pretrained(self.tmpdirname) def get_tokenizer(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).tokenizer def get_image_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).image_processor def tearDown(self): shutil.rmtree(self.tmpdirname) def test_save_load_pretrained_additional_features(self): processor = BlipProcessor(tokenizer=self.get_tokenizer(), image_processor=self.get_image_processor()) processor.save_pretrained(self.tmpdirname) tokenizer_add_kwargs = self.get_tokenizer(bos_token="(BOS)", eos_token="(EOS)") image_processor_add_kwargs = self.get_image_processor(do_normalize=False, padding_value=1.0) processor = BlipProcessor.from_pretrained( self.tmpdirname, bos_token="(BOS)", eos_token="(EOS)", do_normalize=False, padding_value=1.0 ) self.assertEqual(processor.tokenizer.get_vocab(), tokenizer_add_kwargs.get_vocab()) self.assertIsInstance(processor.tokenizer, PreTrainedTokenizerFast) self.assertEqual(processor.image_processor.to_json_string(), image_processor_add_kwargs.to_json_string()) self.assertIsInstance(processor.image_processor, BlipImageProcessor) def test_image_processor(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = BlipProcessor(tokenizer=tokenizer, image_processor=image_processor) image_input = self.prepare_image_inputs() input_feat_extract = image_processor(image_input, return_tensors="np") input_processor = processor(images=image_input, return_tensors="np") for key in input_feat_extract.keys(): self.assertAlmostEqual(input_feat_extract[key].sum(), input_processor[key].sum(), delta=1e-2) def test_tokenizer(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = BlipProcessor(tokenizer=tokenizer, image_processor=image_processor) input_str = "lower newer" encoded_processor = processor(text=input_str) encoded_tok = tokenizer(input_str, return_token_type_ids=False) for key in encoded_tok.keys(): self.assertListEqual(encoded_tok[key], encoded_processor[key]) def test_processor(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = 
BlipProcessor(tokenizer=tokenizer, image_processor=image_processor) input_str = "lower newer" image_input = self.prepare_image_inputs() inputs = processor(text=input_str, images=image_input) self.assertListEqual(list(inputs.keys()), ["pixel_values", "input_ids", "attention_mask"]) # test if it raises when no input is passed with pytest.raises(ValueError): processor() def test_tokenizer_decode(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = BlipProcessor(tokenizer=tokenizer, image_processor=image_processor) predicted_ids = [[1, 4, 5, 8, 1, 0, 8], [3, 4, 3, 1, 1, 8, 9]] decoded_processor = processor.batch_decode(predicted_ids) decoded_tok = tokenizer.batch_decode(predicted_ids) self.assertListEqual(decoded_tok, decoded_processor) def test_model_input_names(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = BlipProcessor(tokenizer=tokenizer, image_processor=image_processor) input_str = "lower newer" image_input = self.prepare_image_inputs() inputs = processor(text=input_str, images=image_input) # For now the processor supports only ['pixel_values', 'input_ids', 'attention_mask'] self.assertListEqual(list(inputs.keys()), ["pixel_values", "input_ids", "attention_mask"]) @require_torch @require_vision def test_unstructured_kwargs_batched(self): if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in {self.processor_class}") image_processor = self.get_component("image_processor") tokenizer = self.get_component("tokenizer") processor = self.processor_class(tokenizer=tokenizer, image_processor=image_processor) self.skip_processor_without_typed_kwargs(processor) input_str = ["lower newer", "upper older longer string"] image_input = self.prepare_image_inputs(batch_size=2) inputs = processor( text=input_str, images=image_input, return_tensors="pt", crop_size={"height": 214, "width": 214}, size={"height": 214, "width": 214}, padding="longest", max_length=76, ) self.assertEqual(inputs["pixel_values"].shape[2], 214) self.assertEqual(len(inputs["input_ids"][0]), 24)
transformers/tests/models/blip/test_processor_blip.py/0
{ "file_path": "transformers/tests/models/blip/test_processor_blip.py", "repo_id": "transformers", "token_count": 2449 }
# coding=utf-8 # Copyright 2023 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import itertools import random import unittest import numpy as np from datasets import load_dataset from transformers import ClapFeatureExtractor from transformers.testing_utils import require_torch, require_torchaudio from transformers.trainer_utils import set_seed from transformers.utils.import_utils import is_torch_available from ...test_sequence_feature_extraction_common import SequenceFeatureExtractionTestMixin if is_torch_available(): import torch global_rng = random.Random() # Copied from tests.models.whisper.test_feature_extraction_whisper.floats_list def floats_list(shape, scale=1.0, rng=None, name=None): """Creates a random float32 tensor""" if rng is None: rng = global_rng values = [] for batch_idx in range(shape[0]): values.append([]) for _ in range(shape[1]): values[-1].append(rng.random() * scale) return values @require_torch @require_torchaudio # Copied from tests.models.whisper.test_feature_extraction_whisper.WhisperFeatureExtractionTester with Whisper->Clap class ClapFeatureExtractionTester: def __init__( self, parent, batch_size=7, min_seq_length=400, max_seq_length=2000, feature_size=10, hop_length=160, chunk_length=8, padding_value=0.0, sampling_rate=4_000, return_attention_mask=False, do_normalize=True, ): self.parent = parent self.batch_size = batch_size self.min_seq_length = min_seq_length self.max_seq_length = max_seq_length self.seq_length_diff = (self.max_seq_length - self.min_seq_length) // (self.batch_size - 1) self.padding_value = padding_value self.sampling_rate = sampling_rate self.return_attention_mask = return_attention_mask self.do_normalize = do_normalize self.feature_size = feature_size self.chunk_length = chunk_length self.hop_length = hop_length def prepare_feat_extract_dict(self): return { "feature_size": self.feature_size, "hop_length": self.hop_length, "chunk_length": self.chunk_length, "padding_value": self.padding_value, "sampling_rate": self.sampling_rate, "return_attention_mask": self.return_attention_mask, "do_normalize": self.do_normalize, } def prepare_inputs_for_common(self, equal_length=False, numpify=False): def _flatten(list_of_lists): return list(itertools.chain(*list_of_lists)) if equal_length: speech_inputs = [floats_list((self.max_seq_length, self.feature_size)) for _ in range(self.batch_size)] else: # make sure that inputs increase in size speech_inputs = [ floats_list((x, self.feature_size)) for x in range(self.min_seq_length, self.max_seq_length, self.seq_length_diff) ] if numpify: speech_inputs = [np.asarray(x) for x in speech_inputs] return speech_inputs @require_torch @require_torchaudio class ClapFeatureExtractionTest(SequenceFeatureExtractionTestMixin, unittest.TestCase): feature_extraction_class = ClapFeatureExtractor # Copied from tests.models.whisper.test_feature_extraction_whisper.WhisperFeatureExtractionTest.setUp with Whisper->Clap def setUp(self): self.feat_extract_tester = ClapFeatureExtractionTester(self) def test_call(self): # Tests that 
all call wrap to encode_plus and batch_encode_plus feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict()) # create three inputs of length 800, 1000, and 1200 speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)] np_speech_inputs = [np.asarray(speech_input) for speech_input in speech_inputs] # Test feature size input_features = feature_extractor(np_speech_inputs, padding="max_length", return_tensors="np").input_features self.assertTrue(input_features.ndim == 4) # Test not batched input encoded_sequences_1 = feature_extractor(speech_inputs[0], return_tensors="np").input_features encoded_sequences_2 = feature_extractor(np_speech_inputs[0], return_tensors="np").input_features self.assertTrue(np.allclose(encoded_sequences_1, encoded_sequences_2, atol=1e-3)) # Test batched encoded_sequences_1 = feature_extractor(speech_inputs, return_tensors="np").input_features encoded_sequences_2 = feature_extractor(np_speech_inputs, return_tensors="np").input_features for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2): self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3)) # Test 2-D numpy arrays are batched. speech_inputs = [floats_list((1, x))[0] for x in (800, 800, 800)] np_speech_inputs = np.asarray(speech_inputs) encoded_sequences_1 = feature_extractor(speech_inputs, return_tensors="np").input_features encoded_sequences_2 = feature_extractor(np_speech_inputs, return_tensors="np").input_features for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2): self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3)) # Copied from tests.models.whisper.test_feature_extraction_whisper.WhisperFeatureExtractionTest.test_double_precision_pad def test_double_precision_pad(self): import torch feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict()) np_speech_inputs = np.random.rand(100, 32).astype(np.float64) py_speech_inputs = np_speech_inputs.tolist() for inputs in [py_speech_inputs, np_speech_inputs]: np_processed = feature_extractor.pad([{"input_features": inputs}], return_tensors="np") self.assertTrue(np_processed.input_features.dtype == np.float32) pt_processed = feature_extractor.pad([{"input_features": inputs}], return_tensors="pt") self.assertTrue(pt_processed.input_features.dtype == torch.float32) # Copied from tests.models.whisper.test_feature_extraction_whisper.WhisperFeatureExtractionTest._load_datasamples def _load_datasamples(self, num_samples): ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") # automatic decoding with librispeech speech_samples = ds.sort("id").select(range(num_samples))[:num_samples]["audio"] return [x["array"] for x in speech_samples] def test_integration_fusion_short_input(self): # fmt: off EXPECTED_INPUT_FEATURES = torch.tensor( [ [ # "repeat" [ -20.1049, -19.9764, -20.0731, -19.5055, -27.5018, -22.5761, -26.6071, -29.0091, -26.4659, -26.4236, -28.8808, -31.9190, -32.4848, -34.1186, -34.0340, -32.8803, -30.9895, -37.6238, -38.0347, -40.6263, -36.3496, -42.2533, -32.9132, -27.7068, -29.3704, -30.3208, -22.5972, -27.1494, -30.1975, -31.1005, -29.9372, -27.1917, -25.9806, -30.3489, -33.2380, -31.9062, -36.5498, -32.8721, -30.5629, -27.4674, -22.2232, -22.5653, -16.3868, -17.2713, -25.9738, -30.6256, -34.3766, -31.1292, -27.8950, -27.0588, -25.6206, -23.0712, -26.6050, -28.0112, -32.6847, -34.3396, -34.9738, -35.8463, -39.2324, -37.1188, -33.3705, -28.9230, -28.9112, -28.6578 ], [ 
-36.7233, -30.0587, -24.8431, -18.4611, -16.8149, -23.9319, -32.8580, -34.2264, -27.4332, -26.8027, -29.2721, -33.9033, -39.3403, -35.3232, -26.8076, -28.6460, -35.2780, -36.0738, -35.4996, -37.7631, -39.5056, -34.7112, -36.8741, -34.1066, -32.9474, -33.6604, -27.9937, -30.9594, -26.2928, -32.0485, -29.2151, -29.2917, -32.7308, -29.6542, -31.1454, -37.0088, -32.3388, -37.3086, -31.1024, -27.2889, -19.6788, -21.1488, -19.5144, -14.8889, -21.2006, -24.7488, -27.7940, -31.1058, -27.5068, -21.5737, -22.3780, -21.5151, -26.3086, -30.9223, -33.5043, -32.0307, -37.3806, -41.6188, -45.6650, -40.5131, -32.5023, -26.7385, -26.3709, -26.7761 ] ], [ # "repeatpad" [ -25.7496, -24.9339, -24.1357, -23.1271, -23.7853, -26.1264, -29.1456, -33.2060, -37.8179, -42.4833, -41.9386, -41.2164, -42.3566, -44.2575, -40.0217, -36.6794, -36.6974, -38.7819, -42.0880, -45.5560, -39.9368, -36.3219, -35.5981, -36.6434, -35.1851, -33.0684, -30.0437, -30.2010, -34.3476, -42.1373, -38.8039, -37.3355, -40.4576, -41.0485, -40.6377, -38.2275, -42.7481, -34.6084, -34.7048, -29.5149, -26.3935, -26.8952, -34.1336, -26.2904, -28.2571, -32.5642, -36.7240, -35.5334, -38.2451, -34.8177, -28.9754, -25.1096, -27.9768, -32.3184, -37.0269, -40.5136, -40.8061, -36.4948, -40.3767, -38.9671, -38.3552, -34.1250, -30.9035, -31.6112 ], [ -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100. ] ], [ # None, same as "repeatpad" [ -25.7496, -24.9339, -24.1357, -23.1271, -23.7853, -26.1264, -29.1456, -33.2060, -37.8179, -42.4833, -41.9386, -41.2164, -42.3566, -44.2575, -40.0217, -36.6794, -36.6974, -38.7819, -42.0880, -45.5560, -39.9368, -36.3219, -35.5981, -36.6434, -35.1851, -33.0684, -30.0437, -30.2010, -34.3476, -42.1373, -38.8039, -37.3355, -40.4576, -41.0485, -40.6377, -38.2275, -42.7481, -34.6084, -34.7048, -29.5149, -26.3935, -26.8952, -34.1336, -26.2904, -28.2571, -32.5642, -36.7240, -35.5334, -38.2451, -34.8177, -28.9754, -25.1096, -27.9768, -32.3184, -37.0269, -40.5136, -40.8061, -36.4948, -40.3767, -38.9671, -38.3552, -34.1250, -30.9035, -31.6112 ], [ -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100. 
] ], [ # "pad" [ -58.5260, -58.1155, -57.8623, -57.5059, -57.9178, -58.7171, -59.2343, -59.9833, -60.9764, -62.0722, -63.5723, -65.7111, -67.5153, -68.7088, -69.8325, -70.2987, -70.1548, -70.6233, -71.5702, -72.5159, -72.3821, -70.1817, -67.0315, -64.1387, -62.2202, -61.0717, -60.4951, -61.6005, -63.7358, -67.1400, -67.6185, -65.5635, -64.3593, -63.7138, -63.6209, -66.4950, -72.6284, -63.3961, -56.8334, -52.7319, -50.6310, -51.3728, -53.5619, -51.9190, -50.9708, -52.8684, -55.8073, -58.8227, -60.6991, -57.0547, -52.7611, -51.4388, -54.4892, -60.8950, -66.1024, -72.4352, -67.8538, -65.1463, -68.7588, -72.3080, -68.4864, -60.4688, -57.1516, -60.9460 ], [ -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100. ] ] ] ) # fmt: on MEL_BIN = [[976, 977], [976, 977], [976, 977], [196, 197]] input_speech = self._load_datasamples(1) feature_extractor = ClapFeatureExtractor() for padding, EXPECTED_VALUES, idx_in_mel in zip( ["repeat", "repeatpad", None, "pad"], EXPECTED_INPUT_FEATURES, MEL_BIN ): input_features = feature_extractor(input_speech, return_tensors="pt", padding=padding).input_features self.assertEqual(input_features.shape, (1, 4, 1001, 64)) torch.testing.assert_close(input_features[0, 0, idx_in_mel[0]], EXPECTED_VALUES[0], rtol=1e-4, atol=1e-4) torch.testing.assert_close(input_features[0, 0, idx_in_mel[1]], EXPECTED_VALUES[1], rtol=1e-4, atol=1e-4) self.assertTrue(torch.all(input_features[0, 0] == input_features[0, 1])) self.assertTrue(torch.all(input_features[0, 0] == input_features[0, 2])) self.assertTrue(torch.all(input_features[0, 0] == input_features[0, 3])) def test_integration_rand_trunc_short_input(self): # fmt: off EXPECTED_INPUT_FEATURES = torch.tensor( [ [ # "repeat" [ -35.0483, -35.7865, -38.2884, -40.0220, -42.5349, -44.9489, -43.2228, -44.6499, -47.6253, -49.6983, -50.2127, -52.5483, -52.2223, -51.9157, -49.4082, -51.2024, -57.0476, -56.2803, -58.1618, -60.7474, -55.0389, -60.9514, -59.3080, -50.4419, -47.8172, -48.7570, -55.2552, -44.5036, -44.1148, -50.8218, -51.0968, -52.9408, -51.1037, -48.9789, -47.5897, -52.0915, -55.4216, -54.1529, -58.0149, -58.0866, -52.7798, -52.6154, -45.9144, -46.2008, -40.7603, -41.1703, -50.2250, -55.4112, -59.4818, -54.5795, -53.5552, -51.3668, -49.8358, -50.3186, -54.0452, -57.6030, -61.1589, -61.6415, -63.2756, -66.5890, -62.8543, -58.0665, -56.7203, -56.7632 ], [ -47.1320, -37.9961, -34.0076, -36.7109, -47.9057, -48.4924, -43.8371, -44.9728, -48.1689, -52.9141, -57.6077, -52.8520, -44.8502, -45.6764, -51.8389, -56.4284, -54.6972, -53.4889, -55.6077, -58.7149, -60.3760, -54.0136, -56.0730, -55.9870, -54.4017, -53.1094, -53.5640, -50.3064, -49.9520, -49.3239, -48.1668, -53.4852, -50.4561, -50.8688, -55.1970, -51.5538, -53.0260, -59.6933, -54.8183, -59.5895, -55.9589, -50.3761, -44.1282, -44.1463, -43.8540, -39.1168, -45.3893, -49.5542, -53.1505, -55.2870, -50.3921, -46.8511, -47.4444, -49.5633, -56.0034, -59.0815, -59.0018, -63.7589, -69.5745, -71.5789, -64.0498, -56.0558, -54.3475, -54.7004 ] ], [ # "repeatpad" [ -40.3184, -39.7186, -39.8807, -41.6508, -45.3613, -50.4785, -57.0297, -60.4944, -59.1642, -58.9495, -60.4661, -62.5300, -58.4759, -55.2865, 
-54.8973, -56.0780, -57.5482, -59.6557, -64.3309, -65.0330, -59.4941, -56.8552, -55.0519, -55.9817, -56.9739, -55.2827, -54.5312, -51.4141, -50.4289, -51.9131, -57.5821, -63.9979, -59.9180, -58.9489, -62.3247, -62.6975, -63.7948, -60.5250, -64.6107, -58.7905, -57.0229, -54.3084, -49.8445, -50.4459, -57.0172, -50.6425, -52.5992, -57.4207, -61.6358, -60.6540, -63.1968, -57.4360, -52.3263, -51.7695, -57.1946, -62.9610, -66.7359, -67.0335, -63.7440, -68.1775, -66.3798, -62.8650, -59.8972, -59.3139 ], [ -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100. ] ], [ # None, same as "repeatpad" [ -40.3184, -39.7186, -39.8807, -41.6508, -45.3613, -50.4785, -57.0297, -60.4944, -59.1642, -58.9495, -60.4661, -62.5300, -58.4759, -55.2865, -54.8973, -56.0780, -57.5482, -59.6557, -64.3309, -65.0330, -59.4941, -56.8552, -55.0519, -55.9817, -56.9739, -55.2827, -54.5312, -51.4141, -50.4289, -51.9131, -57.5821, -63.9979, -59.9180, -58.9489, -62.3247, -62.6975, -63.7948, -60.5250, -64.6107, -58.7905, -57.0229, -54.3084, -49.8445, -50.4459, -57.0172, -50.6425, -52.5992, -57.4207, -61.6358, -60.6540, -63.1968, -57.4360, -52.3263, -51.7695, -57.1946, -62.9610, -66.7359, -67.0335, -63.7440, -68.1775, -66.3798, -62.8650, -59.8972, -59.3139 ], [ -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100. ] ], [ # "pad" [ -73.3190, -73.6349, -74.1451, -74.8539, -75.7476, -76.5438, -78.5540, -80.1339, -81.8911, -83.7560, -85.5387, -86.7466, -88.2072, -88.6090, -88.8243, -89.0784, -89.4364, -89.8179, -91.3146, -92.2833, -91.7221, -90.9440, -88.1315, -86.2425, -84.2281, -82.4893, -81.5993, -81.1328, -81.5759, -83.1068, -85.6525, -88.9520, -88.9187, -87.2703, -86.3052, -85.7188, -85.8802, -87.9996, -95.0464, -88.0133, -80.8561, -76.5597, -74.2816, -74.8109, -77.3615, -76.0719, -75.3426, -77.6428, -80.9663, -84.5275, -84.9907, -80.5205, -77.2851, -78.6259, -84.7740, -91.4535, -98.1894, -94.3872, -92.3735, -97.6807, -98.1501, -91.4344, -85.2842, -88.4338 ], [ -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100., -100. 
] ] ] ) # fmt: on MEL_BIN = [[976, 977], [976, 977], [976, 977], [196, 197]] input_speech = self._load_datasamples(1) feature_extractor = ClapFeatureExtractor() for padding, EXPECTED_VALUES, idx_in_mel in zip( ["repeat", "repeatpad", None, "pad"], EXPECTED_INPUT_FEATURES, MEL_BIN ): input_features = feature_extractor( input_speech, return_tensors="pt", truncation="rand_trunc", padding=padding ).input_features self.assertEqual(input_features.shape, (1, 1, 1001, 64)) torch.testing.assert_close(input_features[0, 0, idx_in_mel[0]], EXPECTED_VALUES[0], rtol=1e-4, atol=1e-4) torch.testing.assert_close(input_features[0, 0, idx_in_mel[1]], EXPECTED_VALUES[1], rtol=1e-4, atol=1e-4) def test_integration_fusion_long_input(self): # fmt: off EXPECTED_INPUT_FEATURES = torch.tensor( [ [ -11.1830, -10.1894, -8.6051, -4.8578, -1.3268, -8.4606, -14.5453, -9.2017, 0.5781, 16.2129, 14.8289, 3.6326, -3.8794, -6.5544, -2.4408, 1.9531, 6.0967, 1.7590, -7.6730, -6.1571, 2.0052, 16.6694, 20.6447, 21.2145, 13.4972, 15.9043, 16.8987, 4.1766, 11.9428, 21.2372, 12.3016, 4.8604, 6.7241, 1.8543, 4.9235, 5.3188, -0.9897, -1.2416, -6.5864, 2.9529, 2.9274, 6.4753, 10.2300, 11.2127, 3.4042, -1.0055, -6.0475, -6.7524, -3.9801, -1.4434, 0.4740, -0.1584, -4.5457, -8.5746, -8.8428, -13.1475, -9.6079, -8.5798, -4.1143, -3.7966, -7.1651, -6.1517, -8.0258, -12.1486 ], [ -10.2017, -7.9924, -5.9517, -3.9372, -1.9735, -4.3130, 16.1647, 25.0592, 23.5532, 14.4974, -7.0778, -10.2262, 6.4782, 20.3454, 19.4269, 1.7976, -16.5070, 4.9380, 12.3390, 6.9285, -13.6325, -8.5298, 1.0839, -5.9629, -8.4812, 3.1331, -2.0963, -16.6046, -14.0070, -17.5707, -13.2080, -17.2168, -17.7770, -12.1111, -18.6184, -17.1897, -13.9801, -12.0426, -23.5400, -25.6823, -23.5813, -18.7847, -20.5473, -25.6458, -19.7585, -27.6007, -28.9276, -24.8948, -25.4458, -22.2807, -19.6613, -19.2669, -15.7813, -19.6821, -24.3439, -22.2598, -28.2631, -30.1017, -32.7646, -33.6525, -27.5639, -22.0548, -27.8054, -29.6947 ], [ -9.2078, -7.2963, -6.2095, -7.9959, -2.9280, -11.1843, -6.1490, 5.0733, 19.2957, 21.4578, 14.6803, -3.3153, -6.3334, -2.3542, 6.9509, 15.2965, 14.6620, 5.2075, -0.0873, 1.1919, 18.1986, 20.8470, 10.8035, 2.2516, 7.6905, 7.7427, -1.2543, -5.0018, 0.9809, -2.1584, -5.4580, -5.4760, -11.8888, -9.0605, -8.4638, -9.9897, -0.0540, -5.1629, 0.0483, -4.1504, -4.8140, -7.8236, -9.0622, -10.1742, -8.9597, -11.5380, -16.5603, -17.1858, -17.5032, -20.9326, -23.9543, -25.2602, -25.3429, -27.4536, -26.8859, -22.7852, -25.8288, -24.8399, -23.8893, -24.2096, -26.5415, -23.7281, -25.6851, -22.3629 ], [ 1.3448, 2.9883, 4.0366, -0.8019, -10.4191, -10.0883, -4.3812, 0.8136, 2.1579, 0.0832, 1.0949, -0.9759, -5.5319, -4.6009, -6.5452, -14.9155, -20.1584, -9.3611, -2.4271, 1.4031, 4.9910, 8.6916, 8.6785, 10.1973, 9.9029, 5.3840, 7.5336, 5.2803, 2.8144, -0.3138, 2.2216, 5.7328, 7.5574, 7.7402, 1.0681, 3.1049, 7.0742, 6.5588, 7.3712, 5.7881, 8.6874, 8.7725, 2.8133, -4.5809, -6.1317, -5.1719, -5.0192, -9.0977, -10.9391, -6.0769, 1.6016, -0.8965, -7.2252, -7.8632, -11.4468, -11.7446, -10.7447, -7.0601, -2.7748, -4.1798, -2.8433, -3.1352, 0.8097, 6.4212 ] ] ) # fmt: on MEL_BIN = 963 input_speech = torch.cat([torch.tensor(x) for x in self._load_datasamples(5)]) feature_extractor = ClapFeatureExtractor() for padding, EXPECTED_VALUES, block_idx in zip( ["repeat", "repeatpad", None, "pad"], EXPECTED_INPUT_FEATURES, [1, 2, 0, 3] ): set_seed(987654321) input_features = feature_extractor(input_speech, return_tensors="pt", padding=padding).input_features self.assertEqual(input_features.shape, (1, 
4, 1001, 64)) torch.testing.assert_close(input_features[0, block_idx, MEL_BIN], EXPECTED_VALUES, rtol=1e-3, atol=1e-3) def test_integration_rand_trunc_long_input(self): # fmt: off EXPECTED_INPUT_FEATURES = torch.tensor( [ [ -35.4022, -32.7555, -31.2004, -32.7764, -42.5770, -41.6339, -43.1630, -44.5080, -44.3029, -48.9628, -39.5022, -39.2105, -43.1350, -43.2195, -48.4894, -52.2344, -57.6891, -52.2228, -45.5155, -44.2893, -43.4697, -46.6702, -43.7490, -40.4819, -42.7275, -46.3434, -46.8412, -41.2003, -43.1681, -46.2948, -46.1925, -47.8333, -45.6812, -44.9182, -41.7786, -43.3809, -44.3199, -42.8814, -45.4771, -46.7114, -46.9746, -42.7090, -41.6057, -38.3965, -40.1980, -41.0263, -34.1256, -28.3289, -29.0201, -30.4453, -29.5561, -30.1734, -25.9406, -19.0897, -15.8452, -20.1351, -23.6515, -23.1194, -17.1845, -19.4399, -23.6527, -22.8768, -20.7279, -22.7864 ], [ -35.7719, -27.2566, -23.6964, -27.5521, 0.2510, 7.4391, 1.3917, -13.3417, -28.1758, -17.0856, -5.7723, -0.8000, -7.8832, -15.5548, -30.5935, -24.7571, -13.7009, -10.3432, -21.2464, -24.8118, -19.4080, -14.9779, -11.7991, -18.4485, -20.1982, -17.3652, -20.6328, -28.2967, -25.7819, -21.8962, -28.5083, -29.5719, -30.2120, -35.7033, -31.8218, -34.0408, -37.7744, -33.9653, -31.3009, -30.9063, -28.6153, -32.2202, -28.5456, -28.8579, -32.5170, -37.9152, -43.0052, -46.4849, -44.0786, -39.1933, -33.2757, -31.6313, -42.6386, -52.3679, -53.5785, -55.6444, -47.0050, -47.6459, -56.6361, -60.6781, -61.5244, -55.8272, -60.4832, -58.1897 ], [ -38.2686, -36.6285, -32.5835, -35.1693, -37.7938, -37.4035, -35.3132, -35.6083, -36.3609, -40.9472, -36.7846, -36.1544, -38.9076, -39.3618, -35.4953, -34.2809, -39.9466, -39.7433, -34.8347, -37.5674, -41.5689, -38.9161, -34.3947, -30.2924, -30.4841, -34.5831, -28.9261, -24.8849, -31.2324, -27.1622, -27.2107, -25.9385, -30.1691, -30.9223, -23.9495, -25.6047, -26.7119, -28.5523, -27.7481, -32.8427, -35.4650, -31.0399, -31.2073, -30.5163, -22.9819, -20.8892, -19.2510, -24.7905, -28.9426, -28.1998, -26.7386, -25.0140, -27.9223, -32.9913, -33.1864, -34.9742, -38.5995, -39.6990, -29.3203, -22.4697, -25.6415, -33.5608, -33.0945, -27.1716 ], [ -33.2015, -28.7741, -21.9457, -23.4888, -32.1072, -8.6307, 3.2724, 5.9157, -0.9221, -30.1814, -31.0015, -27.4508, -27.0477, -9.5342, 0.3221, 0.6511, -7.1596, -25.9707, -32.8924, -32.2300, -13.8974, -0.4895, 0.9168, -10.7663, -27.1176, -35.0829, -11.6859, -4.8855, -11.8898, -26.6167, -5.6192, -3.8443, -19.7947, -14.4101, -8.6236, -21.2458, -21.0801, -17.9136, -24.4663, -18.6333, -24.8085, -15.5854, -15.4344, -11.5046, -22.3625, -27.3387, -32.4353, -30.9670, -31.3789, -35.4044, -34.4591, -25.2433, -28.0773, -33.8736, -33.0224, -33.3155, -38.5302, -39.2741, -36.6395, -34.7729, -32.4483, -42.4001, -49.2857, -39.1682 ] ] ) # fmt: on MEL_BIN = 963 SEEDS = [987654321, 1234, 666, 5555] input_speech = torch.cat([torch.tensor(x) for x in self._load_datasamples(5)]) feature_extractor = ClapFeatureExtractor() for padding, EXPECTED_VALUES, seed in zip( ["repeat", "repeatpad", None, "pad"], EXPECTED_INPUT_FEATURES, SEEDS ): set_seed(seed) input_features = feature_extractor( input_speech, return_tensors="pt", truncation="rand_trunc", padding=padding ).input_features self.assertEqual(input_features.shape, (1, 1, 1001, 64)) torch.testing.assert_close(input_features[0, 0, MEL_BIN], EXPECTED_VALUES, rtol=1e-4, atol=1e-4)
transformers/tests/models/clap/test_feature_extraction_clap.py/0
{ "file_path": "transformers/tests/models/clap/test_feature_extraction_clap.py", "repo_id": "transformers", "token_count": 19272 }
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import shutil import tempfile import unittest from transformers import ClvpFeatureExtractor, ClvpProcessor, ClvpTokenizer from transformers.testing_utils import require_torch from .test_feature_extraction_clvp import floats_list @require_torch class ClvpProcessorTest(unittest.TestCase): def setUp(self): self.checkpoint = "susnato/clvp_dev" self.tmpdirname = tempfile.mkdtemp() def tearDown(self): super().tearDown() shutil.rmtree(self.tmpdirname) gc.collect() # Copied from transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.get_tokenizer with Whisper->Clvp def get_tokenizer(self, **kwargs): return ClvpTokenizer.from_pretrained(self.checkpoint, **kwargs) # Copied from transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.get_feature_extractor with Whisper->Clvp def get_feature_extractor(self, **kwargs): return ClvpFeatureExtractor.from_pretrained(self.checkpoint, **kwargs) # Copied from transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.test_save_load_pretrained_default with Whisper->Clvp def test_save_load_pretrained_default(self): tokenizer = self.get_tokenizer() feature_extractor = self.get_feature_extractor() processor = ClvpProcessor(tokenizer=tokenizer, feature_extractor=feature_extractor) processor.save_pretrained(self.tmpdirname) processor = ClvpProcessor.from_pretrained(self.tmpdirname) self.assertEqual(processor.tokenizer.get_vocab(), tokenizer.get_vocab()) self.assertIsInstance(processor.tokenizer, ClvpTokenizer) self.assertEqual(processor.feature_extractor.to_json_string(), feature_extractor.to_json_string()) self.assertIsInstance(processor.feature_extractor, ClvpFeatureExtractor) # Copied from transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.test_feature_extractor with Whisper->Clvp,processor(raw_speech->processor(raw_speech=raw_speech def test_feature_extractor(self): feature_extractor = self.get_feature_extractor() tokenizer = self.get_tokenizer() processor = ClvpProcessor(tokenizer=tokenizer, feature_extractor=feature_extractor) raw_speech = floats_list((3, 1000)) input_feat_extract = feature_extractor(raw_speech, return_tensors="np") input_processor = processor(raw_speech=raw_speech, return_tensors="np") for key in input_feat_extract.keys(): self.assertAlmostEqual(input_feat_extract[key].sum(), input_processor[key].sum(), delta=1e-2) # Copied from transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.test_tokenizer with Whisper->Clvp def test_tokenizer(self): feature_extractor = self.get_feature_extractor() tokenizer = self.get_tokenizer() processor = ClvpProcessor(tokenizer=tokenizer, feature_extractor=feature_extractor) input_str = "This is a test string" encoded_processor = processor(text=input_str) encoded_tok = tokenizer(input_str) for key in encoded_tok.keys(): self.assertListEqual(encoded_tok[key], encoded_processor[key]) # Copied from 
transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.test_tokenizer_decode with Whisper->Clvp def test_tokenizer_decode(self): feature_extractor = self.get_feature_extractor() tokenizer = self.get_tokenizer() processor = ClvpProcessor(tokenizer=tokenizer, feature_extractor=feature_extractor) predicted_ids = [[1, 4, 5, 8, 1, 0, 8], [3, 4, 3, 1, 1, 8, 9]] decoded_processor = processor.batch_decode(predicted_ids) decoded_tok = tokenizer.batch_decode(predicted_ids) self.assertListEqual(decoded_tok, decoded_processor) def test_save_load_pretrained_additional_features(self): processor = ClvpProcessor(tokenizer=self.get_tokenizer(), feature_extractor=self.get_feature_extractor()) processor.save_pretrained(self.tmpdirname) tokenizer_add_kwargs = self.get_tokenizer(pad_token="(PAD)") feature_extractor_add_kwargs = self.get_feature_extractor(sampling_rate=16000) processor = ClvpProcessor.from_pretrained( self.tmpdirname, pad_token="(PAD)", sampling_rate=16000, ) self.assertEqual(processor.tokenizer.get_vocab(), tokenizer_add_kwargs.get_vocab()) self.assertIsInstance(processor.tokenizer, ClvpTokenizer) self.assertEqual(processor.feature_extractor.to_json_string(), feature_extractor_add_kwargs.to_json_string()) self.assertIsInstance(processor.feature_extractor, ClvpFeatureExtractor) def test_model_input_names(self): feature_extractor = self.get_feature_extractor() tokenizer = self.get_tokenizer() processor = ClvpProcessor(tokenizer=tokenizer, feature_extractor=feature_extractor) self.assertListEqual( sorted(processor.model_input_names), sorted(set(feature_extractor.model_input_names + tokenizer.model_input_names)), msg="`processor` and `feature_extractor` model input names do not match", )
transformers/tests/models/clvp/test_processor_clvp.py/0
{ "file_path": "transformers/tests/models/clvp/test_processor_clvp.py", "repo_id": "transformers", "token_count": 2197 }
# coding=utf-8
# Copyright 2022 The OpenBMB Team and The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import unittest

from transformers.models.cpmant.tokenization_cpmant import VOCAB_FILES_NAMES, CpmAntTokenizer
from transformers.testing_utils import require_jieba, tooslow

from ...test_tokenization_common import TokenizerTesterMixin


@require_jieba
class CPMAntTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
    from_pretrained_id = "openbmb/cpm-ant-10b"
    tokenizer_class = CpmAntTokenizer
    test_rust_tokenizer = False

    def setUp(self):
        super().setUp()
        vocab_tokens = [
            "<d>",
            "</d>",
            "<s>",
            "</s>",
            "</_>",
            "<unk>",
            "<pad>",
            "</n>",
            "我",
            "是",
            "C",
            "P",
            "M",
            "A",
            "n",
            "t",
        ]
        self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"])
        with open(self.vocab_file, "w", encoding="utf-8") as vocab_writer:
            vocab_writer.write("".join([x + "\n" for x in vocab_tokens]))

    @tooslow
    def test_pre_tokenization(self):
        tokenizer = CpmAntTokenizer.from_pretrained("openbmb/cpm-ant-10b")
        texts = "今天天气真好!"
        jieba_tokens = ["今天", "天气", "真", "好", "!"]
        tokens = tokenizer.tokenize(texts)
        self.assertListEqual(tokens, jieba_tokens)
        normalized_text = "今天天气真好!"
        input_tokens = [tokenizer.bos_token] + tokens
        input_jieba_tokens = [6, 9802, 14962, 2082, 831, 244]
        self.assertListEqual(tokenizer.convert_tokens_to_ids(input_tokens), input_jieba_tokens)

        reconstructed_text = tokenizer.decode(input_jieba_tokens)
        self.assertEqual(reconstructed_text, normalized_text)
transformers/tests/models/cpmant/test_tokenization_cpmant.py/0
{ "file_path": "transformers/tests/models/cpmant/test_tokenization_cpmant.py", "repo_id": "transformers", "token_count": 1090 }
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Data2VecVision model.""" import inspect import tempfile import unittest import numpy as np from parameterized import parameterized from transformers import Data2VecVisionConfig from transformers.testing_utils import ( require_torch, require_torch_multi_gpu, require_torch_sdpa, require_vision, slow, torch_device, ) from transformers.utils import ( cached_property, is_torch_available, is_torch_bf16_available_on_device, is_torch_fp16_available_on_device, is_vision_available, ) from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, _config_zero_init, floats_tensor, ids_tensor, sdpa_kernel from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from torch import nn from transformers import ( Data2VecVisionForImageClassification, Data2VecVisionForSemanticSegmentation, Data2VecVisionModel, ) from transformers.models.auto.modeling_auto import MODEL_MAPPING_NAMES if is_vision_available(): from PIL import Image from transformers import BeitImageProcessor class Data2VecVisionModelTester: def __init__( self, parent, vocab_size=100, batch_size=13, image_size=30, patch_size=2, num_channels=3, is_training=True, use_labels=True, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, type_sequence_label_size=10, initializer_range=0.02, num_labels=3, scope=None, out_indices=[0, 1, 2, 3], attn_implementation="eager", mask_ratio=0.5, ): self.parent = parent self.vocab_size = 100 self.batch_size = batch_size self.image_size = image_size self.patch_size = patch_size self.num_channels = num_channels self.is_training = is_training self.use_labels = use_labels self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.type_sequence_label_size = type_sequence_label_size self.initializer_range = initializer_range self.scope = scope self.out_indices = out_indices self.num_labels = num_labels # in BeiT, the seq length equals the number of patches + 1 (we add 1 for the [CLS] token) num_patches = (image_size // patch_size) ** 2 self.seq_length = num_patches + 1 self.num_masks = int(mask_ratio * self.seq_length) self.attn_implementation = attn_implementation def prepare_config_and_inputs(self): pixel_values = floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size]) labels = None pixel_labels = None if self.use_labels: labels = ids_tensor([self.batch_size], self.type_sequence_label_size) pixel_labels = ids_tensor([self.batch_size, self.image_size, self.image_size], self.num_labels) config = self.get_config() 
return config, pixel_values, labels, pixel_labels def get_config(self): return Data2VecVisionConfig( vocab_size=self.vocab_size, image_size=self.image_size, patch_size=self.patch_size, num_channels=self.num_channels, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, hidden_act=self.hidden_act, hidden_dropout_prob=self.hidden_dropout_prob, attention_probs_dropout_prob=self.attention_probs_dropout_prob, is_decoder=False, initializer_range=self.initializer_range, out_indices=self.out_indices, attn_implementation=self.attn_implementation, ) def create_and_check_model(self, config, pixel_values, labels, pixel_labels): model = Data2VecVisionModel(config=config) model.to(torch_device) model.eval() result = model(pixel_values) # expected sequence length = num_patches + 1 (we add 1 for the [CLS] token) num_patches = (self.image_size // self.patch_size) ** 2 self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, num_patches + 1, self.hidden_size)) def create_and_check_for_image_classification(self, config, pixel_values, labels, pixel_labels): config.num_labels = self.type_sequence_label_size model = Data2VecVisionForImageClassification(config) model.to(torch_device) model.eval() result = model(pixel_values, labels=labels) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.type_sequence_label_size)) def create_and_check_for_image_segmentation(self, config, pixel_values, labels, pixel_labels): config.num_labels = self.num_labels model = Data2VecVisionForSemanticSegmentation(config) model.to(torch_device) model.eval() result = model(pixel_values) self.parent.assertEqual( result.logits.shape, (self.batch_size, self.num_labels, self.image_size * 2, self.image_size * 2) ) result = model(pixel_values, labels=pixel_labels) self.parent.assertEqual( result.logits.shape, (self.batch_size, self.num_labels, self.image_size * 2, self.image_size * 2) ) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values, labels, pixel_labels = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_torch class Data2VecVisionModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): """ Here we also overwrite some of the tests of test_modeling_common.py, as Data2VecVision does not use input_ids, inputs_embeds, attention_mask and seq_length. 
""" all_model_classes = ( (Data2VecVisionModel, Data2VecVisionForImageClassification, Data2VecVisionForSemanticSegmentation) if is_torch_available() else () ) pipeline_model_mapping = ( { "image-feature-extraction": Data2VecVisionModel, "image-classification": Data2VecVisionForImageClassification, "image-segmentation": Data2VecVisionForSemanticSegmentation, } if is_torch_available() else {} ) test_pruning = False test_resize_embeddings = False test_head_masking = False def setUp(self): self.model_tester = Data2VecVisionModelTester(self) self.config_tester = ConfigTester( self, config_class=Data2VecVisionConfig, has_text_modality=False, hidden_size=37 ) def test_config(self): self.config_tester.run_common_tests() @unittest.skip(reason="Data2VecVision does not use inputs_embeds") def test_inputs_embeds(self): pass @require_torch_multi_gpu @unittest.skip( reason="Data2VecVision has some layers using `add_module` which doesn't work well with `nn.DataParallel`" ) def test_multi_gpu_data_parallel_forward(self): pass def test_model_get_set_embeddings(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) self.assertIsInstance(model.get_input_embeddings(), (nn.Module)) x = model.get_output_embeddings() self.assertTrue(x is None or isinstance(x, nn.Linear)) def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_for_image_segmentation(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_image_segmentation(*config_and_inputs) def test_training(self): if not self.model_tester.is_training: self.skipTest(reason="model_tester.is_training is set to False") config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True for model_class in self.all_model_classes: if model_class.__name__ in MODEL_MAPPING_NAMES.values(): continue model = model_class(config) model.to(torch_device) model.train() inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) loss = model(**inputs).loss loss.backward() def test_training_gradient_checkpointing(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() if not self.model_tester.is_training: self.skipTest(reason="model_tester.is_training is set to False") config.use_cache = False config.return_dict = True for model_class in self.all_model_classes: if model_class.__name__ in MODEL_MAPPING_NAMES.values() or not model_class.supports_gradient_checkpointing: continue # TODO: remove the following 3 lines once we have a MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING # this can then be incorporated into _prepare_for_class in test_modeling_common.py elif model_class.__name__ == "Data2VecVisionForSemanticSegmentation": batch_size, num_channels, height, width = inputs_dict["pixel_values"].shape inputs_dict["labels"] = torch.zeros( [self.model_tester.batch_size, height, width], device=torch_device ).long() model = model_class(config) model.gradient_checkpointing_enable() model.to(torch_device) model.train() inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) loss = model(**inputs).loss loss.backward() def test_initialization(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() configs_no_init = _config_zero_init(config) for model_class in self.all_model_classes: model = 
model_class(config=configs_no_init)
            for name, param in model.named_parameters():
                # we skip lambda parameters as these require special initial values
                # determined by config.layer_scale_init_value
                if "lambda" in name:
                    continue
                if param.requires_grad:
                    self.assertIn(
                        ((param.data.mean() * 1e9).round() / 1e9).item(),
                        [0.0, 1.0],
                        msg=f"Parameter {name} of model {model_class} seems not properly initialized",
                    )

    def check_pt_tf_outputs(self, tf_outputs, pt_outputs, model_class, tol=2e-4, name="outputs", attributes=None):
        # We override with a slightly higher tol value, as semseg models tend to diverge a bit more
        super().check_pt_tf_outputs(tf_outputs, pt_outputs, model_class, tol, name, attributes)

    def test_for_image_classification(self):
        config_and_inputs = self.model_tester.prepare_config_and_inputs()
        self.model_tester.create_and_check_for_image_classification(*config_and_inputs)

    @slow
    def test_model_from_pretrained(self):
        model_name = "facebook/data2vec-vision-base-ft1k"
        model = Data2VecVisionModel.from_pretrained(model_name)
        self.assertIsNotNone(model)

    @parameterized.expand([("float16",), ("bfloat16",), ("float32",)])
    @require_torch_sdpa
    # Copied from tests.models.beit.test_modeling_beit.BeitModelTest.test_eager_matches_sdpa_inference with Beit->Data2VecVision
    def test_eager_matches_sdpa_inference(self, torch_dtype: str):
        # The common test modifies the num_hidden_layers to be 1. However, for Data2VecVision we want to
        # avoid that because the num_hidden_layers is generally assumed to be 4. Also, the code
        # related to attention masks in the original common tests is not required as the Data2VecVision
        # model does not handle attention masks. Furthermore, some extra code like modifying
        # the norm layers eps values for specialized configs and checking for the 'noise'
        # has been omitted to simplify the test.
        if not self.has_attentions:
            self.skipTest(reason="Model architecture does not support attentions")

        if not self.all_model_classes[0]._supports_sdpa:
            self.skipTest(f"{self.all_model_classes[0].__name__} does not support SDPA")

        if torch_dtype == "float16" and not is_torch_fp16_available_on_device(torch_device):
            self.skipTest(f"float16 not supported on {torch_device} (on the specific device currently used)")

        if torch_dtype == "bfloat16" and not is_torch_bf16_available_on_device(torch_device):
            self.skipTest(
                f"bfloat16 not supported on {torch_device} (on the specific device currently used, e.g. Nvidia T4 GPU)"
            )

        # Not sure whether it's fine to put torch.XXX in a decorator if torch is not available, so hacking it here instead.
if torch_dtype == "float16": torch_dtype = torch.float16 elif torch_dtype == "bfloat16": torch_dtype = torch.bfloat16 elif torch_dtype == "float32": torch_dtype = torch.float32 atols = { ("cpu", False, torch.float32): 1e-6, ("cpu", False, torch.float16): 5e-3, ("cpu", False, torch.bfloat16): 1e-2, ("cpu", True, torch.float32): 1e-6, ("cpu", True, torch.float16): 5e-3, ("cpu", True, torch.bfloat16): 1e-2, ("cuda", False, torch.float32): 1e-6, ("cuda", False, torch.bfloat16): 1e-2, ("cuda", False, torch.float16): 5e-3, ("cuda", True, torch.float32): 1e-6, ("cuda", True, torch.bfloat16): 1e-2, ("cuda", True, torch.float16): 5e-3, } rtols = { ("cpu", False, torch.float32): 1e-4, ("cpu", False, torch.float16): 5e-3, ("cpu", False, torch.bfloat16): 1e-2, ("cpu", True, torch.float32): 1e-4, ("cpu", True, torch.float16): 5e-3, ("cpu", True, torch.bfloat16): 1e-2, ("cuda", False, torch.float32): 1e-4, ("cuda", False, torch.bfloat16): 1e-2, ("cuda", False, torch.float16): 5e-3, ("cuda", True, torch.float32): 1e-4, ("cuda", True, torch.bfloat16): 3e-2, ("cuda", True, torch.float16): 5e-3, } def get_mean_reldiff(failcase, x, ref, atol, rtol): return f"{failcase}: mean relative difference: {((x - ref).abs() / (ref.abs() + 1e-12)).mean():.3e}, torch atol = {atol}, torch rtol = {rtol}" for model_class in self.all_model_classes: config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.rms_norm_eps = 1.0 config.layer_norm_eps = 1.0 config.norm_eps = 1.0 config.norm_epsilon = 1.0 config.layer_norm_epsilon = 1.0 model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model_sdpa = model_class.from_pretrained(tmpdirname, torch_dtype=torch_dtype, use_mask_token=True) model_sdpa = model_sdpa.eval().to(torch_device, dtype=torch_dtype) model_eager = model_class.from_pretrained( tmpdirname, torch_dtype=torch_dtype, attn_implementation="eager", use_mask_token=True, ) model_eager = model_eager.eval().to(torch_device, dtype=torch_dtype) # Another way to make sure norm layers have desired epsilon. (Some models don't set it from its config.) for x in model_eager.modules(): if isinstance(x, (nn.LayerNorm, nn.GroupNorm)): x.eps = 1.0 for x in model_sdpa.modules(): if isinstance(x, (nn.LayerNorm, nn.GroupNorm)): x.eps = 1.0 # We use these for loops instead of parameterized.expand just for the interest of avoiding loading/saving 16 times the model, # but it would be nicer to have an efficient way to use parameterized.expand fail_cases = [] for padding_side in ["left", "right"]: for use_mask in [False, True]: for output_attentions in [True, False]: can_output_attn = "output_attentions" in inspect.signature(model_sdpa.forward).parameters if not (self.has_attentions and can_output_attn) and output_attentions: continue # TODO: if we can also check with `batch_size=1` without being flaky? 
for batch_size in [7]: dummy_input = inputs_dict[model.main_input_name] if dummy_input.dtype in [torch.float32, torch.bfloat16, torch.float16]: dummy_input = dummy_input.to(torch_dtype) dummy_input = dummy_input[:batch_size] for enable_kernels in [False, True]: failcase = f"padding_side={padding_side}, use_mask={use_mask}, enable_kernels={enable_kernels}" processed_inputs = { model.main_input_name: dummy_input, "output_hidden_states": True, } if ( self.has_attentions and "output_attentions" in inspect.signature(model_sdpa.forward).parameters ): processed_inputs["output_attentions"] = output_attentions if "bool_masked_pos" in inspect.signature(model_eager.forward).parameters: dummy_mask = torch.ones((self.model_tester.num_masks,)) mask_length = self.model_tester.seq_length - 1 - dummy_mask.size(0) dummy_mask = torch.cat([dummy_mask, torch.zeros(mask_length)]) dummy_bool_masked_pos = dummy_mask.expand(batch_size, -1).bool() processed_inputs["bool_masked_pos"] = dummy_bool_masked_pos.to(torch_device) with torch.no_grad(): with sdpa_kernel( enable_flash=enable_kernels, enable_math=True, enable_mem_efficient=enable_kernels, ): prepared_inputs = self._prepare_for_class(processed_inputs, model_class) outputs_eager = model_eager(**prepared_inputs) outputs_sdpa = model_sdpa(**prepared_inputs) logits_eager = outputs_eager.hidden_states[-1] logits_sdpa = outputs_sdpa.hidden_states[-1] if torch_device in ["cpu", "cuda"]: atol = atols[torch_device, enable_kernels, torch_dtype] rtol = rtols[torch_device, enable_kernels, torch_dtype] elif torch_device == "xpu": # As of PyTorch 2.5 XPU backend supports only torch.nn.attention.SDPBackend.MATH # which is implemented on PyTorch level using aten operators and is # device agnostic with respect to implementation of each aten operator. atol = atols["cuda", False, torch_dtype] rtol = rtols["cuda", False, torch_dtype] else: atol = 1e-7 rtol = 1e-4 # Masked tokens output slightly deviates - we don't mind that. 
if use_mask: _logits_sdpa = torch.zeros_like(input=logits_sdpa) _logits_eager = torch.zeros_like(input=logits_eager) _logits_sdpa[:-1] = logits_sdpa[:-1] _logits_eager[:-1] = logits_eager[:-1] if padding_side == "left": _logits_sdpa[-1:, 2:] = logits_sdpa[-1:, 2:] _logits_eager[-1:, 2:] = logits_eager[-1:, 2:] elif padding_side == "right": _logits_sdpa[-1:, 2:] = logits_sdpa[-1:, :-2] _logits_eager[-1:, 2:] = logits_eager[-1:, :-2] logits_sdpa = _logits_sdpa logits_eager = _logits_eager results = [ torch.allclose(_logits_sdpa, _logits_eager, atol=atol, rtol=rtol) for (_logits_sdpa, _logits_eager) in zip(logits_sdpa, logits_eager) ] # If 80% batch elements have matched results, it's fine if np.mean(results) < 0.8: fail_cases.append( get_mean_reldiff(failcase, logits_sdpa, logits_eager, atol, rtol) ) self.assertTrue(len(fail_cases) == 0, "\n".join(fail_cases)) # We will verify our results on an image of cute cats def prepare_img(): image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") return image @require_torch @require_vision class Data2VecVisionModelIntegrationTest(unittest.TestCase): @cached_property def default_image_processor(self): return ( BeitImageProcessor.from_pretrained("facebook/data2vec-vision-base-ft1k") if is_vision_available() else None ) @slow def test_inference_image_classification_head_imagenet_1k(self): model = Data2VecVisionForImageClassification.from_pretrained("facebook/data2vec-vision-base-ft1k").to( torch_device ) image_processor = self.default_image_processor image = prepare_img() inputs = image_processor(images=image, return_tensors="pt").to(torch_device) # forward pass with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits # verify the logits expected_shape = torch.Size((1, 1000)) self.assertEqual(logits.shape, expected_shape) expected_slice = torch.tensor([0.3277, -0.1395, 0.0911]).to(torch_device) torch.testing.assert_close(logits[0, :3], expected_slice, rtol=1e-4, atol=1e-4) expected_top2 = [model.config.label2id[i] for i in ["remote control, remote", "tabby, tabby cat"]] self.assertEqual(logits[0].topk(2).indices.cpu().tolist(), expected_top2) @slow def test_inference_interpolate_pos_encoding(self): model_name = "facebook/data2vec-vision-base-ft1k" model = Data2VecVisionModel.from_pretrained(model_name, **{"use_absolute_position_embeddings": True}).to( torch_device ) image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") processor = BeitImageProcessor.from_pretrained("facebook/data2vec-vision-base-ft1k") inputs = processor(images=image, return_tensors="pt", size={"height": 480, "width": 480}) pixel_values = inputs.pixel_values.to(torch_device) # with interpolate_pos_encoding being False an exception should be raised with higher resolution # images than what the model supports. self.assertFalse(processor.do_center_crop) with torch.no_grad(): with self.assertRaises(ValueError, msg="doesn't match model"): model(pixel_values, interpolate_pos_encoding=False) # with interpolate_pos_encoding being True the model should process the higher resolution image # successfully and produce the expected output. with torch.no_grad(): outputs = model(pixel_values, interpolate_pos_encoding=True) expected_shape = torch.Size((1, 1801, 768)) self.assertEqual(outputs.last_hidden_state.shape, expected_shape)
transformers/tests/models/data2vec/test_modeling_data2vec_vision.py/0
{ "file_path": "transformers/tests/models/data2vec/test_modeling_data2vec_vision.py", "repo_id": "transformers", "token_count": 13104 }
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Deformable DETR model.""" import inspect import math import unittest from typing import Dict, List, Tuple from transformers import DeformableDetrConfig, ResNetConfig, is_torch_available, is_vision_available from transformers.file_utils import cached_property from transformers.testing_utils import ( require_timm, require_torch, require_torch_accelerator, require_torch_bf16, require_vision, slow, torch_device, ) from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, _config_zero_init, floats_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import DeformableDetrForObjectDetection, DeformableDetrModel if is_vision_available(): from PIL import Image from transformers import AutoImageProcessor class DeformableDetrModelTester: def __init__( self, parent, batch_size=8, is_training=True, use_labels=True, hidden_size=32, num_hidden_layers=2, num_attention_heads=8, intermediate_size=4, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, num_queries=12, num_channels=3, image_size=196, n_targets=8, num_labels=91, num_feature_levels=4, encoder_n_points=2, decoder_n_points=6, ): self.parent = parent self.batch_size = batch_size self.is_training = is_training self.use_labels = use_labels self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.num_queries = num_queries self.num_channels = num_channels self.image_size = image_size self.n_targets = n_targets self.num_labels = num_labels self.num_feature_levels = num_feature_levels self.encoder_n_points = encoder_n_points self.decoder_n_points = decoder_n_points # we also set the expected seq length for both encoder and decoder self.encoder_seq_length = ( math.ceil(self.image_size / 8) ** 2 + math.ceil(self.image_size / 16) ** 2 + math.ceil(self.image_size / 32) ** 2 + math.ceil(self.image_size / 64) ** 2 ) self.decoder_seq_length = self.num_queries def prepare_config_and_inputs(self): pixel_values = floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size]) pixel_mask = torch.ones([self.batch_size, self.image_size, self.image_size], device=torch_device) labels = None if self.use_labels: # labels is a list of Dict (each Dict being the labels for a given example in the batch) labels = [] for i in range(self.batch_size): target = {} target["class_labels"] = torch.randint( high=self.num_labels, size=(self.n_targets,), device=torch_device ) target["boxes"] = torch.rand(self.n_targets, 4, device=torch_device) target["masks"] = 
torch.rand(self.n_targets, self.image_size, self.image_size, device=torch_device) labels.append(target) config = self.get_config() return config, pixel_values, pixel_mask, labels def get_config(self): resnet_config = ResNetConfig( num_channels=3, embeddings_size=10, hidden_sizes=[10, 20, 30, 40], depths=[1, 1, 2, 1], hidden_act="relu", num_labels=3, out_features=["stage2", "stage3", "stage4"], out_indices=[2, 3, 4], ) return DeformableDetrConfig( d_model=self.hidden_size, encoder_layers=self.num_hidden_layers, decoder_layers=self.num_hidden_layers, encoder_attention_heads=self.num_attention_heads, decoder_attention_heads=self.num_attention_heads, encoder_ffn_dim=self.intermediate_size, decoder_ffn_dim=self.intermediate_size, dropout=self.hidden_dropout_prob, attention_dropout=self.attention_probs_dropout_prob, num_queries=self.num_queries, num_labels=self.num_labels, num_feature_levels=self.num_feature_levels, encoder_n_points=self.encoder_n_points, decoder_n_points=self.decoder_n_points, use_timm_backbone=False, backbone=None, backbone_config=resnet_config, use_pretrained_backbone=False, ) def prepare_config_and_inputs_for_common(self): config, pixel_values, pixel_mask, labels = self.prepare_config_and_inputs() inputs_dict = {"pixel_values": pixel_values, "pixel_mask": pixel_mask} return config, inputs_dict def create_and_check_deformable_detr_model(self, config, pixel_values, pixel_mask, labels): model = DeformableDetrModel(config=config) model.to(torch_device) model.eval() result = model(pixel_values=pixel_values, pixel_mask=pixel_mask) result = model(pixel_values) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.num_queries, self.hidden_size)) def create_and_check_deformable_detr_object_detection_head_model(self, config, pixel_values, pixel_mask, labels): model = DeformableDetrForObjectDetection(config=config) model.to(torch_device) model.eval() result = model(pixel_values=pixel_values, pixel_mask=pixel_mask) result = model(pixel_values) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_queries, self.num_labels)) self.parent.assertEqual(result.pred_boxes.shape, (self.batch_size, self.num_queries, 4)) result = model(pixel_values=pixel_values, pixel_mask=pixel_mask, labels=labels) self.parent.assertEqual(result.loss.shape, ()) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_queries, self.num_labels)) self.parent.assertEqual(result.pred_boxes.shape, (self.batch_size, self.num_queries, 4)) @require_torch class DeformableDetrModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (DeformableDetrModel, DeformableDetrForObjectDetection) if is_torch_available() else () pipeline_model_mapping = ( {"image-feature-extraction": DeformableDetrModel, "object-detection": DeformableDetrForObjectDetection} if is_torch_available() else {} ) is_encoder_decoder = True test_torchscript = False test_pruning = False test_head_masking = False test_missing_keys = False # special case for head models def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): inputs_dict = super()._prepare_for_class(inputs_dict, model_class, return_labels=return_labels) if return_labels: if model_class.__name__ == "DeformableDetrForObjectDetection": labels = [] for i in range(self.model_tester.batch_size): target = {} target["class_labels"] = torch.ones( size=(self.model_tester.n_targets,), device=torch_device, dtype=torch.long ) target["boxes"] = torch.ones( 
self.model_tester.n_targets, 4, device=torch_device, dtype=torch.float ) target["masks"] = torch.ones( self.model_tester.n_targets, self.model_tester.image_size, self.model_tester.image_size, device=torch_device, dtype=torch.float, ) labels.append(target) inputs_dict["labels"] = labels return inputs_dict def setUp(self): self.model_tester = DeformableDetrModelTester(self) self.config_tester = ConfigTester( self, config_class=DeformableDetrConfig, has_text_modality=False, common_properties=["num_channels", "d_model", "encoder_attention_heads", "decoder_attention_heads"], ) def test_config(self): self.config_tester.run_common_tests() def test_deformable_detr_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_deformable_detr_model(*config_and_inputs) def test_deformable_detr_object_detection_head_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_deformable_detr_object_detection_head_model(*config_and_inputs) @unittest.skip(reason="Deformable DETR does not use inputs_embeds") def test_inputs_embeds(self): pass @unittest.skip(reason="Deformable DETR does not use inputs_embeds") def test_inputs_embeds_matches_input_ids(self): pass @unittest.skip(reason="Deformable DETR does not have a get_input_embeddings method") def test_model_get_set_embeddings(self): pass @unittest.skip(reason="Deformable DETR is not a generative model") def test_generate_without_input_ids(self): pass @unittest.skip(reason="Deformable DETR does not use token embeddings") def test_resize_tokens_embeddings(self): pass @unittest.skip(reason="Feed forward chunking is not implemented") def test_feed_forward_chunking(self): pass def test_attention_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True for model_class in self.all_model_classes: inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = False config.return_dict = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.encoder_attentions self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) # check that output_attentions also work using config del inputs_dict["output_attentions"] config.output_attentions = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.encoder_attentions self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(attentions[0].shape[-3:]), [ self.model_tester.num_attention_heads, self.model_tester.num_feature_levels, self.model_tester.encoder_n_points, ], ) out_len = len(outputs) correct_outlen = 8 # loss is at first position if "labels" in inputs_dict: correct_outlen += 1 # loss is added to beginning # Object Detection model returns pred_logits and pred_boxes if model_class.__name__ == "DeformableDetrForObjectDetection": correct_outlen += 2 self.assertEqual(out_len, correct_outlen) # decoder attentions decoder_attentions = outputs.decoder_attentions self.assertIsInstance(decoder_attentions, (list, tuple)) self.assertEqual(len(decoder_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(decoder_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, self.model_tester.num_queries, 
self.model_tester.num_queries], ) # cross attentions cross_attentions = outputs.cross_attentions self.assertIsInstance(cross_attentions, (list, tuple)) self.assertEqual(len(cross_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(cross_attentions[0].shape[-3:]), [ self.model_tester.num_attention_heads, self.model_tester.num_feature_levels, self.model_tester.decoder_n_points, ], ) # Check attention is always last and order is fine inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) if hasattr(self.model_tester, "num_hidden_states_types"): added_hidden_states = self.model_tester.num_hidden_states_types elif self.is_encoder_decoder: added_hidden_states = 2 else: added_hidden_states = 1 self.assertEqual(out_len + added_hidden_states, len(outputs)) self_attentions = outputs.encoder_attentions self.assertEqual(len(self_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(self_attentions[0].shape[-3:]), [ self.model_tester.num_attention_heads, self.model_tester.num_feature_levels, self.model_tester.encoder_n_points, ], ) def test_model_outputs_equivalence(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() def set_nan_tensor_to_zero(t): t[t != t] = 0 return t def check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs={}): with torch.no_grad(): tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs) dict_output = model(**dict_inputs, return_dict=True, **additional_kwargs).to_tuple() def recursive_check(tuple_object, dict_object): if isinstance(tuple_object, (List, Tuple)): for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object): recursive_check(tuple_iterable_value, dict_iterable_value) elif isinstance(tuple_object, Dict): for tuple_iterable_value, dict_iterable_value in zip( tuple_object.values(), dict_object.values() ): recursive_check(tuple_iterable_value, dict_iterable_value) elif tuple_object is None: return else: self.assertTrue( torch.allclose( set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5 ), msg=( "Tuple and dict output are not equal. Difference:" f" {torch.max(torch.abs(tuple_object - dict_object))}. Tuple has `nan`:" f" {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. Dict has" f" `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}." 
), ) recursive_check(tuple_output, dict_output) for model_class in self.all_model_classes: print("Model class:", model_class) model = model_class(config) model.to(torch_device) model.eval() tuple_inputs = self._prepare_for_class(inputs_dict, model_class) dict_inputs = self._prepare_for_class(inputs_dict, model_class) check_equivalence(model, tuple_inputs, dict_inputs) tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) check_equivalence(model, tuple_inputs, dict_inputs) tuple_inputs = self._prepare_for_class(inputs_dict, model_class) dict_inputs = self._prepare_for_class(inputs_dict, model_class) check_equivalence(model, tuple_inputs, dict_inputs, {"output_hidden_states": True}) tuple_inputs = self._prepare_for_class(inputs_dict, model_class) dict_inputs = self._prepare_for_class(inputs_dict, model_class) check_equivalence(model, tuple_inputs, dict_inputs, {"output_attentions": True}) tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) check_equivalence(model, tuple_inputs, dict_inputs, {"output_hidden_states": True}) tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) check_equivalence(model, tuple_inputs, dict_inputs, {"output_attentions": True}) tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) check_equivalence( model, tuple_inputs, dict_inputs, {"output_hidden_states": True, "output_attentions": True} ) def test_retain_grad_hidden_states_attentions(self): # removed retain_grad and grad on decoder_hidden_states, as queries don't require grad config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.output_hidden_states = True config.output_attentions = True # no need to test all models as different heads yield the same functionality model_class = self.all_model_classes[0] model = model_class(config) model.to(torch_device) inputs = self._prepare_for_class(inputs_dict, model_class) outputs = model(**inputs) # we take the second output since last_hidden_state is the second item output = outputs[1] encoder_hidden_states = outputs.encoder_hidden_states[0] encoder_attentions = outputs.encoder_attentions[0] encoder_hidden_states.retain_grad() encoder_attentions.retain_grad() decoder_attentions = outputs.decoder_attentions[0] decoder_attentions.retain_grad() cross_attentions = outputs.cross_attentions[0] cross_attentions.retain_grad() output.flatten()[0].backward(retain_graph=True) self.assertIsNotNone(encoder_hidden_states.grad) self.assertIsNotNone(encoder_attentions.grad) self.assertIsNotNone(decoder_attentions.grad) self.assertIsNotNone(cross_attentions.grad) def test_forward_auxiliary_loss(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.auxiliary_loss = True # only test for object detection and segmentation model for model_class in self.all_model_classes[1:]: model = model_class(config) model.to(torch_device) inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) outputs = model(**inputs) self.assertIsNotNone(outputs.auxiliary_outputs) self.assertEqual(len(outputs.auxiliary_outputs), self.model_tester.num_hidden_layers - 1) def 
test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] if model.config.is_encoder_decoder: expected_arg_names = ["pixel_values", "pixel_mask"] expected_arg_names.extend( ["head_mask", "decoder_head_mask", "encoder_outputs"] if "head_mask" in arg_names and "decoder_head_mask" in arg_names else [] ) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) else: expected_arg_names = ["pixel_values", "pixel_mask"] self.assertListEqual(arg_names[:1], expected_arg_names) def test_different_timm_backbone(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() # let's pick a random timm backbone config.backbone = "tf_mobilenetv3_small_075" config.backbone_config = None config.use_timm_backbone = True config.backbone_kwargs = {"out_indices": [1, 2, 3, 4]} for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) if model_class.__name__ == "DeformableDetrForObjectDetection": expected_shape = ( self.model_tester.batch_size, self.model_tester.num_queries, self.model_tester.num_labels, ) self.assertEqual(outputs.logits.shape, expected_shape) # Confirm out_indices was propagated to backbone self.assertEqual(len(model.model.backbone.conv_encoder.intermediate_channel_sizes), 4) else: # Confirm out_indices was propagated to backbone self.assertEqual(len(model.backbone.conv_encoder.intermediate_channel_sizes), 4) self.assertTrue(outputs) def test_hf_backbone(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() # Load a pretrained HF checkpoint as backbone config.backbone = "microsoft/resnet-18" config.backbone_config = None config.use_timm_backbone = False config.use_pretrained_backbone = True config.backbone_kwargs = {"out_indices": [1, 2, 3, 4]} for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) if model_class.__name__ == "DeformableDetrForObjectDetection": expected_shape = ( self.model_tester.batch_size, self.model_tester.num_queries, self.model_tester.num_labels, ) self.assertEqual(outputs.logits.shape, expected_shape) # Confirm out_indices was propagated to backbone self.assertEqual(len(model.model.backbone.conv_encoder.intermediate_channel_sizes), 4) else: # Confirm out_indices was propagated to backbone self.assertEqual(len(model.backbone.conv_encoder.intermediate_channel_sizes), 4) self.assertTrue(outputs) def test_initialization(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() configs_no_init = _config_zero_init(config) for model_class in self.all_model_classes: print("Model class:", model_class) model = model_class(config=configs_no_init) for name, param in model.named_parameters(): if param.requires_grad: if ( "level_embed" in name or "sampling_offsets.bias" in name or "value_proj" in name or "output_proj" in name or "reference_points" in name ): continue self.assertIn( ((param.data.mean() * 1e9).round() / 1e9).item(), [0.0, 1.0], msg=f"Parameter {name} of model {model_class} seems not properly
initialized", ) @unittest.skip(reason="No support for low_cpu_mem_usage=True.") def test_save_load_low_cpu_mem_usage(self): pass @unittest.skip(reason="No support for low_cpu_mem_usage=True.") def test_save_load_low_cpu_mem_usage_checkpoints(self): pass @unittest.skip(reason="No support for low_cpu_mem_usage=True.") def test_save_load_low_cpu_mem_usage_no_safetensors(self): pass def test_two_stage_training(self): model_class = DeformableDetrForObjectDetection config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True config.two_stage = True config.auxiliary_loss = True config.with_box_refine = True model = model_class(config) model.to(torch_device) model.train() inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) loss = model(**inputs).loss loss.backward() def create_and_check_model_fp16_forward(self): model_class = DeformableDetrForObjectDetection config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() model = model_class(config) model.to(torch_device) model.half() model.eval() inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) output = model(**inputs)["last_hidden_state"] self.parent.assertFalse(torch.isnan(output).any().item()) @require_torch_bf16 def create_and_check_model_bf16_forward(self): model_class = DeformableDetrForObjectDetection config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() model = model_class(config, torch_dtype=torch.bfloat16) model.to(torch_device) model.eval() inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) output = model(**inputs)["last_hidden_state"] self.parent.assertFalse(torch.isnan(output).any().item()) TOLERANCE = 1e-4 # We will verify our results on an image of cute cats def prepare_img(): image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") return image @require_timm @require_vision @slow class DeformableDetrModelIntegrationTests(unittest.TestCase): @cached_property def default_image_processor(self): return AutoImageProcessor.from_pretrained("SenseTime/deformable-detr") if is_vision_available() else None def test_inference_object_detection_head(self): model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr").to(torch_device) image_processor = self.default_image_processor image = prepare_img() encoding = image_processor(images=image, return_tensors="pt").to(torch_device) pixel_values = encoding["pixel_values"].to(torch_device) pixel_mask = encoding["pixel_mask"].to(torch_device) with torch.no_grad(): outputs = model(pixel_values, pixel_mask) expected_shape_logits = torch.Size((1, model.config.num_queries, model.config.num_labels)) self.assertEqual(outputs.logits.shape, expected_shape_logits) expected_logits = torch.tensor( [[-9.6645, -4.3449, -5.8705], [-9.7035, -3.8504, -5.0724], [-10.5634, -5.3379, -7.5116]] ).to(torch_device) expected_boxes = torch.tensor( [[0.8693, 0.2289, 0.2492], [0.3150, 0.5489, 0.5845], [0.5563, 0.7580, 0.8518]] ).to(torch_device) torch.testing.assert_close(outputs.logits[0, :3, :3], expected_logits, rtol=1e-4, atol=1e-4) expected_shape_boxes = torch.Size((1, model.config.num_queries, 4)) self.assertEqual(outputs.pred_boxes.shape, expected_shape_boxes) torch.testing.assert_close(outputs.pred_boxes[0, :3, :3], expected_boxes, rtol=1e-4, atol=1e-4) # verify postprocessing results = image_processor.post_process_object_detection( outputs, threshold=0.3, target_sizes=[image.size[::-1]] )[0] expected_scores = 
torch.tensor([0.7999, 0.7894, 0.6331, 0.4720, 0.4382]).to(torch_device) expected_labels = [17, 17, 75, 75, 63] expected_slice_boxes = torch.tensor([16.5028, 52.8390, 318.2544, 470.7841]).to(torch_device) self.assertEqual(len(results["scores"]), 5) torch.testing.assert_close(results["scores"], expected_scores, rtol=1e-4, atol=1e-4) self.assertSequenceEqual(results["labels"].tolist(), expected_labels) torch.testing.assert_close(results["boxes"][0, :], expected_slice_boxes) def test_inference_object_detection_head_with_box_refine_two_stage(self): model = DeformableDetrForObjectDetection.from_pretrained( "SenseTime/deformable-detr-with-box-refine-two-stage" ).to(torch_device) image_processor = self.default_image_processor image = prepare_img() encoding = image_processor(images=image, return_tensors="pt").to(torch_device) pixel_values = encoding["pixel_values"].to(torch_device) pixel_mask = encoding["pixel_mask"].to(torch_device) with torch.no_grad(): outputs = model(pixel_values, pixel_mask) expected_shape_logits = torch.Size((1, model.config.num_queries, model.config.num_labels)) self.assertEqual(outputs.logits.shape, expected_shape_logits) expected_logits = torch.tensor( [[-6.7108, -4.3213, -6.3777], [-8.9014, -6.1799, -6.7240], [-6.9315, -4.4735, -6.2298]] ).to(torch_device) expected_boxes = torch.tensor( [[0.2583, 0.5499, 0.4683], [0.7652, 0.9068, 0.4882], [0.5490, 0.2763, 0.0564]] ).to(torch_device) torch.testing.assert_close(outputs.logits[0, :3, :3], expected_logits, rtol=1e-4, atol=1e-4) expected_shape_boxes = torch.Size((1, model.config.num_queries, 4)) self.assertEqual(outputs.pred_boxes.shape, expected_shape_boxes) torch.testing.assert_close(outputs.pred_boxes[0, :3, :3], expected_boxes, rtol=1e-4, atol=1e-4) @require_torch_accelerator def test_inference_object_detection_head_equivalence_cpu_gpu(self): image_processor = self.default_image_processor image = prepare_img() encoding = image_processor(images=image, return_tensors="pt") pixel_values = encoding["pixel_values"] pixel_mask = encoding["pixel_mask"] # 1. run model on CPU model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr-single-scale") with torch.no_grad(): cpu_outputs = model(pixel_values, pixel_mask) # 2. run model on GPU model.to(torch_device) with torch.no_grad(): gpu_outputs = model(pixel_values.to(torch_device), pixel_mask.to(torch_device)) # 3. assert equivalence for key in cpu_outputs.keys(): assert torch.allclose(cpu_outputs[key], gpu_outputs[key].cpu(), atol=1e-4) expected_logits = torch.tensor( [[-9.9051, -4.2541, -6.4852], [-9.6947, -4.0854, -6.8033], [-10.0665, -5.8470, -7.7003]] ) assert torch.allclose(cpu_outputs.logits[0, :3, :3], expected_logits, atol=1e-4)
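# Illustrative sketch (standalone, not from the original test file above): the tester hard-codes
# `encoder_seq_length` as a sum of four squared terms. The rule it encodes: Deformable DETR flattens
# one feature map per level, and with the default four feature levels the backbone strides are
# assumed to be 8, 16, 32 and 64, so the encoder receives sum(ceil(image_size / stride) ** 2) tokens.
import math


def deformable_detr_encoder_seq_length(image_size: int, strides=(8, 16, 32, 64)) -> int:
    """Number of flattened multi-scale tokens the encoder sees for a square input image."""
    return sum(math.ceil(image_size / stride) ** 2 for stride in strides)


# With the tester's default image_size=196 this is 625 + 169 + 49 + 16 = 859 tokens.
assert deformable_detr_encoder_seq_length(196) == 859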
transformers/tests/models/deformable_detr/test_modeling_deformable_detr.py/0
{ "file_path": "transformers/tests/models/deformable_detr/test_modeling_deformable_detr.py", "repo_id": "transformers", "token_count": 15674 }
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the Flax Dinov2 model.""" import inspect import unittest import numpy as np from transformers import Dinov2Config from transformers.testing_utils import require_flax, require_vision, slow from transformers.utils import cached_property, is_flax_available, is_vision_available from ...test_configuration_common import ConfigTester from ...test_modeling_flax_common import FlaxModelTesterMixin, floats_tensor if is_flax_available(): import jax from transformers.models.dinov2.modeling_flax_dinov2 import FlaxDinov2ForImageClassification, FlaxDinov2Model if is_vision_available(): from PIL import Image from transformers import AutoImageProcessor class FlaxDinov2ModelTester: def __init__( self, parent, batch_size=2, image_size=30, patch_size=2, num_channels=3, is_training=True, use_labels=True, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, type_sequence_label_size=10, initializer_range=0.02, ): self.parent = parent self.batch_size = batch_size self.image_size = image_size self.patch_size = patch_size self.num_channels = num_channels self.is_training = is_training self.use_labels = use_labels self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.type_sequence_label_size = type_sequence_label_size self.initializer_range = initializer_range # in Dinov2, the seq length equals the number of patches + 1 (we add 1 for the [CLS] token) num_patches = (image_size // patch_size) ** 2 self.seq_length = num_patches + 1 def prepare_config_and_inputs(self): pixel_values = floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size]) config = Dinov2Config( image_size=self.image_size, patch_size=self.patch_size, num_channels=self.num_channels, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, hidden_act=self.hidden_act, hidden_dropout_prob=self.hidden_dropout_prob, attention_probs_dropout_prob=self.attention_probs_dropout_prob, is_decoder=False, initializer_range=self.initializer_range, ) return config, pixel_values # Copied from transformers.models.vit.test_modeling_flax_vit.FlaxViTModelTester.prepare_config_and_inputs with ViT -> Dinov2 def create_and_check_model(self, config, pixel_values): model = FlaxDinov2Model(config=config) result = model(pixel_values) # expected sequence length = num_patches + 1 (we add 1 for the [CLS] token) image_size = (self.image_size, self.image_size) patch_size = (self.patch_size, self.patch_size) num_patches = (image_size[1] // patch_size[1]) * (image_size[0] 
// patch_size[0]) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, num_patches + 1, self.hidden_size)) # Copied from transformers.models.vit.test_modeling_flax_vit.FlaxViTModelTester.create_and_check_for_image_classification with ViT -> Dinov2 def create_and_check_for_image_classification(self, config, pixel_values): config.num_labels = self.type_sequence_label_size model = FlaxDinov2ForImageClassification(config=config) result = model(pixel_values) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.type_sequence_label_size)) # test greyscale images config.num_channels = 1 model = FlaxDinov2ForImageClassification(config) pixel_values = floats_tensor([self.batch_size, 1, self.image_size, self.image_size]) result = model(pixel_values) # Copied from transformers.models.vit.test_modeling_flax_vit.FlaxViTModelTester.prepare_config_and_inputs_for_common def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, pixel_values, ) = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_flax # Copied from transformers.models.vit.test_modeling_flax_vit.FlaxViTModelTest with google/vit-base-patch16-224 -> facebook/dinov2-base class FlaxDinov2ModelTest(FlaxModelTesterMixin, unittest.TestCase): all_model_classes = (FlaxDinov2Model, FlaxDinov2ForImageClassification) if is_flax_available() else () def setUp(self) -> None: self.model_tester = FlaxDinov2ModelTester(self) self.config_tester = ConfigTester(self, config_class=Dinov2Config, has_text_modality=False, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_for_image_classification(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_image_classification(*config_and_inputs) # We need to override this test because Dinov2's forward signature is different from that of text models.
def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.__call__) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = ["pixel_values"] self.assertListEqual(arg_names[:1], expected_arg_names) # We need to override this test because Dinov2 expects pixel_values instead of input_ids def test_jit_compilation(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: with self.subTest(model_class.__name__): prepared_inputs_dict = self._prepare_for_class(inputs_dict, model_class) model = model_class(config) @jax.jit def model_jitted(pixel_values, **kwargs): return model(pixel_values=pixel_values, **kwargs) with self.subTest("JIT Enabled"): jitted_outputs = model_jitted(**prepared_inputs_dict).to_tuple() with self.subTest("JIT Disabled"): with jax.disable_jit(): outputs = model_jitted(**prepared_inputs_dict).to_tuple() self.assertEqual(len(outputs), len(jitted_outputs)) for jitted_output, output in zip(jitted_outputs, outputs): self.assertEqual(jitted_output.shape, output.shape) @slow def test_model_from_pretrained(self): for model_class_name in self.all_model_classes: model = model_class_name.from_pretrained("facebook/dinov2-base") outputs = model(np.ones((1, 3, 224, 224))) self.assertIsNotNone(outputs) # We will verify our results on an image of cute cats def prepare_img(): image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") return image @require_vision @require_flax class FlaxDinov2ModelIntegrationTest(unittest.TestCase): @cached_property def default_image_processor(self): return AutoImageProcessor.from_pretrained("facebook/dinov2-base") if is_vision_available() else None @slow def test_inference_no_head(self): model = FlaxDinov2Model.from_pretrained("facebook/dinov2-base") image_processor = self.default_image_processor image = prepare_img() pixel_values = image_processor(images=image, return_tensors="np").pixel_values # forward pass outputs = model(pixel_values=pixel_values) # verify the logits expected_shape = (1, 257, 768) self.assertEqual(outputs.last_hidden_state.shape, expected_shape) expected_slice = np.array( [ [-2.1629121, -0.46566057, 1.0925977], [-3.5971704, -1.0283585, -1.1780515], [-2.900407, 1.1334689, -0.74357724], ] ) self.assertTrue(np.allclose(outputs.last_hidden_state[0, :3, :3], expected_slice, atol=1e-4)) @slow def test_inference_image_classification_head_imagenet_1k(self): model = FlaxDinov2ForImageClassification.from_pretrained( "facebook/dinov2-base-imagenet1k-1-layer", from_pt=True ) image_processor = self.default_image_processor image = prepare_img() inputs = image_processor(images=image, return_tensors="np") # forward pass outputs = model(**inputs) logits = outputs.logits # verify the logits expected_shape = (1, 1000) self.assertEqual(logits.shape, expected_shape) expected_slice = np.array([-2.1776447, 0.36716992, 0.13870952]) self.assertTrue(np.allclose(logits[0, :3], expected_slice, atol=1e-4)) expected_class_idx = 281 self.assertEqual(logits.argmax(-1).item(), expected_class_idx)
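# Illustrative sketch (standalone, not from the original test file above): the tester relies on the
# ViT convention that Dinov2 follows, where the sequence length is the number of patches plus one
# [CLS] token. A tiny self-contained check of that rule:
def dinov2_seq_length(image_size: int, patch_size: int) -> int:
    num_patches = (image_size // patch_size) ** 2
    return num_patches + 1


# Tester defaults: a 30x30 image with 2x2 patches gives 15 * 15 + 1 = 226 tokens.
assert dinov2_seq_length(30, 2) == 226
# Consistent with the (1, 257, 768) hidden-state shape asserted in the integration test,
# assuming facebook/dinov2-base processes 224x224 inputs with patch size 14.
assert dinov2_seq_length(224, 14) == 257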
transformers/tests/models/dinov2/test_modeling_flax_dinov2.py/0
{ "file_path": "transformers/tests/models/dinov2/test_modeling_flax_dinov2.py", "repo_id": "transformers", "token_count": 4480 }
# coding=utf-8 # Copyright 2020 Huggingface # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import annotations import unittest from transformers import is_tf_available from transformers.testing_utils import require_tf, slow from ...test_configuration_common import ConfigTester from ...test_modeling_tf_common import TFModelTesterMixin, ids_tensor, random_attention_mask from ...test_pipeline_mixin import PipelineTesterMixin if is_tf_available(): import numpy import tensorflow as tf from transformers import ( BertConfig, DPRConfig, TFDPRContextEncoder, TFDPRQuestionEncoder, TFDPRReader, ) class TFDPRModelTester: def __init__( self, parent, batch_size=13, seq_length=7, is_training=True, use_input_mask=True, use_token_type_ids=True, use_labels=True, vocab_size=99, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=16, type_sequence_label_size=2, initializer_range=0.02, num_labels=3, num_choices=4, scope=None, projection_dim=0, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_input_mask = use_input_mask self.use_token_type_ids = use_token_type_ids self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.max_position_embeddings = max_position_embeddings self.type_vocab_size = type_vocab_size self.type_sequence_label_size = type_sequence_label_size self.initializer_range = initializer_range self.num_labels = num_labels self.num_choices = num_choices self.scope = scope self.projection_dim = projection_dim def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: # follow test_modeling_tf_ctrl.py input_mask = random_attention_mask([self.batch_size, self.seq_length]) token_type_ids = None if self.use_token_type_ids: token_type_ids = ids_tensor([self.batch_size, self.seq_length], self.type_vocab_size) sequence_labels = None token_labels = None choice_labels = None if self.use_labels: sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size) token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) choice_labels = ids_tensor([self.batch_size], self.num_choices) config = BertConfig( vocab_size=self.vocab_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, hidden_act=self.hidden_act, hidden_dropout_prob=self.hidden_dropout_prob, attention_probs_dropout_prob=self.attention_probs_dropout_prob, 
max_position_embeddings=self.max_position_embeddings, type_vocab_size=self.type_vocab_size, is_decoder=False, initializer_range=self.initializer_range, ) config = DPRConfig(projection_dim=self.projection_dim, **config.to_dict()) return config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels def create_and_check_dpr_context_encoder( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels ): model = TFDPRContextEncoder(config=config) result = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids) result = model(input_ids, token_type_ids=token_type_ids) result = model(input_ids) self.parent.assertEqual(result.pooler_output.shape, (self.batch_size, self.projection_dim or self.hidden_size)) def create_and_check_dpr_question_encoder( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels ): model = TFDPRQuestionEncoder(config=config) result = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids) result = model(input_ids, token_type_ids=token_type_ids) result = model(input_ids) self.parent.assertEqual(result.pooler_output.shape, (self.batch_size, self.projection_dim or self.hidden_size)) def create_and_check_dpr_reader( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels ): model = TFDPRReader(config=config) result = model(input_ids, attention_mask=input_mask) self.parent.assertEqual(result.start_logits.shape, (self.batch_size, self.seq_length)) self.parent.assertEqual(result.end_logits.shape, (self.batch_size, self.seq_length)) self.parent.assertEqual(result.relevance_logits.shape, (self.batch_size,)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, ) = config_and_inputs inputs_dict = {"input_ids": input_ids} return config, inputs_dict @require_tf class TFDPRModelTest(TFModelTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = ( ( TFDPRContextEncoder, TFDPRQuestionEncoder, TFDPRReader, ) if is_tf_available() else () ) pipeline_model_mapping = {"feature-extraction": TFDPRQuestionEncoder} if is_tf_available() else {} test_resize_embeddings = False test_missing_keys = False test_pruning = False test_head_masking = False test_onnx = False def setUp(self): self.model_tester = TFDPRModelTester(self) self.config_tester = ConfigTester(self, config_class=DPRConfig, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() def test_dpr_context_encoder_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_dpr_context_encoder(*config_and_inputs) def test_dpr_question_encoder_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_dpr_question_encoder(*config_and_inputs) def test_dpr_reader_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_dpr_reader(*config_and_inputs) @slow def test_model_from_pretrained(self): model_name = "facebook/dpr-ctx_encoder-single-nq-base" model = TFDPRContextEncoder.from_pretrained(model_name) self.assertIsNotNone(model) model_name = "facebook/dpr-ctx_encoder-single-nq-base" model = TFDPRContextEncoder.from_pretrained(model_name) self.assertIsNotNone(model) model_name = "facebook/dpr-ctx_encoder-single-nq-base" 
model = TFDPRQuestionEncoder.from_pretrained(model_name) self.assertIsNotNone(model) model_name = "facebook/dpr-ctx_encoder-single-nq-base" model = TFDPRReader.from_pretrained(model_name) self.assertIsNotNone(model) @require_tf class TFDPRModelIntegrationTest(unittest.TestCase): @slow def test_inference_no_head(self): model = TFDPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base") input_ids = tf.constant( [[101, 7592, 1010, 2003, 2026, 3899, 10140, 1029, 102]] ) # [CLS] hello, is my dog cute? [SEP] output = model(input_ids)[0] # embedding shape = (1, 768) # compare the actual values for a slice. expected_slice = tf.constant( [ [ 0.03236253, 0.12753335, 0.16818509, 0.00279786, 0.3896933, 0.24264945, 0.2178971, -0.02335227, -0.08481959, -0.14324117, ] ] ) self.assertTrue(numpy.allclose(output[:, :10].numpy(), expected_slice.numpy(), atol=1e-4))
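# Illustrative sketch (standalone, not from the original test file above): the shape checks in the
# tester use `self.projection_dim or self.hidden_size`. DPR encoders return one pooled vector per
# input, whose width is the projection dimension when a projection layer is configured
# (projection_dim > 0) and the encoder hidden size otherwise.
def dpr_pooled_output_dim(hidden_size: int, projection_dim: int = 0) -> int:
    return projection_dim or hidden_size


assert dpr_pooled_output_dim(hidden_size=32, projection_dim=0) == 32   # tester default, no projection
assert dpr_pooled_output_dim(hidden_size=32, projection_dim=16) == 16  # hypothetical projected variant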
transformers/tests/models/dpr/test_modeling_tf_dpr.py/0
{ "file_path": "transformers/tests/models/dpr/test_modeling_tf_dpr.py", "repo_id": "transformers", "token_count": 4394 }
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch emu3 model.""" import unittest import numpy as np import pytest import requests from huggingface_hub import hf_hub_download from parameterized import parameterized from transformers import Emu3Config, Emu3TextConfig, is_torch_available, is_vision_available, set_seed from transformers.testing_utils import ( require_bitsandbytes, require_torch, require_torch_large_gpu, slow, torch_device, ) from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_vision_available(): from PIL import Image if is_torch_available(): import torch from transformers import ( Emu3ForCausalLM, Emu3ForConditionalGeneration, Emu3Processor, Emu3TextModel, ) class Emu3Text2TextModelTester: def __init__( self, parent, batch_size=13, seq_length=7, is_training=False, vocab_size=99, hidden_size=32, num_hidden_layers=2, num_attention_heads=2, num_key_value_heads=2, intermediate_size=37, max_position_embeddings=512, initializer_range=0.02, pad_token_id=0, bos_token_id=1, eos_token_id=2, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.num_key_value_heads = num_key_value_heads self.intermediate_size = intermediate_size self.max_position_embeddings = max_position_embeddings self.initializer_range = initializer_range self.pad_token_id = pad_token_id self.bos_token_id = bos_token_id self.eos_token_id = eos_token_id def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) attention_mask = input_ids.ne(1).to(torch_device) config = self.get_config() return config, input_ids, attention_mask def get_config(self): return Emu3TextConfig( vocab_size=self.vocab_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, num_key_value_heads=self.num_key_value_heads, intermediate_size=self.intermediate_size, max_position_embeddings=self.max_position_embeddings, is_decoder=False, initializer_range=self.initializer_range, pad_token_id=self.pad_token_id, bos_token_id=self.bos_token_id, eos_token_id=self.eos_token_id, ) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, attention_mask, ) = config_and_inputs inputs_dict = {"input_ids": input_ids, "attention_mask": attention_mask} return config, inputs_dict @require_torch class Emu3Text2TextModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (Emu3ForCausalLM,) if is_torch_available() else () 
all_generative_model_classes = (Emu3ForCausalLM,) if is_torch_available() else () pipeline_model_mapping = ( { "text-generation": Emu3ForCausalLM, } if is_torch_available() else {} ) test_headmasking = False test_pruning = False fx_compatible = False def setUp(self): self.model_tester = Emu3Text2TextModelTester(self) self.config_tester = ConfigTester(self, config_class=Emu3TextConfig, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() @parameterized.expand([("linear",), ("dynamic",)]) def test_model_rope_scaling(self, scaling_type): config, _ = self.model_tester.prepare_config_and_inputs_for_common() short_input = ids_tensor([1, 10], config.vocab_size) long_input = ids_tensor([1, int(config.max_position_embeddings * 1.5)], config.vocab_size) set_seed(42) # Fixed seed at init time so the two models get the same random weights original_model = Emu3TextModel(config) original_model.to(torch_device) original_model.eval() original_short_output = original_model(short_input).last_hidden_state original_long_output = original_model(long_input).last_hidden_state set_seed(42) # Fixed seed at init time so the two models get the same random weights config.rope_scaling = {"type": scaling_type, "factor": 10.0} scaled_model = Emu3TextModel(config) scaled_model.to(torch_device) scaled_model.eval() scaled_short_output = scaled_model(short_input).last_hidden_state scaled_long_output = scaled_model(long_input).last_hidden_state # Dynamic scaling does not change the RoPE embeddings until it receives an input longer than the original # maximum sequence length, so the outputs for the short input should match. if scaling_type == "dynamic": torch.testing.assert_close(original_short_output, scaled_short_output, rtol=1e-5, atol=1e-5) else: self.assertFalse(torch.allclose(original_short_output, scaled_short_output, atol=1e-5)) # The output should be different for long inputs self.assertFalse(torch.allclose(original_long_output, scaled_long_output, atol=1e-5)) @unittest.skip("Doesn't work, tensors are not almost same") # TODO raushan fixme def test_custom_4d_attention_mask(self): pass class Emu3Vision2TextModelTester: def __init__( self, parent, batch_size=13, seq_length=7, is_training=False, vocab_size=99, hidden_size=32, num_hidden_layers=2, num_attention_heads=2, num_key_value_heads=2, intermediate_size=37, max_position_embeddings=512, initializer_range=0.02, pad_token_id=0, bos_token_id=1, eos_token_id=2, image_token_id=3, image_size=30, codebook_size=20, temporal_downsample_factor=1, base_channels=32, vq_channel_multiplier=[1, 1], image_seq_length=100, vq_img_token_start_id=3, ): self.parent = parent self.batch_size = batch_size self.is_training = is_training self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.num_key_value_heads = num_key_value_heads self.intermediate_size = intermediate_size self.max_position_embeddings = max_position_embeddings self.initializer_range = initializer_range self.pad_token_id = pad_token_id self.bos_token_id = bos_token_id self.eos_token_id = eos_token_id self.image_token_id = image_token_id self.image_size = image_size self.codebook_size = codebook_size self.temporal_downsample_factor = temporal_downsample_factor self.vq_channel_multiplier = vq_channel_multiplier self.vq_img_token_start_id = vq_img_token_start_id self.base_channels = base_channels self.seq_length = seq_length + image_seq_length self.image_seq_length = image_seq_length def 
prepare_config_and_inputs(self): config = self.get_config() input_ids = ids_tensor([self.batch_size, self.seq_length], config.text_config.vocab_size) attention_mask = input_ids.ne(1).to(torch_device) input_ids[input_ids == self.image_token_id] = self.pad_token_id input_ids[:, : self.image_seq_length] = self.image_token_id pixel_values = floats_tensor( [ self.batch_size, 3, self.image_size, self.image_size, ] ) image_sizes = [[self.image_size, self.image_size]] * self.batch_size image_sizes = torch.tensor(image_sizes, device=torch_device, dtype=torch.int64) return config, input_ids, attention_mask, pixel_values, image_sizes def get_config(self): # create dummy vocab map for image2bpe mapping if it needs remapping # we assume that vocab size is big enough to account for `codebook_size` amount of # image tokens somewhere at the beginning of total vocab size vocab_map = {i: chr(i) for i in range(self.vocab_size)} start = self.vq_img_token_start_id end = self.vq_img_token_start_id + self.codebook_size for i in range(start, end): # dummy str for each token, anything that fits pattern "<|visual token XXXXXX|>" vocab_map[i] = f"<|visual token{i:06d}|>" # add tokens that have to be in the vocab, we'll retrieve their ids later in modeling code vocab_map[self.image_token_id] = "<image>" vocab_map[self.image_token_id + 1] = "<|extra_200|>" vocab_map = {v: k for k, v in vocab_map.items()} text_config = Emu3TextConfig( vocab_size=self.vocab_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, num_key_value_heads=self.num_key_value_heads, intermediate_size=self.intermediate_size, max_position_embeddings=self.max_position_embeddings, initializer_range=self.initializer_range, pad_token_id=self.pad_token_id, bos_token_id=self.bos_token_id, eos_token_id=self.eos_token_id, ) vq_config = { "codebook_size": self.codebook_size, "temporal_downsample_factor": self.temporal_downsample_factor, "base_channels": self.base_channels, "channel_multiplier": self.vq_channel_multiplier, "hidden_size": self.base_channels, } return Emu3Config(text_config=text_config, vq_config=vq_config, vocabulary_map=vocab_map) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, attention_mask, pixel_values, image_sizes, ) = config_and_inputs inputs_dict = { "input_ids": input_ids, "attention_mask": attention_mask, "pixel_values": pixel_values, "image_sizes": image_sizes, } return config, inputs_dict @require_torch class Emu3Vision2TextModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (Emu3ForConditionalGeneration,) if is_torch_available() else () all_generative_model_classes = (Emu3ForConditionalGeneration,) if is_torch_available() else () pipeline_model_mapping = {} test_headmasking = False test_pruning = False fx_compatible = False def setUp(self): self.model_tester = Emu3Vision2TextModelTester(self) self.config_tester = ConfigTester( self, config_class=Emu3Config, has_text_modality=False, common_properties=["vocabulary_map"] ) def test_config(self): self.config_tester.run_common_tests() # overwrite inputs_embeds tests because we need to delete "pixel values" for LVLMs def test_inputs_embeds(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() inputs = self._prepare_for_class(inputs_dict, model_class) 
input_ids = inputs["input_ids"] del inputs["input_ids"] del inputs["pixel_values"] wte = model.get_input_embeddings() inputs["inputs_embeds"] = wte(input_ids) with torch.no_grad(): model(**inputs) # overwrite inputs_embeds tests because we need to delete "pixel values" for LVLMs # while some other models require pixel_values to be present def test_inputs_embeds_matches_input_ids(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() inputs = self._prepare_for_class(inputs_dict, model_class) input_ids = inputs["input_ids"] del inputs["input_ids"] del inputs["pixel_values"] inputs_embeds = model.get_input_embeddings()(input_ids) with torch.no_grad(): out_ids = model(input_ids=input_ids, **inputs)[0] out_embeds = model(inputs_embeds=inputs_embeds, **inputs)[0] torch.testing.assert_close(out_embeds, out_ids) @unittest.skip( "Emu3 has a VQ module that uses `weight.data` directly in forward which prevent offloding on that module" ) def test_disk_offload_safetensors(self): pass @unittest.skip( "Emu3 has a VQ module that uses `weight.data` directly in forward which prevent offloding on that module" ) def test_disk_offload_bin(self): pass @unittest.skip( "Emu3 has a VQ module that uses `weight.data` directly in forward which prevent offloding on that module" ) def test_cpu_offload(self): pass @unittest.skip("Doesn't work, tensors are not almost same") # TODO raushan fixme def test_custom_4d_attention_mask(self): pass @unittest.skip("VQ-VAE module doesn't initialize weights properly") def test_initialization(self): pass @pytest.mark.generate @unittest.skip("Emu3 has dynamic control flow in vision backbone") def test_generate_with_static_cache(self): pass @require_torch class Emu3IntegrationTest(unittest.TestCase): @slow @require_bitsandbytes def test_model_generation(self): model = Emu3ForConditionalGeneration.from_pretrained("BAAI/Emu3-Chat-hf", load_in_4bit=True) processor = Emu3Processor.from_pretrained("BAAI/Emu3-Chat-hf") image = Image.open(requests.get("https://picsum.photos/id/237/200/200", stream=True).raw) prompt = "USER: <image>Describe what do you see here and tell me about the history behind it? ASSISTANT:" inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16) # greedy generation outputs EXPECTED_TEXT_COMPLETION = ['USER: 64*64Describe what do you see here and tell me about the history behind it? ASSISTANT: The image captures a moment of tranquility with a black Labrador Retriever resting on a wooden floor. The dog, with its glossy black coat, is lying down with its front legs stretched out in'] # fmt: skip generated_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False) text = processor.batch_decode(generated_ids, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT_COMPLETION, text) @slow @require_bitsandbytes @require_torch_large_gpu def test_model_generation_batched(self): model = Emu3ForConditionalGeneration.from_pretrained("BAAI/Emu3-Chat-hf", load_in_4bit=True) processor = Emu3Processor.from_pretrained("BAAI/Emu3-Chat-hf") processor.tokenizer.padding_side = "left" image = Image.open(requests.get("https://picsum.photos/id/237/50/50", stream=True).raw) image_2 = Image.open(requests.get("https://picsum.photos/id/247/50/50", stream=True).raw) prompts = [ "USER: <image>Describe what do you see here? ASSISTANT:", "USER: <image>What can you say about the image? 
ASSISTANT:", ] inputs = processor(images=[image, image_2], text=prompts, padding=True, return_tensors="pt").to( model.device, torch.float16 ) # greedy generation outputs EXPECTED_TEXT_COMPLETION = [ "USER: 64*64Describe what do you see here? ASSISTANT: The image depicts a black panther in a crouched position. The panther's body is elongated and curved, with its head lowered and ears pointed forward, suggesting alertness or focus.", 'USER: 64*64What can you say about the image? ASSISTANT: The image depicts a serene natural landscape. The foreground consists of a grassy area with some patches of bare earth. The middle ground shows a steep, reddish-brown cliff, which could be a' ] # fmt: skip generated_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False) text = processor.batch_decode(generated_ids, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT_COMPLETION, text) @slow @require_bitsandbytes @require_torch_large_gpu def test_model_generation_multi_image(self): model = Emu3ForConditionalGeneration.from_pretrained("BAAI/Emu3-Chat-hf", load_in_4bit=True) processor = Emu3Processor.from_pretrained("BAAI/Emu3-Chat-hf") image = Image.open(requests.get("https://picsum.photos/id/237/50/50", stream=True).raw) image_2 = Image.open(requests.get("https://picsum.photos/id/247/50/50", stream=True).raw) prompt = "USER: <image><image>What do these two images have in common? ASSISTANT:" inputs = processor(images=[image, image_2], text=prompt, return_tensors="pt").to(model.device, torch.float16) # greedy generation outputs EXPECTED_TEXT_COMPLETION = ["USER: 64*6464*64What do these two images have in common? ASSISTANT: Both images feature a black animal, but they are not the same animal. The top image shows a close-up of a black cow's head, while the bottom image depicts a black cow in a natural"] # fmt: skip generated_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False) text = processor.batch_decode(generated_ids, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT_COMPLETION, text) @slow @require_bitsandbytes @require_torch_large_gpu def test_model_generate_images(self): model = Emu3ForConditionalGeneration.from_pretrained("BAAI/Emu3-Gen-hf", load_in_4bit=True) processor = Emu3Processor.from_pretrained("BAAI/Emu3-Gen-hf") inputs = processor( text=["a portrait of young girl. 
masterpiece, film grained, best quality."], padding=True, return_tensors="pt", return_for_image_generation=True, image_area=1600, ).to(model.device) self.assertTrue(inputs.input_ids.shape[1] == 21) image_sizes = inputs.pop("image_sizes") HEIGHT, WIDTH = image_sizes[0] VISUAL_TOKENS = model.vocabulary_mapping.image_tokens def prefix_allowed_tokens_fn(batch_id, input_ids): height, width = HEIGHT, WIDTH visual_tokens = VISUAL_TOKENS image_wrapper_token_id = torch.tensor([processor.tokenizer.image_wrapper_token_id], device=model.device) eoi_token_id = torch.tensor([processor.tokenizer.eoi_token_id], device=model.device) eos_token_id = torch.tensor([processor.tokenizer.eos_token_id], device=model.device) pad_token_id = torch.tensor([processor.tokenizer.pad_token_id], device=model.device) eof_token_id = torch.tensor([processor.tokenizer.eof_token_id], device=model.device) eol_token_id = processor.tokenizer.encode("<|extra_200|>", return_tensors="pt")[0] position = torch.nonzero(input_ids == image_wrapper_token_id, as_tuple=True)[0][0] offset = input_ids.shape[0] - position if offset % (width + 1) == 0: return (eol_token_id,) elif offset == (width + 1) * height + 1: return (eof_token_id,) elif offset == (width + 1) * height + 2: return (eoi_token_id,) elif offset == (width + 1) * height + 3: return (eos_token_id,) elif offset > (width + 1) * height + 3: return (pad_token_id,) else: return visual_tokens out = model.generate( **inputs, max_new_tokens=200, prefix_allowed_tokens_fn=prefix_allowed_tokens_fn, do_sample=False, ) self.assertTrue(out.shape[1] == 54) image = model.decode_image_tokens(out[:, inputs.input_ids.shape[1] :], height=HEIGHT, width=WIDTH) images = processor.postprocess(list(image.float()), return_tensors="np") self.assertTrue(images["pixel_values"].shape == (3, 40, 40)) self.assertTrue(isinstance(images["pixel_values"], np.ndarray)) filepath = hf_hub_download( repo_id="raushan-testing-hf/images_test", filename="emu3_image.npy", repo_type="dataset", ) original_pixels = np.load(filepath) self.assertTrue(np.allclose(original_pixels, images["pixel_values"]))
transformers/tests/models/emu3/test_modeling_emu3.py/0
{ "file_path": "transformers/tests/models/emu3/test_modeling_emu3.py", "repo_id": "transformers", "token_count": 9656 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import numpy as np from transformers import AutoTokenizer, GemmaConfig, is_flax_available from transformers.testing_utils import require_flax, require_read_token, slow from ...generation.test_flax_utils import FlaxGenerationTesterMixin from ...test_modeling_flax_common import FlaxModelTesterMixin, ids_tensor if is_flax_available(): import jax import jax.numpy as jnp from transformers.models.gemma.modeling_flax_gemma import ( FlaxGemmaForCausalLM, FlaxGemmaModel, ) class FlaxGemmaModelTester: def __init__( self, parent, batch_size=2, seq_length=7, is_training=True, use_input_mask=True, use_token_type_ids=False, use_labels=True, vocab_size=99, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, num_key_value_heads=2, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, initializer_range=0.02, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_input_mask = use_input_mask self.use_token_type_ids = use_token_type_ids self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.num_key_value_heads = num_key_value_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.max_position_embeddings = max_position_embeddings self.initializer_range = initializer_range self.scope = None self.bos_token_id = vocab_size - 1 self.eos_token_id = vocab_size - 1 self.pad_token_id = vocab_size - 1 def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: input_mask = np.tril(np.ones((self.batch_size, self.seq_length))) config = GemmaConfig( vocab_size=self.vocab_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, num_key_value_heads=self.num_key_value_heads, head_dim=self.hidden_size // self.num_attention_heads, intermediate_size=self.intermediate_size, hidden_act=self.hidden_act, hidden_dropout_prob=self.hidden_dropout_prob, attention_probs_dropout_prob=self.attention_probs_dropout_prob, max_position_embeddings=self.max_position_embeddings, use_cache=True, is_decoder=False, initializer_range=self.initializer_range, ) return config, input_ids, input_mask def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, input_ids, attention_mask = config_and_inputs inputs_dict = {"input_ids": input_ids, "attention_mask": attention_mask} return config, inputs_dict def check_use_cache_forward(self, model_class_name, config, input_ids, attention_mask): max_decoder_length = 20 model = 
model_class_name(config) past_key_values = model.init_cache(input_ids.shape[0], max_decoder_length) attention_mask = jnp.ones((input_ids.shape[0], max_decoder_length), dtype="i4") position_ids = jnp.broadcast_to( jnp.arange(input_ids.shape[-1] - 1)[None, :], (input_ids.shape[0], input_ids.shape[-1] - 1) ) outputs_cache = model( input_ids[:, :-1], attention_mask=attention_mask, past_key_values=past_key_values, position_ids=position_ids, ) position_ids = jnp.array(input_ids.shape[0] * [[input_ids.shape[-1] - 1]], dtype="i4") outputs_cache_next = model( input_ids[:, -1:], attention_mask=attention_mask, past_key_values=outputs_cache.past_key_values, position_ids=position_ids, ) outputs = model(input_ids) diff = np.max(np.abs((outputs_cache_next[0][:, -1, :5] - outputs[0][:, -1, :5]))) self.parent.assertTrue(diff < 1e-3, msg=f"Max diff is {diff}") def check_use_cache_forward_with_attn_mask(self, model_class_name, config, input_ids, attention_mask): max_decoder_length = 20 model = model_class_name(config) attention_mask_cache = jnp.concatenate( [attention_mask, jnp.zeros((attention_mask.shape[0], max_decoder_length - attention_mask.shape[1]))], axis=-1, ) past_key_values = model.init_cache(input_ids.shape[0], max_decoder_length) position_ids = jnp.broadcast_to( jnp.arange(input_ids.shape[-1] - 1)[None, :], (input_ids.shape[0], input_ids.shape[-1] - 1) ) outputs_cache = model( input_ids[:, :-1], attention_mask=attention_mask_cache, past_key_values=past_key_values, position_ids=position_ids, ) position_ids = jnp.array(input_ids.shape[0] * [[input_ids.shape[-1] - 1]], dtype="i4") outputs_cache_next = model( input_ids[:, -1:], past_key_values=outputs_cache.past_key_values, attention_mask=attention_mask_cache, position_ids=position_ids, ) outputs = model(input_ids, attention_mask=attention_mask) diff = np.max(np.abs((outputs_cache_next[0][:, -1, :5] - outputs[0][:, -1, :5]))) self.parent.assertTrue(diff < 1e-3, msg=f"Max diff is {diff}") @require_flax class FlaxGemmaModelTest(FlaxModelTesterMixin, FlaxGenerationTesterMixin, unittest.TestCase): all_model_classes = (FlaxGemmaModel, FlaxGemmaForCausalLM) if is_flax_available() else () all_generative_model_classes = (FlaxGemmaForCausalLM,) if is_flax_available() else () def setUp(self): self.model_tester = FlaxGemmaModelTester(self) def test_use_cache_forward(self): for model_class_name in self.all_model_classes: config, input_ids, attention_mask = self.model_tester.prepare_config_and_inputs() self.model_tester.check_use_cache_forward(model_class_name, config, input_ids, attention_mask) def test_use_cache_forward_with_attn_mask(self): for model_class_name in self.all_model_classes: config, input_ids, attention_mask = self.model_tester.prepare_config_and_inputs() self.model_tester.check_use_cache_forward_with_attn_mask( model_class_name, config, input_ids, attention_mask ) @slow def test_model_from_pretrained(self): for model_class_name in self.all_model_classes: model = model_class_name.from_pretrained("google/gemma-2b", from_pt=True) outputs = model(np.ones((1, 1))) self.assertIsNotNone(outputs) @slow @require_flax @require_read_token class FlaxGemmaIntegrationTest(unittest.TestCase): input_text = ["The capital of France is", "To play the perfect cover drive"] model_id = "google/gemma-2b" revision = "flax" def setUp(self): self.model, self.params = FlaxGemmaForCausalLM.from_pretrained( self.model_id, revision=self.revision, _do_init=False ) self.tokenizer = AutoTokenizer.from_pretrained(self.model_id) self.tokenizer.padding_side = "left" def 
test_logits(self): inputs = self.tokenizer(self.input_text, return_tensors="np", padding=True) # fmt: off EXPECTED_MEAN = [ [-16.427, -21.386, -35.491, -36.258, -31.401, -36.370, -37.598], [-21.386, -32.150, -33.155, -34.344, -34.706, -34.678, -38.495], ] EXPECTED_SLICE = [-33.462, -16.481, -30.837, -32.195, -33.113] # fmt: on logits = self.model(**inputs, params=self.params).logits diff_mean = jnp.abs(logits.mean(-1) - np.array(EXPECTED_MEAN)).max() diff_slice = jnp.abs(logits[0, -1, 475:480] - np.array(EXPECTED_SLICE)).max() self.assertAlmostEqual(diff_mean, 0, places=3) self.assertAlmostEqual(diff_slice, 0, places=3) def test_generation(self): EXPECTED_TEXTS = [ "The capital of France is a city of contrasts. It is a city of history, of art, of culture, of fashion", "To play the perfect cover drive, you need to have a good technique and a good mindset.\n\nThe cover drive is a shot", ] inputs = self.tokenizer(self.input_text, return_tensors="np", padding=True) output = self.model.generate(**inputs, params=self.params, max_new_tokens=20, do_sample=False) output_text = self.tokenizer.batch_decode(output.sequences, skip_special_tokens=True) self.assertEqual(output_text, EXPECTED_TEXTS) def test_jit_generation(self): EXPECTED_TEXTS = [ "The capital of France is a city of contrasts. It is a city of history, culture, and art, but it is", "To play the perfect cover drive, you need to have a good technique and a good mindset.\n\nThe cover drive is a shot", ] inputs = self.tokenizer(self.input_text, return_tensors="np", padding=True) def generate(input_ids, attention_mask): outputs = self.model.generate( input_ids, attention_mask=attention_mask, params=self.params, max_new_tokens=20, do_sample=False ) return outputs jit_generate = jax.jit(generate) output_sequences = jit_generate(**inputs).sequences output_text = self.tokenizer.batch_decode(output_sequences, skip_special_tokens=True) self.assertEqual(output_text, EXPECTED_TEXTS)
transformers/tests/models/gemma/test_modeling_flax_gemma.py/0
{ "file_path": "transformers/tests/models/gemma/test_modeling_flax_gemma.py", "repo_id": "transformers", "token_count": 4825 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import shutil import tempfile import unittest from transformers import AutoProcessor, GotOcr2Processor, PreTrainedTokenizerFast from transformers.testing_utils import require_vision from transformers.utils import is_vision_available from ...test_processing_common import ProcessorTesterMixin if is_vision_available(): from transformers import GotOcr2ImageProcessor @require_vision class GotOcr2ProcessorTest(ProcessorTesterMixin, unittest.TestCase): processor_class = GotOcr2Processor def setUp(self): self.tmpdirname = tempfile.mkdtemp() image_processor = GotOcr2ImageProcessor() tokenizer = PreTrainedTokenizerFast.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf") processor_kwargs = self.prepare_processor_dict() processor = GotOcr2Processor(image_processor, tokenizer, **processor_kwargs) processor.save_pretrained(self.tmpdirname) def get_tokenizer(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).tokenizer def get_image_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).image_processor def tearDown(self): shutil.rmtree(self.tmpdirname) def test_ocr_queries(self): processor = self.get_processor() image_input = self.prepare_image_inputs() inputs = processor(image_input, return_tensors="pt") self.assertEqual(inputs["input_ids"].shape, (1, 286)) self.assertEqual(inputs["pixel_values"].shape, (1, 3, 384, 384)) inputs = processor(image_input, return_tensors="pt", format=True) self.assertEqual(inputs["input_ids"].shape, (1, 288)) self.assertEqual(inputs["pixel_values"].shape, (1, 3, 384, 384)) inputs = processor(image_input, return_tensors="pt", color="red") self.assertEqual(inputs["input_ids"].shape, (1, 290)) self.assertEqual(inputs["pixel_values"].shape, (1, 3, 384, 384)) inputs = processor(image_input, return_tensors="pt", box=[0, 0, 100, 100]) self.assertEqual(inputs["input_ids"].shape, (1, 303)) self.assertEqual(inputs["pixel_values"].shape, (1, 3, 384, 384)) inputs = processor([image_input, image_input], return_tensors="pt", multi_page=True, format=True) self.assertEqual(inputs["input_ids"].shape, (1, 547)) self.assertEqual(inputs["pixel_values"].shape, (2, 3, 384, 384)) inputs = processor(image_input, return_tensors="pt", crop_to_patches=True, max_patches=6) self.assertEqual(inputs["input_ids"].shape, (1, 1826)) self.assertEqual(inputs["pixel_values"].shape, (7, 3, 384, 384))
transformers/tests/models/got_ocr2/test_processor_got_ocr2.py/0
{ "file_path": "transformers/tests/models/got_ocr2/test_processor_got_ocr2.py", "repo_id": "transformers", "token_count": 1208 }
# coding=utf-8 # Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import os import unittest from transformers.models.gpt_neox_japanese.tokenization_gpt_neox_japanese import ( VOCAB_FILES_NAMES, GPTNeoXJapaneseTokenizer, ) from transformers.testing_utils import require_tokenizers, slow from ...test_tokenization_common import TokenizerTesterMixin @require_tokenizers class GPTNeoXJapaneseTokenizationTest(TokenizerTesterMixin, unittest.TestCase): from_pretrained_id = "abeja/gpt-neox-japanese-2.7b" tokenizer_class = GPTNeoXJapaneseTokenizer test_rust_tokenizer = False from_pretrained_kwargs = {"do_clean_text": False, "add_prefix_space": False} def setUp(self): super().setUp() vocab_tokens = [ "こん", "こんに", "にちは", "ばんは", "世界,㔺界", "、", "。", "<BR>", "<SP>", "<TAB>", "<URL>", "<EMAIL>", "<TEL>", "<DATE>", "<PRICE>", "<BLOCK>", "<KIGOU>", "<U2000U2BFF>", "<|emoji1|>", "<unk>", "<|startoftext|>", "<|endoftext|>", ] emoji_tokens = {"emoji": {"\ud83d\ude00": "<|emoji1|>"}, "emoji_inv": {"<|emoji1|>": "\ud83d\ude00"}} # 😀 self.special_tokens_map = {"unk_token": "<unk>"} self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"]) self.emoji_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["emoji_file"]) with open(self.vocab_file, "w", encoding="utf-8") as vocab_writer: vocab_writer.write("".join([x + "\n" for x in vocab_tokens])) with open(self.emoji_file, "w") as emoji_writer: emoji_writer.write(json.dumps(emoji_tokens)) def get_tokenizer(self, **kwargs): kwargs.update(self.special_tokens_map) return GPTNeoXJapaneseTokenizer.from_pretrained(self.tmpdirname, **kwargs) def get_input_output_texts(self, tokenizer): input_text = "こんにちは、世界。 \nこんばんは、㔺界。😀" output_text = "こんにちは、世界。 \nこんばんは、世界。😀" return input_text, output_text def get_clean_sequence(self, tokenizer): input_text, output_text = self.get_input_output_texts(tokenizer) ids = tokenizer.encode(output_text, add_special_tokens=False) text = tokenizer.decode(ids, clean_up_tokenization_spaces=False) return text, ids def test_pretokenized_inputs(self): pass # TODO add if relevant def test_maximum_encoding_length_pair_input(self): pass # TODO add if relevant def test_maximum_encoding_length_single_input(self): pass # TODO add if relevant def test_full_tokenizer(self): tokenizer = self.get_tokenizer() # Testing tokenization input_text = "こんにちは、世界。 こんばんは、㔺界。" expected_token = ["こん", "にちは", "、", "世界", "。", "<SP>", "こん", "ばんは", "、", "㔺界", "。"] tokens = tokenizer.tokenize(input_text) self.assertListEqual(tokens, expected_token) # Testing conversion to ids without special tokens expected_ids = [0, 2, 5, 4, 6, 8, 0, 3, 5, 4, 6] input_ids = tokenizer.convert_tokens_to_ids(tokens) self.assertListEqual(input_ids, expected_ids) # Testing conversion to ids with special tokens input_tokens = tokens + [tokenizer.unk_token] expected_ids = [0, 2, 5, 4, 6, 8, 0, 3, 5, 4, 6, 19] input_ids = tokenizer.convert_tokens_to_ids(input_tokens) self.assertListEqual(input_ids, expected_ids) @slow def 
test_sequence_builders(self): tokenizer = self.tokenizer_class.from_pretrained("abeja/gpt-neox-japanese-2.7b") ids_1 = tokenizer.encode("ありがとう。", add_special_tokens=False) ids_2 = tokenizer.encode("どういたしまして。", add_special_tokens=False) encoded_sentence = tokenizer.build_inputs_with_special_tokens(ids_1) encoded_pair = tokenizer.build_inputs_with_special_tokens(ids_1, ids_2) assert encoded_sentence == ids_1 assert encoded_pair == ids_1 + ids_2 @unittest.skip def test_conversion_reversible(self): # Intentionally convert some words to accommodate character fluctuations unique to Japanese pass @unittest.skip(reason="tokenizer has no padding token") def test_padding_different_model_input_name(self): pass
transformers/tests/models/gpt_neox_japanese/test_tokenization_gpt_neox_japanese.py/0
{ "file_path": "transformers/tests/models/gpt_neox_japanese/test_tokenization_gpt_neox_japanese.py", "repo_id": "transformers", "token_count": 2318 }
# coding=utf-8 # Copyright 2022 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch GroupViT model.""" import inspect import os import random import tempfile import unittest import numpy as np import requests from transformers import GroupViTConfig, GroupViTTextConfig, GroupViTVisionConfig from transformers.testing_utils import is_flaky, is_pt_tf_cross_test, require_torch, require_vision, slow, torch_device from transformers.utils import is_torch_available, is_vision_available from ...test_configuration_common import ConfigTester from ...test_modeling_common import ( ModelTesterMixin, _config_zero_init, floats_tensor, ids_tensor, random_attention_mask, ) from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from torch import nn from transformers import GroupViTModel, GroupViTTextModel, GroupViTVisionModel if is_vision_available(): from PIL import Image from transformers import CLIPProcessor class GroupViTVisionModelTester: def __init__( self, parent, batch_size=12, image_size=30, patch_size=2, num_channels=3, is_training=True, hidden_size=32, depths=[6, 3, 3], num_group_tokens=[64, 8, 0], num_output_groups=[64, 8, 8], num_attention_heads=4, intermediate_size=37, dropout=0.1, attention_dropout=0.1, initializer_range=0.02, scope=None, ): self.parent = parent self.batch_size = batch_size self.image_size = image_size self.patch_size = patch_size self.num_channels = num_channels self.is_training = is_training self.hidden_size = hidden_size self.depths = depths self.num_hidden_layers = sum(depths) self.expected_num_hidden_layers = len(depths) + 1 self.num_group_tokens = num_group_tokens self.num_output_groups = num_output_groups self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.dropout = dropout self.attention_dropout = attention_dropout self.initializer_range = initializer_range self.scope = scope num_patches = (image_size // patch_size) ** 2 # no [CLS] token for GroupViT self.seq_length = num_patches def prepare_config_and_inputs(self): rng = random.Random(0) pixel_values = floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size], rng=rng) config = self.get_config() return config, pixel_values def get_config(self): return GroupViTVisionConfig( image_size=self.image_size, patch_size=self.patch_size, num_channels=self.num_channels, hidden_size=self.hidden_size, depths=self.depths, num_group_tokens=self.num_group_tokens, num_output_groups=self.num_output_groups, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, dropout=self.dropout, attention_dropout=self.attention_dropout, initializer_range=self.initializer_range, ) def create_and_check_model(self, config, pixel_values): model = GroupViTVisionModel(config=config) model.to(torch_device) model.eval() with torch.no_grad(): result = model(pixel_values) self.parent.assertEqual( result.last_hidden_state.shape, (self.batch_size, 
self.num_output_groups[-1], self.hidden_size) ) self.parent.assertEqual(result.pooler_output.shape, (self.batch_size, self.hidden_size)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_torch class GroupViTVisionModelTest(ModelTesterMixin, unittest.TestCase): """ Here we also overwrite some of the tests of test_modeling_common.py, as GROUPVIT does not use input_ids, inputs_embeds, attention_mask and seq_length. """ all_model_classes = (GroupViTVisionModel,) if is_torch_available() else () test_pruning = False test_torchscript = False test_resize_embeddings = False test_head_masking = False def setUp(self): self.model_tester = GroupViTVisionModelTester(self) self.config_tester = ConfigTester( self, config_class=GroupViTVisionConfig, has_text_modality=False, hidden_size=37 ) def test_config(self): self.config_tester.run_common_tests() @unittest.skip(reason="GroupViT does not use inputs_embeds") def test_inputs_embeds(self): pass @is_flaky(description="The `index` computed with `max()` in `hard_softmax` is not stable.") def test_batching_equivalence(self): super().test_batching_equivalence() @is_pt_tf_cross_test def test_pt_tf_model_equivalence(self): import tensorflow as tf seed = 338 random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) tf.random.set_seed(seed) return super().test_pt_tf_model_equivalence() def test_model_get_set_embeddings(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) self.assertIsInstance(model.get_input_embeddings(), (nn.Module)) x = model.get_output_embeddings() self.assertTrue(x is None or isinstance(x, nn.Linear)) def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = ["pixel_values"] self.assertListEqual(arg_names[:1], expected_arg_names) def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_attention_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True seq_len = getattr(self.model_tester, "seq_length", None) expected_num_attention_outputs = sum(g > 0 for g in self.model_tester.num_group_tokens) for model_class in self.all_model_classes: inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = False config.return_dict = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.attentions # GroupViT returns attention grouping of each stage self.assertEqual(len(attentions), sum(g > 0 for g in self.model_tester.num_group_tokens)) # check that output_attentions also work using config del inputs_dict["output_attentions"] config.output_attentions = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.attentions # GroupViT 
returns attention grouping of each stage self.assertEqual(len(attentions), expected_num_attention_outputs) out_len = len(outputs) # Check attention is always last and order is fine inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) added_hidden_states = 1 self.assertEqual(out_len + added_hidden_states, len(outputs)) self_attentions = outputs.attentions # GroupViT returns attention grouping of each stage self.assertEqual(len(self_attentions), expected_num_attention_outputs) for i, self_attn in enumerate(self_attentions): if self_attn is None: continue self.assertListEqual( list(self_attentions[i].shape[-2:]), [ self.model_tester.num_output_groups[i], self.model_tester.num_output_groups[i - 1] if i > 0 else seq_len, ], ) @unittest.skip def test_training(self): pass @unittest.skip def test_training_gradient_checkpointing(self): pass @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant(self): pass @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant_false(self): pass @unittest.skip(reason="GroupViTVisionModel has no base class and is not available in MODEL_MAPPING") def test_save_load_fast_init_from_base(self): pass @unittest.skip(reason="GroupViTVisionModel has no base class and is not available in MODEL_MAPPING") def test_save_load_fast_init_to_base(self): pass # override since the attention mask from GroupViT is not used to compute loss, thus no grad def test_retain_grad_hidden_states_attentions(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.output_hidden_states = True config.output_attentions = self.has_attentions # no need to test all models as different heads yield the same functionality model_class = self.all_model_classes[0] model = model_class(config) model.to(torch_device) inputs = self._prepare_for_class(inputs_dict, model_class) outputs = model(**inputs) output = outputs[0] if config.is_encoder_decoder: # Seq2Seq models encoder_hidden_states = outputs.encoder_hidden_states[0] encoder_hidden_states.retain_grad() decoder_hidden_states = outputs.decoder_hidden_states[0] decoder_hidden_states.retain_grad() if self.has_attentions: encoder_attentions = outputs.encoder_attentions[0] encoder_attentions.retain_grad() decoder_attentions = outputs.decoder_attentions[0] decoder_attentions.retain_grad() cross_attentions = outputs.cross_attentions[0] cross_attentions.retain_grad() output.flatten()[0].backward(retain_graph=True) self.assertIsNotNone(encoder_hidden_states.grad) self.assertIsNotNone(decoder_hidden_states.grad) if self.has_attentions: self.assertIsNotNone(encoder_attentions.grad) self.assertIsNotNone(decoder_attentions.grad) self.assertIsNotNone(cross_attentions.grad) else: # Encoder-/Decoder-only models hidden_states = outputs.hidden_states[0] hidden_states.retain_grad() if self.has_attentions: attentions = outputs.attentions[0] attentions.retain_grad() output.flatten()[0].backward(retain_graph=True) self.assertIsNotNone(hidden_states.grad) if self.has_attentions: self.assertIsNone(attentions.grad) @slow def 
test_model_from_pretrained(self): model_name = "nvidia/groupvit-gcc-yfcc" model = GroupViTVisionModel.from_pretrained(model_name) self.assertIsNotNone(model) class GroupViTTextModelTester: def __init__( self, parent, batch_size=12, seq_length=7, is_training=True, use_input_mask=True, use_labels=True, vocab_size=99, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, dropout=0.1, attention_dropout=0.1, max_position_embeddings=512, initializer_range=0.02, scope=None, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_input_mask = use_input_mask self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.dropout = dropout self.attention_dropout = attention_dropout self.max_position_embeddings = max_position_embeddings self.initializer_range = initializer_range self.scope = scope def prepare_config_and_inputs(self): rng = random.Random(0) input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size, rng=rng) input_mask = None if self.use_input_mask: input_mask = random_attention_mask([self.batch_size, self.seq_length]) if input_mask is not None: batch_size, seq_length = input_mask.shape rnd_start_indices = np.random.randint(1, seq_length - 1, size=(batch_size,)) for batch_idx, start_index in enumerate(rnd_start_indices): input_mask[batch_idx, :start_index] = 1 input_mask[batch_idx, start_index:] = 0 config = self.get_config() return config, input_ids, input_mask def get_config(self): return GroupViTTextConfig( vocab_size=self.vocab_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, dropout=self.dropout, attention_dropout=self.attention_dropout, max_position_embeddings=self.max_position_embeddings, initializer_range=self.initializer_range, ) def create_and_check_model(self, config, input_ids, input_mask): model = GroupViTTextModel(config=config) model.to(torch_device) model.eval() with torch.no_grad(): result = model(input_ids, attention_mask=input_mask) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) self.parent.assertEqual(result.pooler_output.shape, (self.batch_size, self.hidden_size)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, input_ids, input_mask = config_and_inputs inputs_dict = {"input_ids": input_ids, "attention_mask": input_mask} return config, inputs_dict @require_torch class GroupViTTextModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (GroupViTTextModel,) if is_torch_available() else () test_pruning = False test_head_masking = False def setUp(self): self.model_tester = GroupViTTextModelTester(self) self.config_tester = ConfigTester(self, config_class=GroupViTTextConfig, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) @unittest.skip def test_training(self): pass @unittest.skip def test_training_gradient_checkpointing(self): pass @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: 
https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant(self): pass @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant_false(self): pass @unittest.skip(reason="GroupViTTextModel does not use inputs_embeds") def test_inputs_embeds(self): pass @unittest.skip(reason="GroupViTTextModel has no base class and is not available in MODEL_MAPPING") def test_save_load_fast_init_from_base(self): pass @unittest.skip(reason="GroupViTTextModel has no base class and is not available in MODEL_MAPPING") def test_save_load_fast_init_to_base(self): pass @slow def test_model_from_pretrained(self): model_name = "nvidia/groupvit-gcc-yfcc" model = GroupViTTextModel.from_pretrained(model_name) self.assertIsNotNone(model) class GroupViTModelTester: def __init__(self, parent, text_kwargs=None, vision_kwargs=None, is_training=True): if text_kwargs is None: text_kwargs = {} if vision_kwargs is None: vision_kwargs = {} self.parent = parent self.text_model_tester = GroupViTTextModelTester(parent, **text_kwargs) self.vision_model_tester = GroupViTVisionModelTester(parent, **vision_kwargs) self.batch_size = self.text_model_tester.batch_size # need bs for batching_equivalence test self.is_training = is_training def prepare_config_and_inputs(self): text_config, input_ids, attention_mask = self.text_model_tester.prepare_config_and_inputs() vision_config, pixel_values = self.vision_model_tester.prepare_config_and_inputs() config = self.get_config() return config, input_ids, attention_mask, pixel_values def get_config(self): return GroupViTConfig.from_text_vision_configs( self.text_model_tester.get_config(), self.vision_model_tester.get_config(), projection_dim=64 ) def create_and_check_model(self, config, input_ids, attention_mask, pixel_values): model = GroupViTModel(config).to(torch_device).eval() with torch.no_grad(): result = model(input_ids, pixel_values, attention_mask) self.parent.assertEqual( result.logits_per_image.shape, (self.vision_model_tester.batch_size, self.text_model_tester.batch_size) ) self.parent.assertEqual( result.logits_per_text.shape, (self.text_model_tester.batch_size, self.vision_model_tester.batch_size) ) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, input_ids, attention_mask, pixel_values = config_and_inputs inputs_dict = { "input_ids": input_ids, "attention_mask": attention_mask, "pixel_values": pixel_values, "return_loss": True, } return config, inputs_dict @require_torch class GroupViTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (GroupViTModel,) if is_torch_available() else () pipeline_model_mapping = {"feature-extraction": GroupViTModel} if is_torch_available() else {} test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False def setUp(self): self.model_tester = GroupViTModelTester(self) common_properties = ["projection_dim", "projection_intermediate_dim", "logit_scale_init_value"] self.config_tester = ConfigTester( self, config_class=GroupViTConfig, has_text_modality=False, common_properties=common_properties ) def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_config(self): self.config_tester.run_common_tests() 
    @is_flaky(description="The `index` computed with `max()` in `hard_softmax` is not stable.")
    def test_batching_equivalence(self):
        super().test_batching_equivalence()

    @unittest.skip(reason="hidden_states are tested in individual model tests")
    def test_hidden_states_output(self):
        pass

    @unittest.skip(reason="input_embeds are tested in individual model tests")
    def test_inputs_embeds(self):
        pass

    @unittest.skip(reason="tested in individual model tests")
    def test_retain_grad_hidden_states_attentions(self):
        pass

    @unittest.skip(reason="GroupViTModel does not have input/output embeddings")
    def test_model_get_set_embeddings(self):
        pass

    # overwritten from parent as this equivalent test needs a specific `seed` and hard to get a good one!
    def check_pt_tf_outputs(self, tf_outputs, pt_outputs, model_class, tol=2e-5, name="outputs", attributes=None):
        super().check_pt_tf_outputs(tf_outputs, pt_outputs, model_class, tol=tol, name=name, attributes=attributes)

    @is_pt_tf_cross_test
    def test_pt_tf_model_equivalence(self):
        import tensorflow as tf

        seed = 163
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        tf.random.set_seed(seed)
        return super().test_pt_tf_model_equivalence()

    # override as the `logit_scale` parameter initialization is different for GROUPVIT
    def test_initialization(self):
        config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()

        configs_no_init = _config_zero_init(config)
        for model_class in self.all_model_classes:
            model = model_class(config=configs_no_init)
            for name, param in model.named_parameters():
                if param.requires_grad:
                    # check if `logit_scale` is initialized as per the original implementation
                    if name == "logit_scale":
                        self.assertAlmostEqual(
                            param.data.item(),
                            np.log(1 / 0.07),
                            delta=1e-3,
                            msg=f"Parameter {name} of model {model_class} seems not properly initialized",
                        )
                    else:
                        self.assertIn(
                            ((param.data.mean() * 1e9).round() / 1e9).item(),
                            [0.0, 1.0],
                            msg=f"Parameter {name} of model {model_class} seems not properly initialized",
                        )

    def _create_and_check_torchscript(self, config, inputs_dict):
        if not self.test_torchscript:
            self.skipTest(reason="test_torchscript is set to False")

        configs_no_init = _config_zero_init(config)  # To be sure we have no NaN
        configs_no_init.torchscript = True
        configs_no_init.return_dict = False
        for model_class in self.all_model_classes:
            model = model_class(config=configs_no_init)
            model.to(torch_device)
            model.eval()

            try:
                input_ids = inputs_dict["input_ids"]
                pixel_values = inputs_dict["pixel_values"]  # GROUPVIT needs pixel_values
                traced_model = torch.jit.trace(model, (input_ids, pixel_values))
            except RuntimeError:
                self.fail("Couldn't trace module.")

            with tempfile.TemporaryDirectory() as tmp_dir_name:
                pt_file_name = os.path.join(tmp_dir_name, "traced_model.pt")

                try:
                    torch.jit.save(traced_model, pt_file_name)
                except Exception:
                    self.fail("Couldn't save module.")

                try:
                    loaded_model = torch.jit.load(pt_file_name)
                except Exception:
                    self.fail("Couldn't load module.")

                model.to(torch_device)
                model.eval()

                loaded_model.to(torch_device)
                loaded_model.eval()

                model_state_dict = model.state_dict()
                loaded_model_state_dict = loaded_model.state_dict()

                non_persistent_buffers = {}
                for key in loaded_model_state_dict.keys():
                    if key not in model_state_dict.keys():
                        non_persistent_buffers[key] = loaded_model_state_dict[key]

                loaded_model_state_dict = {
                    key: value for key, value in loaded_model_state_dict.items() if key not in non_persistent_buffers
                }

                self.assertEqual(set(model_state_dict.keys()), set(loaded_model_state_dict.keys()))
model_buffers = list(model.buffers()) for non_persistent_buffer in non_persistent_buffers.values(): found_buffer = False for i, model_buffer in enumerate(model_buffers): if torch.equal(non_persistent_buffer, model_buffer): found_buffer = True break self.assertTrue(found_buffer) model_buffers.pop(i) models_equal = True for layer_name, p1 in model_state_dict.items(): p2 = loaded_model_state_dict[layer_name] if p1.data.ne(p2.data).sum() > 0: models_equal = False self.assertTrue(models_equal) def test_load_vision_text_config(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() # Save GroupViTConfig and check if we can load GroupViTVisionConfig from it with tempfile.TemporaryDirectory() as tmp_dir_name: config.save_pretrained(tmp_dir_name) vision_config = GroupViTVisionConfig.from_pretrained(tmp_dir_name) self.assertDictEqual(config.vision_config.to_dict(), vision_config.to_dict()) # Save GroupViTConfig and check if we can load GroupViTTextConfig from it with tempfile.TemporaryDirectory() as tmp_dir_name: config.save_pretrained(tmp_dir_name) text_config = GroupViTTextConfig.from_pretrained(tmp_dir_name) self.assertDictEqual(config.text_config.to_dict(), text_config.to_dict()) @slow def test_model_from_pretrained(self): model_name = "nvidia/groupvit-gcc-yfcc" model = GroupViTModel.from_pretrained(model_name) self.assertIsNotNone(model) # We will verify our results on an image of cute cats def prepare_img(): url = "http://images.cocodataset.org/val2017/000000039769.jpg" im = Image.open(requests.get(url, stream=True).raw) return im @require_vision @require_torch class GroupViTModelIntegrationTest(unittest.TestCase): @slow def test_inference(self): model_name = "nvidia/groupvit-gcc-yfcc" model = GroupViTModel.from_pretrained(model_name) processor = CLIPProcessor.from_pretrained(model_name) image = prepare_img() inputs = processor( text=["a photo of a cat", "a photo of a dog"], images=image, padding=True, return_tensors="pt" ) # forward pass with torch.no_grad(): outputs = model(**inputs) # verify the logits self.assertEqual( outputs.logits_per_image.shape, torch.Size((inputs.pixel_values.shape[0], inputs.input_ids.shape[0])), ) self.assertEqual( outputs.logits_per_text.shape, torch.Size((inputs.input_ids.shape[0], inputs.pixel_values.shape[0])), ) expected_logits = torch.tensor([[13.3523, 6.3629]]) torch.testing.assert_close(outputs.logits_per_image, expected_logits, rtol=1e-3, atol=1e-3)
transformers/tests/models/groupvit/test_modeling_groupvit.py/0
{ "file_path": "transformers/tests/models/groupvit/test_modeling_groupvit.py", "repo_id": "transformers", "token_count": 13058 }
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Informer model.""" import inspect import tempfile import unittest import numpy as np from huggingface_hub import hf_hub_download from transformers import is_torch_available from transformers.testing_utils import is_flaky, require_torch, slow, torch_device from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin TOLERANCE = 1e-4 if is_torch_available(): import torch from transformers import InformerConfig, InformerForPrediction, InformerModel from transformers.models.informer.modeling_informer import ( InformerDecoder, InformerEncoder, InformerSinusoidalPositionalEmbedding, ) @require_torch class InformerModelTester: def __init__( self, parent, batch_size=13, prediction_length=7, context_length=14, cardinality=19, embedding_dimension=5, num_time_features=4, is_training=True, hidden_size=16, num_hidden_layers=2, num_attention_heads=4, intermediate_size=4, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, lags_sequence=[1, 2, 3, 4, 5], sampling_factor=10, distil=False, ): self.parent = parent self.batch_size = batch_size self.prediction_length = prediction_length self.context_length = context_length self.cardinality = cardinality self.num_time_features = num_time_features self.lags_sequence = lags_sequence self.embedding_dimension = embedding_dimension self.is_training = is_training self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.encoder_seq_length = min( sampling_factor * np.ceil(np.log1p(context_length)).astype("int").item(), context_length ) self.decoder_seq_length = min( sampling_factor * np.ceil(np.log1p(prediction_length)).astype("int").item(), prediction_length ) self.sampling_factor = sampling_factor self.distil = distil def get_config(self): return InformerConfig( prediction_length=self.prediction_length, d_model=self.hidden_size, encoder_layers=self.num_hidden_layers, decoder_layers=self.num_hidden_layers, encoder_attention_heads=self.num_attention_heads, decoder_attention_heads=self.num_attention_heads, encoder_ffn_dim=self.intermediate_size, decoder_ffn_dim=self.intermediate_size, dropout=self.hidden_dropout_prob, attention_dropout=self.attention_probs_dropout_prob, context_length=self.context_length, lags_sequence=self.lags_sequence, num_time_features=self.num_time_features, num_static_categorical_features=1, num_static_real_features=1, cardinality=[self.cardinality], embedding_dimension=[self.embedding_dimension], sampling_factor=self.sampling_factor, distil=self.distil, ) def prepare_informer_inputs_dict(self, config): 
_past_length = config.context_length + max(config.lags_sequence) static_categorical_features = ids_tensor([self.batch_size, 1], config.cardinality[0]) static_real_features = floats_tensor([self.batch_size, 1]) past_time_features = floats_tensor([self.batch_size, _past_length, config.num_time_features]) past_values = floats_tensor([self.batch_size, _past_length]) past_observed_mask = floats_tensor([self.batch_size, _past_length]) > 0.5 # decoder inputs future_time_features = floats_tensor([self.batch_size, config.prediction_length, config.num_time_features]) future_values = floats_tensor([self.batch_size, config.prediction_length]) inputs_dict = { "past_values": past_values, "static_categorical_features": static_categorical_features, "static_real_features": static_real_features, "past_time_features": past_time_features, "past_observed_mask": past_observed_mask, "future_time_features": future_time_features, "future_values": future_values, } return inputs_dict def prepare_config_and_inputs(self): config = self.get_config() inputs_dict = self.prepare_informer_inputs_dict(config) return config, inputs_dict def prepare_config_and_inputs_for_common(self): config, inputs_dict = self.prepare_config_and_inputs() return config, inputs_dict def check_encoder_decoder_model_standalone(self, config, inputs_dict): model = InformerModel(config=config).to(torch_device).eval() outputs = model(**inputs_dict) encoder_last_hidden_state = outputs.encoder_last_hidden_state last_hidden_state = outputs.last_hidden_state with tempfile.TemporaryDirectory() as tmpdirname: encoder = model.get_encoder() encoder.save_pretrained(tmpdirname) encoder = InformerEncoder.from_pretrained(tmpdirname).to(torch_device) transformer_inputs, _, _, _ = model.create_network_inputs(**inputs_dict) enc_input = transformer_inputs[:, : config.context_length, ...] dec_input = transformer_inputs[:, config.context_length :, ...] 
encoder_last_hidden_state_2 = encoder(inputs_embeds=enc_input)[0] self.parent.assertTrue((encoder_last_hidden_state_2 - encoder_last_hidden_state).abs().max().item() < 1e-3) embed_positions = InformerSinusoidalPositionalEmbedding( config.context_length + config.prediction_length, config.d_model ).to(torch_device) self.parent.assertTrue(torch.equal(model.encoder.embed_positions.weight, embed_positions.weight)) self.parent.assertTrue(torch.equal(model.decoder.embed_positions.weight, embed_positions.weight)) with tempfile.TemporaryDirectory() as tmpdirname: decoder = model.get_decoder() decoder.save_pretrained(tmpdirname) decoder = InformerDecoder.from_pretrained(tmpdirname).to(torch_device) last_hidden_state_2 = decoder( inputs_embeds=dec_input, encoder_hidden_states=encoder_last_hidden_state, )[0] self.parent.assertTrue((last_hidden_state_2 - last_hidden_state).abs().max().item() < 1e-3) @require_torch class InformerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (InformerModel, InformerForPrediction) if is_torch_available() else () all_generative_model_classes = (InformerForPrediction,) if is_torch_available() else () pipeline_model_mapping = {"feature-extraction": InformerModel} if is_torch_available() else {} is_encoder_decoder = True test_pruning = False test_head_masking = False test_missing_keys = False test_torchscript = False test_inputs_embeds = False def setUp(self): self.model_tester = InformerModelTester(self) self.config_tester = ConfigTester( self, config_class=InformerConfig, has_text_modality=False, prediction_length=self.model_tester.prediction_length, ) def test_config(self): self.config_tester.run_common_tests() def test_save_load_strict(self): config, _ = self.model_tester.prepare_config_and_inputs() for model_class in self.all_model_classes: model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model2, info = model_class.from_pretrained(tmpdirname, output_loading_info=True) self.assertEqual(info["missing_keys"], []) def test_encoder_decoder_model_standalone(self): config_and_inputs = self.model_tester.prepare_config_and_inputs_for_common() self.model_tester.check_encoder_decoder_model_standalone(*config_and_inputs) def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class): model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) hidden_states = outputs.encoder_hidden_states if config.is_encoder_decoder else outputs.hidden_states expected_num_layers = getattr( self.model_tester, "expected_num_hidden_layers", self.model_tester.num_hidden_layers + 1 ) self.assertEqual(len(hidden_states), expected_num_layers) if hasattr(self.model_tester, "encoder_seq_length"): seq_length = self.model_tester.context_length if hasattr(self.model_tester, "chunk_length") and self.model_tester.chunk_length > 1: seq_length = seq_length * self.model_tester.chunk_length else: seq_length = self.model_tester.seq_length self.assertListEqual( list(hidden_states[0].shape[-2:]), [seq_length, self.model_tester.hidden_size], ) if config.is_encoder_decoder: hidden_states = outputs.decoder_hidden_states self.assertIsInstance(hidden_states, (list, tuple)) self.assertEqual(len(hidden_states), expected_num_layers) seq_len = getattr(self.model_tester, "seq_length", None) decoder_seq_length = getattr(self.model_tester, "prediction_length", seq_len) self.assertListEqual( 
list(hidden_states[0].shape[-2:]), [decoder_seq_length, self.model_tester.hidden_size], ) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, model_class) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, model_class) @unittest.skip(reason="Informer does not have tokens embeddings") def test_resize_tokens_embeddings(self): pass @unittest.skip def test_model_outputs_equivalence(self): pass @unittest.skip def test_determinism(self): pass @unittest.skip(reason="randomly selects U keys while calculating attentions") def test_batching_equivalence(self): pass @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing(self): pass @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant(self): pass @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant_false(self): pass # # Input is 'static_categorical_features' not 'input_ids' def test_model_main_input_name(self): model_signature = inspect.signature(getattr(InformerModel, "forward")) # The main input is the name of the argument after `self` observed_main_input_name = list(model_signature.parameters.keys())[1] self.assertEqual(InformerModel.main_input_name, observed_main_input_name) def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = [ "past_values", "past_time_features", "past_observed_mask", "static_categorical_features", "static_real_features", "future_values", "future_time_features", ] expected_arg_names.extend( [ "future_observed_mask", "decoder_attention_mask", "head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs", "past_key_values", "output_hidden_states", "output_attentions", "use_cache", "return_dict", ] if "future_observed_mask" in arg_names else [ "decoder_attention_mask", "head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs", "past_key_values", "output_hidden_states", "output_attentions", "use_cache", "return_dict", ] ) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) def test_attention_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True seq_len = getattr(self.model_tester, "seq_length", None) decoder_seq_length = getattr(self.model_tester, "decoder_seq_length", seq_len) encoder_seq_length = getattr(self.model_tester, "encoder_seq_length", seq_len) context_length = getattr(self.model_tester, "context_length", seq_len) prediction_length = getattr(self.model_tester, "prediction_length", seq_len) for model_class in self.all_model_classes: 
inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = False config.return_dict = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.encoder_attentions if config.is_encoder_decoder else outputs.attentions self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) # check that output_attentions also work using config del inputs_dict["output_attentions"] config.output_attentions = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.encoder_attentions self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, encoder_seq_length, context_length], ) out_len = len(outputs) correct_outlen = 7 if "last_hidden_state" in outputs: correct_outlen += 1 if "past_key_values" in outputs: correct_outlen += 1 # past_key_values have been returned if "loss" in outputs: correct_outlen += 1 if "params" in outputs: correct_outlen += 1 self.assertEqual(out_len, correct_outlen) # decoder attentions decoder_attentions = outputs.decoder_attentions self.assertIsInstance(decoder_attentions, (list, tuple)) self.assertEqual(len(decoder_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(decoder_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, decoder_seq_length, prediction_length], ) # cross attentions cross_attentions = outputs.cross_attentions self.assertIsInstance(cross_attentions, (list, tuple)) self.assertEqual(len(cross_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(cross_attentions[0].shape[-3:]), [ self.model_tester.num_attention_heads, decoder_seq_length, encoder_seq_length, ], ) # Check attention is always last and order is fine inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) self.assertEqual(out_len + 2, len(outputs)) self_attentions = outputs.encoder_attentions if config.is_encoder_decoder else outputs.attentions self.assertEqual(len(self_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(self_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, encoder_seq_length, context_length], ) @is_flaky() def test_retain_grad_hidden_states_attentions(self): super().test_retain_grad_hidden_states_attentions() @unittest.skip(reason="Model does not have input embeddings") def test_model_get_set_embeddings(self): pass def prepare_batch(filename="train-batch.pt"): file = hf_hub_download(repo_id="hf-internal-testing/tourism-monthly-batch", filename=filename, repo_type="dataset") batch = torch.load(file, map_location=torch_device) return batch @require_torch @slow class InformerModelIntegrationTests(unittest.TestCase): def test_inference_no_head(self): model = InformerModel.from_pretrained("huggingface/informer-tourism-monthly").to(torch_device) batch = prepare_batch() torch.manual_seed(0) with torch.no_grad(): output = model( past_values=batch["past_values"], past_time_features=batch["past_time_features"], past_observed_mask=batch["past_observed_mask"], static_categorical_features=batch["static_categorical_features"], 
future_values=batch["future_values"], future_time_features=batch["future_time_features"], ).last_hidden_state expected_shape = torch.Size((64, model.config.context_length, model.config.d_model)) self.assertEqual(output.shape, expected_shape) expected_slice = torch.tensor( [[0.4699, 0.7295, 0.8967], [0.4858, 0.3810, 0.9641], [-0.0233, 0.3608, 1.0303]], device=torch_device, ) torch.testing.assert_close(output[0, :3, :3], expected_slice, rtol=TOLERANCE, atol=TOLERANCE) def test_inference_head(self): model = InformerForPrediction.from_pretrained("huggingface/informer-tourism-monthly").to(torch_device) batch = prepare_batch("val-batch.pt") torch.manual_seed(0) with torch.no_grad(): output = model( past_values=batch["past_values"], past_time_features=batch["past_time_features"], past_observed_mask=batch["past_observed_mask"], static_categorical_features=batch["static_categorical_features"], future_time_features=batch["future_time_features"], ).encoder_last_hidden_state # encoder distils the context length to 1/8th of the original length expected_shape = torch.Size((64, model.config.context_length // 8, model.config.d_model)) self.assertEqual(output.shape, expected_shape) expected_slice = torch.tensor( [[0.4170, 0.9067, 0.8153], [0.3004, 0.7574, 0.7066], [0.6803, -0.6323, 1.2802]], device=torch_device ) torch.testing.assert_close(output[0, :3, :3], expected_slice, rtol=TOLERANCE, atol=TOLERANCE) def test_seq_to_seq_generation(self): model = InformerForPrediction.from_pretrained("huggingface/informer-tourism-monthly").to(torch_device) batch = prepare_batch("val-batch.pt") torch.manual_seed(0) with torch.no_grad(): outputs = model.generate( static_categorical_features=batch["static_categorical_features"], past_time_features=batch["past_time_features"], past_values=batch["past_values"], future_time_features=batch["future_time_features"], past_observed_mask=batch["past_observed_mask"], ) expected_shape = torch.Size((64, model.config.num_parallel_samples, model.config.prediction_length)) self.assertEqual(outputs.sequences.shape, expected_shape) expected_slice = torch.tensor([3400.8005, 4289.2637, 7101.9209], device=torch_device) mean_prediction = outputs.sequences.mean(dim=1) torch.testing.assert_close(mean_prediction[0, -3:], expected_slice, rtol=1e-1)
transformers/tests/models/informer/test_modeling_informer.py/0
{ "file_path": "transformers/tests/models/informer/test_modeling_informer.py", "repo_id": "transformers", "token_count": 10324 }
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Llava model.""" import unittest import requests from parameterized import parameterized from transformers import ( AutoProcessor, AutoTokenizer, LlavaConfig, LlavaForConditionalGeneration, is_torch_available, is_vision_available, ) from transformers.testing_utils import ( cleanup, require_bitsandbytes, require_torch, require_vision, slow, torch_device, ) from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor if is_torch_available(): import torch if is_vision_available(): from PIL import Image class LlavaVisionText2TextModelTester: def __init__( self, parent, ignore_index=-100, image_token_index=0, projector_hidden_act="gelu", seq_length=7, vision_feature_select_strategy="default", vision_feature_layer=-1, text_config={ "model_type": "llama", "seq_length": 7, "is_training": True, "use_input_mask": True, "use_token_type_ids": False, "use_labels": True, "vocab_size": 99, "hidden_size": 32, "num_hidden_layers": 2, "num_attention_heads": 4, "intermediate_size": 37, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "attention_probs_dropout_prob": 0.1, "max_position_embeddings": 512, "type_vocab_size": 16, "type_sequence_label_size": 2, "initializer_range": 0.02, "num_labels": 3, "num_choices": 4, "pad_token_id": 1, }, is_training=True, vision_config={ "image_size": 8, "patch_size": 2, "num_channels": 3, "is_training": True, "hidden_size": 32, "projection_dim": 32, "num_hidden_layers": 2, "num_attention_heads": 4, "intermediate_size": 37, "dropout": 0.1, "attention_dropout": 0.1, "initializer_range": 0.02, }, ): self.parent = parent self.ignore_index = ignore_index self.image_token_index = image_token_index self.projector_hidden_act = projector_hidden_act self.vision_feature_select_strategy = vision_feature_select_strategy self.vision_feature_layer = vision_feature_layer self.text_config = text_config self.vision_config = vision_config self.pad_token_id = text_config["pad_token_id"] self.num_hidden_layers = text_config["num_hidden_layers"] self.vocab_size = text_config["vocab_size"] self.hidden_size = text_config["hidden_size"] self.num_attention_heads = text_config["num_attention_heads"] self.is_training = is_training self.batch_size = 3 self.num_channels = 3 self.image_size = 336 self.num_image_tokens = (self.vision_config["image_size"] // self.vision_config["patch_size"]) ** 2 self.seq_length = seq_length + self.num_image_tokens self.encoder_seq_length = self.seq_length def get_config(self): return LlavaConfig( text_config=self.text_config, vision_config=self.vision_config, ignore_index=self.ignore_index, image_token_index=self.image_token_index, projector_hidden_act=self.projector_hidden_act, vision_feature_select_strategy=self.vision_feature_select_strategy, vision_feature_layer=self.vision_feature_layer, 
image_seq_length=self.num_image_tokens, ) def prepare_config_and_inputs(self): pixel_values = floats_tensor( [ self.batch_size, self.vision_config["num_channels"], self.vision_config["image_size"], self.vision_config["image_size"], ] ) config = self.get_config() return config, pixel_values def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values = config_and_inputs input_ids = ids_tensor([self.batch_size, self.seq_length], config.text_config.vocab_size - 1) + 1 attention_mask = input_ids.ne(1).to(torch_device) input_ids[input_ids == config.image_token_index] = self.pad_token_id input_ids[:, : self.num_image_tokens] = config.image_token_index inputs_dict = { "pixel_values": pixel_values, "input_ids": input_ids, "attention_mask": attention_mask, } return config, inputs_dict def create_and_check_llava_model_fp16_forward(self, config, input_ids, pixel_values, attention_mask): model = LlavaForConditionalGeneration(config=config) model.to(torch_device) model.eval() with torch.autocast(device_type="cuda", dtype=torch.float16): logits = model( input_ids=input_ids, attention_mask=attention_mask, pixel_values=pixel_values.to(torch.bfloat16), return_dict=True, )["logits"] self.parent.assertFalse(torch.isnan(logits).any().item()) @require_torch class LlavaForConditionalGenerationModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase): """ Model tester for `LlavaForConditionalGeneration`. """ all_model_classes = (LlavaForConditionalGeneration,) if is_torch_available() else () all_generative_model_classes = (LlavaForConditionalGeneration,) if is_torch_available() else () pipeline_model_mapping = ( {"image-to-text": LlavaForConditionalGeneration, "image-text-to-text": LlavaForConditionalGeneration} if is_torch_available() else {} ) test_pruning = False test_head_masking = False _is_composite = True def setUp(self): self.model_tester = LlavaVisionText2TextModelTester(self) common_properties = ["image_token_index", "vision_feature_layer", "image_seq_length"] self.config_tester = ConfigTester( self, config_class=LlavaConfig, has_text_modality=False, common_properties=common_properties ) def test_config(self): self.config_tester.run_common_tests() # overwrite inputs_embeds tests because we need to delete "pixel values" for LVLMs def test_inputs_embeds(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() inputs = self._prepare_for_class(inputs_dict, model_class) input_ids = inputs["input_ids"] del inputs["input_ids"] del inputs["pixel_values"] wte = model.get_input_embeddings() inputs["inputs_embeds"] = wte(input_ids) with torch.no_grad(): model(**inputs) # overwrite inputs_embeds tests because we need to delete "pixel values" for LVLMs # while some other models require pixel_values to be present def test_inputs_embeds_matches_input_ids(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() inputs = self._prepare_for_class(inputs_dict, model_class) input_ids = inputs["input_ids"] del inputs["input_ids"] del inputs["pixel_values"] inputs_embeds = model.get_input_embeddings()(input_ids) with torch.no_grad(): out_ids = model(input_ids=input_ids, **inputs)[0] out_embeds = model(inputs_embeds=inputs_embeds, **inputs)[0] torch.testing.assert_close(out_embeds, 
out_ids)

    def test_mismatching_num_image_tokens(self):
        """
        Tests that VLMs throw an error with an explicit message saying what is wrong
        when the number of images doesn't match the number of image tokens in the text.
        Also covers multi-image cases where one prompt has multiple image tokens.
        """
        config, input_dict = self.model_tester.prepare_config_and_inputs_for_common()
        for model_class in self.all_model_classes:
            model = model_class(config).to(torch_device)
            _ = model(**input_dict)  # successful forward with no modifications

            # remove one image but leave the image token in text
            input_dict["pixel_values"] = input_dict["pixel_values"][-1:, ...]
            with self.assertRaises(ValueError):
                _ = model(**input_dict)

            # simulate multi-image case by concatenating inputs where each has exactly one image/image-token
            input_ids = input_dict["input_ids"][:1]
            pixel_values = input_dict["pixel_values"][:1]
            input_ids = torch.cat([input_ids, input_ids], dim=0)

            # one image and two image tokens raise an error
            with self.assertRaises(ValueError):
                _ = model(input_ids=input_ids, pixel_values=pixel_values)

            # two images and two image tokens don't raise an error
            pixel_values = torch.cat([pixel_values, pixel_values], dim=0)
            _ = model(input_ids=input_ids, pixel_values=pixel_values)

    @parameterized.expand(
        [
            (-1,),
            ([-1],),
            ([-1, -2],),
        ],
    )
    def test_vision_feature_layers(self, vision_feature_layer):
        """
        Test that we can use either one vision feature layer, or a list of vision feature layers.
        """
        config, input_dict = self.model_tester.prepare_config_and_inputs_for_common()
        config.vision_feature_layer = vision_feature_layer

        num_feature_layers = 1 if isinstance(vision_feature_layer, int) else len(vision_feature_layer)
        hidden_size = config.vision_config.hidden_size
        expected_features = hidden_size * num_feature_layers

        for model_class in self.all_model_classes:
            model = model_class(config).to(torch_device)
            # We should have the right number of input features,
            # and should be able to run a forward pass without exploding
            assert model.multi_modal_projector.linear_1.in_features == expected_features
            model(**input_dict)

    @unittest.skip(
        reason="This architecture seems to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124"
    )
    def test_training_gradient_checkpointing(self):
        pass

    @unittest.skip(
        reason="This architecture seems to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124"
    )
    def test_training_gradient_checkpointing_use_reentrant(self):
        pass

    @unittest.skip(
        reason="This architecture seems to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124"
    )
    def test_training_gradient_checkpointing_use_reentrant_false(self):
        pass

    @unittest.skip(reason="Compile not yet supported in Llava models")
    def test_sdpa_can_compile_dynamic(self):
        pass

    @unittest.skip(reason="Compile not yet supported in Llava models")
    def test_sdpa_can_dispatch_on_flash(self):
        pass

    @unittest.skip("FlashAttention only supports fp16 and bf16 data types")
    def test_flash_attn_2_fp32_ln(self):
        pass

    @unittest.skip(
        "VLMs need lots of steps to prepare images/mask correctly to get pad-free inputs.
Can be tested as part of LLM test" ) def test_flash_attention_2_padding_matches_padding_free_with_position_ids(self): pass @require_torch class LlavaForConditionalGenerationIntegrationTest(unittest.TestCase): def setUp(self): self.processor = AutoProcessor.from_pretrained("llava-hf/bakLlava-v1-hf") def tearDown(self): cleanup(torch_device, gc_collect=True) @slow @require_bitsandbytes def test_small_model_integration_test(self): # Let' s make sure we test the preprocessing to replace what is used model = LlavaForConditionalGeneration.from_pretrained("llava-hf/bakLlava-v1-hf", load_in_4bit=True) prompt = "<image>\nUSER: What are the things I should be cautious about when I visit this place?\nASSISTANT:" image_file = "https://llava-vl.github.io/static/images/view.jpg" raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = self.processor(images=raw_image, text=prompt, return_tensors="pt").to(torch_device) output = model.generate(**inputs, max_new_tokens=20) EXPECTED_DECODED_TEXT = "\nUSER: What are the things I should be cautious about when I visit this place?\nASSISTANT: When visiting this place, there are a few things one should be cautious about. Firstly," # fmt: skip self.assertEqual( self.processor.decode(output[0], skip_special_tokens=True), EXPECTED_DECODED_TEXT, ) @slow @require_bitsandbytes def test_small_model_integration_test_llama_single(self): # Let' s make sure we test the preprocessing to replace what is used model_id = "llava-hf/llava-1.5-7b-hf" model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf", load_in_4bit=True) processor = AutoProcessor.from_pretrained(model_id) prompt = "USER: <image>\nWhat are the things I should be cautious about when I visit this place? ASSISTANT:" image_file = "https://llava-vl.github.io/static/images/view.jpg" raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = processor(images=raw_image, text=prompt, return_tensors="pt").to(torch_device, torch.float16) output = model.generate(**inputs, max_new_tokens=900, do_sample=False) EXPECTED_DECODED_TEXT = "USER: \nWhat are the things I should be cautious about when I visit this place? ASSISTANT: When visiting this place, which is a pier or dock extending over a body of water, there are a few things to be cautious about. First, be aware of the weather conditions, as sudden changes in weather can make the pier unsafe to walk on. Second, be mindful of the water depth and any potential hazards, such as submerged rocks or debris, that could cause accidents or injuries. Additionally, be cautious of the tides and currents, as they can change rapidly and pose a risk to swimmers or those who venture too close to the edge of the pier. Finally, be respectful of the environment and other visitors, and follow any posted rules or guidelines for the area." # fmt: skip self.assertEqual( processor.decode(output[0], skip_special_tokens=True), EXPECTED_DECODED_TEXT, ) @slow @require_bitsandbytes def test_small_model_integration_test_llama_batched(self): # Let' s make sure we test the preprocessing to replace what is used model_id = "llava-hf/llava-1.5-7b-hf" model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf", load_in_4bit=True) processor = AutoProcessor.from_pretrained(model_id) prompts = [ "USER: <image>\nWhat are the things I should be cautious about when I visit this place? What should I bring with me? ASSISTANT:", "USER: <image>\nWhat is this? 
ASSISTANT:", ] image1 = Image.open(requests.get("https://llava-vl.github.io/static/images/view.jpg", stream=True).raw) image2 = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw) inputs = processor(images=[image1, image2], text=prompts, return_tensors="pt", padding=True).to(torch_device) output = model.generate(**inputs, max_new_tokens=20) EXPECTED_DECODED_TEXT = ['USER: \nWhat are the things I should be cautious about when I visit this place? What should I bring with me? ASSISTANT: When visiting this place, which is a pier or dock extending over a body of water, you', 'USER: \nWhat is this? ASSISTANT: The image features two cats lying down on a pink couch. One cat is located on'] # fmt: skip self.assertEqual( processor.batch_decode(output, skip_special_tokens=True), EXPECTED_DECODED_TEXT, ) @slow @require_bitsandbytes def test_small_model_integration_test_batch(self): # Let' s make sure we test the preprocessing to replace what is used model = LlavaForConditionalGeneration.from_pretrained("llava-hf/bakLlava-v1-hf", load_in_4bit=True) # The first batch is longer in terms of text, but only has 1 image. The second batch will be padded in text, but the first will be padded because images take more space!. prompts = [ "USER: <image>\nWhat are the things I should be cautious about when I visit this place? What should I bring with me?\nASSISTANT:", "USER: <image>\nWhat is this?\nASSISTANT:", ] image1 = Image.open(requests.get("https://llava-vl.github.io/static/images/view.jpg", stream=True).raw) image2 = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw) inputs = self.processor(images=[image1, image2], text=prompts, return_tensors="pt", padding=True).to( torch_device ) output = model.generate(**inputs, max_new_tokens=20) EXPECTED_DECODED_TEXT = [ 'USER: \nWhat are the things I should be cautious about when I visit this place? What should I bring with me?\nASSISTANT: When visiting this place, there are a few things to be cautious about and items to bring.', 'USER: \nWhat is this?\nASSISTANT: Cats' ] # fmt: skip self.assertEqual( self.processor.batch_decode(output, skip_special_tokens=True), EXPECTED_DECODED_TEXT, ) @slow @require_bitsandbytes def test_small_model_integration_test_llama_batched_regression(self): # Let' s make sure we test the preprocessing to replace what is used model_id = "llava-hf/llava-1.5-7b-hf" # Multi-image & multi-prompt (e.g. 3 images and 2 prompts now fails with SDPA, this tests if "eager" works as before) model = LlavaForConditionalGeneration.from_pretrained( "llava-hf/llava-1.5-7b-hf", load_in_4bit=True, attn_implementation="eager" ) processor = AutoProcessor.from_pretrained(model_id, pad_token="<pad>") prompts = [ "USER: <image>\nWhat are the things I should be cautious about when I visit this place? What should I bring with me?\nASSISTANT:", "USER: <image>\nWhat is this?\nASSISTANT: Two cats lying on a bed!\nUSER: <image>\nAnd this?\nASSISTANT:", ] image1 = Image.open(requests.get("https://llava-vl.github.io/static/images/view.jpg", stream=True).raw) image2 = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw) inputs = processor(images=[image1, image2, image1], text=prompts, return_tensors="pt", padding=True).to( torch_device ) output = model.generate(**inputs, max_new_tokens=20) EXPECTED_DECODED_TEXT = ['USER: \nWhat are the things I should be cautious about when I visit this place? 
What should I bring with me?\nASSISTANT: When visiting this place, which appears to be a dock or pier extending over a body of water', 'USER: \nWhat is this?\nASSISTANT: Two cats lying on a bed!\nUSER: \nAnd this?\nASSISTANT: A cat sleeping on a bed.'] # fmt: skip self.assertEqual( processor.batch_decode(output, skip_special_tokens=True), EXPECTED_DECODED_TEXT, ) @slow @require_torch @require_vision def test_batched_generation(self): model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf", load_in_4bit=True) processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf") prompt1 = "<image>\n<image>\nUSER: What's the the difference of two images?\nASSISTANT:" prompt2 = "<image>\nUSER: Describe the image.\nASSISTANT:" prompt3 = "<image>\nUSER: Describe the image.\nASSISTANT:" url1 = "https://images.unsplash.com/photo-1552053831-71594a27632d?q=80&w=3062&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D" url2 = "https://images.unsplash.com/photo-1617258683320-61900b281ced?q=80&w=3087&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D" image1 = Image.open(requests.get(url1, stream=True).raw) image2 = Image.open(requests.get(url2, stream=True).raw) inputs = processor( images=[image1, image2, image1, image2], text=[prompt1, prompt2, prompt3], return_tensors="pt", padding=True, ).to(torch_device) model = model.eval() EXPECTED_OUTPUT = [ "\n \nUSER: What's the the difference of two images?\nASSISTANT: The difference between the two images is that one shows a dog standing on a grassy field, while", "\nUSER: Describe the image.\nASSISTANT: The image features a brown and white dog sitting on a sidewalk. The dog is holding a small", "\nUSER: Describe the image.\nASSISTANT: The image features a lone llama standing on a grassy hill. 
The llama is the", ] generate_ids = model.generate(**inputs, max_new_tokens=20) outputs = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False) self.assertEqual(outputs, EXPECTED_OUTPUT) def test_tokenizer_integration(self): slow_tokenizer = AutoTokenizer.from_pretrained("liuhaotian/llava-v1.6-34b", use_fast=False) slow_tokenizer.add_tokens("<image>", True) fast_tokenizer = AutoTokenizer.from_pretrained( "liuhaotian/llava-v1.6-34b", bos_token="<|startoftext|>", eos_token="<|endoftext|>", from_slow=True, legacy=False, ) fast_tokenizer.add_tokens("<image>", True) prompt = "<|im_start|>system\nAnswer the questions.<|im_end|><|im_start|>user\n<image>\nWhat is shown in this image?<|im_end|><|im_start|>assistant\n" EXPECTED_OUTPUT = ['<|im_start|>', 'system', '\n', 'Answer', '▁the', '▁questions', '.', '<|im_end|>', '<|im_start|>', 'user', '\n', '<image>', '\n', 'What', '▁is', '▁shown', '▁in', '▁this', '▁image', '?', '<|im_end|>', '<|im_start|>', 'ass', 'istant', '\n'] # fmt: skip self.assertEqual(slow_tokenizer.tokenize(prompt), EXPECTED_OUTPUT) self.assertEqual(fast_tokenizer.tokenize(prompt), EXPECTED_OUTPUT) @slow @require_bitsandbytes def test_generation_no_images(self): model_id = "llava-hf/llava-1.5-7b-hf" model = LlavaForConditionalGeneration.from_pretrained(model_id, load_in_4bit=True) processor = AutoProcessor.from_pretrained(model_id) # Prepare inputs with no images inputs = processor(text="Hello, I am", return_tensors="pt").to(torch_device) # Make sure that `generate` works _ = model.generate(**inputs, max_new_tokens=20) @slow @require_bitsandbytes def test_generation_siglip_backbone(self): model_id = "llava-hf/llava-interleave-qwen-0.5b-hf" model = LlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype="float16", device_map=torch_device) processor = AutoProcessor.from_pretrained(model_id) # check processing with expansion of inputs (w/o expansion should work with any backbone) processor.vision_feature_select_strategy = "default" processor.patch_size = 14 image_file = "http://images.cocodataset.org/val2017/000000039769.jpg" raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = processor( text="<|im_start|>user\n<image>\nWhat are these?<|im_end|>\n<|im_start|>assistant", images=raw_image, return_tensors="pt", ).to(torch_device, torch.float16) # Make sure that `generate` works output = model.generate(**inputs, max_new_tokens=30) EXPECTED_DECODED_TEXT = "user\n\nWhat are these?\nassistant The image shows two cats, one on the left and one on the right. They appear to be resting or sleeping on a pink blanket. 
The cat" self.assertTrue(processor.batch_decode(output, skip_special_tokens=True)[0] == EXPECTED_DECODED_TEXT) @slow def test_pixtral(self): model_id = "mistral-community/pixtral-12b" model = LlavaForConditionalGeneration.from_pretrained(model_id) processor = AutoProcessor.from_pretrained(model_id) IMG_URLS = [ Image.open(requests.get("https://picsum.photos/id/237/400/300", stream=True).raw), Image.open(requests.get("https://picsum.photos/id/231/200/300", stream=True).raw), Image.open(requests.get("https://picsum.photos/id/27/500/500", stream=True).raw), Image.open(requests.get("https://picsum.photos/id/17/150/600", stream=True).raw), ] PROMPT = "<s>[INST]Describe the images.\n[IMG][IMG][IMG][IMG][/INST]" # image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=PROMPT, images=IMG_URLS, return_tensors="pt").to(model.device) generate_ids = model.generate(**inputs, max_new_tokens=500) ouptut = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] print(ouptut) # fmt: off EXPECTED_GENERATION = """ Describe the images. Certainly! Here are the descriptions of the images: 1. **Image 1**: This image features a black dog with a glossy coat sitting on a wooden surface. The dog has a calm and attentive expression, looking directly at the camera. The wooden background has a rustic appearance with visible grain and texture. 2. **Image 2**: This image captures a breathtaking view of a mountainous landscape. The mountains are rugged and covered with patches of green vegetation. The sky above is clear, and the scene conveys a sense of tranquility and natural beauty. 3. **Image 3**: This image shows a beach scene during sunset. The waves are gently rolling onto the shore, and several people can be seen in the water, possibly surfing or swimming. The sky is painted with warm hues of orange and yellow, creating a serene and picturesque atmosphere. 4. **Image 4**: This image depicts a narrow, winding path that cuts through a lush, green landscape. On either side of the path, there is dense grass and various trees, including a prominent tree with white blossoms. The sky is clear and blue, adding to the peaceful and inviting ambiance of the scene. These descriptions provide a detailed overview of the content and atmosphere of each image. """ # fmt: on # check that both inputs are handled correctly and generate the same output self.assertEqual(ouptut, EXPECTED_GENERATION) @slow @require_bitsandbytes def test_pixtral_4bit(self): model_id = "mistral-community/pixtral-12b" model = LlavaForConditionalGeneration.from_pretrained(model_id, load_in_4bit=True) processor = AutoProcessor.from_pretrained(model_id) IMG_URLS = [ Image.open(requests.get("https://picsum.photos/id/237/400/300", stream=True).raw), Image.open(requests.get("https://picsum.photos/id/231/200/300", stream=True).raw), ] PROMPT = "<s>[INST][IMG][IMG]Describe the images.[/INST]" inputs = processor(text=PROMPT, images=IMG_URLS, return_tensors="pt").to(torch_device, torch.float16) generate_ids = model.generate(**inputs, max_new_tokens=50) output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] EXPECTED_GENERATION = "Describe the images.The image showcases a dog, which is prominently positioned in the center, taking up a significant portion of the frame. The dog is situated against a backdrop of a wooden surface, which spans the entire image. 
The dog appears to be a black Labrador" # fmt: skip self.assertEqual(output, EXPECTED_GENERATION) @slow @require_bitsandbytes def test_pixtral_batched(self): model_id = "mistral-community/pixtral-12b" model = LlavaForConditionalGeneration.from_pretrained(model_id, load_in_4bit=True) processor = AutoProcessor.from_pretrained(model_id) processor.tokenizer.pad_token_id = processor.tokenizer.eos_token_id IMG_URLS = [ Image.open(requests.get("https://picsum.photos/id/237/400/300", stream=True).raw), Image.open(requests.get("https://picsum.photos/id/17/150/500", stream=True).raw), ] PROMPT = [ "<s>[INST][IMG]What breed is the dog?[/INST]", "<s>[INST][IMG]What is shown in this image?[/INST]", ] inputs = processor(text=PROMPT, images=IMG_URLS, padding=True, return_tensors="pt").to( torch_device, torch.float16 ) generate_ids = model.generate(**inputs, max_new_tokens=50) output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False) EXPECTED_GENERATION = [ 'What breed is the dog?The dog in the image is a black Labrador Retriever.', 'What is shown in this image?The image depicts a narrow, winding dirt path surrounded by lush greenery. The path is flanked by grass and shrubs on both sides. On the left side, there are tall trees and dense foliage, while on the right side, there' ] # fmt: skip self.assertEqual(output, EXPECTED_GENERATION)
transformers/tests/models/llava/test_modeling_llava.py/0
{ "file_path": "transformers/tests/models/llava/test_modeling_llava.py", "repo_id": "transformers", "token_count": 12888 }
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Testing suite for the MgpstrProcessor."""

import json
import os
import shutil
import tempfile
import unittest

import numpy as np
import pytest

from transformers import MgpstrTokenizer
from transformers.models.mgp_str.tokenization_mgp_str import VOCAB_FILES_NAMES
from transformers.testing_utils import require_torch, require_vision
from transformers.utils import IMAGE_PROCESSOR_NAME, is_torch_available, is_vision_available


if is_torch_available():
    import torch

if is_vision_available():
    from PIL import Image

    from transformers import MgpstrProcessor, ViTImageProcessor


@require_torch
@require_vision
class MgpstrProcessorTest(unittest.TestCase):
    image_processing_class = ViTImageProcessor if is_vision_available() else None

    @property
    def image_processor_dict(self):
        return self.prepare_image_processor_dict()

    def setUp(self):
        self.image_size = (3, 32, 128)

        self.tmpdirname = tempfile.mkdtemp()

        vocab = ['[GO]', '[s]', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']  # fmt: skip
        vocab_tokens = dict(zip(vocab, range(len(vocab))))
        self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"])
        with open(self.vocab_file, "w", encoding="utf-8") as fp:
            fp.write(json.dumps(vocab_tokens) + "\n")

        image_processor_map = {
            "do_normalize": False,
            "do_resize": True,
            "image_processor_type": "ViTImageProcessor",
            "resample": 3,
            "size": {"height": 32, "width": 128},
        }
        self.image_processor_file = os.path.join(self.tmpdirname, IMAGE_PROCESSOR_NAME)
        with open(self.image_processor_file, "w", encoding="utf-8") as fp:
            json.dump(image_processor_map, fp)

    # We copy here rather than use the ProcessorTesterMixin as this processor has a `char_tokenizer` instead of a
    # tokenizer attribute, which means all the tests would need to be overridden.
    @require_vision
    def prepare_image_inputs(self):
        """This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True,
        or a list of PyTorch tensors if one specifies torchify=True.
""" image_inputs = [np.random.randint(255, size=(3, 30, 400), dtype=np.uint8)] image_inputs = [Image.fromarray(np.moveaxis(x, 0, -1)) for x in image_inputs] return image_inputs def get_tokenizer(self, **kwargs): return MgpstrTokenizer.from_pretrained(self.tmpdirname, **kwargs) def get_image_processor(self, **kwargs): return ViTImageProcessor.from_pretrained(self.tmpdirname, **kwargs) def tearDown(self): shutil.rmtree(self.tmpdirname) def test_save_load_pretrained_default(self): tokenizer = self.get_tokenizer() image_processor = self.get_image_processor() processor = MgpstrProcessor(tokenizer=tokenizer, image_processor=image_processor) processor.save_pretrained(self.tmpdirname) processor = MgpstrProcessor.from_pretrained(self.tmpdirname, use_fast=False) self.assertEqual(processor.char_tokenizer.get_vocab(), tokenizer.get_vocab()) self.assertIsInstance(processor.char_tokenizer, MgpstrTokenizer) self.assertEqual(processor.image_processor.to_json_string(), image_processor.to_json_string()) self.assertIsInstance(processor.image_processor, ViTImageProcessor) def test_save_load_pretrained_additional_features(self): tokenizer = self.get_tokenizer() image_processor = self.get_image_processor() processor = MgpstrProcessor(tokenizer=tokenizer, image_processor=image_processor) processor.save_pretrained(self.tmpdirname) tokenizer_add_kwargs = self.get_tokenizer(bos_token="(BOS)", eos_token="(EOS)") image_processor_add_kwargs = self.get_image_processor(do_normalize=False, padding_value=1.0) processor = MgpstrProcessor.from_pretrained( self.tmpdirname, bos_token="(BOS)", eos_token="(EOS)", do_normalize=False, padding_value=1.0 ) self.assertEqual(processor.char_tokenizer.get_vocab(), tokenizer_add_kwargs.get_vocab()) self.assertIsInstance(processor.char_tokenizer, MgpstrTokenizer) self.assertEqual(processor.image_processor.to_json_string(), image_processor_add_kwargs.to_json_string()) self.assertIsInstance(processor.image_processor, ViTImageProcessor) def test_image_processor(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = MgpstrProcessor(tokenizer=tokenizer, image_processor=image_processor) image_input = self.prepare_image_inputs() input_image_proc = image_processor(image_input, return_tensors="np") input_processor = processor(images=image_input, return_tensors="np") for key in input_image_proc.keys(): self.assertAlmostEqual(input_image_proc[key].sum(), input_processor[key].sum(), delta=1e-2) def test_tokenizer(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = MgpstrProcessor(tokenizer=tokenizer, image_processor=image_processor) input_str = "test" encoded_processor = processor(text=input_str) encoded_tok = tokenizer(input_str) for key in encoded_tok.keys(): self.assertListEqual(encoded_tok[key], encoded_processor[key]) def test_processor(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = MgpstrProcessor(tokenizer=tokenizer, image_processor=image_processor) input_str = "test" image_input = self.prepare_image_inputs() inputs = processor(text=input_str, images=image_input) self.assertListEqual(list(inputs.keys()), ["pixel_values", "labels"]) # test if it raises when no input is passed with pytest.raises(ValueError): processor() def test_tokenizer_decode(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = MgpstrProcessor(tokenizer=tokenizer, image_processor=image_processor) predicted_ids = [[1, 4, 5, 8, 1, 0, 8], [3, 4, 3, 
1, 1, 8, 9], [3, 4, 3, 1, 1, 8, 9]] decoded_processor = processor.char_decode(predicted_ids) decoded_tok = tokenizer.batch_decode(predicted_ids) decode_strs = [seq.replace(" ", "") for seq in decoded_tok] self.assertListEqual(decode_strs, decoded_processor) def test_model_input_names(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = MgpstrProcessor(tokenizer=tokenizer, image_processor=image_processor) input_str = None image_input = self.prepare_image_inputs() inputs = processor(text=input_str, images=image_input) self.assertListEqual(list(inputs.keys()), processor.model_input_names) def test_processor_batch_decode(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = MgpstrProcessor(tokenizer=tokenizer, image_processor=image_processor) char_input = torch.randn(1, 27, 38) bpe_input = torch.randn(1, 27, 50257) wp_input = torch.randn(1, 27, 30522) results = processor.batch_decode([char_input, bpe_input, wp_input]) self.assertListEqual(list(results.keys()), ["generated_text", "scores", "char_preds", "bpe_preds", "wp_preds"])
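

# Hedged usage sketch (not exercised by the tests above): how the processor pieces fit
# together for end-to-end MGP-STR inference. The "alibaba-damo/mgp-str-base" checkpoint
# name is an assumption for illustration only; these tests do not depend on it.
def _example_mgp_str_inference(image):
    from transformers import MgpstrForSceneTextRecognition, MgpstrProcessor

    processor = MgpstrProcessor.from_pretrained("alibaba-damo/mgp-str-base")
    model = MgpstrForSceneTextRecognition.from_pretrained("alibaba-damo/mgp-str-base")

    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    outputs = model(pixel_values)
    # `batch_decode` fuses the char/bpe/wp prediction heads and returns the best strings,
    # mirroring the keys checked in `test_processor_batch_decode` above.
    return processor.batch_decode(outputs.logits)["generated_text"]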
transformers/tests/models/mgp_str/test_processor_mgp_str.py/0
{ "file_path": "transformers/tests/models/mgp_str/test_processor_mgp_str.py", "repo_id": "transformers", "token_count": 3307 }
# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest

from transformers import is_flax_available
from transformers.testing_utils import require_flax, require_sentencepiece, require_tokenizers, require_torch, slow


if is_flax_available():
    import optax
    from flax.training.common_utils import onehot

    from transformers import AutoTokenizer, FlaxMT5ForConditionalGeneration
    from transformers.models.t5.modeling_flax_t5 import shift_tokens_right


@require_torch
@require_sentencepiece
@require_tokenizers
@require_flax
class MT5IntegrationTest(unittest.TestCase):
    @slow
    def test_small_integration_test(self):
        """
        For comparison, run:
        >>> import t5  # pip install t5==0.7.1
        >>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary

        >>> path_to_mtf_small_mt5_checkpoint = '<fill_in>'
        >>> path_to_mtf_small_mt5_spm_model_path = '<fill_in>'
        >>> t5_model = t5.models.MtfModel(model_dir=path_to_mtf_small_mt5_checkpoint, batch_size=1, tpu=None)
        >>> vocab = SentencePieceVocabulary(path_to_mtf_small_mt5_spm_model_path)
        >>> score = t5_model.score(inputs=["Hello there"], targets=["Hi I am"], vocabulary=vocab)
        """

        model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small")
        tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

        input_ids = tokenizer("Hello there", return_tensors="np").input_ids
        labels = tokenizer("Hi I am", return_tensors="np").input_ids

        decoder_input_ids = shift_tokens_right(labels, model.config.pad_token_id, model.config.decoder_start_token_id)

        logits = model(input_ids, decoder_input_ids=decoder_input_ids).logits
        loss = optax.softmax_cross_entropy(logits, onehot(labels, logits.shape[-1])).mean()

        # `loss` is the mean per-token cross-entropy, so scaling by the target length and
        # negating recovers the summed target log-likelihood that `MtfModel.score` reports.
        mtf_score = -(labels.shape[-1] * loss.item())

        EXPECTED_SCORE = -84.9127
        self.assertTrue(abs(mtf_score - EXPECTED_SCORE) < 1e-4)
transformers/tests/models/mt5/test_modeling_flax_mt5.py/0
{ "file_path": "transformers/tests/models/mt5/test_modeling_flax_mt5.py", "repo_id": "transformers", "token_count": 950 }
# coding=utf-8 # Copyright 2024 HuggingFace Inc. team. All rights reserved. # Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Nemotron model.""" import tempfile import unittest import pytest from transformers import NemotronConfig, is_torch_available from transformers.testing_utils import ( is_flaky, require_flash_attn, require_read_token, require_torch, require_torch_accelerator, require_torch_gpu, require_torch_sdpa, slow, torch_device, ) from ...models.gemma.test_modeling_gemma import GemmaModelTest, GemmaModelTester from ...test_configuration_common import ConfigTester if is_torch_available(): import torch from transformers import ( AutoTokenizer, NemotronForCausalLM, NemotronForQuestionAnswering, NemotronForSequenceClassification, NemotronForTokenClassification, NemotronModel, ) class NemotronModelTester(GemmaModelTester): if is_torch_available(): config_class = NemotronConfig model_class = NemotronModel for_causal_lm_class = NemotronForCausalLM for_sequence_class = NemotronForSequenceClassification for_token_class = NemotronForTokenClassification @require_torch class NemotronModelTest(GemmaModelTest): # Need to use `0.8` instead of `0.9` for `test_cpu_offload` # This is because we are hitting edge cases with the causal_mask buffer model_split_percents = [0.5, 0.7, 0.8] all_model_classes = ( ( NemotronModel, NemotronForCausalLM, NemotronForSequenceClassification, NemotronForQuestionAnswering, NemotronForTokenClassification, ) if is_torch_available() else () ) all_generative_model_classes = (NemotronForCausalLM,) if is_torch_available() else () pipeline_model_mapping = ( { "feature-extraction": NemotronModel, "text-classification": NemotronForSequenceClassification, "text-generation": NemotronForCausalLM, "zero-shot": NemotronForSequenceClassification, "question-answering": NemotronForQuestionAnswering, "token-classification": NemotronForTokenClassification, } if is_torch_available() else {} ) test_headmasking = False test_pruning = False fx_compatible = False # used in `test_torch_compile_for_training` _torch_compile_train_cls = NemotronForCausalLM if is_torch_available() else None def setUp(self): self.model_tester = NemotronModelTester(self) self.config_tester = ConfigTester(self, config_class=NemotronConfig, hidden_size=37) @unittest.skip("Eager and SDPA do not produce the same outputs, thus this test fails") def test_model_outputs_equivalence(self, **kwargs): pass @require_torch_sdpa @require_torch_accelerator @slow def test_sdpa_equivalence(self): for model_class in self.all_model_classes: if not model_class._supports_sdpa: self.skipTest(reason="Model does not support SDPA") config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model_sdpa = model_class.from_pretrained( tmpdirname, torch_dtype=torch.float16, attn_implementation="sdpa" ) model_sdpa.to(torch_device) model 
= model_class.from_pretrained(tmpdirname, torch_dtype=torch.float16, attn_implementation="eager") model.to(torch_device) dummy_input = inputs_dict[model_class.main_input_name] dummy_input = dummy_input.to(torch_device) outputs = model(dummy_input, output_hidden_states=True) outputs_sdpa = model_sdpa(dummy_input, output_hidden_states=True) logits = outputs.hidden_states[-1] logits_sdpa = outputs_sdpa.hidden_states[-1] # nemotron sdpa needs a high tolerance assert torch.allclose(logits_sdpa, logits, atol=1e-2) @require_flash_attn @require_torch_gpu @pytest.mark.flash_attn_test @is_flaky() @slow def test_flash_attn_2_equivalence(self): for model_class in self.all_model_classes: if not model_class._supports_flash_attn_2: self.skipTest(reason="Model does not support Flash Attention 2") config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model_fa = model_class.from_pretrained( tmpdirname, torch_dtype=torch.float16, attn_implementation="flash_attention_2" ) model_fa.to(torch_device) model = model_class.from_pretrained(tmpdirname, torch_dtype=torch.float16, attn_implementation="eager") model.to(torch_device) dummy_input = inputs_dict[model_class.main_input_name] dummy_input = dummy_input.to(torch_device) outputs = model(dummy_input, output_hidden_states=True) outputs_fa = model_fa(dummy_input, output_hidden_states=True) logits = outputs.hidden_states[-1] logits_fa = outputs_fa.hidden_states[-1] # nemotron flash attention 2 needs a high tolerance assert torch.allclose(logits_fa, logits, atol=1e-2) @require_torch_gpu class NemotronIntegrationTest(unittest.TestCase): # This variable is used to determine which CUDA device are we using for our runners (A10 or T4) # Depending on the hardware we get different logits / generations cuda_compute_capability_major_version = None @classmethod def setUpClass(cls): if is_torch_available() and torch.cuda.is_available(): # 8 is for A100 / A10 and 7 for T4 cls.cuda_compute_capability_major_version = torch.cuda.get_device_capability()[0] @slow @require_read_token def test_nemotron_8b_generation_sdpa(self): text = ["What is the largest planet in solar system?"] EXPECTED_TEXT = [ "What is the largest planet in solar system?\nAnswer: Jupiter\n\nWhat is the answer", ] model_id = "thhaus/nemotron3-8b" model = NemotronForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, device_map="auto", attn_implementation="sdpa" ) tokenizer = AutoTokenizer.from_pretrained(model_id) inputs = tokenizer(text, return_tensors="pt").to(torch_device) output = model.generate(**inputs, do_sample=False) output_text = tokenizer.batch_decode(output, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT, output_text) @slow @require_read_token def test_nemotron_8b_generation_eager(self): text = ["What is the largest planet in solar system?"] EXPECTED_TEXT = [ "What is the largest planet in solar system?\nAnswer: Jupiter\n\nWhat is the answer", ] model_id = "thhaus/nemotron3-8b" model = NemotronForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, device_map="auto", attn_implementation="eager" ) tokenizer = AutoTokenizer.from_pretrained(model_id) inputs = tokenizer(text, return_tensors="pt").to(torch_device) output = model.generate(**inputs, do_sample=False) output_text = tokenizer.batch_decode(output, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT, output_text) @slow @require_read_token def 
test_nemotron_8b_generation_fa2(self): text = ["What is the largest planet in solar system?"] EXPECTED_TEXT = [ "What is the largest planet in solar system?\nAnswer: Jupiter\n\nWhat is the answer", ] model_id = "thhaus/nemotron3-8b" model = NemotronForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, device_map="auto", attn_implementation="flash_attention_2" ) tokenizer = AutoTokenizer.from_pretrained(model_id) inputs = tokenizer(text, return_tensors="pt").to(torch_device) output = model.generate(**inputs, do_sample=False) output_text = tokenizer.batch_decode(output, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT, output_text)
transformers/tests/models/nemotron/test_modeling_nemotron.py/0
{ "file_path": "transformers/tests/models/nemotron/test_modeling_nemotron.py", "repo_id": "transformers", "token_count": 4013 }
# coding=utf-8 # Copyright 2023 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from transformers.testing_utils import require_torch, require_vision, slow from transformers.utils import is_torch_available, is_vision_available from ...test_image_processing_common import ImageProcessingTestMixin, prepare_image_inputs if is_vision_available(): from PIL import Image from transformers import AutoProcessor, Owlv2ForObjectDetection, Owlv2ImageProcessor if is_torch_available(): import torch class Owlv2ImageProcessingTester: def __init__( self, parent, batch_size=7, num_channels=3, image_size=18, min_resolution=30, max_resolution=400, do_resize=True, size=None, do_normalize=True, image_mean=[0.48145466, 0.4578275, 0.40821073], image_std=[0.26862954, 0.26130258, 0.27577711], do_convert_rgb=True, ): self.parent = parent self.batch_size = batch_size self.num_channels = num_channels self.image_size = image_size self.min_resolution = min_resolution self.max_resolution = max_resolution self.do_resize = do_resize self.size = size if size is not None else {"height": 18, "width": 18} self.do_normalize = do_normalize self.image_mean = image_mean self.image_std = image_std self.do_convert_rgb = do_convert_rgb def prepare_image_processor_dict(self): return { "do_resize": self.do_resize, "size": self.size, "do_normalize": self.do_normalize, "image_mean": self.image_mean, "image_std": self.image_std, } def expected_output_image_shape(self, images): return self.num_channels, self.size["height"], self.size["width"] def prepare_image_inputs(self, equal_resolution=False, numpify=False, torchify=False): return prepare_image_inputs( batch_size=self.batch_size, num_channels=self.num_channels, min_resolution=self.min_resolution, max_resolution=self.max_resolution, equal_resolution=equal_resolution, numpify=numpify, torchify=torchify, ) @require_torch @require_vision class Owlv2ImageProcessingTest(ImageProcessingTestMixin, unittest.TestCase): image_processing_class = Owlv2ImageProcessor if is_vision_available() else None def setUp(self): super().setUp() self.image_processor_tester = Owlv2ImageProcessingTester(self) @property def image_processor_dict(self): return self.image_processor_tester.prepare_image_processor_dict() def test_image_processor_properties(self): image_processing = self.image_processing_class(**self.image_processor_dict) self.assertTrue(hasattr(image_processing, "do_resize")) self.assertTrue(hasattr(image_processing, "size")) self.assertTrue(hasattr(image_processing, "do_normalize")) self.assertTrue(hasattr(image_processing, "image_mean")) self.assertTrue(hasattr(image_processing, "image_std")) def test_image_processor_from_dict_with_kwargs(self): image_processor = self.image_processing_class.from_dict(self.image_processor_dict) self.assertEqual(image_processor.size, {"height": 18, "width": 18}) image_processor = self.image_processing_class.from_dict( self.image_processor_dict, size={"height": 42, "width": 42} ) self.assertEqual(image_processor.size, {"height": 42, "width": 42}) @slow 
def test_image_processor_integration_test(self): processor = Owlv2ImageProcessor() image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") pixel_values = processor(image, return_tensors="pt").pixel_values mean_value = round(pixel_values.mean().item(), 4) self.assertEqual(mean_value, 0.2353) @slow def test_image_processor_integration_test_resize(self): checkpoint = "google/owlv2-base-patch16-ensemble" processor = AutoProcessor.from_pretrained(checkpoint) model = Owlv2ForObjectDetection.from_pretrained(checkpoint) image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") text = ["cat"] target_size = image.size[::-1] expected_boxes = torch.tensor( [ [341.66656494140625, 23.38756561279297, 642.321044921875, 371.3482971191406], [6.753320693969727, 51.96149826049805, 326.61810302734375, 473.12982177734375], ] ) # single image inputs = processor(text=[text], images=[image], return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) results = processor.post_process_object_detection(outputs, threshold=0.2, target_sizes=[target_size])[0] boxes = results["boxes"] self.assertTrue( torch.allclose(boxes, expected_boxes, atol=1e-2), f"Single image bounding boxes fail. Expected {expected_boxes}, got {boxes}", ) # batch of images inputs = processor(text=[text, text], images=[image, image], return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) results = processor.post_process_object_detection( outputs, threshold=0.2, target_sizes=[target_size, target_size] ) for result in results: boxes = result["boxes"] self.assertTrue( torch.allclose(boxes, expected_boxes, atol=1e-2), f"Batch image bounding boxes fail. Expected {expected_boxes}, got {boxes}", ) @unittest.skip(reason="OWLv2 doesn't treat 4 channel PIL and numpy consistently yet") # FIXME Amy def test_call_numpy_4_channels(self): pass
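

# Hedged usage sketch (illustrative only): unpacking the dictionaries returned by
# `post_process_object_detection`, as used in the integration test above. The helper
# name is made up for this example; the "scores"/"labels"/"boxes" keys are the ones
# returned alongside the boxes the test already checks.
def _example_format_detections(results, text_queries):
    formatted = []
    for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
        # each box is (xmin, ymin, xmax, ymax) in pixels of the original image size
        formatted.append((round(score.item(), 3), text_queries[label], [round(v, 2) for v in box.tolist()]))
    return formatted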
transformers/tests/models/owlv2/test_image_processing_owlv2.py/0
{ "file_path": "transformers/tests/models/owlv2/test_image_processing_owlv2.py", "repo_id": "transformers", "token_count": 2696 }
# coding=utf-8 # Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch PhiMoE model.""" import unittest from typing import List from parameterized import parameterized from transformers import PhimoeConfig, StaticCache, is_torch_available, set_seed from transformers.testing_utils import ( require_torch, slow, torch_device, ) from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import ( AutoTokenizer, PhimoeForCausalLM, PhimoeForSequenceClassification, PhimoeModel, ) end_of_text_token = 32000 class PhimoeMiniWithStaticCache(torch.nn.Module): def __init__(self, model: PhimoeForCausalLM, batch_size: int, max_seq_len: int): super().__init__() self.model = model self.cache = StaticCache( config=model.config, batch_size=batch_size, max_cache_len=max_seq_len, device=self.model.device, dtype=self.model.dtype, ) def forward( self, input_ids: torch.LongTensor = None, ) -> torch.FloatTensor: return self.model.forward( input_ids=input_ids, use_cache=True, return_dict=True, past_key_values=self.cache, ).logits @staticmethod def generate(model: PhimoeForCausalLM, prompt_tokens: torch.LongTensor, max_seq_len: int) -> List[int]: model = PhimoeMiniWithStaticCache(model, 1, max_seq_len + prompt_tokens.shape[-1]) response_tokens = [] for input_pos in range(prompt_tokens.shape[-1]): result = model.forward( input_ids=prompt_tokens[:, input_pos : input_pos + 1], ) response_tokens.append(prompt_tokens[0][input_pos].item()) current_token = torch.argmax(result[:, -1, :], dim=-1).item() response_tokens.append(current_token) while current_token != end_of_text_token and len(response_tokens) < max_seq_len: result = model.forward( input_ids=torch.tensor([[current_token]], dtype=torch.long), ) current_token = torch.argmax(result[:, -1, :], dim=-1).item() response_tokens.append(current_token) return response_tokens class PhimoeModelTester: def __init__( self, parent, batch_size=13, seq_length=7, is_training=True, use_input_mask=True, use_token_type_ids=False, use_labels=True, vocab_size=99, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, num_key_value_heads=4, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=131072, type_vocab_size=16, type_sequence_label_size=2, initializer_range=0.02, num_labels=3, num_choices=4, pad_token_id=0, scope=None, original_max_position_embeddings=4096, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_input_mask = use_input_mask self.use_token_type_ids = use_token_type_ids self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers 
self.num_attention_heads = num_attention_heads self.num_key_value_heads = num_key_value_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.max_position_embeddings = max_position_embeddings self.type_vocab_size = type_vocab_size self.type_sequence_label_size = type_sequence_label_size self.initializer_range = initializer_range self.num_labels = num_labels self.num_choices = num_choices self.pad_token_id = pad_token_id self.scope = scope self.original_max_position_embeddings = original_max_position_embeddings # Copied from tests.models.llama.test_modeling_llama.LlamaModelTester.prepare_config_and_inputs def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: input_mask = torch.tril(torch.ones_like(input_ids).to(torch_device)) token_type_ids = None if self.use_token_type_ids: token_type_ids = ids_tensor([self.batch_size, self.seq_length], self.type_vocab_size) sequence_labels = None token_labels = None choice_labels = None if self.use_labels: sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size) token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) choice_labels = ids_tensor([self.batch_size], self.num_choices) config = self.get_config() return config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels def get_config(self): return PhimoeConfig( vocab_size=self.vocab_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, num_key_value_heads=self.num_key_value_heads, intermediate_size=self.intermediate_size, hidden_act=self.hidden_act, hidden_dropout_prob=self.hidden_dropout_prob, attention_probs_dropout_prob=self.attention_probs_dropout_prob, max_position_embeddings=self.max_position_embeddings, type_vocab_size=self.type_vocab_size, is_decoder=False, initializer_range=self.initializer_range, pad_token_id=self.pad_token_id, num_experts_per_tok=2, num_local_experts=2, original_max_position_embeddings=self.original_max_position_embeddings, ) # Copied from tests.models.llama.test_modeling_llama.LlamaModelTester.create_and_check_model with Llama->Phimoe def create_and_check_model( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels ): model = PhimoeModel(config=config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) # Copied from tests.models.llama.test_modeling_llama.LlamaModelTester.create_and_check_model_as_decoder with Llama->Phimoe def create_and_check_model_as_decoder( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, encoder_hidden_states, encoder_attention_mask, ): config.add_cross_attention = True model = PhimoeModel(config) model.to(torch_device) model.eval() result = model( input_ids, attention_mask=input_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, ) result = model( input_ids, attention_mask=input_mask, encoder_hidden_states=encoder_hidden_states, ) result = model(input_ids, attention_mask=input_mask) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, 
self.seq_length, self.hidden_size)) # Copied from tests.models.llama.test_modeling_llama.LlamaModelTester.create_and_check_for_causal_lm with Llama->Phimoe def create_and_check_for_causal_lm( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, encoder_hidden_states, encoder_attention_mask, ): model = PhimoeForCausalLM(config=config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask, labels=token_labels) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) # Copied from tests.models.llama.test_modeling_llama.LlamaModelTester.create_and_check_decoder_model_past_large_inputs with Llama->Phimoe def create_and_check_decoder_model_past_large_inputs( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, encoder_hidden_states, encoder_attention_mask, ): config.is_decoder = True config.add_cross_attention = True model = PhimoeForCausalLM(config=config) model.to(torch_device) model.eval() # first forward pass outputs = model( input_ids, attention_mask=input_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, use_cache=True, ) past_key_values = outputs.past_key_values # create hypothetical multiple next token and extent to next_input_ids next_tokens = ids_tensor((self.batch_size, 3), config.vocab_size) next_mask = ids_tensor((self.batch_size, 3), vocab_size=2) # append to next input_ids and next_input_ids = torch.cat([input_ids, next_tokens], dim=-1) next_attention_mask = torch.cat([input_mask, next_mask], dim=-1) output_from_no_past = model( next_input_ids, attention_mask=next_attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, output_hidden_states=True, )["hidden_states"][0] output_from_past = model( next_tokens, attention_mask=next_attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, output_hidden_states=True, )["hidden_states"][0] # select random slice random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item() output_from_no_past_slice = output_from_no_past[:, -3:, random_slice_idx].detach() output_from_past_slice = output_from_past[:, :, random_slice_idx].detach() self.parent.assertTrue(output_from_past_slice.shape[1] == next_tokens.shape[1]) # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) # Copied from tests.models.llama.test_modeling_llama.LlamaModelTester.prepare_config_and_inputs_for_common def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, ) = config_and_inputs inputs_dict = {"input_ids": input_ids, "attention_mask": input_mask} return config, inputs_dict @require_torch class PhimoeModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = ( (PhimoeModel, PhimoeForCausalLM, PhimoeForSequenceClassification) if is_torch_available() else () ) all_generative_model_classes = (PhimoeForCausalLM,) if is_torch_available() else () pipeline_model_mapping = ( { "feature-extraction": PhimoeModel, "text-classification": PhimoeForSequenceClassification, "text-generation": PhimoeForCausalLM, "zero-shot": PhimoeForSequenceClassification, } if 
is_torch_available() else {} ) test_headmasking = False test_pruning = False # TODO (ydshieh): Check this. See https://app.circleci.com/pipelines/github/huggingface/transformers/79292/workflows/fa2ba644-8953-44a6-8f67-ccd69ca6a476/jobs/1012905 def is_pipeline_test_to_skip( self, pipeline_test_casse_name, config_class, model_architecture, tokenizer_name, processor_name ): return True # Copied from tests.models.llama.test_modeling_llama.LlamaModelTest.setUp with Llama->Phimoe def setUp(self): self.model_tester = PhimoeModelTester(self) self.config_tester = ConfigTester(self, config_class=PhimoeConfig, hidden_size=37) # Copied from tests.models.llama.test_modeling_llama.LlamaModelTest.test_config def test_config(self): self.config_tester.run_common_tests() # Copied from tests.models.llama.test_modeling_llama.LlamaModelTest.test_model def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) # Copied from tests.models.llama.test_modeling_llama.LlamaModelTest.test_llama_sequence_classification_model with Llama->Phimoe,llama->phimoe def test_phimoe_sequence_classification_model(self): config, input_dict = self.model_tester.prepare_config_and_inputs_for_common() config.num_labels = 3 input_ids = input_dict["input_ids"] attention_mask = input_ids.ne(1).to(torch_device) sequence_labels = ids_tensor([self.model_tester.batch_size], self.model_tester.type_sequence_label_size) model = PhimoeForSequenceClassification(config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=attention_mask, labels=sequence_labels) self.assertEqual(result.logits.shape, (self.model_tester.batch_size, self.model_tester.num_labels)) # Copied from tests.models.llama.test_modeling_llama.LlamaModelTest.test_llama_sequence_classification_model_for_single_label with Llama->Phimoe,llama->phimoe def test_phimoe_sequence_classification_model_for_single_label(self): config, input_dict = self.model_tester.prepare_config_and_inputs_for_common() config.num_labels = 3 config.problem_type = "single_label_classification" input_ids = input_dict["input_ids"] attention_mask = input_ids.ne(1).to(torch_device) sequence_labels = ids_tensor([self.model_tester.batch_size], self.model_tester.type_sequence_label_size) model = PhimoeForSequenceClassification(config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=attention_mask, labels=sequence_labels) self.assertEqual(result.logits.shape, (self.model_tester.batch_size, self.model_tester.num_labels)) # Copied from tests.models.llama.test_modeling_llama.LlamaModelTest.test_llama_sequence_classification_model_for_multi_label with Llama->Phimoe,llama->phimoe def test_phimoe_sequence_classification_model_for_multi_label(self): config, input_dict = self.model_tester.prepare_config_and_inputs_for_common() config.num_labels = 3 config.problem_type = "multi_label_classification" input_ids = input_dict["input_ids"] attention_mask = input_ids.ne(1).to(torch_device) sequence_labels = ids_tensor( [self.model_tester.batch_size, config.num_labels], self.model_tester.type_sequence_label_size ).to(torch.float) model = PhimoeForSequenceClassification(config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=attention_mask, labels=sequence_labels) self.assertEqual(result.logits.shape, (self.model_tester.batch_size, self.model_tester.num_labels)) @parameterized.expand([("longrope",)]) def test_model_rope_scaling_from_config(self, scaling_type): 
config, _ = self.model_tester.prepare_config_and_inputs_for_common() short_input = ids_tensor([1, 10], config.vocab_size) long_input = ids_tensor([1, int(config.original_max_position_embeddings * 1.5)], config.vocab_size) set_seed(42) # Fixed seed at init time so the two models get the same random weights original_model = PhimoeModel(config) original_model.to(torch_device) original_model.eval() original_short_output = original_model(short_input).last_hidden_state original_long_output = original_model(long_input).last_hidden_state set_seed(42) # Fixed seed at init time so the two models get the same random weights n_factors = config.hidden_size // config.num_attention_heads // 2 config.rope_scaling = { "type": scaling_type, "short_factor": [3.0 for _ in range(n_factors)], "long_factor": [5.0 for _ in range(n_factors)], "short_mscale": 1.243163121016122, "long_mscale": 1.243163121016122, "original_max_position_embeddings": 4096, } scaled_model = PhimoeModel(config) scaled_model.to(torch_device) scaled_model.eval() scaled_short_output = scaled_model(short_input).last_hidden_state scaled_long_output = scaled_model(long_input).last_hidden_state # Scaling changes the RoPE embeddings, both for the short and long outputs self.assertFalse(torch.allclose(original_short_output, scaled_short_output, atol=1e-5)) self.assertFalse(torch.allclose(original_long_output, scaled_long_output, atol=1e-5)) @parameterized.expand([("longrope",)]) def test_model_rope_scaling_short_long_factor(self, scaling_type): config, _ = self.model_tester.prepare_config_and_inputs_for_common() n_factors = config.hidden_size // config.num_key_value_heads // 2 config.rope_scaling = { "type": scaling_type, "short_factor": [3.0 for _ in range(n_factors)], "long_factor": [5.0 for _ in range(n_factors)], "short_mscale": 1.243163121016122, "long_mscale": 1.243163121016122, "original_max_position_embeddings": 4096, } input_tensor = ids_tensor([1, 4090], config.vocab_size) model = PhimoeForCausalLM(config) model.to(torch_device) model.eval() generation_args_short = { "max_length": config.original_max_position_embeddings, "temperature": 0.0, "use_cache": True, "do_sample": False, "return_dict_in_generate": True, } output_with_short_factor = model.generate(input_tensor, **generation_args_short) keys_with_short_factor = output_with_short_factor.past_key_values[0][0] generation_args_long = { "max_length": config.original_max_position_embeddings + 5, "temperature": 0.0, "use_cache": True, "do_sample": False, "return_dict_in_generate": True, "output_logits": True, } output_with_long_factor = model.generate(input_tensor, **generation_args_long) keys_with_long_factor = output_with_long_factor.past_key_values[0][0] last_token_logits = output_with_long_factor.logits[-1][-1] regenerated_last_token_logits = model(output_with_long_factor.sequences[:, :-1]).logits[0][-1] keys_with_long_factor = keys_with_long_factor[:, :, : config.original_max_position_embeddings - 1, :] # KV cache is re-computed after reaching the (`config.original_max_position_embeddings`+1)th token position self.assertFalse(torch.allclose(keys_with_short_factor, keys_with_long_factor, atol=1e-3, rtol=1e-3)) # Last token generated using long factor torch.testing.assert_close(last_token_logits, regenerated_last_token_logits, rtol=1e-2, atol=1e-2) @slow @require_torch class PhimoeIntegrationTest(unittest.TestCase): def test_model_phimoe_instruct_logits(self): input_ids = { "input_ids": torch.tensor( [[1212, 318, 281, 1672, 2643, 290, 428, 318, 257, 1332]], dtype=torch.long, 
device=torch_device ) } model = PhimoeForCausalLM.from_pretrained("microsoft/Phi-3.5-MoE-instruct").to(torch_device) model.eval() output = model(**input_ids).logits EXPECTED_OUTPUT = torch.tensor([[-3.5312, -2.5000, -1.2734, 0.3555, -0.7578, -0.4727, 0.5977, -0.4316, 0.2256, -1.2188, -1.6797, 0.9961, 3.7656, 11.3125, -1.3828, -4.8438, -5.7500, -1.9375, 0.7227, -0.3438, -0.2100, -0.4277, -0.0444, -0.5352, -0.6406, -0.1016, -0.4258, -1.0234, 0.4297, -0.6250], [-0.9883, 0.1455, -0.4902, 2.3594, 0.7031, 3.1406, 0.4375, 0.2559, 0.6172, -2.1094, -1.3359, 2.5938, 4.9062, 10.8125, -0.1094, 1.5781, -4.9375, 0.7148, -0.0972, 1.7656, -0.0801, 0.2217, 0.1875, -0.4629, 1.5781, 0.3535, 0.0874, 0.6836, -0.0518, -1.2969]]).to(torch_device) # fmt: skip torch.testing.assert_close(EXPECTED_OUTPUT, output[0, :2, :30], rtol=1e-4, atol=1e-4) def test_phimoe_instruct_generation(self): model = PhimoeForCausalLM.from_pretrained("microsoft/Phi-3.5-MoE-instruct") tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-MoE-instruct") messages = [ { "role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user.", }, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, ] inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt") outputs = model.generate(inputs, max_new_tokens=32) output_text = tokenizer.batch_decode(outputs) EXPECTED_OUTPUT = [ "<|system|> You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user.<|end|><|user|> Can you provide ways to eat combinations of bananas and dragonfruits?<|end|><|assistant|> Certainly! Bananas and dragonfruits are both delicious and nutritious fruits that can be combined in various ways to create tast" ] self.assertListEqual(output_text, EXPECTED_OUTPUT) def test_phimoe_instruct_with_static_cache(self): model = PhimoeForCausalLM.from_pretrained("microsoft/Phi-3.5-MoE-instruct") tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-MoE-instruct") messages = [ { "role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user.", }, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, ] inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt") response_tokens = PhimoeMiniWithStaticCache.generate(model, inputs, 64) output_text = tokenizer.batch_decode(torch.tensor([response_tokens], dtype=torch.long, device=torch_device)) EXPECTED_OUTPUT = [ "<|system|> You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user.<|end|><|user|> Can you provide ways to eat combinations of bananas and dragonfruits?<|end|><|assistant|> Certainly! Bananas and dragonfruits are both delicious and nutritious fruits that can" ] self.assertListEqual(output_text, EXPECTED_OUTPUT)
transformers/tests/models/phimoe/test_modeling_phimoe.py/0
{ "file_path": "transformers/tests/models/phimoe/test_modeling_phimoe.py", "repo_id": "transformers", "token_count": 11255 }
from __future__ import annotations import json import os import shutil import tempfile import unittest from unittest.mock import patch import numpy as np from transformers import BartTokenizer from transformers.models.bert.tokenization_bert import VOCAB_FILES_NAMES as DPR_VOCAB_FILES_NAMES from transformers.models.dpr.tokenization_dpr import DPRQuestionEncoderTokenizer from transformers.models.roberta.tokenization_roberta import VOCAB_FILES_NAMES as BART_VOCAB_FILES_NAMES from transformers.testing_utils import require_sentencepiece, require_tf, require_tokenizers, slow from transformers.utils import cached_property, is_datasets_available, is_faiss_available, is_tf_available if is_tf_available() and is_datasets_available() and is_faiss_available(): import faiss import tensorflow as tf from datasets import Dataset from transformers import ( AutoConfig, RagConfig, RagRetriever, RagTokenizer, TFAutoModel, TFAutoModelForSeq2SeqLM, TFRagModel, TFRagSequenceForGeneration, TFRagTokenForGeneration, ) from transformers.modeling_tf_outputs import TFBaseModelOutput from ..bart.test_modeling_tf_bart import TFBartModelTester from ..dpr.test_modeling_tf_dpr import TFDPRModelTester TOLERANCE = 1e-3 def require_retrieval(test_case): """ Decorator marking a test that requires a set of dependencies necessary for pefrorm retrieval with [`RagRetriever`]. These tests are skipped when respective libraries are not installed. """ if not (is_tf_available() and is_datasets_available() and is_faiss_available()): test_case = unittest.skip("test requires tensorflow, datasets and faiss")(test_case) return test_case @require_tf @require_retrieval @require_sentencepiece class TFRagTestMixin: all_model_classes = ( (TFRagModel, TFRagTokenForGeneration, TFRagSequenceForGeneration) if is_tf_available() and is_datasets_available() and is_faiss_available() else () ) all_generative_model_classes = ( (TFRagTokenForGeneration, TFRagSequenceForGeneration) if is_tf_available() and is_datasets_available() and is_faiss_available() else () ) retrieval_vector_size = 32 n_docs = 3 max_combined_length = 16 def setUp(self): self.tmpdirname = tempfile.mkdtemp() # DPR tok vocab_tokens = [ "[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]", "want", "##want", "##ed", "wa", "un", "runn", "##ing", ",", "low", "lowest", ] dpr_tokenizer_path = os.path.join(self.tmpdirname, "dpr_tokenizer") os.makedirs(dpr_tokenizer_path, exist_ok=True) self.vocab_file = os.path.join(dpr_tokenizer_path, DPR_VOCAB_FILES_NAMES["vocab_file"]) with open(self.vocab_file, "w", encoding="utf-8") as vocab_writer: vocab_writer.write("".join([x + "\n" for x in vocab_tokens])) # BART tok vocab = [ "l", "o", "w", "e", "r", "s", "t", "i", "d", "n", "\u0120", "\u0120l", "\u0120n", "\u0120lo", "\u0120low", "er", "\u0120lowest", "\u0120newer", "\u0120wider", "<unk>", ] vocab_tokens = dict(zip(vocab, range(len(vocab)))) merges = ["#version: 0.2", "\u0120 l", "\u0120l o", "\u0120lo w", "e r", ""] self.special_tokens_map = {"unk_token": "<unk>"} bart_tokenizer_path = os.path.join(self.tmpdirname, "bart_tokenizer") os.makedirs(bart_tokenizer_path, exist_ok=True) self.vocab_file = os.path.join(bart_tokenizer_path, BART_VOCAB_FILES_NAMES["vocab_file"]) self.merges_file = os.path.join(bart_tokenizer_path, BART_VOCAB_FILES_NAMES["merges_file"]) with open(self.vocab_file, "w", encoding="utf-8") as fp: fp.write(json.dumps(vocab_tokens) + "\n") with open(self.merges_file, "w", encoding="utf-8") as fp: fp.write("\n".join(merges)) @cached_property def dpr_tokenizer(self) -> 
DPRQuestionEncoderTokenizer: return DPRQuestionEncoderTokenizer.from_pretrained(os.path.join(self.tmpdirname, "dpr_tokenizer")) @cached_property def bart_tokenizer(self) -> BartTokenizer: return BartTokenizer.from_pretrained(os.path.join(self.tmpdirname, "bart_tokenizer")) def tearDown(self): shutil.rmtree(self.tmpdirname) def get_retriever(self, config): dataset = Dataset.from_dict( { "id": ["0", "1", "3"], "text": ["foo", "bar", "qux"], "title": ["Foo", "Bar", "Qux"], "embeddings": [ np.ones(self.retrieval_vector_size), 2 * np.ones(self.retrieval_vector_size), 3 * np.ones(self.retrieval_vector_size), ], } ) dataset.add_faiss_index("embeddings", string_factory="Flat", metric_type=faiss.METRIC_INNER_PRODUCT) tokenizer = self.bart_tokenizer with patch("transformers.models.rag.retrieval_rag.load_dataset") as mock_load_dataset: mock_load_dataset.return_value = dataset retriever = RagRetriever( config, question_encoder_tokenizer=self.dpr_tokenizer, generator_tokenizer=tokenizer, ) return retriever def check_model_with_retriever( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) for model_class in self.all_model_classes: model = model_class(config, retriever=self.get_retriever(config)) self.assertTrue(model.config.is_encoder_decoder) outputs = model( input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, ) # logits self.assertEqual( outputs.logits.shape, (self.n_docs * decoder_input_ids.shape[0], decoder_input_ids.shape[1], config.generator.vocab_size), ) # generator encoder last hidden states self.assertEqual( outputs.generator_enc_last_hidden_state.shape, (self.n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size), ) # doc scores self.assertEqual(outputs.doc_scores.shape, (input_ids.shape[0], self.n_docs)) def check_model_generate_from_context_input_ids( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) retriever = self.get_retriever(config) for i, model_class in enumerate(self.all_generative_model_classes): model = model_class(config) self.assertTrue(model.config.is_encoder_decoder) question_hidden_states = model.question_encoder(input_ids, attention_mask=attention_mask)[0] out = retriever( input_ids, question_hidden_states.numpy(), prefix=config.generator.prefix, return_tensors="tf", ) context_input_ids, context_attention_mask, retrieved_doc_embeds = ( out["context_input_ids"], out["context_attention_mask"], out["retrieved_doc_embeds"], ) retrieved_doc_embeds = tf.cast(retrieved_doc_embeds, tf.float32) # compute doc_scores doc_scores = tf.squeeze( tf.matmul(tf.expand_dims(question_hidden_states, axis=[1]), retrieved_doc_embeds, transpose_b=True), axis=[1], ) outputs = model.generate( context_input_ids=context_input_ids, context_attention_mask=context_attention_mask, doc_scores=doc_scores, ) self.assertIsNotNone(outputs) def check_model_generate( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) for model_class in self.all_generative_model_classes: model = model_class(config, retriever=self.get_retriever(config)) self.assertTrue(model.config.is_encoder_decoder) input_ids = 
tf.cast(input_ids, tf.int32) outputs = model.generate( input_ids=input_ids, num_beams=2, num_return_sequences=2, decoder_start_token_id=config.generator.eos_token_id, max_new_tokens=5, ) self.assertIsNotNone(outputs) def check_model_without_retriever( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) retriever = self.get_retriever(config) for model_class in self.all_model_classes: model = model_class(config) self.assertTrue(model.config.is_encoder_decoder) question_hidden_states = model.question_encoder(input_ids, attention_mask=attention_mask)[0] out = retriever( input_ids, question_hidden_states.numpy(), prefix=config.generator.prefix, return_tensors="tf", ) context_input_ids, context_attention_mask, retrieved_doc_embeds = ( out["context_input_ids"], out["context_attention_mask"], out["retrieved_doc_embeds"], ) retrieved_doc_embeds = tf.cast(retrieved_doc_embeds, tf.float32) # compute doc_scores doc_scores = tf.squeeze( tf.matmul(tf.expand_dims(question_hidden_states, axis=[1]), retrieved_doc_embeds, transpose_b=True), axis=[1], ) outputs = model( input_ids=None, context_input_ids=context_input_ids, context_attention_mask=context_attention_mask, doc_scores=doc_scores, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, ) # logits self.assertEqual( outputs.logits.shape, (self.n_docs * decoder_input_ids.shape[0], decoder_input_ids.shape[1], config.generator.vocab_size), ) # generator encoder last hidden states self.assertEqual( outputs.generator_enc_last_hidden_state.shape, (self.n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size), ) # doc scores self.assertEqual(outputs.doc_scores.shape, (input_ids.shape[0], self.n_docs)) def check_model_custom_n_docs( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, n_docs, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) retriever = self.get_retriever(config) for model_class in self.all_model_classes: model = model_class(config) self.assertTrue(model.config.is_encoder_decoder) question_hidden_states = model.question_encoder(input_ids, attention_mask=attention_mask)[0] out = retriever( input_ids, question_hidden_states.numpy(), prefix=config.generator.prefix, return_tensors="tf", n_docs=n_docs, ) context_input_ids, context_attention_mask, retrieved_doc_embeds = ( out["context_input_ids"], out["context_attention_mask"], out["retrieved_doc_embeds"], ) retrieved_doc_embeds = tf.cast(retrieved_doc_embeds, tf.float32) # compute doc_scores doc_scores = tf.squeeze( tf.matmul(tf.expand_dims(question_hidden_states, axis=[1]), retrieved_doc_embeds, transpose_b=True), axis=[1], ) outputs = model( input_ids=None, context_input_ids=context_input_ids, context_attention_mask=context_attention_mask, doc_scores=doc_scores, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, n_docs=n_docs, ) # logits self.assertEqual( outputs.logits.shape, (n_docs * decoder_input_ids.shape[0], decoder_input_ids.shape[1], config.generator.vocab_size), ) # generator encoder last hidden states self.assertEqual( outputs.generator_enc_last_hidden_state.shape, (n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size), ) # doc scores self.assertEqual(outputs.doc_scores.shape, (input_ids.shape[0], n_docs)) def 
check_model_with_mismatch_n_docs_value( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, retriever_n_docs, generator_n_docs, **kwargs, ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) retriever = self.get_retriever(config) for model_class in self.all_model_classes: model = model_class(config) self.assertTrue(model.config.is_encoder_decoder) question_hidden_states = model.question_encoder(input_ids, attention_mask=attention_mask)[0] out = retriever( input_ids, question_hidden_states.numpy(), prefix=config.generator.prefix, return_tensors="tf", n_docs=retriever_n_docs, ) context_input_ids, context_attention_mask, retrieved_doc_embeds = ( out["context_input_ids"], out["context_attention_mask"], out["retrieved_doc_embeds"], ) retrieved_doc_embeds = tf.cast(retrieved_doc_embeds, tf.float32) # compute doc_scores doc_scores = tf.squeeze( tf.matmul(tf.expand_dims(question_hidden_states, axis=[1]), retrieved_doc_embeds, transpose_b=True), axis=[1], ) self.assertRaises( AssertionError, model.__call__, input_ids=None, context_input_ids=context_input_ids, context_attention_mask=context_attention_mask, doc_scores=doc_scores, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, n_docs=generator_n_docs, ) def check_model_with_encoder_outputs( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) for model_class in self.all_model_classes: model = model_class(config, retriever=self.get_retriever(config)) self.assertTrue(model.config.is_encoder_decoder) outputs = model( input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, ) encoder_outputs = TFBaseModelOutput(outputs.generator_enc_last_hidden_state) # run only generator outputs = model( input_ids=None, encoder_outputs=encoder_outputs, doc_scores=outputs.doc_scores, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, ) # logits self.assertEqual( outputs.logits.shape, (self.n_docs * decoder_input_ids.shape[0], decoder_input_ids.shape[1], config.generator.vocab_size), ) # generator encoder last hidden states self.assertEqual( outputs.generator_enc_last_hidden_state.shape, (self.n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size), ) # doc scores self.assertEqual(outputs.doc_scores.shape, (input_ids.shape[0], self.n_docs)) def test_model_with_retriever(self): inputs_dict = self.config_and_inputs self.check_model_with_retriever(**inputs_dict) def test_model_without_retriever(self): inputs_dict = self.config_and_inputs self.check_model_without_retriever(**inputs_dict) @slow def test_model_generate_from_context_input_ids(self): inputs_dict = self.config_and_inputs self.check_model_generate_from_context_input_ids(**inputs_dict) def test_model_with_encoder_outputs(self): inputs_dict = self.config_and_inputs self.check_model_with_encoder_outputs(**inputs_dict) @slow def test_model_generate(self): inputs_dict = self.config_and_inputs self.check_model_generate(**inputs_dict) def test_model_with_custom_n_docs(self): inputs_dict = self.config_and_inputs inputs_dict["n_docs"] = 1 self.check_model_custom_n_docs(**inputs_dict) def test_model_with_mismatch_n_docs_value(self): inputs_dict = self.config_and_inputs inputs_dict["retriever_n_docs"] = 3 inputs_dict["generator_n_docs"] = 2 
self.check_model_with_mismatch_n_docs_value(**inputs_dict) @require_tf @require_retrieval class TFRagDPRBartTest(TFRagTestMixin, unittest.TestCase): @cached_property def config_and_inputs(self): question_encoder_tester = TFDPRModelTester(self) dpr_config_and_inputs = question_encoder_tester.prepare_config_and_inputs() generator_tester = TFBartModelTester(self) bart_config_and_inputs = generator_tester.prepare_config_and_inputs_for_common() (question_encoder_config, input_ids, _, input_mask, _, _, _) = dpr_config_and_inputs (generator_config, bart_inputs_dict) = bart_config_and_inputs decoder_input_ids, decoder_attention_mask = bart_inputs_dict["input_ids"], bart_inputs_dict["attention_mask"] config = RagConfig.from_question_encoder_generator_configs( question_encoder_config, generator_config, n_docs=self.n_docs, retrieval_vector_size=self.retrieval_vector_size, max_combined_length=self.max_combined_length, ) return { "config": config, "input_ids": input_ids, "attention_mask": input_mask, "decoder_input_ids": decoder_input_ids, "decoder_attention_mask": decoder_attention_mask, } @require_tf @require_retrieval @require_sentencepiece @require_tokenizers class TFRagModelIntegrationTests(unittest.TestCase): @cached_property def token_model(self): return TFRagTokenForGeneration.from_pretrained_question_encoder_generator( "facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large-cnn" ) @cached_property def sequence_model(self): return TFRagSequenceForGeneration.from_pretrained_question_encoder_generator( "facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large-cnn" ) def token_model_nq_checkpoint(self, retriever): return TFRagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) def get_rag_config(self): question_encoder_config = AutoConfig.from_pretrained("facebook/dpr-question_encoder-single-nq-base") generator_config = AutoConfig.from_pretrained("facebook/bart-large-cnn") return RagConfig.from_question_encoder_generator_configs( question_encoder_config, generator_config, bos_token_id=0, decoder_start_token_id=2, eos_token_id=2, is_encoder_decoder=True, pad_token_id=1, vocab_size=50264, title_sep=" / ", doc_sep=" // ", n_docs=5, max_combined_length=300, dataset="wiki_dpr", dataset_split="train", index_name="exact", index_path=None, use_dummy_dataset=True, retrieval_vector_size=768, retrieval_batch_size=8, dataset_revision="b24a417", ) @slow def test_rag_sequence_inference(self): rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) rag_sequence = self.sequence_model rag_sequence.set_retriever(rag_retriever) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="tf" ).input_ids decoder_input_ids = rag_decoder_tokenizer("Linda Davis", return_tensors="tf").input_ids output = rag_sequence( input_ids, labels=decoder_input_ids, ) expected_shape = tf.TensorShape([5, 5, 50264]) self.assertEqual(output.logits.shape, expected_shape) expected_doc_scores = tf.convert_to_tensor([[75.0286, 74.4998, 74.0804, 74.0306, 73.9504]]) expected_loss = tf.convert_to_tensor([36.7368]) tf.debugging.assert_near(output.loss, expected_loss, atol=1e-3) tf.debugging.assert_near(output.doc_scores, 
expected_doc_scores, atol=1e-3) @slow def test_rag_token_inference(self): rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) rag_token = self.token_model rag_token.set_retriever(rag_retriever) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="tf" ).input_ids decoder_input_ids = rag_decoder_tokenizer("Linda Davis", return_tensors="tf").input_ids output = rag_token( input_ids, labels=decoder_input_ids, ) expected_shape = tf.TensorShape([5, 5, 50264]) self.assertEqual(output.logits.shape, expected_shape) expected_doc_scores = tf.convert_to_tensor([[75.0286, 74.4998, 74.0804, 74.0306, 73.9504]]) expected_loss = tf.convert_to_tensor([36.3557]) tf.debugging.assert_near(output.loss, expected_loss, atol=1e-3) tf.debugging.assert_near(output.doc_scores, expected_doc_scores, atol=1e-3) @slow def test_rag_token_inference_nq_checkpoint(self): rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) rag_token = self.token_model_nq_checkpoint(retriever=rag_retriever) # check that outputs after saving and loading are equal with tempfile.TemporaryDirectory() as tmpdirname: rag_token.save_pretrained(tmpdirname) rag_token = TFRagTokenForGeneration.from_pretrained(tmpdirname, retriever=rag_retriever) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="tf" ).input_ids decoder_input_ids = rag_decoder_tokenizer("Linda Davis", return_tensors="tf").input_ids output = rag_token( input_ids, labels=decoder_input_ids, ) expected_shape = tf.TensorShape([5, 5, 50265]) self.assertEqual(output.logits.shape, expected_shape) expected_doc_scores = tf.convert_to_tensor([[62.9402, 62.7107, 62.2382, 62.1194, 61.8578]]) expected_loss = tf.convert_to_tensor([32.521812]) tf.debugging.assert_near(output.loss, expected_loss, atol=1e-3) tf.debugging.assert_near(output.doc_scores, expected_doc_scores, atol=1e-3) @slow def test_rag_token_inference_save_pretrained(self): rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) rag_token = self.token_model rag_token.set_retriever(rag_retriever) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="tf" ).input_ids decoder_input_ids = rag_decoder_tokenizer("Linda Davis", return_tensors="tf").input_ids # model must run once to be functional before loading/saving works rag_token( input_ids, labels=decoder_input_ids, ) # check that outputs after saving and loading are equal with tempfile.TemporaryDirectory() as tmpdirname: rag_token.save_pretrained(tmpdirname) rag_token 
= TFRagTokenForGeneration.from_pretrained(tmpdirname, retriever=rag_retriever) output = rag_token( input_ids, labels=decoder_input_ids, ) expected_shape = tf.TensorShape([5, 5, 50264]) self.assertEqual(output.logits.shape, expected_shape) expected_doc_scores = tf.convert_to_tensor([[75.0286, 74.4998, 74.0804, 74.0306, 73.9504]]) expected_loss = tf.convert_to_tensor([36.3557]) tf.debugging.assert_near(output.loss, expected_loss, atol=1e-3) tf.debugging.assert_near(output.doc_scores, expected_doc_scores, atol=1e-3) @slow def test_init_and_from_pretrained(self): rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) rag_config = RagConfig.from_pretrained("facebook/rag-sequence-base") rag = TFRagTokenForGeneration(rag_config, retriever=rag_retriever) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="tf" ).input_ids decoder_input_ids = rag_decoder_tokenizer("Linda Davis", return_tensors="tf").input_ids rag( input_ids, decoder_input_ids=decoder_input_ids, ) # this should not give any warnings with tempfile.TemporaryDirectory() as tmpdirname: rag.save_pretrained(tmpdirname) rag = TFRagTokenForGeneration.from_pretrained(tmpdirname, retriever=rag_retriever) @property def test_data_questions(self): return [ "who got the first nobel prize in physics", "when is the next deadpool movie being released", "which mode is used for short wave broadcast service", "who is the owner of reading football club", "when is the next scandal episode coming out", "when is the last time the philadelphia won the superbowl", "what is the most current adobe flash player version", "how many episodes are there in dragon ball z", ] @slow def test_rag_token_greedy_search(self): tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained( "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True, dataset_revision="b24a417" ) rag_token = TFRagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) # check first two questions input_dict = tokenizer( self.test_data_questions[:2], return_tensors="tf", padding=True, truncation=True, ) input_ids = input_dict.input_ids attention_mask = input_dict.attention_mask # make sure only 1 beam is used rag_token.config.num_beams = 1 output_ids = rag_token.generate( input_ids, attention_mask=attention_mask, ) outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True) EXPECTED_OUTPUTS = [ " albert einstein", " september 22, 2017", ] self.assertListEqual(outputs, EXPECTED_OUTPUTS) @slow def test_rag_token_generate_batch(self): # NOTE: gold labels comes from num_beam=4, so this is effectively beam-search test tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained( "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True, dataset_revision="b24a417" ) rag_token = TFRagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) input_dict = tokenizer( self.test_data_questions, return_tensors="tf", padding=True, truncation=True, ) input_ids = input_dict.input_ids attention_mask = input_dict.attention_mask EXPECTED_OUTPUTS = [ " albert einstein", " 
september 22, 2017", " amplitude modulation", " stefan persson", " april 20, 2018", " the 1970s", " 7.1. 2", " 13", ] # Split into 2 batches of 4 examples to avoid GPU OOM. output_ids = rag_token.generate( input_ids[:4], attention_mask=attention_mask[:4], ) outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True) self.assertListEqual(outputs, EXPECTED_OUTPUTS[:4]) output_ids = rag_token.generate( input_ids[4:], attention_mask=attention_mask[4:], ) outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True) self.assertListEqual(outputs, EXPECTED_OUTPUTS[4:]) @slow def test_rag_sequence_generate_batch(self): tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq") retriever = RagRetriever.from_pretrained( "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True, dataset_revision="b24a417", ) rag_sequence = TFRagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever) input_dict = tokenizer( self.test_data_questions, return_tensors="tf", padding=True, truncation=True, ) input_ids = input_dict.input_ids attention_mask = input_dict.attention_mask output_ids = rag_sequence.generate( input_ids, attention_mask=attention_mask, ) outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True) EXPECTED_OUTPUTS = [ " albert einstein", " june 22, 2018", " amplitude modulation", " tim besley ( chairman )", " june 20, 2018", " 1980", " 7.0", " 8", ] self.assertListEqual(outputs, EXPECTED_OUTPUTS) @slow def test_rag_sequence_generate_batch_from_context_input_ids(self): tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq") retriever = RagRetriever.from_pretrained( "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True, dataset_revision="b24a417" ) rag_sequence = TFRagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever) input_dict = tokenizer( self.test_data_questions, return_tensors="tf", padding=True, truncation=True, ) input_ids = input_dict.input_ids question_hidden_states = rag_sequence.question_encoder(input_ids)[0] docs_dict = retriever(input_ids.numpy(), question_hidden_states.numpy(), return_tensors="tf") doc_scores = tf.squeeze( tf.matmul( tf.expand_dims(question_hidden_states, axis=[1]), docs_dict["retrieved_doc_embeds"], transpose_b=True ), axis=[1], ) output_ids = rag_sequence.generate( context_input_ids=docs_dict["context_input_ids"], context_attention_mask=docs_dict["context_attention_mask"], doc_scores=doc_scores, do_deduplication=True, ) outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True) EXPECTED_OUTPUTS = [ " albert einstein", " june 22, 2018", " amplitude modulation", " tim besley ( chairman )", " june 20, 2018", " 1980", " 7.0", " 8", ] self.assertListEqual(outputs, EXPECTED_OUTPUTS) @require_tf @require_retrieval class TFRagModelSaveLoadTests(unittest.TestCase): def get_rag_config(self): question_encoder_config = AutoConfig.from_pretrained("facebook/dpr-question_encoder-single-nq-base") generator_config = AutoConfig.from_pretrained("facebook/bart-large-cnn") return RagConfig.from_question_encoder_generator_configs( question_encoder_config, generator_config, bos_token_id=0, decoder_start_token_id=2, eos_token_id=2, is_encoder_decoder=True, pad_token_id=1, vocab_size=50264, title_sep=" / ", doc_sep=" // ", n_docs=5, max_combined_length=300, dataset="wiki_dpr", dataset_split="train", index_name="exact", index_path=None, use_dummy_dataset=True, retrieval_vector_size=768, retrieval_batch_size=8, 
dataset_revision="b24a417", ) @slow def test_rag_sequence_from_pretrained(self): load_weight_prefix = "tf_rag_model_1" rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="tf" ).input_ids decoder_input_ids = rag_decoder_tokenizer("Linda Davis", return_tensors="tf").input_ids with tempfile.TemporaryDirectory() as tmp_dirname: rag_sequence = TFRagSequenceForGeneration.from_pretrained_question_encoder_generator( "facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large-cnn", retriever=rag_retriever, config=rag_config, ) rag_sequence.build_in_name_scope() # check that the from pretrained methods work rag_sequence.save_pretrained(tmp_dirname) rag_sequence.from_pretrained(tmp_dirname, retriever=rag_retriever) output = rag_sequence(input_ids, labels=decoder_input_ids) loss_pretrained = output.loss del rag_sequence question_encoder = TFAutoModel.from_pretrained("facebook/dpr-question_encoder-single-nq-base") generator = TFAutoModelForSeq2SeqLM.from_pretrained( "facebook/bart-large-cnn", load_weight_prefix=load_weight_prefix, name="generator" ) rag_sequence = TFRagSequenceForGeneration( config=rag_config, question_encoder=question_encoder, generator=generator, retriever=rag_retriever ) output = rag_sequence(input_ids, labels=decoder_input_ids) loss_init = output.loss self.assertAlmostEqual(loss_pretrained, loss_init, places=4) @slow def test_rag_token_from_pretrained(self): load_weight_prefix = "tf_rag_model_1" rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="tf" ).input_ids decoder_input_ids = rag_decoder_tokenizer("Linda Davis", return_tensors="tf").input_ids with tempfile.TemporaryDirectory() as tmp_dirname: rag_token = TFRagTokenForGeneration.from_pretrained_question_encoder_generator( "facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large-cnn", retriever=rag_retriever, config=rag_config, ) rag_token.build_in_name_scope() # check that the from pretrained methods work rag_token.save_pretrained(tmp_dirname) rag_token.from_pretrained(tmp_dirname, retriever=rag_retriever) output = rag_token(input_ids, labels=decoder_input_ids) loss_pretrained = output.loss del rag_token question_encoder = TFAutoModel.from_pretrained("facebook/dpr-question_encoder-single-nq-base") generator = TFAutoModelForSeq2SeqLM.from_pretrained( "facebook/bart-large-cnn", load_weight_prefix=load_weight_prefix, name="generator" ) rag_token = TFRagTokenForGeneration( config=rag_config, question_encoder=question_encoder, generator=generator, retriever=rag_retriever ) output = rag_token(input_ids, labels=decoder_input_ids) loss_init = output.loss self.assertAlmostEqual(loss_pretrained, loss_init, places=4)
transformers/tests/models/rag/test_modeling_tf_rag.py/0
{ "file_path": "transformers/tests/models/rag/test_modeling_tf_rag.py", "repo_id": "transformers", "token_count": 19806 }
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import shutil import tempfile import unittest import numpy as np from transformers.testing_utils import ( is_pt_tf_cross_test, require_tf, require_torch, require_torchvision, require_vision, ) from transformers.utils import is_tf_available, is_torch_available, is_vision_available from ...test_processing_common import ProcessorTesterMixin, prepare_image_inputs if is_vision_available(): from PIL import Image from transformers import AutoProcessor, SamImageProcessor, SamProcessor if is_torch_available(): import torch from transformers.models.sam.image_processing_sam import _mask_to_rle_pytorch if is_tf_available(): import tensorflow as tf from transformers.models.sam.image_processing_sam import _mask_to_rle_tf @require_vision @require_torchvision class SamProcessorTest(ProcessorTesterMixin, unittest.TestCase): processor_class = SamProcessor def setUp(self): self.tmpdirname = tempfile.mkdtemp() image_processor = SamImageProcessor() processor = SamProcessor(image_processor) processor.save_pretrained(self.tmpdirname) def get_image_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).image_processor def tearDown(self): shutil.rmtree(self.tmpdirname) def prepare_mask_inputs(self): """This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True, or a list of PyTorch tensors if one specifies torchify=True. 
""" mask_inputs = [np.random.randint(255, size=(30, 400), dtype=np.uint8)] mask_inputs = [Image.fromarray(x) for x in mask_inputs] return mask_inputs def test_chat_template_save_loading(self): self.skipTest("SamProcessor does not have a tokenizer") def test_image_processor_defaults_preserved_by_image_kwargs(self): self.skipTest("SamProcessor does not have a tokenizer") def test_kwargs_overrides_default_image_processor_kwargs(self): self.skipTest("SamProcessor does not have a tokenizer") def test_kwargs_overrides_default_tokenizer_kwargs(self): self.skipTest("SamProcessor does not have a tokenizer") def test_tokenizer_defaults_preserved_by_kwargs(self): self.skipTest("SamProcessor does not have a tokenizer") def test_save_load_pretrained_additional_features(self): processor = SamProcessor(image_processor=self.get_image_processor()) processor.save_pretrained(self.tmpdirname) image_processor_add_kwargs = self.get_image_processor(do_normalize=False, padding_value=1.0) processor = SamProcessor.from_pretrained(self.tmpdirname, do_normalize=False, padding_value=1.0) self.assertEqual(processor.image_processor.to_json_string(), image_processor_add_kwargs.to_json_string()) self.assertIsInstance(processor.image_processor, SamImageProcessor) def test_image_processor_no_masks(self): image_processor = self.get_image_processor() processor = SamProcessor(image_processor=image_processor) image_input = self.prepare_image_inputs() input_feat_extract = image_processor(image_input, return_tensors="np") input_processor = processor(images=image_input, return_tensors="np") for key in input_feat_extract.keys(): self.assertAlmostEqual(input_feat_extract[key].sum(), input_processor[key].sum(), delta=1e-2) for image in input_feat_extract.pixel_values: self.assertEqual(image.shape, (3, 1024, 1024)) for original_size in input_feat_extract.original_sizes: np.testing.assert_array_equal(original_size, np.array([30, 400])) for reshaped_input_size in input_feat_extract.reshaped_input_sizes: np.testing.assert_array_equal( reshaped_input_size, np.array([77, 1024]) ) # reshaped_input_size value is before padding def test_image_processor_with_masks(self): image_processor = self.get_image_processor() processor = SamProcessor(image_processor=image_processor) image_input = self.prepare_image_inputs() mask_input = self.prepare_mask_inputs() input_feat_extract = image_processor(images=image_input, segmentation_maps=mask_input, return_tensors="np") input_processor = processor(images=image_input, segmentation_maps=mask_input, return_tensors="np") for key in input_feat_extract.keys(): self.assertAlmostEqual(input_feat_extract[key].sum(), input_processor[key].sum(), delta=1e-2) for label in input_feat_extract.labels: self.assertEqual(label.shape, (256, 256)) @require_torch def test_post_process_masks(self): image_processor = self.get_image_processor() processor = SamProcessor(image_processor=image_processor) dummy_masks = [torch.ones((1, 3, 5, 5))] original_sizes = [[1764, 2646]] reshaped_input_size = [[683, 1024]] masks = processor.post_process_masks(dummy_masks, original_sizes, reshaped_input_size) self.assertEqual(masks[0].shape, (1, 3, 1764, 2646)) masks = processor.post_process_masks( dummy_masks, torch.tensor(original_sizes), torch.tensor(reshaped_input_size) ) self.assertEqual(masks[0].shape, (1, 3, 1764, 2646)) # should also work with np dummy_masks = [np.ones((1, 3, 5, 5))] masks = processor.post_process_masks(dummy_masks, np.array(original_sizes), np.array(reshaped_input_size)) self.assertEqual(masks[0].shape, (1, 3, 1764, 
2646)) dummy_masks = [[1, 0], [0, 1]] with self.assertRaises(ValueError): masks = processor.post_process_masks(dummy_masks, np.array(original_sizes), np.array(reshaped_input_size)) def test_rle_encoding(self): """ Test the run-length encoding function. """ # Test that a mask of all zeros returns a single run [height * width]. input_mask = torch.zeros((1, 2, 2), dtype=torch.long) # shape: 1 x 2 x 2 rle = _mask_to_rle_pytorch(input_mask) self.assertEqual(len(rle), 1) self.assertEqual(rle[0]["size"], [2, 2]) # For a 2x2 all-zero mask, we expect a single run of length 4: self.assertEqual(rle[0]["counts"], [4]) # Test that a mask of all ones returns [0, height * width]. input_mask = torch.ones((1, 2, 2), dtype=torch.long) # shape: 1 x 2 x 2 rle = _mask_to_rle_pytorch(input_mask) self.assertEqual(len(rle), 1) self.assertEqual(rle[0]["size"], [2, 2]) # For a 2x2 all-one mask, we expect two runs: [0, 4]. self.assertEqual(rle[0]["counts"], [0, 4]) # Test a mask with mixed 0s and 1s to ensure the run-length encoding is correct. # Example mask: # Row 0: [0, 1] # Row 1: [1, 1] # This is shape (1, 2, 2). # Flattened in Fortran order -> [0, 1, 1, 1]. # The RLE for [0,1,1,1] is [1, 3]. input_mask = torch.tensor([[[0, 1], [1, 1]]], dtype=torch.long) rle = _mask_to_rle_pytorch(input_mask) self.assertEqual(len(rle), 1) self.assertEqual(rle[0]["size"], [2, 2]) self.assertEqual(rle[0]["counts"], [1, 3]) # 1 zero, followed by 3 ones @require_vision @require_tf class TFSamProcessorTest(unittest.TestCase): def setUp(self): self.tmpdirname = tempfile.mkdtemp() image_processor = SamImageProcessor() processor = SamProcessor(image_processor) processor.save_pretrained(self.tmpdirname) def get_image_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).image_processor def tearDown(self): shutil.rmtree(self.tmpdirname) # This is to avoid repeating the skipping of the common tests def prepare_image_inputs(self): """This function prepares a list of PIL images.""" return prepare_image_inputs() def test_save_load_pretrained_additional_features(self): processor = SamProcessor(image_processor=self.get_image_processor()) processor.save_pretrained(self.tmpdirname) image_processor_add_kwargs = self.get_image_processor(do_normalize=False, padding_value=1.0) processor = SamProcessor.from_pretrained(self.tmpdirname, do_normalize=False, padding_value=1.0) self.assertEqual(processor.image_processor.to_json_string(), image_processor_add_kwargs.to_json_string()) self.assertIsInstance(processor.image_processor, SamImageProcessor) def test_image_processor(self): image_processor = self.get_image_processor() processor = SamProcessor(image_processor=image_processor) image_input = self.prepare_image_inputs() input_feat_extract = image_processor(image_input, return_tensors="np") input_processor = processor(images=image_input, return_tensors="np") input_feat_extract.pop("original_sizes") # pop original_sizes as it is popped in the processor input_feat_extract.pop("reshaped_input_sizes") # pop reshaped_input_sizes as it is popped in the processor for key in input_feat_extract.keys(): self.assertAlmostEqual(input_feat_extract[key].sum(), input_processor[key].sum(), delta=1e-2) @require_tf def test_post_process_masks(self): image_processor = self.get_image_processor() processor = SamProcessor(image_processor=image_processor) dummy_masks = [tf.ones((1, 3, 5, 5))] original_sizes = [[1764, 2646]] reshaped_input_size = [[683, 1024]] masks = processor.post_process_masks(dummy_masks, original_sizes, 
reshaped_input_size, return_tensors="tf") self.assertEqual(masks[0].shape, (1, 3, 1764, 2646)) masks = processor.post_process_masks( dummy_masks, tf.convert_to_tensor(original_sizes), tf.convert_to_tensor(reshaped_input_size), return_tensors="tf", ) self.assertEqual(masks[0].shape, (1, 3, 1764, 2646)) # should also work with np dummy_masks = [np.ones((1, 3, 5, 5))] masks = processor.post_process_masks( dummy_masks, np.array(original_sizes), np.array(reshaped_input_size), return_tensors="tf" ) self.assertEqual(masks[0].shape, (1, 3, 1764, 2646)) dummy_masks = [[1, 0], [0, 1]] with self.assertRaises(tf.errors.InvalidArgumentError): masks = processor.post_process_masks( dummy_masks, np.array(original_sizes), np.array(reshaped_input_size), return_tensors="tf" ) def test_rle_encoding(self): """ Test the run-length encoding function. """ # Test that a mask of all zeros returns a single run [height * width]. input_mask = tf.zeros((1, 2, 2), dtype=tf.int64) # shape: 1 x 2 x 2 rle = _mask_to_rle_tf(input_mask) self.assertEqual(len(rle), 1) self.assertEqual(rle[0]["size"], [2, 2]) # For a 2x2 all-zero mask, we expect a single run of length 4: self.assertEqual(rle[0]["counts"], [4]) # Test that a mask of all ones returns [0, height * width]. input_mask = tf.ones((1, 2, 2), dtype=tf.int64) # shape: 1 x 2 x 2 rle = _mask_to_rle_tf(input_mask) self.assertEqual(len(rle), 1) self.assertEqual(rle[0]["size"], [2, 2]) # For a 2x2 all-one mask, we expect two runs: [0, 4]. self.assertEqual(rle[0]["counts"], [0, 4]) # Test a mask with mixed 0s and 1s to ensure the run-length encoding is correct. # Example mask: # Row 0: [0, 1] # Row 1: [1, 1] # This is shape (1, 2, 2). # Flattened in Fortran order -> [0, 1, 1, 1]. # The RLE for [0,1,1,1] is [1, 3]. input_mask = tf.tensor([[[0, 1], [1, 1]]], dtype=tf.int64) rle = _mask_to_rle_tf(input_mask) self.assertEqual(len(rle), 1) self.assertEqual(rle[0]["size"], [2, 2]) self.assertEqual(rle[0]["counts"], [1, 3]) # 1 zero, followed by 3 ones @require_vision @require_torchvision class SamProcessorEquivalenceTest(unittest.TestCase): def setUp(self): self.tmpdirname = tempfile.mkdtemp() image_processor = SamImageProcessor() processor = SamProcessor(image_processor) processor.save_pretrained(self.tmpdirname) def get_image_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).image_processor def tearDown(self): shutil.rmtree(self.tmpdirname) # This is to avoid repeating the skipping of the common tests def prepare_image_inputs(self): """This function prepares a list of PIL images.""" return prepare_image_inputs() @is_pt_tf_cross_test def test_post_process_masks_equivalence(self): image_processor = self.get_image_processor() processor = SamProcessor(image_processor=image_processor) dummy_masks = np.random.randint(0, 2, size=(1, 3, 5, 5)).astype(np.float32) tf_dummy_masks = [tf.convert_to_tensor(dummy_masks)] pt_dummy_masks = [torch.tensor(dummy_masks)] original_sizes = [[1764, 2646]] reshaped_input_size = [[683, 1024]] tf_masks = processor.post_process_masks( tf_dummy_masks, original_sizes, reshaped_input_size, return_tensors="tf" ) pt_masks = processor.post_process_masks( pt_dummy_masks, original_sizes, reshaped_input_size, return_tensors="pt" ) self.assertTrue(np.all(tf_masks[0].numpy() == pt_masks[0].numpy())) @is_pt_tf_cross_test def test_image_processor_equivalence(self): image_processor = self.get_image_processor() processor = SamProcessor(image_processor=image_processor) image_input = self.prepare_image_inputs() pt_input_feat_extract = 
image_processor(image_input, return_tensors="pt")["pixel_values"].numpy() pt_input_processor = processor(images=image_input, return_tensors="pt")["pixel_values"].numpy() tf_input_feat_extract = image_processor(image_input, return_tensors="tf")["pixel_values"].numpy() tf_input_processor = processor(images=image_input, return_tensors="tf")["pixel_values"].numpy() self.assertTrue(np.allclose(pt_input_feat_extract, pt_input_processor)) self.assertTrue(np.allclose(pt_input_feat_extract, tf_input_feat_extract)) self.assertTrue(np.allclose(pt_input_feat_extract, tf_input_processor))
transformers/tests/models/sam/test_processor_sam.py/0
{ "file_path": "transformers/tests/models/sam/test_processor_sam.py", "repo_id": "transformers", "token_count": 6344 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from parameterized import parameterized from transformers.testing_utils import require_torch, require_vision from transformers.utils import is_torch_available, is_vision_available from ...test_image_processing_common import ( ImageProcessingTestMixin, prepare_image_inputs, ) if is_torch_available(): import numpy as np import torch from transformers.models.superglue.modeling_superglue import KeypointMatchingOutput if is_vision_available(): from transformers import SuperGlueImageProcessor def random_array(size): return np.random.randint(255, size=size) def random_tensor(size): return torch.rand(size) class SuperGlueImageProcessingTester: def __init__( self, parent, batch_size=6, num_channels=3, image_size=18, min_resolution=30, max_resolution=400, do_resize=True, size=None, do_grayscale=True, ): size = size if size is not None else {"height": 480, "width": 640} self.parent = parent self.batch_size = batch_size self.num_channels = num_channels self.image_size = image_size self.min_resolution = min_resolution self.max_resolution = max_resolution self.do_resize = do_resize self.size = size self.do_grayscale = do_grayscale def prepare_image_processor_dict(self): return { "do_resize": self.do_resize, "size": self.size, "do_grayscale": self.do_grayscale, } def expected_output_image_shape(self, images): return 2, self.num_channels, self.size["height"], self.size["width"] def prepare_image_inputs(self, equal_resolution=False, numpify=False, torchify=False, pairs=True, batch_size=None): batch_size = batch_size if batch_size is not None else self.batch_size image_inputs = prepare_image_inputs( batch_size=batch_size, num_channels=self.num_channels, min_resolution=self.min_resolution, max_resolution=self.max_resolution, equal_resolution=equal_resolution, numpify=numpify, torchify=torchify, ) if pairs: image_inputs = [image_inputs[i : i + 2] for i in range(0, len(image_inputs), 2)] return image_inputs def prepare_keypoint_matching_output(self, pixel_values): max_number_keypoints = 50 batch_size = len(pixel_values) mask = torch.zeros((batch_size, 2, max_number_keypoints), dtype=torch.int) keypoints = torch.zeros((batch_size, 2, max_number_keypoints, 2)) matches = torch.full((batch_size, 2, max_number_keypoints), -1, dtype=torch.int) scores = torch.zeros((batch_size, 2, max_number_keypoints)) for i in range(batch_size): random_number_keypoints0 = np.random.randint(10, max_number_keypoints) random_number_keypoints1 = np.random.randint(10, max_number_keypoints) random_number_matches = np.random.randint(5, min(random_number_keypoints0, random_number_keypoints1)) mask[i, 0, :random_number_keypoints0] = 1 mask[i, 1, :random_number_keypoints1] = 1 keypoints[i, 0, :random_number_keypoints0] = torch.rand((random_number_keypoints0, 2)) keypoints[i, 1, :random_number_keypoints1] = torch.rand((random_number_keypoints1, 2)) random_matches_indices0 = torch.randperm(random_number_keypoints1, 
dtype=torch.int)[:random_number_matches] random_matches_indices1 = torch.randperm(random_number_keypoints0, dtype=torch.int)[:random_number_matches] matches[i, 0, random_matches_indices1] = random_matches_indices0 matches[i, 1, random_matches_indices0] = random_matches_indices1 scores[i, 0, random_matches_indices1] = torch.rand((random_number_matches,)) scores[i, 1, random_matches_indices0] = torch.rand((random_number_matches,)) return KeypointMatchingOutput(mask=mask, keypoints=keypoints, matches=matches, matching_scores=scores) @require_torch @require_vision class SuperGlueImageProcessingTest(ImageProcessingTestMixin, unittest.TestCase): image_processing_class = SuperGlueImageProcessor if is_vision_available() else None def setUp(self) -> None: super().setUp() self.image_processor_tester = SuperGlueImageProcessingTester(self) @property def image_processor_dict(self): return self.image_processor_tester.prepare_image_processor_dict() def test_image_processing(self): image_processing = self.image_processing_class(**self.image_processor_dict) self.assertTrue(hasattr(image_processing, "do_resize")) self.assertTrue(hasattr(image_processing, "size")) self.assertTrue(hasattr(image_processing, "do_rescale")) self.assertTrue(hasattr(image_processing, "rescale_factor")) self.assertTrue(hasattr(image_processing, "do_grayscale")) def test_image_processor_from_dict_with_kwargs(self): image_processor = self.image_processing_class.from_dict(self.image_processor_dict) self.assertEqual(image_processor.size, {"height": 480, "width": 640}) image_processor = self.image_processing_class.from_dict( self.image_processor_dict, size={"height": 42, "width": 42} ) self.assertEqual(image_processor.size, {"height": 42, "width": 42}) @unittest.skip(reason="SuperPointImageProcessor is always supposed to return a grayscaled image") def test_call_numpy_4_channels(self): pass def test_number_and_format_of_images_in_input(self): image_processor = self.image_processing_class.from_dict(self.image_processor_dict) # Cases where the number of images and the format of lists in the input is correct image_input = self.image_processor_tester.prepare_image_inputs(pairs=False, batch_size=2) image_processed = image_processor.preprocess(image_input, return_tensors="pt") self.assertEqual((1, 2, 3, 480, 640), tuple(image_processed["pixel_values"].shape)) image_input = self.image_processor_tester.prepare_image_inputs(pairs=True, batch_size=2) image_processed = image_processor.preprocess(image_input, return_tensors="pt") self.assertEqual((1, 2, 3, 480, 640), tuple(image_processed["pixel_values"].shape)) image_input = self.image_processor_tester.prepare_image_inputs(pairs=True, batch_size=4) image_processed = image_processor.preprocess(image_input, return_tensors="pt") self.assertEqual((2, 2, 3, 480, 640), tuple(image_processed["pixel_values"].shape)) image_input = self.image_processor_tester.prepare_image_inputs(pairs=True, batch_size=6) image_processed = image_processor.preprocess(image_input, return_tensors="pt") self.assertEqual((3, 2, 3, 480, 640), tuple(image_processed["pixel_values"].shape)) # Cases where the number of images or the format of lists in the input is incorrect ## List of 4 images image_input = self.image_processor_tester.prepare_image_inputs(pairs=False, batch_size=4) with self.assertRaises(ValueError) as cm: image_processor.preprocess(image_input, return_tensors="pt") self.assertEqual(ValueError, cm.exception.__class__) ## List of 3 images image_input = self.image_processor_tester.prepare_image_inputs(pairs=False, 
batch_size=3) with self.assertRaises(ValueError) as cm: image_processor.preprocess(image_input, return_tensors="pt") self.assertEqual(ValueError, cm.exception.__class__) ## List of 2 pairs and 1 image image_input = self.image_processor_tester.prepare_image_inputs(pairs=True, batch_size=3) with self.assertRaises(ValueError) as cm: image_processor.preprocess(image_input, return_tensors="pt") self.assertEqual(ValueError, cm.exception.__class__) @parameterized.expand( [ ([random_array((3, 100, 200)), random_array((3, 100, 200))], (1, 2, 3, 480, 640)), ([[random_array((3, 100, 200)), random_array((3, 100, 200))]], (1, 2, 3, 480, 640)), ([random_tensor((3, 100, 200)), random_tensor((3, 100, 200))], (1, 2, 3, 480, 640)), ([random_tensor((3, 100, 200)), random_tensor((3, 100, 200))], (1, 2, 3, 480, 640)), ], ) def test_valid_image_shape_in_input(self, image_input, output): image_processor = self.image_processing_class.from_dict(self.image_processor_dict) image_processed = image_processor.preprocess(image_input, return_tensors="pt") self.assertEqual(output, tuple(image_processed["pixel_values"].shape)) @parameterized.expand( [ (random_array((3, 100, 200)),), ([random_array((3, 100, 200))],), (random_array((1, 3, 100, 200)),), ([[random_array((3, 100, 200))]],), ([[random_array((3, 100, 200))], [random_array((3, 100, 200))]],), ([random_array((1, 3, 100, 200)), random_array((1, 3, 100, 200))],), (random_array((1, 1, 3, 100, 200)),), ], ) def test_invalid_image_shape_in_input(self, image_input): image_processor = self.image_processing_class.from_dict(self.image_processor_dict) with self.assertRaises(ValueError) as cm: image_processor.preprocess(image_input, return_tensors="pt") self.assertEqual(ValueError, cm.exception.__class__) def test_input_images_properly_paired(self): image_processor = self.image_processing_class.from_dict(self.image_processor_dict) image_inputs = self.image_processor_tester.prepare_image_inputs() pre_processed_images = image_processor.preprocess(image_inputs, return_tensors="np") self.assertEqual(len(pre_processed_images["pixel_values"].shape), 5) self.assertEqual(pre_processed_images["pixel_values"].shape[1], 2) def test_input_not_paired_images_raises_error(self): image_processor = self.image_processing_class.from_dict(self.image_processor_dict) image_inputs = self.image_processor_tester.prepare_image_inputs(pairs=False) with self.assertRaises(ValueError): image_processor.preprocess(image_inputs[0]) def test_input_image_properly_converted_to_grayscale(self): image_processor = self.image_processing_class.from_dict(self.image_processor_dict) image_inputs = self.image_processor_tester.prepare_image_inputs() pre_processed_images = image_processor.preprocess(image_inputs) for image_pair in pre_processed_images["pixel_values"]: for image in image_pair: self.assertTrue(np.all(image[0, ...] == image[1, ...]) and np.all(image[1, ...] 
== image[2, ...])) def test_call_numpy(self): # Test overwritten because SuperGlueImageProcessor combines images by pair to feed it into SuperGlue # Initialize image_processing image_processing = self.image_processing_class(**self.image_processor_dict) # create random numpy tensors image_pairs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False, numpify=True) for image_pair in image_pairs: self.assertEqual(len(image_pair), 2) expected_batch_size = int(self.image_processor_tester.batch_size / 2) # Test with 2 images encoded_images = image_processing(image_pairs[0], return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_pairs[0]) self.assertEqual(tuple(encoded_images.shape), (1, *expected_output_image_shape)) # Test with list of pairs encoded_images = image_processing(image_pairs, return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_pairs) self.assertEqual(tuple(encoded_images.shape), (expected_batch_size, *expected_output_image_shape)) # Test without paired images image_pairs = self.image_processor_tester.prepare_image_inputs( equal_resolution=False, numpify=True, pairs=False ) with self.assertRaises(ValueError): image_processing(image_pairs, return_tensors="pt").pixel_values def test_call_pil(self): # Test overwritten because SuperGlueImageProcessor combines images by pair to feed it into SuperGlue # Initialize image_processing image_processing = self.image_processing_class(**self.image_processor_dict) # create random PIL images image_pairs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False) for image_pair in image_pairs: self.assertEqual(len(image_pair), 2) expected_batch_size = int(self.image_processor_tester.batch_size / 2) # Test with 2 images encoded_images = image_processing(image_pairs[0], return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_pairs[0]) self.assertEqual(tuple(encoded_images.shape), (1, *expected_output_image_shape)) # Test with list of pairs encoded_images = image_processing(image_pairs, return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_pairs) self.assertEqual(tuple(encoded_images.shape), (expected_batch_size, *expected_output_image_shape)) # Test without paired images image_pairs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False, pairs=False) with self.assertRaises(ValueError): image_processing(image_pairs, return_tensors="pt").pixel_values def test_call_pytorch(self): # Test overwritten because SuperGlueImageProcessor combines images by pair to feed it into SuperGlue # Initialize image_processing image_processing = self.image_processing_class(**self.image_processor_dict) # create random PyTorch tensors image_pairs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False, torchify=True) for image_pair in image_pairs: self.assertEqual(len(image_pair), 2) expected_batch_size = int(self.image_processor_tester.batch_size / 2) # Test with 2 images encoded_images = image_processing(image_pairs[0], return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_pairs[0]) self.assertEqual(tuple(encoded_images.shape), (1, *expected_output_image_shape)) # Test with list of pairs encoded_images = image_processing(image_pairs, 
return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_pairs) self.assertEqual(tuple(encoded_images.shape), (expected_batch_size, *expected_output_image_shape)) # Test without paired images image_pairs = self.image_processor_tester.prepare_image_inputs( equal_resolution=False, torchify=True, pairs=False ) with self.assertRaises(ValueError): image_processing(image_pairs, return_tensors="pt").pixel_values def test_image_processor_with_list_of_two_images(self): image_processing = self.image_processing_class(**self.image_processor_dict) image_pairs = self.image_processor_tester.prepare_image_inputs( equal_resolution=False, numpify=True, batch_size=2, pairs=False ) self.assertEqual(len(image_pairs), 2) self.assertTrue(isinstance(image_pairs[0], np.ndarray)) self.assertTrue(isinstance(image_pairs[1], np.ndarray)) expected_batch_size = 1 encoded_images = image_processing(image_pairs, return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_pairs[0]) self.assertEqual(tuple(encoded_images.shape), (expected_batch_size, *expected_output_image_shape)) @require_torch def test_post_processing_keypoint_matching(self): image_processor = self.image_processing_class.from_dict(self.image_processor_dict) image_inputs = self.image_processor_tester.prepare_image_inputs() pre_processed_images = image_processor.preprocess(image_inputs, return_tensors="pt") outputs = self.image_processor_tester.prepare_keypoint_matching_output(**pre_processed_images) def check_post_processed_output(post_processed_output, image_pair_size): for post_processed_output, (image_size0, image_size1) in zip(post_processed_output, image_pair_size): self.assertTrue("keypoints0" in post_processed_output) self.assertTrue("keypoints1" in post_processed_output) self.assertTrue("matching_scores" in post_processed_output) keypoints0 = post_processed_output["keypoints0"] keypoints1 = post_processed_output["keypoints1"] all_below_image_size0 = torch.all(keypoints0[:, 0] <= image_size0[1]) and torch.all( keypoints0[:, 1] <= image_size0[0] ) all_below_image_size1 = torch.all(keypoints1[:, 0] <= image_size1[1]) and torch.all( keypoints1[:, 1] <= image_size1[0] ) all_above_zero0 = torch.all(keypoints0[:, 0] >= 0) and torch.all(keypoints0[:, 1] >= 0) all_above_zero1 = torch.all(keypoints0[:, 0] >= 0) and torch.all(keypoints0[:, 1] >= 0) self.assertTrue(all_below_image_size0) self.assertTrue(all_below_image_size1) self.assertTrue(all_above_zero0) self.assertTrue(all_above_zero1) all_scores_different_from_minus_one = torch.all(post_processed_output["matching_scores"] != -1) self.assertTrue(all_scores_different_from_minus_one) tuple_image_sizes = [ ((image_pair[0].size[0], image_pair[0].size[1]), (image_pair[1].size[0], image_pair[1].size[1])) for image_pair in image_inputs ] tuple_post_processed_outputs = image_processor.post_process_keypoint_matching(outputs, tuple_image_sizes) check_post_processed_output(tuple_post_processed_outputs, tuple_image_sizes) tensor_image_sizes = torch.tensor( [(image_pair[0].size, image_pair[1].size) for image_pair in image_inputs] ).flip(2) tensor_post_processed_outputs = image_processor.post_process_keypoint_matching(outputs, tensor_image_sizes) check_post_processed_output(tensor_post_processed_outputs, tensor_image_sizes)
transformers/tests/models/superglue/test_image_processing_superglue.py/0
{ "file_path": "transformers/tests/models/superglue/test_image_processing_superglue.py", "repo_id": "transformers", "token_count": 8009 }
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import inspect import unittest from huggingface_hub import hf_hub_download from transformers import UdopConfig, is_torch_available, is_vision_available from transformers.testing_utils import ( require_sentencepiece, require_tokenizers, require_torch, require_vision, slow, torch_device, ) from transformers.utils import cached_property from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch import torch.nn.functional as F from transformers import UdopEncoderModel, UdopForConditionalGeneration, UdopModel, UdopProcessor if is_vision_available(): from PIL import Image class UdopModelTester: def __init__( self, parent, vocab_size=99, batch_size=13, encoder_seq_length=7, decoder_seq_length=9, # For common tests is_training=True, use_attention_mask=True, use_labels=True, hidden_size=32, num_hidden_layers=5, num_attention_heads=4, d_ff=37, relative_attention_num_buckets=32, dropout_rate=0.1, initializer_factor=0.002, eos_token_id=1, pad_token_id=0, scope=None, decoder_layers=None, range_bbox=1000, decoder_start_token_id=0, ): self.parent = parent self.batch_size = batch_size self.encoder_seq_length = encoder_seq_length self.decoder_seq_length = decoder_seq_length # For common tests self.seq_length = self.decoder_seq_length self.is_training = is_training self.use_attention_mask = use_attention_mask self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.d_ff = d_ff self.relative_attention_num_buckets = relative_attention_num_buckets self.dropout_rate = dropout_rate self.initializer_factor = initializer_factor self.eos_token_id = eos_token_id self.pad_token_id = pad_token_id self.scope = None self.decoder_layers = decoder_layers self.range_bbox = range_bbox self.decoder_start_token_id = decoder_start_token_id def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.encoder_seq_length], self.vocab_size) bbox = ids_tensor([self.batch_size, self.encoder_seq_length, 4], self.range_bbox).float() # Ensure that bbox is legal for i in range(bbox.shape[0]): for j in range(bbox.shape[1]): if bbox[i, j, 3] < bbox[i, j, 1]: t = bbox[i, j, 3] bbox[i, j, 3] = bbox[i, j, 1] bbox[i, j, 1] = t if bbox[i, j, 2] < bbox[i, j, 0]: t = bbox[i, j, 2] bbox[i, j, 2] = bbox[i, j, 0] bbox[i, j, 0] = t decoder_input_ids = ids_tensor([self.batch_size, self.decoder_seq_length], self.vocab_size) attention_mask = None decoder_attention_mask = None if self.use_attention_mask: attention_mask = ids_tensor([self.batch_size, self.encoder_seq_length], vocab_size=2) decoder_attention_mask = ids_tensor([self.batch_size, self.decoder_seq_length], vocab_size=2) lm_labels = None if self.use_labels: lm_labels = 
ids_tensor([self.batch_size, self.decoder_seq_length], self.vocab_size) config = self.get_config() return ( config, input_ids, bbox, decoder_input_ids, attention_mask, decoder_attention_mask, lm_labels, ) def get_config(self): return UdopConfig( vocab_size=self.vocab_size, d_model=self.hidden_size, d_ff=self.d_ff, d_kv=self.hidden_size // self.num_attention_heads, num_layers=self.num_hidden_layers, num_decoder_layers=self.decoder_layers, num_heads=self.num_attention_heads, relative_attention_num_buckets=self.relative_attention_num_buckets, dropout_rate=self.dropout_rate, initializer_factor=self.initializer_factor, eos_token_id=self.eos_token_id, bos_token_id=self.pad_token_id, pad_token_id=self.pad_token_id, decoder_start_token_id=self.decoder_start_token_id, ) def create_and_check_model( self, config, input_ids, bbox, decoder_input_ids, attention_mask, decoder_attention_mask, lm_labels, ): model = UdopModel(config=config) model.to(torch_device) model.eval() result = model( input_ids=input_ids, bbox=bbox, decoder_input_ids=decoder_input_ids, attention_mask=attention_mask, decoder_attention_mask=decoder_attention_mask, ) result = model(input_ids=input_ids, bbox=bbox, decoder_input_ids=decoder_input_ids) decoder_output = result.last_hidden_state decoder_past = result.past_key_values encoder_output = result.encoder_last_hidden_state self.parent.assertEqual(encoder_output.size(), (self.batch_size, self.encoder_seq_length, self.hidden_size)) self.parent.assertEqual(decoder_output.size(), (self.batch_size, self.decoder_seq_length, self.hidden_size)) # There should be `num_layers` key value embeddings stored in decoder_past self.parent.assertEqual(len(decoder_past), config.num_layers) # There should be a self attn key, a self attn value, a cross attn key and a cross attn value stored in each decoder_past tuple self.parent.assertEqual(len(decoder_past[0]), 4) def create_and_check_with_lm_head( self, config, input_ids, bbox, decoder_input_ids, attention_mask, decoder_attention_mask, lm_labels, ): model = UdopForConditionalGeneration(config=config).to(torch_device).eval() outputs = model( input_ids=input_ids, bbox=bbox, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, labels=lm_labels, ) self.parent.assertEqual(len(outputs), 4) self.parent.assertEqual(outputs["logits"].size(), (self.batch_size, self.decoder_seq_length, self.vocab_size)) self.parent.assertEqual(outputs["loss"].size(), ()) def create_and_check_generate_with_past_key_values( self, config, input_ids, bbox, decoder_input_ids, attention_mask, decoder_attention_mask, lm_labels, ): model = UdopForConditionalGeneration(config=config).to(torch_device).eval() torch.manual_seed(0) output_without_past_cache = model.generate( input_ids[:1], bbox=bbox[:1, :, :], num_beams=2, max_length=5, do_sample=True, use_cache=False ) torch.manual_seed(0) output_with_past_cache = model.generate( input_ids[:1], bbox=bbox[:1, :, :], num_beams=2, max_length=5, do_sample=True ) self.parent.assertTrue(torch.all(output_with_past_cache == output_without_past_cache)) def create_and_check_model_fp16_forward( self, config, input_ids, bbox, decoder_input_ids, attention_mask, decoder_attention_mask, lm_labels, ): model = UdopForConditionalGeneration(config=config).to(torch_device).half().eval() output = model(input_ids, bbox=bbox, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids).logits self.parent.assertFalse(torch.isnan(output).any().item()) def prepare_config_and_inputs_for_common(self): config_and_inputs = 
self.prepare_config_and_inputs() ( config, input_ids, bbox, decoder_input_ids, attention_mask, decoder_attention_mask, lm_labels, ) = config_and_inputs inputs_dict = { "input_ids": input_ids, "attention_mask": attention_mask, "bbox": bbox, "decoder_input_ids": decoder_input_ids, "decoder_attention_mask": decoder_attention_mask, "use_cache": False, } return config, inputs_dict @require_torch class UdopModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = ( ( UdopModel, UdopForConditionalGeneration, ) if is_torch_available() else () ) all_generative_model_classes = (UdopForConditionalGeneration,) if is_torch_available() else () pipeline_model_mapping = ( {"feature-extraction": UdopModel, "image-text-to-text": UdopForConditionalGeneration} if is_torch_available() else {} ) fx_compatible = False test_pruning = False test_torchscript = False test_head_masking = False test_resize_embeddings = True test_model_parallel = False is_encoder_decoder = True test_cpu_offload = False # The small UDOP model needs higher percentages for CPU/MP tests model_split_percents = [0.8, 0.9] def setUp(self): self.model_tester = UdopModelTester(self) self.config_tester = ConfigTester(self, config_class=UdopConfig, d_model=37) def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): inputs_dict = copy.deepcopy(inputs_dict) if model_class.__name__ == "UdopForConditionalGeneration": if return_labels: inputs_dict["labels"] = torch.zeros( (self.model_tester.batch_size, self.model_tester.seq_length), dtype=torch.long, device=torch_device ) return inputs_dict def test_config(self): self.config_tester.run_common_tests() def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_with_lm_head(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_with_lm_head(*config_and_inputs) def test_generate_with_past_key_values(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_generate_with_past_key_values(*config_and_inputs) @unittest.skipIf(torch_device == "cpu", "Cant do half precision") def test_model_fp16_forward(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model_fp16_forward(*config_and_inputs) @unittest.skip(reason="Gradient checkpointing is not supported by this model") def test_training_gradient_checkpointing(self): pass @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant(self): pass @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant_false(self): pass def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = sorted([*signature.parameters.keys()]) expected_arg_names = [ "attention_mask", "bbox", "cache_position", "cross_attn_head_mask", "decoder_attention_mask", "decoder_head_mask", "decoder_input_ids", "decoder_inputs_embeds", 
"encoder_outputs", "head_mask", "input_ids", "inputs_embeds", ] if model_class in self.all_generative_model_classes: expected_arg_names.append( "labels", ) expected_arg_names = sorted(expected_arg_names) self.assertListEqual(sorted(arg_names[: len(expected_arg_names)]), expected_arg_names) # overwrite because T5 doesn't accept position ids as input and expects `decoder_input_ids` def test_custom_4d_attention_mask(self): for model_class in self.all_generative_model_classes: config, input_dict = self.model_tester.prepare_config_and_inputs_for_common() model = model_class(config).to(device=torch_device, dtype=torch.float32) ( input_ids, _, input_ids_shared_prefix, mask_shared_prefix, _, ) = self._get_custom_4d_mask_test_data() logits = model.forward( decoder_input_ids=input_ids, input_ids=input_dict["input_ids"][:3], bbox=input_dict["bbox"][:3], ).logits # logits.shape == torch.Size([3, 4, ...]) logits_shared_prefix = model( input_ids=input_dict["input_ids"][:1], bbox=input_dict["bbox"][:1], decoder_input_ids=input_ids_shared_prefix, decoder_attention_mask=mask_shared_prefix, )[0] # logits_shared_prefix.shape == torch.Size([1, 6, ...]) out_last_tokens = logits[:, -1, :] # last tokens in each batch line out_shared_prefix_last_tokens = logits_shared_prefix[0, -3:, :] # last three tokens # comparing softmax-normalized logits: normalized_0 = F.softmax(out_last_tokens) normalized_1 = F.softmax(out_shared_prefix_last_tokens) torch.testing.assert_close(normalized_0, normalized_1, rtol=1e-3, atol=1e-4) @unittest.skip( "Not currently compatible. Fails with - NotImplementedError: Cannot copy out of meta tensor; no data!" ) def test_save_load_low_cpu_mem_usage(self): pass @slow def test_model_from_pretrained(self): model_name = "microsoft/udop-large" model = UdopForConditionalGeneration.from_pretrained(model_name) self.assertIsNotNone(model) class UdopEncoderOnlyModelTester: def __init__( self, parent, vocab_size=99, batch_size=13, seq_length=7, # For common tests is_training=False, use_attention_mask=True, hidden_size=32, num_hidden_layers=5, decoder_layers=2, num_attention_heads=4, d_ff=37, relative_attention_num_buckets=32, dropout_rate=0.1, initializer_factor=0.002, eos_token_id=1, pad_token_id=0, scope=None, range_bbox=1000, ): self.parent = parent self.batch_size = batch_size # For common tests self.seq_length = seq_length self.is_training = is_training self.use_attention_mask = use_attention_mask self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.decoder_layers = decoder_layers self.num_attention_heads = num_attention_heads self.d_ff = d_ff self.relative_attention_num_buckets = relative_attention_num_buckets self.dropout_rate = dropout_rate self.initializer_factor = initializer_factor self.eos_token_id = eos_token_id self.pad_token_id = pad_token_id self.scope = None self.range_bbox = range_bbox def get_config(self): return UdopConfig( vocab_size=self.vocab_size, d_model=self.hidden_size, d_ff=self.d_ff, d_kv=self.hidden_size // self.num_attention_heads, num_layers=self.num_hidden_layers, num_decoder_layers=self.decoder_layers, num_heads=self.num_attention_heads, relative_attention_num_buckets=self.relative_attention_num_buckets, dropout_rate=self.dropout_rate, initializer_factor=self.initializer_factor, eos_token_id=self.eos_token_id, bos_token_id=self.pad_token_id, pad_token_id=self.pad_token_id, is_encoder_decoder=False, ) def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) 
bbox = ids_tensor([self.batch_size, self.seq_length, 4], self.range_bbox).float() # Ensure that bbox is legal for i in range(bbox.shape[0]): for j in range(bbox.shape[1]): if bbox[i, j, 3] < bbox[i, j, 1]: t = bbox[i, j, 3] bbox[i, j, 3] = bbox[i, j, 1] bbox[i, j, 1] = t if bbox[i, j, 2] < bbox[i, j, 0]: t = bbox[i, j, 2] bbox[i, j, 2] = bbox[i, j, 0] bbox[i, j, 0] = t attention_mask = None if self.use_attention_mask: attention_mask = ids_tensor([self.batch_size, self.seq_length], vocab_size=2) config = self.get_config() return ( config, input_ids, bbox, attention_mask, ) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, bbox, attention_mask, ) = config_and_inputs inputs_dict = { "input_ids": input_ids, "bbox": bbox, "attention_mask": attention_mask, } return config, inputs_dict def create_and_check_model( self, config, input_ids, bbox, attention_mask, ): model = UdopEncoderModel(config=config) model.to(torch_device) model.eval() result = model( input_ids=input_ids, bbox=bbox, attention_mask=attention_mask, ) encoder_output = result.last_hidden_state self.parent.assertEqual(encoder_output.size(), (self.batch_size, self.seq_length, self.hidden_size)) def create_and_check_model_fp16_forward( self, config, input_ids, bbox, attention_mask, ): model = UdopEncoderModel(config=config).to(torch_device).half().eval() output = model(input_ids, bbox=bbox, attention_mask=attention_mask)["last_hidden_state"] self.parent.assertFalse(torch.isnan(output).any().item()) class UdopEncoderOnlyModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (UdopEncoderModel,) if is_torch_available() else () test_pruning = False test_torchscript = False test_head_masking = False test_resize_embeddings = False test_model_parallel = False all_parallelizable_model_classes = (UdopEncoderModel,) if is_torch_available() else () def setUp(self): self.model_tester = UdopEncoderOnlyModelTester(self) self.config_tester = ConfigTester(self, config_class=UdopConfig, d_model=37) def test_config(self): self.config_tester.run_common_tests() def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) # overwrite because T5 doesn't accept position ids as input and expects `decoder_input_ids` def test_custom_4d_attention_mask(self): for model_class in self.all_generative_model_classes: config, input_dict = self.model_tester.prepare_config_and_inputs_for_common() model = model_class(config).to(device=torch_device, dtype=torch.float32) ( input_ids, _, input_ids_shared_prefix, mask_shared_prefix, _, ) = self._get_custom_4d_mask_test_data() logits = model.forward( decoder_input_ids=input_ids, input_ids=input_dict["input_ids"][:3], ).logits # logits.shape == torch.Size([3, 4, ...]) logits_shared_prefix = model( input_ids=input_dict["input_ids"][:1], decoder_input_ids=input_ids_shared_prefix, decoder_attention_mask=mask_shared_prefix, )[0] # logits_shared_prefix.shape == torch.Size([1, 6, ...]) out_last_tokens = logits[:, -1, :] # last tokens in each batch line out_shared_prefix_last_tokens = logits_shared_prefix[0, -3:, :] # last three tokens # comparing softmax-normalized logits: normalized_0 = F.softmax(out_last_tokens) normalized_1 = F.softmax(out_shared_prefix_last_tokens) torch.testing.assert_close(normalized_0, normalized_1, rtol=1e-3, atol=1e-4) @unittest.skip( "Not currently compatible. Fails with - NotImplementedError: Cannot copy out of meta tensor; no data!" 
) def test_save_load_low_cpu_mem_usage(self): pass @require_torch @require_sentencepiece @require_tokenizers @require_vision @slow class UdopModelIntegrationTests(unittest.TestCase): @cached_property def image(self): filepath = hf_hub_download( repo_id="hf-internal-testing/fixtures_docvqa", filename="document_2.png", repo_type="dataset" ) image = Image.open(filepath).convert("RGB") return image @cached_property def processor(self): return UdopProcessor.from_pretrained("microsoft/udop-large") @cached_property def model(self): return UdopForConditionalGeneration.from_pretrained("microsoft/udop-large").to(torch_device) def test_conditional_generation(self): processor = self.processor model = self.model prompt = "Question answering. In which year is the report made?" encoding = processor(images=self.image, text=prompt, return_tensors="pt").to(torch_device) predicted_ids = model.generate(**encoding) predicted_text = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0] self.assertEqual(predicted_text, "2013")
transformers/tests/models/udop/test_modeling_udop.py/0
{ "file_path": "transformers/tests/models/udop/test_modeling_udop.py", "repo_id": "transformers", "token_count": 11402 }
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch VideoLlava model.""" import unittest import numpy as np import requests from huggingface_hub import hf_hub_download from parameterized import parameterized from transformers import ( VideoLlavaConfig, VideoLlavaForConditionalGeneration, VideoLlavaProcessor, is_torch_available, is_vision_available, ) from transformers.testing_utils import ( cleanup, require_bitsandbytes, require_torch, run_test_using_subprocess, slow, torch_device, ) from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor if is_torch_available(): import torch if is_vision_available(): from PIL import Image class VideoLlavaVisionText2TextModelTester: def __init__( self, parent, ignore_index=-100, image_token_index=0, video_token_index=1, projector_hidden_act="gelu", seq_length=3, num_frames=2, vision_feature_select_strategy="default", vision_feature_layer=-1, text_config={ "model_type": "llama", "seq_length": 13, "is_training": True, "use_input_mask": True, "use_token_type_ids": False, "use_labels": True, "vocab_size": 99, "hidden_size": 32, "num_hidden_layers": 2, "num_attention_heads": 4, "intermediate_size": 37, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "attention_probs_dropout_prob": 0.1, "max_position_embeddings": 2048, # we need it high because videos are 8 frames "type_vocab_size": 16, "type_sequence_label_size": 2, "initializer_range": 0.02, "num_labels": 3, "num_choices": 4, "pad_token_id": 3, }, is_training=True, vision_config={ "model_type": "clip_vision_model", "batch_size": 12, "image_size": 8, "patch_size": 6, "num_channels": 3, "is_training": True, "hidden_size": 32, "projection_dim": 32, "num_hidden_layers": 2, "num_attention_heads": 4, "intermediate_size": 37, "dropout": 0.1, "attention_dropout": 0.1, "initializer_range": 0.02, }, ): self.parent = parent self.ignore_index = ignore_index self.image_token_index = image_token_index self.video_token_index = video_token_index self.projector_hidden_act = projector_hidden_act self.vision_feature_select_strategy = vision_feature_select_strategy self.vision_feature_layer = vision_feature_layer self.text_config = text_config self.vision_config = vision_config self.num_frames = num_frames self.pad_token_id = text_config["pad_token_id"] self.num_hidden_layers = text_config["num_hidden_layers"] self.vocab_size = text_config["vocab_size"] self.hidden_size = text_config["hidden_size"] self.num_attention_heads = text_config["num_attention_heads"] self.is_training = is_training self.batch_size = 5 self.num_channels = 3 self.image_size = 224 self.num_image_tokens = (vision_config["image_size"] // vision_config["patch_size"]) ** 2 self.num_video_tokens = (self.num_image_tokens + 1) * self.num_frames self.seq_length = seq_length + self.num_image_tokens + self.num_video_tokens def 
get_config(self): return VideoLlavaConfig( text_config=self.text_config, vision_config=self.vision_config, ignore_index=self.ignore_index, image_token_index=self.image_token_index, video_token_index=self.video_token_index, projector_hidden_act=self.projector_hidden_act, vision_feature_select_strategy=self.vision_feature_select_strategy, vision_feature_layer=self.vision_feature_layer, image_seq_length=self.num_image_tokens, video_seq_length=self.num_video_tokens, ) def prepare_config_and_inputs(self): pixel_values_videos = floats_tensor( [ self.batch_size, self.num_frames, self.vision_config["num_channels"], self.vision_config["image_size"], self.vision_config["image_size"], ] ) pixel_values_images = floats_tensor( [ self.batch_size, self.vision_config["num_channels"], self.vision_config["image_size"], self.vision_config["image_size"], ] ) config = self.get_config() return config, pixel_values_images, pixel_values_videos def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values_images, pixel_values_videos = config_and_inputs input_ids = ids_tensor([self.batch_size, self.seq_length], config.text_config.vocab_size - 1) + 1 attention_mask = input_ids.ne(1).to(torch_device) input_ids[(input_ids == config.image_token_index) | (input_ids == config.video_token_index)] = ( self.pad_token_id ) input_ids[:, : self.num_image_tokens] = config.image_token_index input_ids[:, self.num_image_tokens : self.num_video_tokens + self.num_image_tokens] = config.video_token_index inputs_dict = { "pixel_values_videos": pixel_values_videos, "pixel_values_images": pixel_values_images, "input_ids": input_ids, "attention_mask": attention_mask, } return config, inputs_dict @require_torch class VideoLlavaForConditionalGenerationModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase): """ Model tester for `VideoLlavaForConditionalGeneration`. 
""" all_model_classes = (VideoLlavaForConditionalGeneration,) if is_torch_available() else () all_generative_model_classes = (VideoLlavaForConditionalGeneration,) if is_torch_available() else () fx_compatible = False test_pruning = False test_resize_embeddings = True test_head_masking = False _is_composite = True def setUp(self): self.model_tester = VideoLlavaVisionText2TextModelTester(self) common_properties = ["image_token_index", "video_token_index", "vision_feature_layer", "image_seq_length"] self.config_tester = ConfigTester( self, config_class=VideoLlavaConfig, has_text_modality=False, common_properties=common_properties ) def test_config(self): self.config_tester.run_common_tests() @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing(self): pass @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant(self): pass @unittest.skip( reason="This architecure seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant_false(self): pass @unittest.skip(reason="Pass because video-LLava requires `attention_mask is not None`") def test_sdpa_can_compile_dynamic(self): pass @unittest.skip(reason="Pass because video-LLava requires `attention_mask is not None`") def test_sdpa_can_dispatch_on_flash(self): pass @unittest.skip("FlashAttention only support fp16 and bf16 data type") def test_flash_attn_2_fp32_ln(self): pass @unittest.skip( "VLMs need lots of steps to prepare images/mask correctly to get pad-free inputs. 
Can be tested as part of LLM test" ) def test_flash_attention_2_padding_matches_padding_free_with_position_ids(self): pass @run_test_using_subprocess def test_mixed_input(self): config, inputs = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config).to(torch_device).eval() # test that the forward does not fail with torch.no_grad(): _ = model(**inputs) # if we remove some images from inputs leaving only one # image number mismatch error should raise inputs["pixel_values_images"] = inputs["pixel_values_images"][:1] with self.assertRaises(ValueError): _ = model(**inputs) def test_video_only_input(self): config, inputs = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config).to(torch_device).eval() # replace image token id with dummy id # Error will be raised as num-image-tokens and num-of-image-embeds mismatch inputs["input_ids"][:, : self.model_tester.num_image_tokens] = 2 with self.assertRaises(ValueError): _ = model(**inputs) inputs["pixel_values_images"] = None _ = model(**inputs) def test_image_only_input(self): config, inputs = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config).to(torch_device).eval() # set dummy id, which is not video token id # Error will be raised as num-video-tokens and num-of-video-embeds mismatch inputs["input_ids"][ :, self.model_tester.num_image_tokens : self.model_tester.num_image_tokens + self.model_tester.num_video_tokens, ] = 2 with self.assertRaises(ValueError): _ = model(**inputs) inputs["pixel_values_videos"] = None _ = model(**inputs) def test_batching_equivalence(self): def recursive_check(batched_object, single_row_object, model_name, key): if isinstance(batched_object, (list, tuple)): for batched_object_value, single_row_object_value in zip(batched_object, single_row_object): recursive_check(batched_object_value, single_row_object_value, model_name, key) # do not compare returned loss (0-dim tensor) / codebook ids (int) / caching objects elif batched_object is None or not isinstance(batched_object, torch.Tensor): return elif batched_object.dim() == 0: return else: batched_row = batched_object[:1] self.assertFalse( torch.isnan(batched_row).any(), f"Batched output has `nan` in {model_name} for key={key}" ) self.assertFalse( torch.isinf(batched_row).any(), f"Batched output has `inf` in {model_name} for key={key}" ) self.assertFalse( torch.isnan(single_row_object).any(), f"Single row output has `nan` in {model_name} for key={key}" ) self.assertFalse( torch.isinf(single_row_object).any(), f"Single row output has `inf` in {model_name} for key={key}" ) self.assertTrue( (torch.max(torch.abs(batched_row - single_row_object))) <= 1e-03, msg=( f"Batched and Single row outputs are not equal in {model_name} for key={key}. " f"Difference={torch.max(torch.abs(batched_row - single_row_object))}." 
), ) config, batched_input = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: config.output_hidden_states = True model_name = model_class.__name__ batched_input_prepared = self._prepare_for_class(batched_input, model_class) model = model_class(config).to(torch_device).eval() single_row_input = {} for key, value in batched_input_prepared.items(): single_row_input[key] = value[:1] with torch.no_grad(): model_batched_output = model(**batched_input_prepared) model_row_output = model(**single_row_input) for key in model_batched_output: # we can't test videos as their output shapes are linked to number of frames # and we don't have to as it is a CLIP model and can be tested from `ClipModelTester` class if key == "video_hidden_states": continue recursive_check(model_batched_output[key], model_row_output[key], model_name, key) # overwrite inputs_embeds tests because we need to delete "pixel values" for LVLMs def test_inputs_embeds(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() inputs = self._prepare_for_class(inputs_dict, model_class) input_ids = inputs["input_ids"] del inputs["input_ids"] del inputs["pixel_values_images"] del inputs["pixel_values_videos"] wte = model.get_input_embeddings() inputs["inputs_embeds"] = wte(input_ids) with torch.no_grad(): model(**inputs) # overwrite inputs_embeds tests because we need to delete "pixel values" for LVLMs # while some other models require pixel_values to be present def test_inputs_embeds_matches_input_ids(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() inputs = self._prepare_for_class(inputs_dict, model_class) input_ids = inputs["input_ids"] del inputs["input_ids"] del inputs["pixel_values_images"] del inputs["pixel_values_videos"] inputs_embeds = model.get_input_embeddings()(input_ids) with torch.no_grad(): out_ids = model(input_ids=input_ids, **inputs)[0] out_embeds = model(inputs_embeds=inputs_embeds, **inputs)[0] torch.testing.assert_close(out_embeds, out_ids) def test_mismatching_num_image_tokens(self): """ Tests that VLMs through an error with explicit message saying what is wrong when number of images don't match number of image tokens in the text. Also we need to test multi-image cases when one prompr has multiple image tokens. """ config, input_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config).to(torch_device) _ = model(**input_dict) # successfull forward with no modifications # remove one image but leave the image token in text input_dict["pixel_values_images"] = input_dict["pixel_values_images"][-1:, ...] 
with self.assertRaises(ValueError): _ = model(**input_dict) # simulate multi-image case by concatenating inputs where each has exactly one image/image-token input_ids = input_dict["input_ids"][:1] pixel_values = input_dict["pixel_values_images"][:1] input_ids = torch.cat([input_ids, input_ids], dim=0) # one image and two image tokens raise an error with self.assertRaises(ValueError): _ = model(input_ids=input_ids, pixel_values_images=pixel_values) # two images and two image tokens don't raise an error pixel_values = torch.cat([pixel_values, pixel_values], dim=0) _ = model(input_ids=input_ids, pixel_values_images=pixel_values) @parameterized.expand( [ (-1,), ([-1],), ([-1, -2],), ], ) def test_vision_feature_layers(self, vision_feature_layer): """ Test that we can use either one vision feature layer, or a list of vision feature layers. """ config, input_dict = self.model_tester.prepare_config_and_inputs_for_common() config.vision_feature_layer = vision_feature_layer num_feature_layers = 1 if isinstance(vision_feature_layer, int) else len(vision_feature_layer) hidden_size = config.vision_config.hidden_size expected_features = hidden_size * num_feature_layers for model_class in self.all_model_classes: model = model_class(config).to(torch_device) # We should have the right number of input features, # and should be able to run a forward pass without exploding assert model.multi_modal_projector.linear_1.in_features == expected_features model(**input_dict) @require_torch class VideoLlavaForConditionalGenerationIntegrationTest(unittest.TestCase): def setUp(self): self.processor = VideoLlavaProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf") def tearDown(self): cleanup(torch_device, gc_collect=True) @slow @require_bitsandbytes def test_small_model_integration_test(self): # Let' s make sure we test the preprocessing to replace what is used model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf", load_in_4bit=True) prompt = "USER: <video>\nWhy is this video funny? ASSISTANT:" video_file = hf_hub_download( repo_id="raushan-testing-hf/videos-test", filename="video_demo.npy", repo_type="dataset" ) video_file = np.load(video_file) inputs = self.processor(prompt, videos=video_file, return_tensors="pt").to(torch_device) EXPECTED_INPUT_IDS = torch.tensor([1, 3148, 1001, 29901, 29871, 13, 11008, 338, 445, 4863, 2090, 1460, 29973, 319, 1799, 9047, 13566, 29901], device=torch_device) # fmt: skip non_video_inputs = inputs["input_ids"][inputs["input_ids"] != 32001] self.assertTrue(torch.equal(non_video_inputs, EXPECTED_INPUT_IDS)) output = model.generate(**inputs, do_sample=False, max_new_tokens=20) EXPECTED_DECODED_TEXT = "USER: \nWhy is this video funny? ASSISTANT: The video is funny because it shows a baby sitting on a bed and reading a book, which" # fmt: skip self.assertEqual( self.processor.decode(output[0], skip_special_tokens=True), EXPECTED_DECODED_TEXT, ) @slow @require_bitsandbytes def test_small_model_integration_test_mixed_inputs(self): model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf", load_in_4bit=True) prompts = [ "USER: <image>\nWhat are the cats in the image doing? ASSISTANT:", "USER: <video>\nWhy is this video funny? 
ASSISTANT:", ] video_file = hf_hub_download( repo_id="raushan-testing-hf/videos-test", filename="video_demo.npy", repo_type="dataset" ) video_file = np.load(video_file) url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = self.processor(prompts, images=[image], videos=[video_file], padding=True, return_tensors="pt").to( torch_device ) output = model.generate(**inputs, do_sample=False, max_new_tokens=20) EXPECTED_DECODED_TEXT = [ 'USER: \nWhat are the cats in the image doing? ASSISTANT: The cats in the image are sleeping or resting on a couch.', 'USER: \nWhy is this video funny? ASSISTANT: The video is funny because it shows a baby sitting on a bed and reading a book, which' ] # fmt: skip self.assertEqual( self.processor.batch_decode(output, skip_special_tokens=True), EXPECTED_DECODED_TEXT, ) @slow @require_bitsandbytes def test_small_model_integration_test_llama(self): model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf", load_in_4bit=True) processor = VideoLlavaProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf") prompt = "USER: <video>\nDescribe the video in details. ASSISTANT:" video_file = hf_hub_download( repo_id="raushan-testing-hf/videos-test", filename="video_demo.npy", repo_type="dataset" ) video_file = np.load(video_file) inputs = self.processor(prompt, videos=video_file, return_tensors="pt").to(torch_device, torch.float16) output = model.generate(**inputs, max_new_tokens=900, do_sample=False) EXPECTED_DECODED_TEXT = "USER: \nDescribe the video in details. ASSISTANT: The video features a young child sitting on a bed, holding a book and reading it. " \ "The child appears to be enjoying the book, as they are fully engaged in the activity. The bed is located in a bedroom, and there is a chair nearby. The " \ "child is wearing a blue shirt and glasses, which suggests that they might have a visual impairment. The room is well-lit, and there is a clock on the wall, " \ "indicating the time. The child's focus on the book indicates that they are interested in the content and are actively participating in the reading process. " \ "Overall, the video captures a heartwarming moment of a child engaging in a simple yet essential activity, which is reading." # fmt: skip self.assertEqual( processor.decode(output[0], skip_special_tokens=True), EXPECTED_DECODED_TEXT, ) @slow @require_bitsandbytes def test_small_model_integration_test_llama_batched(self): model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf", load_in_4bit=True) processor = VideoLlavaProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf") processor.tokenizer.padding_side = "left" prompts = [ "USER: <video>\nWhat is the baby doing? ASSISTANT:", "USER: <video>\nWho is sitting next to the woman? ASSISTANT:", ] video_1 = np.load( hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="video_demo.npy", repo_type="dataset") ) video_2 = np.load( hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="video_demo_2.npy", repo_type="dataset") ) inputs = processor(prompts, videos=[video_1, video_2], return_tensors="pt", padding=True).to(torch_device) output = model.generate(**inputs, max_new_tokens=20) EXPECTED_DECODED_TEXT = [ 'USER: \nWhat is the baby doing? ASSISTANT: The baby is sitting on a bed and reading a book.', 'USER: \nWho is sitting next to the woman? ASSISTANT: A small dog is sitting next to the woman.' 
] # fmt: skip self.assertEqual(processor.batch_decode(output, skip_special_tokens=True), EXPECTED_DECODED_TEXT)
transformers/tests/models/video_llava/test_modeling_video_llava.py/0
{ "file_path": "transformers/tests/models/video_llava/test_modeling_video_llava.py", "repo_id": "transformers", "token_count": 10804 }
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch VisionTextDualEncoder model.""" from __future__ import annotations import collections import tempfile import unittest import numpy as np from transformers.testing_utils import require_tf, require_vision, slow from transformers.utils import is_tf_available, is_vision_available from ...test_modeling_tf_common import floats_tensor, ids_tensor, random_attention_mask from ..bert.test_modeling_tf_bert import TFBertModelTester from ..clip.test_modeling_tf_clip import TFCLIPVisionModelTester from ..deit.test_modeling_tf_deit import TFDeiTModelTester from ..roberta.test_modeling_tf_roberta import TFRobertaModelTester from ..vit.test_modeling_tf_vit import TFViTModelTester if is_tf_available(): from transformers import ( TFBertModel, TFCLIPVisionModel, TFDeiTModel, TFRobertaModel, TFVisionTextDualEncoderModel, TFViTModel, VisionTextDualEncoderConfig, ) if is_vision_available(): from PIL import Image from transformers import VisionTextDualEncoderProcessor # Inspired by # https://github.com/rwightman/pytorch-image-models/blob/b9bd960a032c75ca6b808ddeed76bee5f3ed4972/timm/models/layers/helpers.py # From PyTorch internals def to_2tuple(x): if isinstance(x, collections.abc.Iterable): return x return (x, x) @require_tf class TFVisionTextDualEncoderMixin: def get_vision_text_model(self, config, text_config): pass def prepare_config_and_inputs(self): pass def get_pretrained_model_and_inputs(self): pass def check_model_from_pretrained_configs( self, text_config, input_ids, attention_mask, vision_config, pixel_values=None, **kwargs ): config = VisionTextDualEncoderConfig.from_vision_text_configs(vision_config, text_config) model = TFVisionTextDualEncoderModel(config) output = model(input_ids=input_ids, pixel_values=pixel_values, attention_mask=attention_mask) self.assertEqual(output["text_embeds"].shape, (input_ids.shape[0], config.projection_dim)) self.assertEqual(output["image_embeds"].shape, (pixel_values.shape[0], config.projection_dim)) def check_vision_text_dual_encoder_model( self, text_config, input_ids, attention_mask, vision_config, pixel_values=None, **kwargs ): vision_model, text_model = self.get_vision_text_model(vision_config, text_config) model = TFVisionTextDualEncoderModel(vision_model=vision_model, text_model=text_model) output = model(input_ids=input_ids, pixel_values=pixel_values, attention_mask=attention_mask) self.assertEqual(output["text_embeds"].shape, (input_ids.shape[0], model.config.projection_dim)) self.assertEqual(output["image_embeds"].shape, (pixel_values.shape[0], model.config.projection_dim)) def check_vision_text_dual_encoder_from_pretrained( self, text_config, input_ids, attention_mask, vision_config, pixel_values=None, **kwargs ): vision_model, text_model = self.get_vision_text_model(vision_config, text_config) kwargs = {"vision_model": vision_model, "text_model": text_model} model = 
TFVisionTextDualEncoderModel.from_vision_text_pretrained(**kwargs) output = model(input_ids=input_ids, pixel_values=pixel_values, attention_mask=attention_mask) self.assertEqual(output["text_embeds"].shape, (input_ids.shape[0], model.config.projection_dim)) self.assertEqual(output["image_embeds"].shape, (pixel_values.shape[0], model.config.projection_dim)) def check_save_load(self, text_config, input_ids, attention_mask, vision_config, pixel_values=None, **kwargs): vision_model, text_model = self.get_vision_text_model(vision_config, text_config) model = TFVisionTextDualEncoderModel(vision_model=vision_model, text_model=text_model) output = model(input_ids=input_ids, pixel_values=pixel_values, attention_mask=attention_mask) out_1 = output[0].numpy() with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model = TFVisionTextDualEncoderModel.from_pretrained(tmpdirname) after_output = model(input_ids=input_ids, pixel_values=pixel_values, attention_mask=attention_mask) out_2 = after_output[0].numpy() max_diff = np.amax(np.abs(out_2 - out_1)) self.assertLessEqual(max_diff, 1e-5) def check_vision_text_output_attention( self, text_config, input_ids, attention_mask, vision_config, pixel_values=None, **kwargs ): vision_model, text_model = self.get_vision_text_model(vision_config, text_config) model = TFVisionTextDualEncoderModel(vision_model=vision_model, text_model=text_model) output = model( input_ids=input_ids, pixel_values=pixel_values, attention_mask=attention_mask, output_attentions=True ) vision_attentions = output.vision_model_output.attentions self.assertEqual(len(vision_attentions), vision_config.num_hidden_layers) # in ViT, the seq_len equals the number of patches + 1 (we add 1 for the [CLS] token) image_size = to_2tuple(vision_model.config.image_size) patch_size = to_2tuple(vision_model.config.patch_size) num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0]) seq_len = num_patches + 1 self.assertEqual(vision_attentions[0].shape[-3:], (vision_config.num_attention_heads, seq_len, seq_len)) text_attentions = output.text_model_output.attentions self.assertEqual(len(text_attentions), text_config.num_hidden_layers) self.assertEqual( text_attentions[0].shape[-3:], (text_config.num_attention_heads, input_ids.shape[-1], input_ids.shape[-1]), ) def assert_almost_equals(self, a: np.ndarray, b: np.ndarray, tol: float): diff = np.abs((a - b)).max() self.assertLessEqual(diff, tol, f"Difference between torch and flax is {diff} (>= {tol}).") def test_vision_text_dual_encoder_model(self): inputs_dict = self.prepare_config_and_inputs() self.check_vision_text_dual_encoder_model(**inputs_dict) def test_model_from_pretrained_configs(self): inputs_dict = self.prepare_config_and_inputs() self.check_model_from_pretrained_configs(**inputs_dict) def test_vision_text_dual_encoder_from_pretrained(self): inputs_dict = self.prepare_config_and_inputs() self.check_vision_text_dual_encoder_from_pretrained(**inputs_dict) def test_save_load(self): inputs_dict = self.prepare_config_and_inputs() self.check_save_load(**inputs_dict) def test_vision_text_output_attention(self): inputs_dict = self.prepare_config_and_inputs() self.check_vision_text_output_attention(**inputs_dict) @slow def test_real_model_save_load_from_pretrained(self): model_2, inputs = self.get_pretrained_model_and_inputs() outputs = model_2(**inputs) out_2 = outputs[0].numpy() with tempfile.TemporaryDirectory() as tmp_dirname: model_2.save_pretrained(tmp_dirname) model_1 = 
TFVisionTextDualEncoderModel.from_pretrained(tmp_dirname) after_outputs = model_1(**inputs) out_1 = after_outputs[0].numpy() max_diff = np.amax(np.abs(out_1 - out_2)) self.assertLessEqual(max_diff, 1e-5) @require_tf class TFViTBertModelTest(TFVisionTextDualEncoderMixin, unittest.TestCase): def get_pretrained_model_and_inputs(self): model = TFVisionTextDualEncoderModel.from_vision_text_pretrained( "hf-internal-testing/tiny-random-vit", "hf-internal-testing/tiny-random-bert" ) batch_size = 13 pixel_values = floats_tensor( [ batch_size, model.vision_model.config.num_channels, model.vision_model.config.image_size, model.vision_model.config.image_size, ] ) input_ids = ids_tensor([batch_size, 4], model.text_model.config.vocab_size) attention_mask = random_attention_mask([batch_size, 4]) inputs = {"pixel_values": pixel_values, "input_ids": input_ids, "attention_mask": attention_mask} return model, inputs def get_vision_text_model(self, vision_config, text_config): vision_model = TFViTModel(vision_config, name="vision_model") text_model = TFBertModel(text_config, name="text_model") return vision_model, text_model def prepare_config_and_inputs(self): vit_model_tester = TFViTModelTester(self) bert_model_tester = TFBertModelTester(self) vision_config_and_inputs = vit_model_tester.prepare_config_and_inputs() text_config_and_inputs = bert_model_tester.prepare_config_and_inputs() vision_config, pixel_values, _ = vision_config_and_inputs ( text_config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, ) = text_config_and_inputs return { "text_config": text_config, "vision_config": vision_config, "pixel_values": pixel_values, "attention_mask": input_mask, "input_ids": input_ids, "text_token_type_ids": token_type_ids, "text_sequence_labels": sequence_labels, "text_token_labels": token_labels, "text_choice_labels": choice_labels, } @require_tf class TFDeiTRobertaModelTest(TFVisionTextDualEncoderMixin, unittest.TestCase): def get_pretrained_model_and_inputs(self): # DeiT repo doesn't have TF weights, but we don't actually use the weights at all so let's # just reinitialize it. 
model = TFVisionTextDualEncoderModel.from_vision_text_pretrained( "Rocketknight1/tiny-random-deit-tf", "hf-internal-testing/tiny-random-roberta" ) batch_size = 13 pixel_values = floats_tensor( [ batch_size, model.vision_model.config.num_channels, model.vision_model.config.image_size, model.vision_model.config.image_size, ] ) input_ids = ids_tensor([batch_size, 4], model.text_model.config.vocab_size) attention_mask = random_attention_mask([batch_size, 4]) inputs = {"pixel_values": pixel_values, "input_ids": input_ids, "attention_mask": attention_mask} return model, inputs def check_vision_text_output_attention( self, text_config, input_ids, attention_mask, vision_config, pixel_values=None, **kwargs ): vision_model, text_model = self.get_vision_text_model(vision_config, text_config) model = TFVisionTextDualEncoderModel(vision_model=vision_model, text_model=text_model) output = model( input_ids=input_ids, pixel_values=pixel_values, attention_mask=attention_mask, output_attentions=True ) vision_attentions = output.vision_model_output.attentions self.assertEqual(len(vision_attentions), vision_config.num_hidden_layers) # in DEiT, the seq_len equals the number of patches + 2 (we add 2 for the [CLS] and distillation tokens) image_size = to_2tuple(vision_model.config.image_size) patch_size = to_2tuple(vision_model.config.patch_size) num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0]) seq_len = num_patches + 2 self.assertEqual(vision_attentions[0].shape[-3:], (vision_config.num_attention_heads, seq_len, seq_len)) text_attentions = output.text_model_output.attentions self.assertEqual(len(text_attentions), text_config.num_hidden_layers) self.assertEqual( text_attentions[0].shape[-3:], (text_config.num_attention_heads, input_ids.shape[-1], input_ids.shape[-1]), ) def get_vision_text_model(self, vision_config, text_config): vision_model = TFDeiTModel(vision_config, name="vision_model") text_model = TFRobertaModel(text_config, name="text_model") return vision_model, text_model def prepare_config_and_inputs(self): vit_model_tester = TFDeiTModelTester(self) bert_model_tester = TFRobertaModelTester(self) vision_config_and_inputs = vit_model_tester.prepare_config_and_inputs() text_config_and_inputs = bert_model_tester.prepare_config_and_inputs() vision_config, pixel_values, _ = vision_config_and_inputs ( text_config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, ) = text_config_and_inputs return { "text_config": text_config, "vision_config": vision_config, "pixel_values": pixel_values, "attention_mask": input_mask, "input_ids": input_ids, "text_token_type_ids": token_type_ids, "text_sequence_labels": sequence_labels, "text_token_labels": token_labels, "text_choice_labels": choice_labels, } @require_tf class TFCLIPVisionBertModelTest(TFVisionTextDualEncoderMixin, unittest.TestCase): def get_pretrained_model_and_inputs(self): model = TFVisionTextDualEncoderModel.from_vision_text_pretrained( "Rocketknight1/tiny-random-clip-tf", "hf-internal-testing/tiny-random-bert" ) batch_size = 13 pixel_values = floats_tensor( [ batch_size, model.vision_model.config.num_channels, model.vision_model.config.image_size, model.vision_model.config.image_size, ] ) input_ids = ids_tensor([batch_size, 4], model.text_model.config.vocab_size) attention_mask = random_attention_mask([batch_size, 4]) inputs = {"pixel_values": pixel_values, "input_ids": input_ids, "attention_mask": attention_mask} return model, inputs def get_vision_text_model(self, vision_config, 
text_config): vision_model = TFCLIPVisionModel(vision_config, name="vision_model") text_model = TFBertModel(text_config, name="text_model") return vision_model, text_model def prepare_config_and_inputs(self): clip_model_tester = TFCLIPVisionModelTester(self) bert_model_tester = TFBertModelTester(self) vision_config_and_inputs = clip_model_tester.prepare_config_and_inputs() text_config_and_inputs = bert_model_tester.prepare_config_and_inputs() vision_config, pixel_values = vision_config_and_inputs ( text_config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, ) = text_config_and_inputs return { "text_config": text_config, "vision_config": vision_config, "pixel_values": pixel_values, "attention_mask": input_mask, "input_ids": input_ids, "text_token_type_ids": token_type_ids, "text_sequence_labels": sequence_labels, "text_token_labels": token_labels, "text_choice_labels": choice_labels, } @require_vision @require_tf class TFVisionTextDualEncoderIntegrationTest(unittest.TestCase): @slow def test_inference(self): model = TFVisionTextDualEncoderModel.from_pretrained( "clip-italian/clip-italian", logit_scale_init_value=1.0, from_pt=True ) processor = VisionTextDualEncoderProcessor.from_pretrained("clip-italian/clip-italian") image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") inputs = processor( text=["una foto di un gatto", "una foto di un cane"], images=image, padding=True, return_tensors="np" ) outputs = model(**inputs) # verify the logits self.assertEqual(outputs.logits_per_image.shape, (inputs.pixel_values.shape[0], inputs.input_ids.shape[0])) self.assertEqual( outputs.logits_per_text.shape, (inputs.input_ids.shape[0], inputs.pixel_values.shape[0]), ) expected_logits = np.array([[1.2284727, 0.3104122]]) self.assertTrue(np.allclose(outputs.logits_per_image.numpy(), expected_logits, atol=1e-3))
transformers/tests/models/vision_text_dual_encoder/test_modeling_tf_vision_text_dual_encoder.py/0
{ "file_path": "transformers/tests/models/vision_text_dual_encoder/test_modeling_tf_vision_text_dual_encoder.py", "repo_id": "transformers", "token_count": 7464 }
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch ViTDet model.""" import unittest from transformers import VitDetConfig from transformers.testing_utils import is_flaky, require_torch, torch_device from transformers.utils import is_torch_available from ...test_backbone_common import BackboneTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from torch import nn from transformers import VitDetBackbone, VitDetModel class VitDetModelTester: def __init__( self, parent, batch_size=13, image_size=30, patch_size=2, num_channels=3, is_training=True, use_labels=True, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, type_sequence_label_size=10, initializer_range=0.02, scope=None, ): self.parent = parent self.batch_size = batch_size self.image_size = image_size self.patch_size = patch_size self.num_channels = num_channels self.is_training = is_training self.use_labels = use_labels self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.type_sequence_label_size = type_sequence_label_size self.initializer_range = initializer_range self.scope = scope self.num_patches_one_direction = self.image_size // self.patch_size self.seq_length = (self.image_size // self.patch_size) ** 2 def prepare_config_and_inputs(self): pixel_values = floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size]) labels = None if self.use_labels: labels = ids_tensor([self.batch_size], self.type_sequence_label_size) config = self.get_config() return config, pixel_values, labels def get_config(self): return VitDetConfig( image_size=self.image_size, pretrain_image_size=self.image_size, patch_size=self.patch_size, num_channels=self.num_channels, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, hidden_act=self.hidden_act, hidden_dropout_prob=self.hidden_dropout_prob, attention_probs_dropout_prob=self.attention_probs_dropout_prob, is_decoder=False, initializer_range=self.initializer_range, ) def create_and_check_model(self, config, pixel_values, labels): model = VitDetModel(config=config) model.to(torch_device) model.eval() result = model(pixel_values) self.parent.assertEqual( result.last_hidden_state.shape, (self.batch_size, self.hidden_size, self.num_patches_one_direction, self.num_patches_one_direction), ) def create_and_check_backbone(self, config, pixel_values, 
labels): model = VitDetBackbone(config=config) model.to(torch_device) model.eval() result = model(pixel_values) # verify hidden states self.parent.assertEqual(len(result.feature_maps), len(config.out_features)) self.parent.assertListEqual( list(result.feature_maps[0].shape), [self.batch_size, self.hidden_size, self.num_patches_one_direction, self.num_patches_one_direction], ) # verify channels self.parent.assertEqual(len(model.channels), len(config.out_features)) self.parent.assertListEqual(model.channels, [config.hidden_size]) # verify backbone works with out_features=None config.out_features = None model = VitDetBackbone(config=config) model.to(torch_device) model.eval() result = model(pixel_values) # verify feature maps self.parent.assertEqual(len(result.feature_maps), 1) self.parent.assertListEqual( list(result.feature_maps[0].shape), [self.batch_size, self.hidden_size, self.num_patches_one_direction, self.num_patches_one_direction], ) # verify channels self.parent.assertEqual(len(model.channels), 1) self.parent.assertListEqual(model.channels, [config.hidden_size]) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values, labels = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_torch class VitDetModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): """ Here we also overwrite some of the tests of test_modeling_common.py, as VitDet does not use input_ids, inputs_embeds, attention_mask and seq_length. """ all_model_classes = (VitDetModel, VitDetBackbone) if is_torch_available() else () pipeline_model_mapping = {"feature-extraction": VitDetModel} if is_torch_available() else {} fx_compatible = False test_pruning = False test_resize_embeddings = False test_head_masking = False def setUp(self): self.model_tester = VitDetModelTester(self) self.config_tester = ConfigTester(self, config_class=VitDetConfig, has_text_modality=False, hidden_size=37) @is_flaky(max_attempts=3, description="`torch.nn.init.trunc_normal_` is flaky.") def test_initialization(self): super().test_initialization() # TODO: Fix me (once this model gets more usage) @unittest.skip(reason="Does not work on the tiny model as we keep hitting edge cases.") def test_cpu_offload(self): super().test_cpu_offload() # TODO: Fix me (once this model gets more usage) @unittest.skip(reason="Does not work on the tiny model as we keep hitting edge cases.") def test_disk_offload_bin(self): super().test_disk_offload() @unittest.skip(reason="Does not work on the tiny model as we keep hitting edge cases.") def test_disk_offload_safetensors(self): super().test_disk_offload() # TODO: Fix me (once this model gets more usage) @unittest.skip(reason="Does not work on the tiny model as we keep hitting edge cases.") def test_model_parallelism(self): super().test_model_parallelism() def test_config(self): self.config_tester.run_common_tests() @unittest.skip(reason="VitDet does not use inputs_embeds") def test_inputs_embeds(self): pass def test_model_get_set_embeddings(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) self.assertIsInstance(model.get_input_embeddings(), (nn.Module)) x = model.get_output_embeddings() self.assertTrue(x is None or isinstance(x, nn.Linear)) def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def 
test_backbone(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_backbone(*config_and_inputs) def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class): model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) hidden_states = outputs.hidden_states expected_num_stages = self.model_tester.num_hidden_layers self.assertEqual(len(hidden_states), expected_num_stages + 1) # VitDet's feature maps are of shape (batch_size, num_channels, height, width) self.assertListEqual( list(hidden_states[0].shape[-2:]), [ self.model_tester.num_patches_one_direction, self.model_tester.num_patches_one_direction, ], ) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, model_class) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, model_class) # overwrite since VitDet only supports retraining gradients of hidden states def test_retain_grad_hidden_states_attentions(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.output_hidden_states = True config.output_attentions = self.has_attentions # no need to test all models as different heads yield the same functionality model_class = self.all_model_classes[0] model = model_class(config) model.to(torch_device) inputs = self._prepare_for_class(inputs_dict, model_class) outputs = model(**inputs) output = outputs[0] # Encoder-/Decoder-only models hidden_states = outputs.hidden_states[0] hidden_states.retain_grad() output.flatten()[0].backward(retain_graph=True) self.assertIsNotNone(hidden_states.grad) @unittest.skip(reason="VitDet does not support feedforward chunking") def test_feed_forward_chunking(self): pass @unittest.skip(reason="VitDet does not have standalone checkpoints since it used as backbone in other models") def test_model_from_pretrained(self): pass @require_torch class VitDetBackboneTest(unittest.TestCase, BackboneTesterMixin): all_model_classes = (VitDetBackbone,) if is_torch_available() else () config_class = VitDetConfig has_attentions = False def setUp(self): self.model_tester = VitDetModelTester(self)
transformers/tests/models/vitdet/test_modeling_vitdet.py/0
{ "file_path": "transformers/tests/models/vitdet/test_modeling_vitdet.py", "repo_id": "transformers", "token_count": 4697 }
# coding=utf-8 # Copyright 2021 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import itertools import random import unittest import numpy as np from transformers import Wav2Vec2Config, Wav2Vec2FeatureExtractor from transformers.testing_utils import require_torch, slow from ...test_sequence_feature_extraction_common import SequenceFeatureExtractionTestMixin global_rng = random.Random() # Copied from tests.models.whisper.test_feature_extraction_whisper.floats_list def floats_list(shape, scale=1.0, rng=None, name=None): """Creates a random float32 tensor""" if rng is None: rng = global_rng values = [] for batch_idx in range(shape[0]): values.append([]) for _ in range(shape[1]): values[-1].append(rng.random() * scale) return values class Wav2Vec2FeatureExtractionTester: def __init__( self, parent, batch_size=7, min_seq_length=400, max_seq_length=2000, feature_size=1, padding_value=0.0, sampling_rate=16000, return_attention_mask=True, do_normalize=True, ): self.parent = parent self.batch_size = batch_size self.min_seq_length = min_seq_length self.max_seq_length = max_seq_length self.seq_length_diff = (self.max_seq_length - self.min_seq_length) // (self.batch_size - 1) self.feature_size = feature_size self.padding_value = padding_value self.sampling_rate = sampling_rate self.return_attention_mask = return_attention_mask self.do_normalize = do_normalize def prepare_feat_extract_dict(self): return { "feature_size": self.feature_size, "padding_value": self.padding_value, "sampling_rate": self.sampling_rate, "return_attention_mask": self.return_attention_mask, "do_normalize": self.do_normalize, } def prepare_inputs_for_common(self, equal_length=False, numpify=False): def _flatten(list_of_lists): return list(itertools.chain(*list_of_lists)) if equal_length: speech_inputs = floats_list((self.batch_size, self.max_seq_length)) else: # make sure that inputs increase in size speech_inputs = [ _flatten(floats_list((x, self.feature_size))) for x in range(self.min_seq_length, self.max_seq_length, self.seq_length_diff) ] if numpify: speech_inputs = [np.asarray(x) for x in speech_inputs] return speech_inputs class Wav2Vec2FeatureExtractionTest(SequenceFeatureExtractionTestMixin, unittest.TestCase): feature_extraction_class = Wav2Vec2FeatureExtractor def setUp(self): self.feat_extract_tester = Wav2Vec2FeatureExtractionTester(self) def _check_zero_mean_unit_variance(self, input_vector): self.assertTrue(np.all(np.mean(input_vector, axis=0) < 1e-3)) self.assertTrue(np.all(np.abs(np.var(input_vector, axis=0) - 1) < 1e-3)) def test_call(self): # Tests that all call wrap to encode_plus and batch_encode_plus feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict()) # create three inputs of length 800, 1000, and 1200 speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)] np_speech_inputs = [np.asarray(speech_input) for speech_input in speech_inputs] # Test not batched input encoded_sequences_1 = feat_extract(speech_inputs[0], 
return_tensors="np").input_values encoded_sequences_2 = feat_extract(np_speech_inputs[0], return_tensors="np").input_values self.assertTrue(np.allclose(encoded_sequences_1, encoded_sequences_2, atol=1e-3)) # Test batched encoded_sequences_1 = feat_extract(speech_inputs, return_tensors="np").input_values encoded_sequences_2 = feat_extract(np_speech_inputs, return_tensors="np").input_values for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2): self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3)) # Test 2-D numpy arrays are batched. speech_inputs = [floats_list((1, x))[0] for x in (800, 800, 800)] np_speech_inputs = np.asarray(speech_inputs) encoded_sequences_1 = feat_extract(speech_inputs, return_tensors="np").input_values encoded_sequences_2 = feat_extract(np_speech_inputs, return_tensors="np").input_values for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2): self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3)) def test_zero_mean_unit_variance_normalization_np(self): feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict()) speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)] paddings = ["longest", "max_length", "do_not_pad"] max_lengths = [None, 1600, None] for max_length, padding in zip(max_lengths, paddings): processed = feat_extract(speech_inputs, padding=padding, max_length=max_length, return_tensors="np") input_values = processed.input_values self._check_zero_mean_unit_variance(input_values[0][:800]) self.assertTrue(input_values[0][800:].sum() < 1e-6) self._check_zero_mean_unit_variance(input_values[1][:1000]) self.assertTrue(input_values[0][1000:].sum() < 1e-6) self._check_zero_mean_unit_variance(input_values[2][:1200]) def test_zero_mean_unit_variance_normalization(self): feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict()) lengths = range(800, 1400, 200) speech_inputs = [floats_list((1, x))[0] for x in lengths] paddings = ["longest", "max_length", "do_not_pad"] max_lengths = [None, 1600, None] for max_length, padding in zip(max_lengths, paddings): processed = feat_extract(speech_inputs, max_length=max_length, padding=padding) input_values = processed.input_values self._check_zero_mean_unit_variance(input_values[0][:800]) self._check_zero_mean_unit_variance(input_values[1][:1000]) self._check_zero_mean_unit_variance(input_values[2][:1200]) def test_zero_mean_unit_variance_normalization_trunc_np_max_length(self): feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict()) speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)] processed = feat_extract( speech_inputs, truncation=True, max_length=1000, padding="max_length", return_tensors="np" ) input_values = processed.input_values self._check_zero_mean_unit_variance(input_values[0, :800]) self._check_zero_mean_unit_variance(input_values[1]) self._check_zero_mean_unit_variance(input_values[2]) def test_zero_mean_unit_variance_normalization_trunc_np_longest(self): feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict()) speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)] processed = feat_extract( speech_inputs, truncation=True, max_length=1000, padding="longest", return_tensors="np" ) input_values = processed.input_values self._check_zero_mean_unit_variance(input_values[0, :800]) self._check_zero_mean_unit_variance(input_values[1, :1000]) 
self._check_zero_mean_unit_variance(input_values[2]) # make sure that if max_length < longest -> then pad to max_length self.assertTrue(input_values.shape == (3, 1000)) speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)] processed = feat_extract( speech_inputs, truncation=True, max_length=2000, padding="longest", return_tensors="np" ) input_values = processed.input_values self._check_zero_mean_unit_variance(input_values[0, :800]) self._check_zero_mean_unit_variance(input_values[1, :1000]) self._check_zero_mean_unit_variance(input_values[2]) # make sure that if max_length > longest -> then pad to longest self.assertTrue(input_values.shape == (3, 1200)) @require_torch def test_double_precision_pad(self): import torch feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict()) np_speech_inputs = np.random.rand(100).astype(np.float64) py_speech_inputs = np_speech_inputs.tolist() for inputs in [py_speech_inputs, np_speech_inputs]: np_processed = feature_extractor.pad([{"input_values": inputs}], return_tensors="np") self.assertTrue(np_processed.input_values.dtype == np.float32) pt_processed = feature_extractor.pad([{"input_values": inputs}], return_tensors="pt") self.assertTrue(pt_processed.input_values.dtype == torch.float32) @slow @require_torch def test_pretrained_checkpoints_are_set_correctly(self): # this test makes sure that models that are using # group norm don't have their feature extractor return the # attention_mask model_id = "facebook/wav2vec2-base-960h" config = Wav2Vec2Config.from_pretrained(model_id) feat_extract = Wav2Vec2FeatureExtractor.from_pretrained(model_id) # only "layer" feature extraction norm should make use of # attention_mask self.assertEqual(feat_extract.return_attention_mask, config.feat_extract_norm == "layer")
transformers/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py/0
{ "file_path": "transformers/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py", "repo_id": "transformers", "token_count": 4348 }
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch WavLM model.""" import math import unittest import pytest from datasets import load_dataset from transformers import WavLMConfig, is_torch_available from transformers.testing_utils import require_torch, require_torchaudio, slow, torch_device from ...test_configuration_common import ConfigTester from ...test_modeling_common import ( ModelTesterMixin, _config_zero_init, floats_tensor, ids_tensor, random_attention_mask, ) from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import ( Wav2Vec2FeatureExtractor, WavLMForAudioFrameClassification, WavLMForCTC, WavLMForSequenceClassification, WavLMForXVector, WavLMModel, ) class WavLMModelTester: def __init__( self, parent, batch_size=13, seq_length=1024, # speech is longer is_training=False, hidden_size=16, feat_extract_norm="group", feat_extract_dropout=0.0, feat_extract_activation="gelu", conv_dim=(32, 32, 32), conv_stride=(4, 4, 4), conv_kernel=(8, 8, 8), conv_bias=False, num_conv_pos_embeddings=16, num_conv_pos_embedding_groups=2, num_hidden_layers=2, num_attention_heads=2, hidden_dropout_prob=0.1, # this is most likely not correctly set yet intermediate_size=20, layer_norm_eps=1e-5, hidden_act="gelu", initializer_range=0.02, vocab_size=32, do_stable_layer_norm=False, tdnn_dim=(32, 32), tdnn_kernel=(3, 3), tdnn_dilation=(1, 1), xvector_output_dim=32, scope=None, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.hidden_size = hidden_size self.feat_extract_norm = feat_extract_norm self.feat_extract_dropout = feat_extract_dropout self.feat_extract_activation = feat_extract_activation self.conv_dim = conv_dim self.conv_stride = conv_stride self.conv_kernel = conv_kernel self.conv_bias = conv_bias self.num_conv_pos_embeddings = num_conv_pos_embeddings self.num_conv_pos_embedding_groups = num_conv_pos_embedding_groups self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.hidden_dropout_prob = hidden_dropout_prob self.intermediate_size = intermediate_size self.layer_norm_eps = layer_norm_eps self.hidden_act = hidden_act self.initializer_range = initializer_range self.vocab_size = vocab_size self.do_stable_layer_norm = do_stable_layer_norm self.tdnn_dim = tdnn_dim self.tdnn_kernel = tdnn_kernel self.tdnn_dilation = tdnn_dilation self.xvector_output_dim = xvector_output_dim self.scope = scope output_seq_length = self.seq_length for kernel, stride in zip(self.conv_kernel, self.conv_stride): output_seq_length = (output_seq_length - (kernel - 1)) / stride self.output_seq_length = int(math.ceil(output_seq_length)) self.encoder_seq_length = self.output_seq_length def prepare_config_and_inputs(self): input_values = floats_tensor([self.batch_size, self.seq_length], scale=1.0) attention_mask = random_attention_mask([self.batch_size, self.seq_length]) 
config = self.get_config() return config, input_values, attention_mask def get_config(self): return WavLMConfig( hidden_size=self.hidden_size, feat_extract_norm=self.feat_extract_norm, feat_extract_dropout=self.feat_extract_dropout, feat_extract_activation=self.feat_extract_activation, conv_dim=self.conv_dim, conv_stride=self.conv_stride, conv_kernel=self.conv_kernel, conv_bias=self.conv_bias, num_conv_pos_embeddings=self.num_conv_pos_embeddings, num_conv_pos_embedding_groups=self.num_conv_pos_embedding_groups, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, hidden_dropout_prob=self.hidden_dropout_prob, intermediate_size=self.intermediate_size, layer_norm_eps=self.layer_norm_eps, hidden_act=self.hidden_act, initializer_range=self.initializer_range, vocab_size=self.vocab_size, tdnn_dim=self.tdnn_dim, tdnn_kernel=self.tdnn_kernel, tdnn_dilation=self.tdnn_dilation, xvector_output_dim=self.xvector_output_dim, ) def create_and_check_model(self, config, input_values, attention_mask): model = WavLMModel(config=config) model.to(torch_device) model.eval() result = model(input_values, attention_mask=attention_mask) self.parent.assertEqual( result.last_hidden_state.shape, (self.batch_size, self.output_seq_length, self.hidden_size) ) def create_and_check_batch_inference(self, config, input_values, *args): # test does not pass for models making use of `group_norm` # check: https://github.com/pytorch/fairseq/issues/3227 model = WavLMModel(config=config) model.to(torch_device) model.eval() input_values = input_values[:3] attention_mask = torch.ones(input_values.shape, device=torch_device, dtype=torch.bool) input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] # pad input for i in range(len(input_lengths)): input_values[i, input_lengths[i] :] = 0.0 attention_mask[i, input_lengths[i] :] = 0.0 batch_outputs = model(input_values, attention_mask=attention_mask).last_hidden_state for i in range(input_values.shape[0]): input_slice = input_values[i : i + 1, : input_lengths[i]] output = model(input_slice).last_hidden_state batch_output = batch_outputs[i : i + 1, : output.shape[1]] self.parent.assertTrue(torch.allclose(output, batch_output, atol=1e-3)) def check_ctc_loss(self, config, input_values, *args): model = WavLMForCTC(config=config) model.to(torch_device) # make sure that dropout is disabled model.eval() input_values = input_values[:3] attention_mask = torch.ones(input_values.shape, device=torch_device, dtype=torch.long) input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] max_length_labels = model._get_feat_extract_output_lengths(torch.tensor(input_lengths)) labels = ids_tensor((input_values.shape[0], min(max_length_labels) - 1), model.config.vocab_size) # pad input for i in range(len(input_lengths)): input_values[i, input_lengths[i] :] = 0.0 attention_mask[i, input_lengths[i] :] = 0 model.config.ctc_loss_reduction = "sum" sum_loss = model(input_values, attention_mask=attention_mask, labels=labels).loss.item() model.config.ctc_loss_reduction = "mean" mean_loss = model(input_values, attention_mask=attention_mask, labels=labels).loss.item() self.parent.assertTrue(isinstance(sum_loss, float)) self.parent.assertTrue(isinstance(mean_loss, float)) def check_seq_classifier_loss(self, config, input_values, *args): model = WavLMForSequenceClassification(config=config) model.to(torch_device) # make sure that dropout is disabled model.eval() input_values = input_values[:3] attention_mask = torch.ones(input_values.shape, device=torch_device, 
dtype=torch.long) input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] labels = ids_tensor((input_values.shape[0], 1), len(model.config.id2label)) # pad input for i in range(len(input_lengths)): input_values[i, input_lengths[i] :] = 0.0 attention_mask[i, input_lengths[i] :] = 0 masked_loss = model(input_values, attention_mask=attention_mask, labels=labels).loss.item() unmasked_loss = model(input_values, labels=labels).loss.item() self.parent.assertTrue(isinstance(masked_loss, float)) self.parent.assertTrue(isinstance(unmasked_loss, float)) self.parent.assertTrue(masked_loss != unmasked_loss) def check_ctc_training(self, config, input_values, *args): config.ctc_zero_infinity = True model = WavLMForCTC(config=config) model.to(torch_device) model.train() # freeze feature encoder model.freeze_feature_encoder() input_values = input_values[:3] input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] max_length_labels = model._get_feat_extract_output_lengths(torch.tensor(input_lengths)) labels = ids_tensor((input_values.shape[0], max(max_length_labels) - 2), model.config.vocab_size) # pad input for i in range(len(input_lengths)): input_values[i, input_lengths[i] :] = 0.0 if max_length_labels[i] < labels.shape[-1]: # it's important that we make sure that target lengths are at least # one shorter than logit lengths to prevent -inf labels[i, max_length_labels[i] - 1 :] = -100 loss = model(input_values, labels=labels).loss self.parent.assertFalse(torch.isinf(loss).item()) loss.backward() def check_seq_classifier_training(self, config, input_values, *args): config.ctc_zero_infinity = True model = WavLMForSequenceClassification(config=config) model.to(torch_device) model.train() # freeze everything but the classification head model.freeze_base_model() input_values = input_values[:3] input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] labels = ids_tensor((input_values.shape[0], 1), len(model.config.id2label)) # pad input for i in range(len(input_lengths)): input_values[i, input_lengths[i] :] = 0.0 loss = model(input_values, labels=labels).loss self.parent.assertFalse(torch.isinf(loss).item()) loss.backward() def check_output_attentions(self, config, input_values, attention_mask): model = WavLMModel(config=config) model.config.layerdrop = 1.0 model.to(torch_device) model.train() outputs = model(input_values, attention_mask=attention_mask, output_attentions=True) self.parent.assertTrue(len(outputs.attentions) > 0) def check_labels_out_of_vocab(self, config, input_values, *args): model = WavLMForCTC(config) model.to(torch_device) model.train() input_values = input_values[:3] input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] max_length_labels = model._get_feat_extract_output_lengths(torch.tensor(input_lengths)) labels = ids_tensor((input_values.shape[0], max(max_length_labels) - 2), model.config.vocab_size + 100) with pytest.raises(ValueError): model(input_values, labels=labels) def prepare_config_and_inputs_for_common(self): config, input_values, attention_mask = self.prepare_config_and_inputs() inputs_dict = {"input_values": input_values, "attention_mask": attention_mask} return config, inputs_dict @require_torch class WavLMModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = ( (WavLMForCTC, WavLMModel, WavLMForAudioFrameClassification, WavLMForSequenceClassification, WavLMForXVector) if is_torch_available() else () ) pipeline_model_mapping = ( { "audio-classification": WavLMForSequenceClassification, 
"automatic-speech-recognition": WavLMForCTC, "feature-extraction": WavLMModel, } if is_torch_available() else {} ) test_pruning = False test_headmasking = False def setUp(self): self.model_tester = WavLMModelTester(self) self.config_tester = ConfigTester(self, config_class=WavLMConfig, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_ctc_loss_inference(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_ctc_loss(*config_and_inputs) def test_seq_classifier_loss_inference(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_seq_classifier_loss(*config_and_inputs) def test_ctc_train(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_ctc_training(*config_and_inputs) def test_seq_classifier_train(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_seq_classifier_training(*config_and_inputs) def test_output_attentions(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_output_attentions(*config_and_inputs) def test_labels_out_of_vocab(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_labels_out_of_vocab(*config_and_inputs) @unittest.skip(reason="WavLM has no inputs_embeds") def test_inputs_embeds(self): pass # `input_ids` is renamed to `input_values` @unittest.skip(reason="WavLM has no input_ids") def test_forward_signature(self): pass @unittest.skip(reason="WavLM has no token embeddings") def test_resize_tokens_embeddings(self): pass def test_model_get_set_embeddings(self): pass # WavLM uses PyTorch's multi-head-attention class # and thus can't retain gradients on attentions def test_retain_grad_hidden_states_attentions(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.output_hidden_states = True config.output_attentions = True # no need to test all models as different heads yield the same functionality model_class = self.all_model_classes[0] model = model_class(config) model.to(torch_device) # set layer drop to 0 model.config.layerdrop = 0.0 input_values = inputs_dict["input_values"] input_lengths = torch.tensor( [input_values.shape[1] for _ in range(input_values.shape[0])], dtype=torch.long, device=torch_device ) output_lengths = model._get_feat_extract_output_lengths(input_lengths) labels = ids_tensor((input_values.shape[0], output_lengths[0] - 2), self.model_tester.vocab_size) inputs_dict["attention_mask"] = torch.ones_like(inputs_dict["attention_mask"]) inputs_dict["labels"] = labels outputs = model(**inputs_dict) output = outputs[0] # Encoder-/Decoder-only models hidden_states = outputs.hidden_states[0] hidden_states.retain_grad() output.flatten()[0].backward(retain_graph=True) self.assertIsNotNone(hidden_states.grad) def test_initialization(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() configs_no_init = _config_zero_init(config) for model_class in self.all_model_classes: model = model_class(config=configs_no_init) for name, param in model.named_parameters(): uniform_init_parms = [ "conv.weight", "conv.parametrizations.weight", "masked_spec_embed", "codevectors", "quantizer.weight_proj.weight", "project_hid.weight", "project_hid.bias", "project_q.weight", 
"project_q.bias", "feature_projection.projection.weight", "feature_projection.projection.bias", "label_embeddings_concat", "rel_attn_embed", "objective.weight", ] if param.requires_grad: if any(x in name for x in uniform_init_parms): self.assertTrue( -1.0 <= ((param.data.mean() * 1e9).round() / 1e9).item() <= 1.0, msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) else: self.assertIn( ((param.data.mean() * 1e9).round() / 1e9).item(), [0.0, 1.0], msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) # overwrite from test_modeling_common def _mock_init_weights(self, module): if hasattr(module, "weight") and module.weight is not None: module.weight.data.fill_(3) if hasattr(module, "weight_g") and module.weight_g is not None: module.weight_g.data.fill_(3) if hasattr(module, "weight_v") and module.weight_v is not None: module.weight_v.data.fill_(3) if hasattr(module, "bias") and module.bias is not None: module.bias.data.fill_(3) if hasattr(module, "codevectors") and module.codevectors is not None: module.codevectors.data.fill_(3) if hasattr(module, "masked_spec_embed") and module.masked_spec_embed is not None: module.masked_spec_embed.data.fill_(3) @unittest.skip(reason="Feed forward chunking is not implemented for WavLM") def test_feed_forward_chunking(self): pass @slow def test_model_from_pretrained(self): model = WavLMModel.from_pretrained("microsoft/wavlm-base-plus") self.assertIsNotNone(model) @require_torch @require_torchaudio @slow class WavLMModelIntegrationTest(unittest.TestCase): def _load_datasamples(self, num_samples): ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") # automatic decoding with librispeech speech_samples = ds.sort("id").filter( lambda x: x["id"] in [f"1272-141231-000{i}" for i in range(num_samples)] )[:num_samples]["audio"] return [x["array"] for x in speech_samples] def _load_superb(self, task, num_samples): ds = load_dataset("anton-l/superb_dummy", task, split="test", trust_remote_code=True) return ds[:num_samples] def test_inference_base(self): model = WavLMModel.from_pretrained("microsoft/wavlm-base-plus").to(torch_device) feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained( "microsoft/wavlm-base-plus", return_attention_mask=True ) input_speech = self._load_datasamples(2) inputs = feature_extractor(input_speech, return_tensors="pt", padding=True) input_values = inputs.input_values.to(torch_device) attention_mask = inputs.attention_mask.to(torch_device) with torch.no_grad(): hidden_states_slice = ( model(input_values, attention_mask=attention_mask).last_hidden_state[:, -2:, -2:].cpu() ) EXPECTED_HIDDEN_STATES_SLICE = torch.tensor( [[[0.0577, 0.1161], [0.0579, 0.1165]], [[0.0199, 0.1237], [0.0059, 0.0605]]] ) torch.testing.assert_close(hidden_states_slice, EXPECTED_HIDDEN_STATES_SLICE, rtol=5e-2, atol=5e-2) def test_inference_large(self): model = WavLMModel.from_pretrained("microsoft/wavlm-large").to(torch_device) feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained( "microsoft/wavlm-large", return_attention_mask=True ) input_speech = self._load_datasamples(2) inputs = feature_extractor(input_speech, return_tensors="pt", padding=True) input_values = inputs.input_values.to(torch_device) attention_mask = inputs.attention_mask.to(torch_device) with torch.no_grad(): hidden_states_slice = ( model(input_values, attention_mask=attention_mask).last_hidden_state[:, -2:, -2:].cpu() ) EXPECTED_HIDDEN_STATES_SLICE = torch.tensor( [[[0.2122, 0.0500], [0.2118, 
0.0563]], [[0.1353, 0.1818], [0.2453, 0.0595]]] ) torch.testing.assert_close(hidden_states_slice, EXPECTED_HIDDEN_STATES_SLICE, rtol=5e-2) def test_inference_diarization(self): model = WavLMForAudioFrameClassification.from_pretrained("microsoft/wavlm-base-plus-sd").to(torch_device) processor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/wavlm-base-plus-sd") input_data = self._load_superb("sd", 4) inputs = processor(input_data["speech"], return_tensors="pt", padding=True, sampling_rate=16_000) input_values = inputs.input_values.to(torch_device) attention_mask = inputs.attention_mask.to(torch_device) with torch.no_grad(): outputs = model(input_values, attention_mask=attention_mask) # labels is a one-hot array of shape (num_frames, num_speakers) labels = (outputs.logits > 0).long() # s3prl logits for the same batch expected_logits = torch.tensor( [ [[-5.9566, -8.6554], [-5.7137, -8.9386], [-5.7906, -7.0973], [-5.7829, -5.9999]], [[-5.2086, -7.7878], [-4.8890, -7.9312], [-4.2004, -3.9101], [-5.4480, -4.6932]], [[-4.6105, -6.7178], [-5.1930, -6.1635], [-2.6228, -4.1123], [-2.7646, -3.1576]], [[-4.4477, -7.9206], [-3.9339, -7.3707], [-4.9528, -4.8242], [-3.6921, -2.9687]], ], device=torch_device, ) self.assertEqual(labels[0, :, 0].sum(), 258) self.assertEqual(labels[0, :, 1].sum(), 647) torch.testing.assert_close(outputs.logits[:, :4], expected_logits, rtol=1e-2, atol=1e-2) def test_inference_speaker_verification(self): model = WavLMForXVector.from_pretrained("microsoft/wavlm-base-plus-sv").to(torch_device) processor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/wavlm-base-plus-sv") input_data = self._load_superb("si", 4) inputs = processor(input_data["speech"], return_tensors="pt", padding=True) labels = torch.tensor([5, 1, 1, 3], device=torch_device).T with torch.no_grad(): input_values = inputs.input_values.to(torch_device) attention_mask = inputs.attention_mask.to(torch_device) outputs = model(input_values, attention_mask=attention_mask, labels=labels) embeddings = torch.nn.functional.normalize(outputs.embeddings, dim=-1) cosine_sim = torch.nn.CosineSimilarity(dim=-1) # id10002 vs id10002 self.assertAlmostEqual(cosine_sim(embeddings[1], embeddings[2]).item(), 0.9787, 3) # id10006 vs id10002 self.assertAlmostEqual(cosine_sim(embeddings[0], embeddings[1]).item(), 0.5064, 3) # id10002 vs id10004 self.assertAlmostEqual(cosine_sim(embeddings[2], embeddings[3]).item(), 0.4780, 3) self.assertAlmostEqual(outputs.loss.item(), 18.4154, 2)
transformers/tests/models/wavlm/test_modeling_wavlm.py/0
{ "file_path": "transformers/tests/models/wavlm/test_modeling_wavlm.py", "repo_id": "transformers", "token_count": 11115 }
# coding=utf-8 # Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import annotations import unittest from transformers import is_tf_available from transformers.testing_utils import require_tf, slow from ...test_configuration_common import ConfigTester from ...test_modeling_tf_common import TFModelTesterMixin, ids_tensor, random_attention_mask from ...test_pipeline_mixin import PipelineTesterMixin if is_tf_available(): import tensorflow as tf from transformers import ( TFXLMForMultipleChoice, TFXLMForQuestionAnsweringSimple, TFXLMForSequenceClassification, TFXLMForTokenClassification, TFXLMModel, TFXLMWithLMHeadModel, XLMConfig, ) class TFXLMModelTester: def __init__( self, parent, ): self.parent = parent self.batch_size = 13 self.seq_length = 7 self.is_training = True self.use_input_lengths = True self.use_token_type_ids = True self.use_labels = True self.gelu_activation = True self.sinusoidal_embeddings = False self.causal = False self.asm = False self.n_langs = 2 self.vocab_size = 99 self.n_special = 0 self.hidden_size = 32 self.num_hidden_layers = 2 self.num_attention_heads = 4 self.hidden_dropout_prob = 0.1 self.attention_probs_dropout_prob = 0.1 self.max_position_embeddings = 512 self.type_vocab_size = 16 self.type_sequence_label_size = 2 self.initializer_range = 0.02 self.num_labels = 3 self.num_choices = 4 self.summary_type = "last" self.use_proj = True self.scope = None self.bos_token_id = 0 def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = random_attention_mask([self.batch_size, self.seq_length], dtype=tf.float32) input_lengths = None if self.use_input_lengths: input_lengths = ( ids_tensor([self.batch_size], vocab_size=2) + self.seq_length - 2 ) # small variation of seq_length token_type_ids = None if self.use_token_type_ids: token_type_ids = ids_tensor([self.batch_size, self.seq_length], self.n_langs) sequence_labels = None token_labels = None is_impossible_labels = None if self.use_labels: sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size) token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) is_impossible_labels = ids_tensor([self.batch_size], 2, dtype=tf.float32) choice_labels = ids_tensor([self.batch_size], self.num_choices) config = XLMConfig( vocab_size=self.vocab_size, n_special=self.n_special, emb_dim=self.hidden_size, n_layers=self.num_hidden_layers, n_heads=self.num_attention_heads, dropout=self.hidden_dropout_prob, attention_dropout=self.attention_probs_dropout_prob, gelu_activation=self.gelu_activation, sinusoidal_embeddings=self.sinusoidal_embeddings, asm=self.asm, causal=self.causal, n_langs=self.n_langs, max_position_embeddings=self.max_position_embeddings, initializer_range=self.initializer_range, summary_type=self.summary_type, use_proj=self.use_proj, bos_token_id=self.bos_token_id, ) return ( config, input_ids, token_type_ids, input_lengths, sequence_labels, token_labels, 
is_impossible_labels, choice_labels, input_mask, ) def create_and_check_xlm_model( self, config, input_ids, token_type_ids, input_lengths, sequence_labels, token_labels, is_impossible_labels, choice_labels, input_mask, ): model = TFXLMModel(config=config) inputs = {"input_ids": input_ids, "lengths": input_lengths, "langs": token_type_ids} result = model(inputs) inputs = [input_ids, input_mask] result = model(inputs) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) def create_and_check_xlm_lm_head( self, config, input_ids, token_type_ids, input_lengths, sequence_labels, token_labels, is_impossible_labels, choice_labels, input_mask, ): model = TFXLMWithLMHeadModel(config) inputs = {"input_ids": input_ids, "lengths": input_lengths, "langs": token_type_ids} outputs = model(inputs) result = outputs self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) def create_and_check_xlm_qa( self, config, input_ids, token_type_ids, input_lengths, sequence_labels, token_labels, is_impossible_labels, choice_labels, input_mask, ): model = TFXLMForQuestionAnsweringSimple(config) inputs = {"input_ids": input_ids, "lengths": input_lengths} result = model(inputs) self.parent.assertEqual(result.start_logits.shape, (self.batch_size, self.seq_length)) self.parent.assertEqual(result.end_logits.shape, (self.batch_size, self.seq_length)) def create_and_check_xlm_sequence_classif( self, config, input_ids, token_type_ids, input_lengths, sequence_labels, token_labels, is_impossible_labels, choice_labels, input_mask, ): model = TFXLMForSequenceClassification(config) inputs = {"input_ids": input_ids, "lengths": input_lengths} result = model(inputs) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.type_sequence_label_size)) def create_and_check_xlm_for_token_classification( self, config, input_ids, token_type_ids, input_lengths, sequence_labels, token_labels, is_impossible_labels, choice_labels, input_mask, ): config.num_labels = self.num_labels model = TFXLMForTokenClassification(config=config) inputs = {"input_ids": input_ids, "attention_mask": input_mask, "token_type_ids": token_type_ids} result = model(inputs) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.num_labels)) def create_and_check_xlm_for_multiple_choice( self, config, input_ids, token_type_ids, input_lengths, sequence_labels, token_labels, is_impossible_labels, choice_labels, input_mask, ): config.num_choices = self.num_choices model = TFXLMForMultipleChoice(config=config) multiple_choice_inputs_ids = tf.tile(tf.expand_dims(input_ids, 1), (1, self.num_choices, 1)) multiple_choice_input_mask = tf.tile(tf.expand_dims(input_mask, 1), (1, self.num_choices, 1)) multiple_choice_token_type_ids = tf.tile(tf.expand_dims(token_type_ids, 1), (1, self.num_choices, 1)) inputs = { "input_ids": multiple_choice_inputs_ids, "attention_mask": multiple_choice_input_mask, "token_type_ids": multiple_choice_token_type_ids, } result = model(inputs) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_choices)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, token_type_ids, input_lengths, sequence_labels, token_labels, is_impossible_labels, choice_labels, input_mask, ) = config_and_inputs inputs_dict = { "input_ids": input_ids, "token_type_ids": token_type_ids, "langs": token_type_ids, "lengths": input_lengths, } return config, 
inputs_dict @require_tf class TFXLMModelTest(TFModelTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = ( ( TFXLMModel, TFXLMWithLMHeadModel, TFXLMForSequenceClassification, TFXLMForQuestionAnsweringSimple, TFXLMForTokenClassification, TFXLMForMultipleChoice, ) if is_tf_available() else () ) all_generative_model_classes = ( (TFXLMWithLMHeadModel,) if is_tf_available() else () ) # TODO (PVP): Check other models whether language generation is also applicable pipeline_model_mapping = ( { "feature-extraction": TFXLMModel, "fill-mask": TFXLMWithLMHeadModel, "question-answering": TFXLMForQuestionAnsweringSimple, "text-classification": TFXLMForSequenceClassification, "text-generation": TFXLMWithLMHeadModel, "token-classification": TFXLMForTokenClassification, "zero-shot": TFXLMForSequenceClassification, } if is_tf_available() else {} ) test_head_masking = False test_onnx = False # TODO: Fix the failed tests def is_pipeline_test_to_skip( self, pipeline_test_case_name, config_class, model_architecture, tokenizer_name, image_processor_name, feature_extractor_name, processor_name, ): if ( pipeline_test_case_name == "QAPipelineTests" and tokenizer_name is not None and not tokenizer_name.endswith("Fast") ): # `QAPipelineTests` fails for a few models when the slower tokenizer are used. # (The slower tokenizers were never used for pipeline tests before the pipeline testing rework) # TODO: check (and possibly fix) the `QAPipelineTests` with slower tokenizer return True return False def setUp(self): self.model_tester = TFXLMModelTester(self) self.config_tester = ConfigTester(self, config_class=XLMConfig, emb_dim=37) def test_config(self): self.config_tester.run_common_tests() def test_xlm_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_xlm_model(*config_and_inputs) def test_xlm_lm_head(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_xlm_lm_head(*config_and_inputs) def test_xlm_qa(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_xlm_qa(*config_and_inputs) def test_xlm_sequence_classif(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_xlm_sequence_classif(*config_and_inputs) def test_for_token_classification(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_xlm_for_token_classification(*config_and_inputs) def test_for_multiple_choice(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_xlm_for_multiple_choice(*config_and_inputs) @slow def test_model_from_pretrained(self): model_name = "FacebookAI/xlm-mlm-en-2048" model = TFXLMModel.from_pretrained(model_name) self.assertIsNotNone(model) @require_tf class TFXLMModelLanguageGenerationTest(unittest.TestCase): @slow def test_lm_generate_xlm_mlm_en_2048(self): model = TFXLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-mlm-en-2048") input_ids = tf.convert_to_tensor([[14, 447]], dtype=tf.int32) # the president expected_output_ids = [ 14, 447, 14, 447, 14, 447, 14, 447, 14, 447, 14, 447, 14, 447, 14, 447, 14, 447, 14, 447, ] # the president the president the president the president the president the president the president the president the president the president # TODO(PVP): this and other input_ids I tried for generation give pretty bad results. Not sure why. 
Model might just not be made for auto-regressive inference output_ids = model.generate(input_ids, do_sample=False) self.assertListEqual(output_ids[0].numpy().tolist(), expected_output_ids)
transformers/tests/models/xlm/test_modeling_tf_xlm.py/0
{ "file_path": "transformers/tests/models/xlm/test_modeling_tf_xlm.py", "repo_id": "transformers", "token_count": 6477 }
# coding=utf-8 # Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import importlib import tempfile import unittest from unittest import skip from packaging import version from transformers import AqlmConfig, AutoConfig, AutoModelForCausalLM, AutoTokenizer, OPTForCausalLM, StaticCache from transformers.testing_utils import ( require_accelerate, require_aqlm, require_torch_gpu, require_torch_multi_gpu, slow, torch_device, ) from transformers.utils import is_accelerate_available, is_aqlm_available, is_torch_available if is_torch_available(): import torch if is_accelerate_available(): from accelerate import init_empty_weights @require_torch_gpu class AqlmConfigTest(unittest.TestCase): def test_to_dict(self): """ Simple test that checks if one uses a config and converts it to a dict, the dict is the same as the config object """ quantization_config = AqlmConfig() config_to_dict = quantization_config.to_dict() for key in config_to_dict: self.assertEqual(getattr(quantization_config, key), config_to_dict[key]) def test_from_dict(self): """ Simple test that checks if one uses a dict and converts it to a config object, the config object is the same as the dict """ dict = { "in_group_size": 32, "num_codebooks": 8, "nbits_per_codebook": 8, "linear_weights_not_to_quantize": ["lm_head.weight"], } quantization_config = AqlmConfig.from_dict(dict) self.assertEqual(dict["in_group_size"], quantization_config.in_group_size) self.assertEqual(dict["num_codebooks"], quantization_config.num_codebooks) self.assertEqual(dict["nbits_per_codebook"], quantization_config.nbits_per_codebook) self.assertEqual(dict["linear_weights_not_to_quantize"], quantization_config.linear_weights_not_to_quantize) @slow @require_torch_gpu @require_aqlm @require_accelerate class AqlmTest(unittest.TestCase): model_name = "BlackSamorez/Llama-2-7b-AQLM-2Bit-1x16-hf" input_text = "Hello my name is" max_new_tokens = 32 EXPECTED_OUTPUT = "Hello my name is Katie. I am a 20 year old college student. I am a very outgoing person. I love to have fun and be active. 
I" device_map = "cuda" # called only once for all test in this class @classmethod def setUpClass(cls): """ Setup quantized model """ cls.tokenizer = AutoTokenizer.from_pretrained(cls.model_name) cls.quantized_model = AutoModelForCausalLM.from_pretrained( cls.model_name, device_map=cls.device_map, ) def tearDown(self): gc.collect() torch.cuda.empty_cache() gc.collect() def test_quantized_model_conversion(self): """ Simple test that checks if the quantized model has been converted properly """ from aqlm import QuantizedLinear from transformers.integrations import replace_with_aqlm_linear model_id = "facebook/opt-350m" config = AutoConfig.from_pretrained(model_id, revision="cb32f77e905cccbca1d970436fb0f5e6b58ee3c5") quantization_config = AqlmConfig() with init_empty_weights(): model = OPTForCausalLM(config) nb_linears = 0 for module in model.modules(): if isinstance(module, torch.nn.Linear): nb_linears += 1 model, _ = replace_with_aqlm_linear(model, quantization_config=quantization_config) nb_aqlm_linear = 0 for module in model.modules(): if isinstance(module, QuantizedLinear): nb_aqlm_linear += 1 self.assertEqual(nb_linears, nb_aqlm_linear) # Try with `linear_weights_not_to_quantize` with init_empty_weights(): model = OPTForCausalLM(config) model, _ = replace_with_aqlm_linear( model, quantization_config=quantization_config, linear_weights_not_to_quantize=["lm_head.weight"] ) nb_aqlm_linear = 0 for module in model.modules(): if isinstance(module, QuantizedLinear): nb_aqlm_linear += 1 self.assertEqual(nb_linears - 1, nb_aqlm_linear) @skip( "inference doesn't work with quantized aqlm models using torch.Any type with recent torch versions. Waiting for the fix from AQLM side" ) def test_quantized_model(self): """ Simple test that checks if the quantized model is working properly """ input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device) output = self.quantized_model.generate(**input_ids, max_new_tokens=self.max_new_tokens) self.assertEqual(self.tokenizer.decode(output[0], skip_special_tokens=True), self.EXPECTED_OUTPUT) def test_raise_if_non_quantized(self): model_id = "facebook/opt-125m" quantization_config = AqlmConfig(bits=4) with self.assertRaises(ValueError): _ = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config) @skip( "inference doesn't work with quantized aqlm models using torch.Any type with recent torch versions. Waiting for the fix from AQLM side" ) def test_save_pretrained(self): """ Simple test that checks if the quantized model is working properly after being saved and loaded """ with tempfile.TemporaryDirectory() as tmpdirname: self.quantized_model.save_pretrained(tmpdirname) model = AutoModelForCausalLM.from_pretrained(tmpdirname, device_map=self.device_map) input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device) output = model.generate(**input_ids, max_new_tokens=self.max_new_tokens) self.assertEqual(self.tokenizer.decode(output[0], skip_special_tokens=True), self.EXPECTED_OUTPUT) @skip( "inference doesn't work with quantized aqlm models using torch.Any type with recent torch versions. 
Waiting for the fix from AQLM side" ) @require_torch_multi_gpu def test_quantized_model_multi_gpu(self): """ Simple test that checks if the quantized model is working properly with multiple GPUs """ input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device) quantized_model = AutoModelForCausalLM.from_pretrained(self.model_name, device_map="auto") self.assertTrue(set(quantized_model.hf_device_map.values()) == {0, 1}) output = quantized_model.generate(**input_ids, max_new_tokens=self.max_new_tokens) self.assertEqual(self.tokenizer.decode(output[0], skip_special_tokens=True), self.EXPECTED_OUTPUT) @unittest.skipUnless( is_aqlm_available() and version.parse(importlib.metadata.version("aqlm")) >= version.parse("1.0.3"), "test requires `aqlm>=1.0.3`", ) def test_quantized_model_compile(self): """ Simple test that checks if the quantized model is working properly """ # Sample tokens greedily def decode_one_tokens(model, cur_token, input_pos, cache_position, past_key_values): logits = model( cur_token, position_ids=input_pos, cache_position=cache_position, past_key_values=past_key_values, return_dict=False, use_cache=True, )[0] new_token = torch.argmax(logits[:, [-1]], dim=-1).to(torch.int) return new_token # Tokenize the test input input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)["input_ids"] seq_length = input_ids.shape[1] # Setup static KV cache for generation past_key_values = StaticCache( config=self.quantized_model.config, batch_size=1, max_cache_len=seq_length + self.max_new_tokens + 1, device=torch_device, dtype=self.quantized_model.config._pre_quantization_dtype, ) # Allocate token ids to be generated and copy prefix ids cache_position = torch.arange(seq_length, device=torch_device) generated_ids = torch.zeros(1, seq_length + self.max_new_tokens, dtype=torch.int, device=torch_device) generated_ids[:, cache_position] = input_ids.to(torch_device).to(torch.int) # Do a forward pass to fill the prefix cache and compile the kernels if necessary logits = self.quantized_model( input_ids, cache_position=cache_position, past_key_values=past_key_values, return_dict=False, use_cache=True, )[0] next_token = torch.argmax(logits[:, [-1]], dim=-1).to(torch.int) generated_ids[:, [seq_length]] = next_token with torch.no_grad(): # Compile the CUDA graph decode_one_tokens = torch.compile(decode_one_tokens, mode="reduce-overhead", fullgraph=True) # Generate tokens one by one cache_position = torch.tensor([seq_length + 1], device=torch_device) for _ in range(1, self.max_new_tokens): with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_mem_efficient=False, enable_math=True): next_token = decode_one_tokens( self.quantized_model, next_token.clone(), None, cache_position, past_key_values ) generated_ids.index_copy_(1, cache_position, next_token) cache_position += 1 # Check generated text self.assertEqual(self.tokenizer.decode(generated_ids[0], skip_special_tokens=True), self.EXPECTED_OUTPUT)
transformers/tests/quantization/aqlm_integration/test_aqlm.py/0
{ "file_path": "transformers/tests/quantization/aqlm_integration/test_aqlm.py", "repo_id": "transformers", "token_count": 4417 }
# coding=utf-8 # Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import tempfile import unittest from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, FbgemmFp8Config, OPTForCausalLM from transformers.testing_utils import ( require_accelerate, require_fbgemm_gpu, require_read_token, require_torch_gpu, require_torch_multi_gpu, slow, torch_device, ) from transformers.utils import is_accelerate_available, is_torch_available if is_torch_available(): import torch if is_accelerate_available(): from accelerate import init_empty_weights @require_torch_gpu class FbgemmFp8ConfigTest(unittest.TestCase): def test_to_dict(self): """ Simple test that checks if one uses a config and converts it to a dict, the dict is the same as the config object """ quantization_config = FbgemmFp8Config() config_to_dict = quantization_config.to_dict() for key in config_to_dict: self.assertEqual(getattr(quantization_config, key), config_to_dict[key]) def test_from_dict(self): """ Simple test that checks if one uses a dict and converts it to a config object, the config object is the same as the dict """ dict = {"modules_to_not_convert": ["lm_head.weight"], "quant_method": "fbgemm_fp8"} quantization_config = FbgemmFp8Config.from_dict(dict) self.assertEqual(dict["modules_to_not_convert"], quantization_config.modules_to_not_convert) self.assertEqual(dict["quant_method"], quantization_config.quant_method) @slow @require_torch_gpu @require_fbgemm_gpu @require_accelerate @require_read_token class FbgemmFp8Test(unittest.TestCase): model_name = "meta-llama/Meta-Llama-3-8B" input_text = "What are we having for dinner?" 
max_new_tokens = 9 EXPECTED_OUTPUT = "What are we having for dinner?\nI'm having a steak and a salad" device_map = "cuda" offload_device_map = { "model.embed_tokens": 0, "model.layers.0": 0, "model.layers.1": 0, "model.layers.2": 0, "model.layers.3": 0, "model.layers.4": 0, "model.layers.5": 0, "model.layers.6": 0, "model.layers.7": 0, "model.layers.8": 0, "model.layers.9": 0, "model.layers.10": 0, "model.layers.11": 0, "model.layers.12": 0, "model.layers.13": 0, "model.layers.14": 0, "model.layers.15": 0, "model.layers.16": "cpu", "model.layers.17": "cpu", "model.layers.18": "cpu", "model.layers.19": "cpu", "model.layers.20": "disk", "model.layers.21": "disk", "model.layers.22": "disk", "model.layers.23": "disk", "model.layers.24": "disk", "model.layers.25": "disk", "model.layers.26": "disk", "model.layers.27": "disk", "model.layers.28": "disk", "model.layers.29": "disk", "model.layers.30": "disk", "model.layers.31": "disk", "model.norm": "disk", "lm_head": "disk", } # called only once for all test in this class @classmethod def setUpClass(cls): """ Setup quantized model """ quantization_config = FbgemmFp8Config() cls.tokenizer = AutoTokenizer.from_pretrained(cls.model_name) cls.quantized_model = AutoModelForCausalLM.from_pretrained( cls.model_name, device_map=cls.device_map, quantization_config=quantization_config ) def tearDown(self): gc.collect() torch.cuda.empty_cache() gc.collect() def test_quantized_model_conversion(self): """ Simple test that checks if the quantized model has been converted properly """ from transformers.integrations import FbgemmFp8Linear, replace_with_fbgemm_fp8_linear model_id = "facebook/opt-350m" config = AutoConfig.from_pretrained(model_id, revision="cb32f77e905cccbca1d970436fb0f5e6b58ee3c5") quantization_config = FbgemmFp8Config() with init_empty_weights(): model = OPTForCausalLM(config) nb_linears = 0 for module in model.modules(): if isinstance(module, torch.nn.Linear): nb_linears += 1 model = replace_with_fbgemm_fp8_linear(model, quantization_config=quantization_config) nb_fbgemm_linear = 0 for module in model.modules(): if isinstance(module, FbgemmFp8Linear): nb_fbgemm_linear += 1 self.assertEqual(nb_linears - 1, nb_fbgemm_linear) with init_empty_weights(): model = OPTForCausalLM(config) quantization_config = FbgemmFp8Config(modules_to_not_convert=["fc1"]) model = replace_with_fbgemm_fp8_linear(model, quantization_config=quantization_config) nb_fbgemm_linear = 0 for module in model.modules(): if isinstance(module, FbgemmFp8Linear): nb_fbgemm_linear += 1 self.assertEqual(nb_linears - 25, nb_fbgemm_linear) def test_quantized_model(self): """ Simple test that checks if the quantized model is working properly """ input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device) output = self.quantized_model.generate(**input_ids, max_new_tokens=self.max_new_tokens) self.assertEqual(self.tokenizer.decode(output[0], skip_special_tokens=True), self.EXPECTED_OUTPUT) def test_save_pretrained(self): """ Simple test that checks if the quantized model is working properly after being saved and loaded """ with tempfile.TemporaryDirectory() as tmpdirname: self.quantized_model.save_pretrained(tmpdirname) model = AutoModelForCausalLM.from_pretrained(tmpdirname, device_map=self.device_map) input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device) output = model.generate(**input_ids, max_new_tokens=self.max_new_tokens) self.assertEqual(self.tokenizer.decode(output[0], skip_special_tokens=True), self.EXPECTED_OUTPUT) def 
test_change_loading_attributes(self): """ Simple test that checks if the quantized model is working properly after being saved and loaded """ with tempfile.TemporaryDirectory() as tmpdirname: self.quantized_model.save_pretrained(tmpdirname) quantization_config = FbgemmFp8Config(activation_scale_ub=1000.0) model = AutoModelForCausalLM.from_pretrained( tmpdirname, device_map=self.device_map, quantization_config=quantization_config ) self.assertEqual(model.model.layers[1].mlp.down_proj.input_scale_ub.item(), 1000.0) input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device) output = model.generate(**input_ids, max_new_tokens=self.max_new_tokens) self.assertEqual(self.tokenizer.decode(output[0], skip_special_tokens=True), self.EXPECTED_OUTPUT) @require_torch_multi_gpu def test_quantized_model_multi_gpu(self): """ Simple test that checks if the quantized model is working properly with multiple GPUs set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUS """ input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device) quantization_config = FbgemmFp8Config() quantized_model = AutoModelForCausalLM.from_pretrained( self.model_name, device_map="auto", quantization_config=quantization_config ) self.assertTrue(set(quantized_model.hf_device_map.values()) == {0, 1}) output = quantized_model.generate(**input_ids, max_new_tokens=self.max_new_tokens) self.assertEqual(self.tokenizer.decode(output[0], skip_special_tokens=True), self.EXPECTED_OUTPUT) def test_quantized_model_offload(self): """ Simple test that checks if the quantized model returns an error when loading with cpu/disk offloaded """ quantization_config = FbgemmFp8Config() with self.assertRaisesRegex( ValueError, "You are attempting to load an FP8 model with a device_map that contains a CPU or disk device." ): AutoModelForCausalLM.from_pretrained( self.model_name, device_map=self.offload_device_map, quantization_config=quantization_config ) def test_save_pretrained_offload(self): """ Simple test that checks if the saved quantized model is working properly cpu/disk offload """ with tempfile.TemporaryDirectory() as tmpdirname: self.quantized_model.save_pretrained(tmpdirname) input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device) quantized_model = AutoModelForCausalLM.from_pretrained(tmpdirname, device_map=self.offload_device_map) output = quantized_model.generate(**input_ids, max_new_tokens=self.max_new_tokens) self.assertEqual(self.tokenizer.decode(output[0], skip_special_tokens=True), self.EXPECTED_OUTPUT) @require_torch_multi_gpu def test_save_pretrained_multi_gpu(self): """ Simple test that checks if the quantized model is working properly after being saved and loaded """ with tempfile.TemporaryDirectory() as tmpdirname: self.quantized_model.save_pretrained(tmpdirname) model = AutoModelForCausalLM.from_pretrained(tmpdirname, device_map="auto") self.assertTrue(set(model.hf_device_map.values()) == {0, 1}) input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device) output = model.generate(**input_ids, max_new_tokens=self.max_new_tokens) self.assertEqual(self.tokenizer.decode(output[0], skip_special_tokens=True), self.EXPECTED_OUTPUT) @require_torch_gpu @require_accelerate @require_fbgemm_gpu class FbgemmFp8LinearTest(unittest.TestCase): def test_linear_preserves_shape(self): """ Test that FbgemmFp8Linear preserves shape when in_features == out_features. 
""" from transformers.integrations import FbgemmFp8Linear with init_empty_weights(include_buffers=True): linear = FbgemmFp8Linear(1024, 1024, True) x = torch.rand((17, 23, 1024)) x_ = linear(x) self.assertEqual(x_.shape, x.shape) def test_linear_with_diff_feature_size_preserves_shape(self): """ Test that FbgemmFp8Linear generates the correct shape when in_features != out_features. """ from transformers.integrations import FbgemmFp8Linear with init_empty_weights(include_buffers=True): linear = FbgemmFp8Linear(1024, 2048, True) x = torch.rand((17, 23, 1024)) x_ = linear(x) self.assertEqual(x_.shape, (17, 23, 2048))
transformers/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py/0
{ "file_path": "transformers/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py", "repo_id": "transformers", "token_count": 5002 }
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import os import sys import unittest git_repo_path = os.path.abspath(os.path.dirname(os.path.dirname(os.path.dirname(__file__)))) sys.path.append(os.path.join(git_repo_path, "utils")) from check_docstrings import get_default_description, replace_default_in_arg_description # noqa: E402 class CheckDocstringsTested(unittest.TestCase): def test_replace_default_in_arg_description(self): # Standard docstring with default. desc_with_default = "`float`, *optional*, defaults to 2.0" self.assertEqual( replace_default_in_arg_description(desc_with_default, 2.0), "`float`, *optional*, defaults to 2.0" ) self.assertEqual( replace_default_in_arg_description(desc_with_default, 1.0), "`float`, *optional*, defaults to 1.0" ) self.assertEqual(replace_default_in_arg_description(desc_with_default, inspect._empty), "`float`") # Standard docstring with default but optional is not using the stars. desc_with_default_typo = "`float`, `optional`, defaults to 2.0" self.assertEqual( replace_default_in_arg_description(desc_with_default_typo, 2.0), "`float`, *optional*, defaults to 2.0" ) self.assertEqual( replace_default_in_arg_description(desc_with_default_typo, 1.0), "`float`, *optional*, defaults to 1.0" ) # If the default is None we do not erase the value in the docstring. self.assertEqual( replace_default_in_arg_description(desc_with_default, None), "`float`, *optional*, defaults to 2.0" ) # If the default is None (and set as such in the docstring), we do not include it. desc_with_default = "`float`, *optional*, defaults to None" self.assertEqual(replace_default_in_arg_description(desc_with_default, None), "`float`, *optional*") desc_with_default = "`float`, *optional*, defaults to `None`" self.assertEqual(replace_default_in_arg_description(desc_with_default, None), "`float`, *optional*") # Operations are not replaced, but put in backticks. 
desc_with_default = "`float`, *optional*, defaults to 1/255" self.assertEqual( replace_default_in_arg_description(desc_with_default, 1 / 255), "`float`, *optional*, defaults to `1/255`" ) desc_with_default = "`float`, *optional*, defaults to `1/255`" self.assertEqual( replace_default_in_arg_description(desc_with_default, 1 / 255), "`float`, *optional*, defaults to `1/255`" ) desc_with_optional = "`float`, *optional*" self.assertEqual( replace_default_in_arg_description(desc_with_optional, 2.0), "`float`, *optional*, defaults to 2.0" ) self.assertEqual( replace_default_in_arg_description(desc_with_optional, 1.0), "`float`, *optional*, defaults to 1.0" ) self.assertEqual(replace_default_in_arg_description(desc_with_optional, None), "`float`, *optional*") self.assertEqual(replace_default_in_arg_description(desc_with_optional, inspect._empty), "`float`") desc_with_no_optional = "`float`" self.assertEqual( replace_default_in_arg_description(desc_with_no_optional, 2.0), "`float`, *optional*, defaults to 2.0" ) self.assertEqual( replace_default_in_arg_description(desc_with_no_optional, 1.0), "`float`, *optional*, defaults to 1.0" ) self.assertEqual(replace_default_in_arg_description(desc_with_no_optional, None), "`float`, *optional*") self.assertEqual(replace_default_in_arg_description(desc_with_no_optional, inspect._empty), "`float`") def test_get_default_description(self): # Fake function to have arguments to test. def _fake_function(a, b: int, c=1, d: float = 2.0, e: str = "blob"): pass params = inspect.signature(_fake_function).parameters assert get_default_description(params["a"]) == "`<fill_type>`" assert get_default_description(params["b"]) == "`int`" assert get_default_description(params["c"]) == "`<fill_type>`, *optional*, defaults to 1" assert get_default_description(params["d"]) == "`float`, *optional*, defaults to 2.0" assert get_default_description(params["e"]) == '`str`, *optional*, defaults to `"blob"`'
transformers/tests/repo_utils/test_check_docstrings.py/0
{ "file_path": "transformers/tests/repo_utils/test_check_docstrings.py", "repo_id": "transformers", "token_count": 1935 }
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import inspect import tempfile from transformers.testing_utils import require_torch, torch_device from transformers.utils.backbone_utils import BackboneType @require_torch class BackboneTesterMixin: all_model_classes = () has_attentions = True def test_config(self): config_class = self.config_class # test default config config = config_class() self.assertIsNotNone(config) num_stages = len(config.depths) if hasattr(config, "depths") else config.num_hidden_layers expected_stage_names = ["stem"] + [f"stage{idx}" for idx in range(1, num_stages + 1)] self.assertEqual(config.stage_names, expected_stage_names) self.assertTrue(set(config.out_features).issubset(set(config.stage_names))) # Test out_features and out_indices are correctly set # out_features and out_indices both None config = config_class(out_features=None, out_indices=None) self.assertEqual(config.out_features, [config.stage_names[-1]]) self.assertEqual(config.out_indices, [len(config.stage_names) - 1]) # out_features and out_indices both set config = config_class(out_features=["stem", "stage1"], out_indices=[0, 1]) self.assertEqual(config.out_features, ["stem", "stage1"]) self.assertEqual(config.out_indices, [0, 1]) # Only out_features set config = config_class(out_features=["stage1", "stage3"]) self.assertEqual(config.out_features, ["stage1", "stage3"]) self.assertEqual(config.out_indices, [1, 3]) # Only out_indices set config = config_class(out_indices=[0, 2]) self.assertEqual(config.out_features, [config.stage_names[0], config.stage_names[2]]) self.assertEqual(config.out_indices, [0, 2]) # Error raised when out_indices do not correspond to out_features with self.assertRaises(ValueError): config = config_class(out_features=["stage1", "stage2"], out_indices=[0, 2]) def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = ["pixel_values"] self.assertListEqual(arg_names[:1], expected_arg_names) def test_config_save_pretrained(self): config_class = self.config_class config_first = config_class(out_indices=[0, 1, 2, 3]) with tempfile.TemporaryDirectory() as tmpdirname: config_first.save_pretrained(tmpdirname) config_second = self.config_class.from_pretrained(tmpdirname) self.assertEqual(config_second.to_dict(), config_first.to_dict()) def test_channels(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) self.assertEqual(len(model.channels), len(config.out_features)) num_features = model.num_features out_indices = [config.stage_names.index(feat) for feat in config.out_features] out_channels = [num_features[idx] for idx in 
out_indices] self.assertListEqual(model.channels, out_channels) new_config = copy.deepcopy(config) new_config.out_features = None model = model_class(new_config) self.assertEqual(len(model.channels), 1) self.assertListEqual(model.channels, [num_features[-1]]) new_config = copy.deepcopy(config) new_config.out_indices = None model = model_class(new_config) self.assertEqual(len(model.channels), 1) self.assertListEqual(model.channels, [num_features[-1]]) def test_create_from_modified_config(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() result = model(**inputs_dict) self.assertEqual(len(result.feature_maps), len(config.out_features)) self.assertEqual(len(model.channels), len(config.out_features)) self.assertEqual(len(result.feature_maps), len(config.out_indices)) self.assertEqual(len(model.channels), len(config.out_indices)) # Check output of last stage is taken if out_features=None, out_indices=None modified_config = copy.deepcopy(config) modified_config.out_features = None model = model_class(modified_config) model.to(torch_device) model.eval() result = model(**inputs_dict) self.assertEqual(len(result.feature_maps), 1) self.assertEqual(len(model.channels), 1) modified_config = copy.deepcopy(config) modified_config.out_indices = None model = model_class(modified_config) model.to(torch_device) model.eval() result = model(**inputs_dict) self.assertEqual(len(result.feature_maps), 1) self.assertEqual(len(model.channels), 1) # Check backbone can be initialized with fresh weights modified_config = copy.deepcopy(config) modified_config.use_pretrained_backbone = False model = model_class(modified_config) model.to(torch_device) model.eval() result = model(**inputs_dict) def test_backbone_common_attributes(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for backbone_class in self.all_model_classes: backbone = backbone_class(config) self.assertTrue(hasattr(backbone, "backbone_type")) self.assertTrue(hasattr(backbone, "stage_names")) self.assertTrue(hasattr(backbone, "num_features")) self.assertTrue(hasattr(backbone, "out_indices")) self.assertTrue(hasattr(backbone, "out_features")) self.assertTrue(hasattr(backbone, "out_feature_channels")) self.assertTrue(hasattr(backbone, "channels")) self.assertIsInstance(backbone.backbone_type, BackboneType) # Verify num_features has been initialized in the backbone init self.assertIsNotNone(backbone.num_features) self.assertTrue(len(backbone.channels) == len(backbone.out_indices)) self.assertTrue(len(backbone.stage_names) == len(backbone.num_features)) self.assertTrue(len(backbone.channels) <= len(backbone.num_features)) self.assertTrue(len(backbone.out_feature_channels) == len(backbone.stage_names)) def test_backbone_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() batch_size = inputs_dict["pixel_values"].shape[0] for backbone_class in self.all_model_classes: backbone = backbone_class(config) backbone.to(torch_device) backbone.eval() outputs = backbone(**inputs_dict) # Test default outputs and verify feature maps self.assertIsInstance(outputs.feature_maps, tuple) self.assertTrue(len(outputs.feature_maps) == len(backbone.channels)) for feature_map, n_channels in zip(outputs.feature_maps, backbone.channels): self.assertTrue(feature_map.shape[:2], (batch_size, n_channels)) self.assertIsNone(outputs.hidden_states) self.assertIsNone(outputs.attentions) # 
Test output_hidden_states=True outputs = backbone(**inputs_dict, output_hidden_states=True) self.assertIsNotNone(outputs.hidden_states) self.assertTrue(len(outputs.hidden_states), len(backbone.stage_names)) for hidden_state, n_channels in zip(outputs.hidden_states, backbone.channels): self.assertTrue(hidden_state.shape[:2], (batch_size, n_channels)) # Test output_attentions=True if self.has_attentions: outputs = backbone(**inputs_dict, output_attentions=True) self.assertIsNotNone(outputs.attentions) def test_backbone_stage_selection(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() batch_size = inputs_dict["pixel_values"].shape[0] for backbone_class in self.all_model_classes: config.out_indices = [-2, -1] backbone = backbone_class(config) backbone.to(torch_device) backbone.eval() outputs = backbone(**inputs_dict) # Test number of feature maps returned self.assertIsInstance(outputs.feature_maps, tuple) self.assertTrue(len(outputs.feature_maps) == 2) # Order of channels returned is same as order of channels iterating over stage names channels_from_stage_names = [ backbone.out_feature_channels[name] for name in backbone.stage_names if name in backbone.out_features ] self.assertEqual(backbone.channels, channels_from_stage_names) for feature_map, n_channels in zip(outputs.feature_maps, backbone.channels): self.assertTrue(feature_map.shape[:2], (batch_size, n_channels))
transformers/tests/test_backbone_common.py/0
{ "file_path": "transformers/tests/test_backbone_common.py", "repo_id": "transformers", "token_count": 4375 }
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import re import tempfile import unittest from pathlib import Path import transformers from transformers.commands.add_new_model_like import ( ModelPatterns, _re_class_func, add_content_to_file, add_content_to_text, clean_frameworks_in_init, duplicate_doc_file, duplicate_module, filter_framework_files, find_base_model_checkpoint, get_model_files, get_module_from_file, parse_module_content, replace_model_patterns, retrieve_info_for_model, retrieve_model_classes, simplify_replacements, ) from transformers.testing_utils import require_flax, require_tf, require_torch BERT_MODEL_FILES = { "src/transformers/models/bert/__init__.py", "src/transformers/models/bert/configuration_bert.py", "src/transformers/models/bert/tokenization_bert.py", "src/transformers/models/bert/tokenization_bert_fast.py", "src/transformers/models/bert/tokenization_bert_tf.py", "src/transformers/models/bert/modeling_bert.py", "src/transformers/models/bert/modeling_flax_bert.py", "src/transformers/models/bert/modeling_tf_bert.py", "src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py", "src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py", "src/transformers/models/bert/convert_bert_pytorch_checkpoint_to_original_tf.py", "src/transformers/models/bert/convert_bert_token_dropping_original_tf2_checkpoint_to_pytorch.py", } VIT_MODEL_FILES = { "src/transformers/models/vit/__init__.py", "src/transformers/models/vit/configuration_vit.py", "src/transformers/models/vit/convert_dino_to_pytorch.py", "src/transformers/models/vit/convert_vit_timm_to_pytorch.py", "src/transformers/models/vit/feature_extraction_vit.py", "src/transformers/models/vit/image_processing_vit.py", "src/transformers/models/vit/image_processing_vit_fast.py", "src/transformers/models/vit/modeling_vit.py", "src/transformers/models/vit/modeling_tf_vit.py", "src/transformers/models/vit/modeling_flax_vit.py", } WAV2VEC2_MODEL_FILES = { "src/transformers/models/wav2vec2/__init__.py", "src/transformers/models/wav2vec2/configuration_wav2vec2.py", "src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", "src/transformers/models/wav2vec2/convert_wav2vec2_original_s3prl_checkpoint_to_pytorch.py", "src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py", "src/transformers/models/wav2vec2/modeling_wav2vec2.py", "src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py", "src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py", "src/transformers/models/wav2vec2/processing_wav2vec2.py", "src/transformers/models/wav2vec2/tokenization_wav2vec2.py", } REPO_PATH = Path(transformers.__path__[0]).parent.parent @require_torch @require_tf @require_flax class TestAddNewModelLike(unittest.TestCase): def init_file(self, file_name, content): with open(file_name, "w", encoding="utf-8") as f: f.write(content) def check_result(self, file_name, expected_result): with open(file_name, "r", 
encoding="utf-8") as f: result = f.read() self.assertEqual(result, expected_result) def test_re_class_func(self): self.assertEqual(_re_class_func.search("def my_function(x, y):").groups()[0], "my_function") self.assertEqual(_re_class_func.search("class MyClass:").groups()[0], "MyClass") self.assertEqual(_re_class_func.search("class MyClass(SuperClass):").groups()[0], "MyClass") def test_model_patterns_defaults(self): model_patterns = ModelPatterns("GPT-New new", "huggingface/gpt-new-base") self.assertEqual(model_patterns.model_type, "gpt-new-new") self.assertEqual(model_patterns.model_lower_cased, "gpt_new_new") self.assertEqual(model_patterns.model_camel_cased, "GPTNewNew") self.assertEqual(model_patterns.model_upper_cased, "GPT_NEW_NEW") self.assertEqual(model_patterns.config_class, "GPTNewNewConfig") self.assertIsNone(model_patterns.tokenizer_class) self.assertIsNone(model_patterns.feature_extractor_class) self.assertIsNone(model_patterns.processor_class) def test_parse_module_content(self): test_code = """SOME_CONSTANT = a constant CONSTANT_DEFINED_ON_SEVERAL_LINES = [ first_item, second_item ] def function(args): some code # Copied from transformers.some_module class SomeClass: some code """ expected_parts = [ "SOME_CONSTANT = a constant\n", "CONSTANT_DEFINED_ON_SEVERAL_LINES = [\n first_item,\n second_item\n]", "", "def function(args):\n some code\n", "# Copied from transformers.some_module\nclass SomeClass:\n some code\n", ] self.assertEqual(parse_module_content(test_code), expected_parts) def test_add_content_to_text(self): test_text = """all_configs = { "gpt": "GPTConfig", "bert": "BertConfig", "t5": "T5Config", }""" expected = """all_configs = { "gpt": "GPTConfig", "gpt2": "GPT2Config", "bert": "BertConfig", "t5": "T5Config", }""" line = ' "gpt2": "GPT2Config",' self.assertEqual(add_content_to_text(test_text, line, add_before="bert"), expected) self.assertEqual(add_content_to_text(test_text, line, add_before="bert", exact_match=True), test_text) self.assertEqual( add_content_to_text(test_text, line, add_before=' "bert": "BertConfig",', exact_match=True), expected ) self.assertEqual(add_content_to_text(test_text, line, add_before=re.compile(r'^\s*"bert":')), expected) self.assertEqual(add_content_to_text(test_text, line, add_after="gpt"), expected) self.assertEqual(add_content_to_text(test_text, line, add_after="gpt", exact_match=True), test_text) self.assertEqual( add_content_to_text(test_text, line, add_after=' "gpt": "GPTConfig",', exact_match=True), expected ) self.assertEqual(add_content_to_text(test_text, line, add_after=re.compile(r'^\s*"gpt":')), expected) def test_add_content_to_file(self): test_text = """all_configs = { "gpt": "GPTConfig", "bert": "BertConfig", "t5": "T5Config", }""" expected = """all_configs = { "gpt": "GPTConfig", "gpt2": "GPT2Config", "bert": "BertConfig", "t5": "T5Config", }""" line = ' "gpt2": "GPT2Config",' with tempfile.TemporaryDirectory() as tmp_dir: file_name = os.path.join(tmp_dir, "code.py") self.init_file(file_name, test_text) add_content_to_file(file_name, line, add_before="bert") self.check_result(file_name, expected) self.init_file(file_name, test_text) add_content_to_file(file_name, line, add_before="bert", exact_match=True) self.check_result(file_name, test_text) self.init_file(file_name, test_text) add_content_to_file(file_name, line, add_before=' "bert": "BertConfig",', exact_match=True) self.check_result(file_name, expected) self.init_file(file_name, test_text) add_content_to_file(file_name, line, 
add_before=re.compile(r'^\s*"bert":')) self.check_result(file_name, expected) self.init_file(file_name, test_text) add_content_to_file(file_name, line, add_after="gpt") self.check_result(file_name, expected) self.init_file(file_name, test_text) add_content_to_file(file_name, line, add_after="gpt", exact_match=True) self.check_result(file_name, test_text) self.init_file(file_name, test_text) add_content_to_file(file_name, line, add_after=' "gpt": "GPTConfig",', exact_match=True) self.check_result(file_name, expected) self.init_file(file_name, test_text) add_content_to_file(file_name, line, add_after=re.compile(r'^\s*"gpt":')) self.check_result(file_name, expected) def test_simplify_replacements(self): self.assertEqual(simplify_replacements([("Bert", "NewBert")]), [("Bert", "NewBert")]) self.assertEqual( simplify_replacements([("Bert", "NewBert"), ("bert", "new-bert")]), [("Bert", "NewBert"), ("bert", "new-bert")], ) self.assertEqual( simplify_replacements([("BertConfig", "NewBertConfig"), ("Bert", "NewBert"), ("bert", "new-bert")]), [("Bert", "NewBert"), ("bert", "new-bert")], ) def test_replace_model_patterns(self): bert_model_patterns = ModelPatterns("Bert", "google-bert/bert-base-cased") new_bert_model_patterns = ModelPatterns("New Bert", "huggingface/bert-new-base") bert_test = '''class TFBertPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = BertConfig load_tf_weights = load_tf_weights_in_bert base_model_prefix = "bert" is_parallelizable = True supports_gradient_checkpointing = True model_type = "bert" BERT_CONSTANT = "value" ''' bert_expected = '''class TFNewBertPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = NewBertConfig load_tf_weights = load_tf_weights_in_new_bert base_model_prefix = "new_bert" is_parallelizable = True supports_gradient_checkpointing = True model_type = "new-bert" NEW_BERT_CONSTANT = "value" ''' bert_converted, replacements = replace_model_patterns(bert_test, bert_model_patterns, new_bert_model_patterns) self.assertEqual(bert_converted, bert_expected) # Replacements are empty here since bert as been replaced by bert_new in some instances and bert-new # in others. self.assertEqual(replacements, "") # If we remove the model type, we will get replacements bert_test = bert_test.replace(' model_type = "bert"\n', "") bert_expected = bert_expected.replace(' model_type = "new-bert"\n', "") bert_converted, replacements = replace_model_patterns(bert_test, bert_model_patterns, new_bert_model_patterns) self.assertEqual(bert_converted, bert_expected) self.assertEqual(replacements, "BERT->NEW_BERT,Bert->NewBert,bert->new_bert") gpt_model_patterns = ModelPatterns("GPT2", "gpt2") new_gpt_model_patterns = ModelPatterns("GPT-New new", "huggingface/gpt-new-base") gpt_test = '''class GPT2PreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = GPT2Config load_tf_weights = load_tf_weights_in_gpt2 base_model_prefix = "transformer" is_parallelizable = True supports_gradient_checkpointing = True GPT2_CONSTANT = "value" ''' gpt_expected = '''class GPTNewNewPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. 
""" config_class = GPTNewNewConfig load_tf_weights = load_tf_weights_in_gpt_new_new base_model_prefix = "transformer" is_parallelizable = True supports_gradient_checkpointing = True GPT_NEW_NEW_CONSTANT = "value" ''' gpt_converted, replacements = replace_model_patterns(gpt_test, gpt_model_patterns, new_gpt_model_patterns) self.assertEqual(gpt_converted, gpt_expected) # Replacements are empty here since GPT2 as been replaced by GPTNewNew in some instances and GPT_NEW_NEW # in others. self.assertEqual(replacements, "") roberta_model_patterns = ModelPatterns("RoBERTa", "FacebookAI/roberta-base", model_camel_cased="Roberta") new_roberta_model_patterns = ModelPatterns( "RoBERTa-New", "huggingface/roberta-new-base", model_camel_cased="RobertaNew" ) roberta_test = '''# Copied from transformers.models.bert.BertModel with Bert->Roberta class RobertaModel(RobertaPreTrainedModel): """ The base RoBERTa model. """ checkpoint = FacebookAI/roberta-base base_model_prefix = "roberta" ''' roberta_expected = '''# Copied from transformers.models.bert.BertModel with Bert->RobertaNew class RobertaNewModel(RobertaNewPreTrainedModel): """ The base RoBERTa-New model. """ checkpoint = huggingface/roberta-new-base base_model_prefix = "roberta_new" ''' roberta_converted, replacements = replace_model_patterns( roberta_test, roberta_model_patterns, new_roberta_model_patterns ) self.assertEqual(roberta_converted, roberta_expected) def test_get_module_from_file(self): self.assertEqual( get_module_from_file("/git/transformers/src/transformers/models/bert/modeling_tf_bert.py"), "transformers.models.bert.modeling_tf_bert", ) self.assertEqual( get_module_from_file("/transformers/models/gpt2/modeling_gpt2.py"), "transformers.models.gpt2.modeling_gpt2", ) with self.assertRaises(ValueError): get_module_from_file("/models/gpt2/modeling_gpt2.py") def test_duplicate_module(self): bert_model_patterns = ModelPatterns("Bert", "google-bert/bert-base-cased") new_bert_model_patterns = ModelPatterns("New Bert", "huggingface/bert-new-base") bert_test = '''class TFBertPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = BertConfig load_tf_weights = load_tf_weights_in_bert base_model_prefix = "bert" is_parallelizable = True supports_gradient_checkpointing = True BERT_CONSTANT = "value" ''' bert_expected = '''class TFNewBertPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. 
""" config_class = NewBertConfig load_tf_weights = load_tf_weights_in_new_bert base_model_prefix = "new_bert" is_parallelizable = True supports_gradient_checkpointing = True NEW_BERT_CONSTANT = "value" ''' bert_expected_with_copied_from = ( "# Copied from transformers.bert_module.TFBertPreTrainedModel with Bert->NewBert,bert->new_bert\n" + bert_expected ) with tempfile.TemporaryDirectory() as tmp_dir: work_dir = os.path.join(tmp_dir, "transformers") os.makedirs(work_dir) file_name = os.path.join(work_dir, "bert_module.py") dest_file_name = os.path.join(work_dir, "new_bert_module.py") self.init_file(file_name, bert_test) duplicate_module(file_name, bert_model_patterns, new_bert_model_patterns) self.check_result(dest_file_name, bert_expected_with_copied_from) self.init_file(file_name, bert_test) duplicate_module(file_name, bert_model_patterns, new_bert_model_patterns, add_copied_from=False) self.check_result(dest_file_name, bert_expected) def test_duplicate_module_with_copied_from(self): bert_model_patterns = ModelPatterns("Bert", "google-bert/bert-base-cased") new_bert_model_patterns = ModelPatterns("New Bert", "huggingface/bert-new-base") bert_test = '''# Copied from transformers.models.xxx.XxxModel with Xxx->Bert class TFBertPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = BertConfig load_tf_weights = load_tf_weights_in_bert base_model_prefix = "bert" is_parallelizable = True supports_gradient_checkpointing = True BERT_CONSTANT = "value" ''' bert_expected = '''# Copied from transformers.models.xxx.XxxModel with Xxx->NewBert class TFNewBertPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = NewBertConfig load_tf_weights = load_tf_weights_in_new_bert base_model_prefix = "new_bert" is_parallelizable = True supports_gradient_checkpointing = True NEW_BERT_CONSTANT = "value" ''' with tempfile.TemporaryDirectory() as tmp_dir: work_dir = os.path.join(tmp_dir, "transformers") os.makedirs(work_dir) file_name = os.path.join(work_dir, "bert_module.py") dest_file_name = os.path.join(work_dir, "new_bert_module.py") self.init_file(file_name, bert_test) duplicate_module(file_name, bert_model_patterns, new_bert_model_patterns) # There should not be a new Copied from statement, the old one should be adapated. 
self.check_result(dest_file_name, bert_expected) self.init_file(file_name, bert_test) duplicate_module(file_name, bert_model_patterns, new_bert_model_patterns, add_copied_from=False) self.check_result(dest_file_name, bert_expected) def test_filter_framework_files(self): files = ["modeling_bert.py", "modeling_tf_bert.py", "modeling_flax_bert.py", "configuration_bert.py"] self.assertEqual(filter_framework_files(files), files) self.assertEqual(set(filter_framework_files(files, ["pt", "tf", "flax"])), set(files)) self.assertEqual(set(filter_framework_files(files, ["pt"])), {"modeling_bert.py", "configuration_bert.py"}) self.assertEqual(set(filter_framework_files(files, ["tf"])), {"modeling_tf_bert.py", "configuration_bert.py"}) self.assertEqual( set(filter_framework_files(files, ["flax"])), {"modeling_flax_bert.py", "configuration_bert.py"} ) self.assertEqual( set(filter_framework_files(files, ["pt", "tf"])), {"modeling_tf_bert.py", "modeling_bert.py", "configuration_bert.py"}, ) self.assertEqual( set(filter_framework_files(files, ["tf", "flax"])), {"modeling_tf_bert.py", "modeling_flax_bert.py", "configuration_bert.py"}, ) self.assertEqual( set(filter_framework_files(files, ["pt", "flax"])), {"modeling_bert.py", "modeling_flax_bert.py", "configuration_bert.py"}, ) def test_get_model_files(self): # BERT bert_files = get_model_files("bert") doc_file = str(Path(bert_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/bert.md") model_files = {str(Path(f).relative_to(REPO_PATH)) for f in bert_files["model_files"]} self.assertEqual(model_files, BERT_MODEL_FILES) self.assertEqual(bert_files["module_name"], "bert") test_files = {str(Path(f).relative_to(REPO_PATH)) for f in bert_files["test_files"]} bert_test_files = { "tests/models/bert/test_tokenization_bert.py", "tests/models/bert/test_modeling_bert.py", "tests/models/bert/test_modeling_tf_bert.py", "tests/models/bert/test_modeling_flax_bert.py", } self.assertEqual(test_files, bert_test_files) # VIT vit_files = get_model_files("vit") doc_file = str(Path(vit_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/vit.md") model_files = {str(Path(f).relative_to(REPO_PATH)) for f in vit_files["model_files"]} self.assertEqual(model_files, VIT_MODEL_FILES) self.assertEqual(vit_files["module_name"], "vit") test_files = {str(Path(f).relative_to(REPO_PATH)) for f in vit_files["test_files"]} vit_test_files = { "tests/models/vit/test_image_processing_vit.py", "tests/models/vit/test_modeling_vit.py", "tests/models/vit/test_modeling_tf_vit.py", "tests/models/vit/test_modeling_flax_vit.py", } self.assertEqual(test_files, vit_test_files) # Wav2Vec2 wav2vec2_files = get_model_files("wav2vec2") doc_file = str(Path(wav2vec2_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/wav2vec2.md") model_files = {str(Path(f).relative_to(REPO_PATH)) for f in wav2vec2_files["model_files"]} self.assertEqual(model_files, WAV2VEC2_MODEL_FILES) self.assertEqual(wav2vec2_files["module_name"], "wav2vec2") test_files = {str(Path(f).relative_to(REPO_PATH)) for f in wav2vec2_files["test_files"]} wav2vec2_test_files = { "tests/models/wav2vec2/test_feature_extraction_wav2vec2.py", "tests/models/wav2vec2/test_modeling_wav2vec2.py", "tests/models/wav2vec2/test_modeling_tf_wav2vec2.py", "tests/models/wav2vec2/test_modeling_flax_wav2vec2.py", "tests/models/wav2vec2/test_processor_wav2vec2.py", "tests/models/wav2vec2/test_tokenization_wav2vec2.py", } 
self.assertEqual(test_files, wav2vec2_test_files) def test_get_model_files_only_pt(self): # BERT bert_files = get_model_files("bert", frameworks=["pt"]) doc_file = str(Path(bert_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/bert.md") model_files = {str(Path(f).relative_to(REPO_PATH)) for f in bert_files["model_files"]} bert_model_files = BERT_MODEL_FILES - { "src/transformers/models/bert/modeling_tf_bert.py", "src/transformers/models/bert/modeling_flax_bert.py", } self.assertEqual(model_files, bert_model_files) self.assertEqual(bert_files["module_name"], "bert") test_files = {str(Path(f).relative_to(REPO_PATH)) for f in bert_files["test_files"]} bert_test_files = { "tests/models/bert/test_tokenization_bert.py", "tests/models/bert/test_modeling_bert.py", } self.assertEqual(test_files, bert_test_files) # VIT vit_files = get_model_files("vit", frameworks=["pt"]) doc_file = str(Path(vit_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/vit.md") model_files = {str(Path(f).relative_to(REPO_PATH)) for f in vit_files["model_files"]} vit_model_files = VIT_MODEL_FILES - { "src/transformers/models/vit/modeling_tf_vit.py", "src/transformers/models/vit/modeling_flax_vit.py", } self.assertEqual(model_files, vit_model_files) self.assertEqual(vit_files["module_name"], "vit") test_files = {str(Path(f).relative_to(REPO_PATH)) for f in vit_files["test_files"]} vit_test_files = { "tests/models/vit/test_image_processing_vit.py", "tests/models/vit/test_modeling_vit.py", } self.assertEqual(test_files, vit_test_files) # Wav2Vec2 wav2vec2_files = get_model_files("wav2vec2", frameworks=["pt"]) doc_file = str(Path(wav2vec2_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/wav2vec2.md") model_files = {str(Path(f).relative_to(REPO_PATH)) for f in wav2vec2_files["model_files"]} wav2vec2_model_files = WAV2VEC2_MODEL_FILES - { "src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py", "src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py", } self.assertEqual(model_files, wav2vec2_model_files) self.assertEqual(wav2vec2_files["module_name"], "wav2vec2") test_files = {str(Path(f).relative_to(REPO_PATH)) for f in wav2vec2_files["test_files"]} wav2vec2_test_files = { "tests/models/wav2vec2/test_feature_extraction_wav2vec2.py", "tests/models/wav2vec2/test_modeling_wav2vec2.py", "tests/models/wav2vec2/test_processor_wav2vec2.py", "tests/models/wav2vec2/test_tokenization_wav2vec2.py", } self.assertEqual(test_files, wav2vec2_test_files) def test_get_model_files_tf_and_flax(self): # BERT bert_files = get_model_files("bert", frameworks=["tf", "flax"]) doc_file = str(Path(bert_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/bert.md") model_files = {str(Path(f).relative_to(REPO_PATH)) for f in bert_files["model_files"]} bert_model_files = BERT_MODEL_FILES - {"src/transformers/models/bert/modeling_bert.py"} self.assertEqual(model_files, bert_model_files) self.assertEqual(bert_files["module_name"], "bert") test_files = {str(Path(f).relative_to(REPO_PATH)) for f in bert_files["test_files"]} bert_test_files = { "tests/models/bert/test_tokenization_bert.py", "tests/models/bert/test_modeling_tf_bert.py", "tests/models/bert/test_modeling_flax_bert.py", } self.assertEqual(test_files, bert_test_files) # VIT vit_files = get_model_files("vit", frameworks=["tf", "flax"]) doc_file = str(Path(vit_files["doc_file"]).relative_to(REPO_PATH)) 
self.assertEqual(doc_file, "docs/source/en/model_doc/vit.md") model_files = {str(Path(f).relative_to(REPO_PATH)) for f in vit_files["model_files"]} vit_model_files = VIT_MODEL_FILES - {"src/transformers/models/vit/modeling_vit.py"} self.assertEqual(model_files, vit_model_files) self.assertEqual(vit_files["module_name"], "vit") test_files = {str(Path(f).relative_to(REPO_PATH)) for f in vit_files["test_files"]} vit_test_files = { "tests/models/vit/test_image_processing_vit.py", "tests/models/vit/test_modeling_tf_vit.py", "tests/models/vit/test_modeling_flax_vit.py", } self.assertEqual(test_files, vit_test_files) # Wav2Vec2 wav2vec2_files = get_model_files("wav2vec2", frameworks=["tf", "flax"]) doc_file = str(Path(wav2vec2_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/wav2vec2.md") model_files = {str(Path(f).relative_to(REPO_PATH)) for f in wav2vec2_files["model_files"]} wav2vec2_model_files = WAV2VEC2_MODEL_FILES - {"src/transformers/models/wav2vec2/modeling_wav2vec2.py"} self.assertEqual(model_files, wav2vec2_model_files) self.assertEqual(wav2vec2_files["module_name"], "wav2vec2") test_files = {str(Path(f).relative_to(REPO_PATH)) for f in wav2vec2_files["test_files"]} wav2vec2_test_files = { "tests/models/wav2vec2/test_feature_extraction_wav2vec2.py", "tests/models/wav2vec2/test_modeling_tf_wav2vec2.py", "tests/models/wav2vec2/test_modeling_flax_wav2vec2.py", "tests/models/wav2vec2/test_processor_wav2vec2.py", "tests/models/wav2vec2/test_tokenization_wav2vec2.py", } self.assertEqual(test_files, wav2vec2_test_files) def test_find_base_model_checkpoint(self): self.assertEqual(find_base_model_checkpoint("bert"), "google-bert/bert-base-uncased") self.assertEqual(find_base_model_checkpoint("gpt2"), "openai-community/gpt2") def test_retrieve_model_classes(self): gpt_classes = {k: set(v) for k, v in retrieve_model_classes("gpt2").items()} expected_gpt_classes = { "pt": { "GPT2ForTokenClassification", "GPT2Model", "GPT2LMHeadModel", "GPT2ForSequenceClassification", "GPT2ForQuestionAnswering", }, "tf": {"TFGPT2Model", "TFGPT2ForSequenceClassification", "TFGPT2LMHeadModel"}, "flax": {"FlaxGPT2Model", "FlaxGPT2LMHeadModel"}, } self.assertEqual(gpt_classes, expected_gpt_classes) del expected_gpt_classes["flax"] gpt_classes = {k: set(v) for k, v in retrieve_model_classes("gpt2", frameworks=["pt", "tf"]).items()} self.assertEqual(gpt_classes, expected_gpt_classes) del expected_gpt_classes["pt"] gpt_classes = {k: set(v) for k, v in retrieve_model_classes("gpt2", frameworks=["tf"]).items()} self.assertEqual(gpt_classes, expected_gpt_classes) def test_retrieve_info_for_model_with_bert(self): bert_info = retrieve_info_for_model("bert") bert_classes = [ "BertForTokenClassification", "BertForQuestionAnswering", "BertForNextSentencePrediction", "BertForSequenceClassification", "BertForMaskedLM", "BertForMultipleChoice", "BertModel", "BertForPreTraining", "BertLMHeadModel", ] expected_model_classes = { "pt": set(bert_classes), "tf": {f"TF{m}" for m in bert_classes}, "flax": {f"Flax{m}" for m in bert_classes[:-1] + ["BertForCausalLM"]}, } self.assertEqual(set(bert_info["frameworks"]), {"pt", "tf", "flax"}) model_classes = {k: set(v) for k, v in bert_info["model_classes"].items()} self.assertEqual(model_classes, expected_model_classes) all_bert_files = bert_info["model_files"] model_files = {str(Path(f).relative_to(REPO_PATH)) for f in all_bert_files["model_files"]} self.assertEqual(model_files, BERT_MODEL_FILES) test_files = {str(Path(f).relative_to(REPO_PATH)) 
for f in all_bert_files["test_files"]} bert_test_files = { "tests/models/bert/test_tokenization_bert.py", "tests/models/bert/test_modeling_bert.py", "tests/models/bert/test_modeling_tf_bert.py", "tests/models/bert/test_modeling_flax_bert.py", } self.assertEqual(test_files, bert_test_files) doc_file = str(Path(all_bert_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/bert.md") self.assertEqual(all_bert_files["module_name"], "bert") bert_model_patterns = bert_info["model_patterns"] self.assertEqual(bert_model_patterns.model_name, "BERT") self.assertEqual(bert_model_patterns.checkpoint, "google-bert/bert-base-uncased") self.assertEqual(bert_model_patterns.model_type, "bert") self.assertEqual(bert_model_patterns.model_lower_cased, "bert") self.assertEqual(bert_model_patterns.model_camel_cased, "Bert") self.assertEqual(bert_model_patterns.model_upper_cased, "BERT") self.assertEqual(bert_model_patterns.config_class, "BertConfig") self.assertEqual(bert_model_patterns.tokenizer_class, "BertTokenizer") self.assertIsNone(bert_model_patterns.feature_extractor_class) self.assertIsNone(bert_model_patterns.processor_class) def test_retrieve_info_for_model_pt_tf_with_bert(self): bert_info = retrieve_info_for_model("bert", frameworks=["pt", "tf"]) bert_classes = [ "BertForTokenClassification", "BertForQuestionAnswering", "BertForNextSentencePrediction", "BertForSequenceClassification", "BertForMaskedLM", "BertForMultipleChoice", "BertModel", "BertForPreTraining", "BertLMHeadModel", ] expected_model_classes = {"pt": set(bert_classes), "tf": {f"TF{m}" for m in bert_classes}} self.assertEqual(set(bert_info["frameworks"]), {"pt", "tf"}) model_classes = {k: set(v) for k, v in bert_info["model_classes"].items()} self.assertEqual(model_classes, expected_model_classes) all_bert_files = bert_info["model_files"] model_files = {str(Path(f).relative_to(REPO_PATH)) for f in all_bert_files["model_files"]} bert_model_files = BERT_MODEL_FILES - {"src/transformers/models/bert/modeling_flax_bert.py"} self.assertEqual(model_files, bert_model_files) test_files = {str(Path(f).relative_to(REPO_PATH)) for f in all_bert_files["test_files"]} bert_test_files = { "tests/models/bert/test_tokenization_bert.py", "tests/models/bert/test_modeling_bert.py", "tests/models/bert/test_modeling_tf_bert.py", } self.assertEqual(test_files, bert_test_files) doc_file = str(Path(all_bert_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/bert.md") self.assertEqual(all_bert_files["module_name"], "bert") bert_model_patterns = bert_info["model_patterns"] self.assertEqual(bert_model_patterns.model_name, "BERT") self.assertEqual(bert_model_patterns.checkpoint, "google-bert/bert-base-uncased") self.assertEqual(bert_model_patterns.model_type, "bert") self.assertEqual(bert_model_patterns.model_lower_cased, "bert") self.assertEqual(bert_model_patterns.model_camel_cased, "Bert") self.assertEqual(bert_model_patterns.model_upper_cased, "BERT") self.assertEqual(bert_model_patterns.config_class, "BertConfig") self.assertEqual(bert_model_patterns.tokenizer_class, "BertTokenizer") self.assertIsNone(bert_model_patterns.feature_extractor_class) self.assertIsNone(bert_model_patterns.processor_class) def test_retrieve_info_for_model_with_vit(self): vit_info = retrieve_info_for_model("vit") vit_classes = ["ViTForImageClassification", "ViTModel"] pt_only_classes = ["ViTForMaskedImageModeling"] expected_model_classes = { "pt": set(vit_classes + pt_only_classes), "tf": {f"TF{m}" 
for m in vit_classes}, "flax": {f"Flax{m}" for m in vit_classes}, } self.assertEqual(set(vit_info["frameworks"]), {"pt", "tf", "flax"}) model_classes = {k: set(v) for k, v in vit_info["model_classes"].items()} self.assertEqual(model_classes, expected_model_classes) all_vit_files = vit_info["model_files"] model_files = {str(Path(f).relative_to(REPO_PATH)) for f in all_vit_files["model_files"]} self.assertEqual(model_files, VIT_MODEL_FILES) test_files = {str(Path(f).relative_to(REPO_PATH)) for f in all_vit_files["test_files"]} vit_test_files = { "tests/models/vit/test_image_processing_vit.py", "tests/models/vit/test_modeling_vit.py", "tests/models/vit/test_modeling_tf_vit.py", "tests/models/vit/test_modeling_flax_vit.py", } self.assertEqual(test_files, vit_test_files) doc_file = str(Path(all_vit_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/vit.md") self.assertEqual(all_vit_files["module_name"], "vit") vit_model_patterns = vit_info["model_patterns"] self.assertEqual(vit_model_patterns.model_name, "ViT") self.assertEqual(vit_model_patterns.checkpoint, "google/vit-base-patch16-224-in21k") self.assertEqual(vit_model_patterns.model_type, "vit") self.assertEqual(vit_model_patterns.model_lower_cased, "vit") self.assertEqual(vit_model_patterns.model_camel_cased, "ViT") self.assertEqual(vit_model_patterns.model_upper_cased, "VIT") self.assertEqual(vit_model_patterns.config_class, "ViTConfig") self.assertEqual(vit_model_patterns.feature_extractor_class, "ViTFeatureExtractor") self.assertEqual(vit_model_patterns.image_processor_class, "ViTImageProcessor") self.assertIsNone(vit_model_patterns.tokenizer_class) self.assertIsNone(vit_model_patterns.processor_class) def test_retrieve_info_for_model_with_wav2vec2(self): wav2vec2_info = retrieve_info_for_model("wav2vec2") wav2vec2_classes = [ "Wav2Vec2Model", "Wav2Vec2ForPreTraining", "Wav2Vec2ForAudioFrameClassification", "Wav2Vec2ForCTC", "Wav2Vec2ForMaskedLM", "Wav2Vec2ForSequenceClassification", "Wav2Vec2ForXVector", ] expected_model_classes = { "pt": set(wav2vec2_classes), "tf": {f"TF{m}" for m in [wav2vec2_classes[0], wav2vec2_classes[-2]]}, "flax": {f"Flax{m}" for m in wav2vec2_classes[:2]}, } self.assertEqual(set(wav2vec2_info["frameworks"]), {"pt", "tf", "flax"}) model_classes = {k: set(v) for k, v in wav2vec2_info["model_classes"].items()} self.assertEqual(model_classes, expected_model_classes) all_wav2vec2_files = wav2vec2_info["model_files"] model_files = {str(Path(f).relative_to(REPO_PATH)) for f in all_wav2vec2_files["model_files"]} self.assertEqual(model_files, WAV2VEC2_MODEL_FILES) test_files = {str(Path(f).relative_to(REPO_PATH)) for f in all_wav2vec2_files["test_files"]} wav2vec2_test_files = { "tests/models/wav2vec2/test_feature_extraction_wav2vec2.py", "tests/models/wav2vec2/test_modeling_wav2vec2.py", "tests/models/wav2vec2/test_modeling_tf_wav2vec2.py", "tests/models/wav2vec2/test_modeling_flax_wav2vec2.py", "tests/models/wav2vec2/test_processor_wav2vec2.py", "tests/models/wav2vec2/test_tokenization_wav2vec2.py", } self.assertEqual(test_files, wav2vec2_test_files) doc_file = str(Path(all_wav2vec2_files["doc_file"]).relative_to(REPO_PATH)) self.assertEqual(doc_file, "docs/source/en/model_doc/wav2vec2.md") self.assertEqual(all_wav2vec2_files["module_name"], "wav2vec2") wav2vec2_model_patterns = wav2vec2_info["model_patterns"] self.assertEqual(wav2vec2_model_patterns.model_name, "Wav2Vec2") self.assertEqual(wav2vec2_model_patterns.checkpoint, "facebook/wav2vec2-base-960h") 
self.assertEqual(wav2vec2_model_patterns.model_type, "wav2vec2") self.assertEqual(wav2vec2_model_patterns.model_lower_cased, "wav2vec2") self.assertEqual(wav2vec2_model_patterns.model_camel_cased, "Wav2Vec2") self.assertEqual(wav2vec2_model_patterns.model_upper_cased, "WAV2VEC2") self.assertEqual(wav2vec2_model_patterns.config_class, "Wav2Vec2Config") self.assertEqual(wav2vec2_model_patterns.feature_extractor_class, "Wav2Vec2FeatureExtractor") self.assertEqual(wav2vec2_model_patterns.processor_class, "Wav2Vec2Processor") self.assertEqual(wav2vec2_model_patterns.tokenizer_class, "Wav2Vec2CTCTokenizer") def test_clean_frameworks_in_init_with_gpt(self): test_init = """ from typing import TYPE_CHECKING from ...utils import _LazyModule, is_flax_available, is_tf_available, is_tokenizers_available, is_torch_available _import_structure = { "configuration_gpt2": ["GPT2Config", "GPT2OnnxConfig"], "tokenization_gpt2": ["GPT2Tokenizer"], } try: if not is_tokenizers_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["tokenization_gpt2_fast"] = ["GPT2TokenizerFast"] try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_gpt2"] = ["GPT2Model"] try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_tf_gpt2"] = ["TFGPT2Model"] try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_flax_gpt2"] = ["FlaxGPT2Model"] if TYPE_CHECKING: from .configuration_gpt2 import GPT2Config, GPT2OnnxConfig from .tokenization_gpt2 import GPT2Tokenizer try: if not is_tokenizers_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .tokenization_gpt2_fast import GPT2TokenizerFast try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_gpt2 import GPT2Model try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_tf_gpt2 import TFGPT2Model try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_flax_gpt2 import FlaxGPT2Model else: import sys sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure) """ init_no_tokenizer = """ from typing import TYPE_CHECKING from ...utils import _LazyModule, is_flax_available, is_tf_available, is_torch_available _import_structure = { "configuration_gpt2": ["GPT2Config", "GPT2OnnxConfig"], } try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_gpt2"] = ["GPT2Model"] try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_tf_gpt2"] = ["TFGPT2Model"] try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_flax_gpt2"] = ["FlaxGPT2Model"] if TYPE_CHECKING: from .configuration_gpt2 import GPT2Config, GPT2OnnxConfig try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_gpt2 
import GPT2Model try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_tf_gpt2 import TFGPT2Model try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_flax_gpt2 import FlaxGPT2Model else: import sys sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure) """ init_pt_only = """ from typing import TYPE_CHECKING from ...utils import _LazyModule, is_tokenizers_available, is_torch_available _import_structure = { "configuration_gpt2": ["GPT2Config", "GPT2OnnxConfig"], "tokenization_gpt2": ["GPT2Tokenizer"], } try: if not is_tokenizers_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["tokenization_gpt2_fast"] = ["GPT2TokenizerFast"] try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_gpt2"] = ["GPT2Model"] if TYPE_CHECKING: from .configuration_gpt2 import GPT2Config, GPT2OnnxConfig from .tokenization_gpt2 import GPT2Tokenizer try: if not is_tokenizers_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .tokenization_gpt2_fast import GPT2TokenizerFast try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_gpt2 import GPT2Model else: import sys sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure) """ init_pt_only_no_tokenizer = """ from typing import TYPE_CHECKING from ...utils import _LazyModule, is_torch_available _import_structure = { "configuration_gpt2": ["GPT2Config", "GPT2OnnxConfig"], } try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_gpt2"] = ["GPT2Model"] if TYPE_CHECKING: from .configuration_gpt2 import GPT2Config, GPT2OnnxConfig try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_gpt2 import GPT2Model else: import sys sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure) """ with tempfile.TemporaryDirectory() as tmp_dir: file_name = os.path.join(tmp_dir, "../__init__.py") self.init_file(file_name, test_init) clean_frameworks_in_init(file_name, keep_processing=False) self.check_result(file_name, init_no_tokenizer) self.init_file(file_name, test_init) clean_frameworks_in_init(file_name, frameworks=["pt"]) self.check_result(file_name, init_pt_only) self.init_file(file_name, test_init) clean_frameworks_in_init(file_name, frameworks=["pt"], keep_processing=False) self.check_result(file_name, init_pt_only_no_tokenizer) def test_clean_frameworks_in_init_with_vit(self): test_init = """ from typing import TYPE_CHECKING from ...utils import _LazyModule, is_flax_available, is_tf_available, is_torch_available, is_vision_available _import_structure = { "configuration_vit": ["ViTConfig"], } try: if not is_vision_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["image_processing_vit"] = ["ViTImageProcessor"] try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_vit"] = ["ViTModel"] try: if 
not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_tf_vit"] = ["TFViTModel"] try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_flax_vit"] = ["FlaxViTModel"] if TYPE_CHECKING: from .configuration_vit import ViTConfig try: if not is_vision_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .image_processing_vit import ViTImageProcessor try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_vit import ViTModel try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_tf_vit import TFViTModel try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_flax_vit import FlaxViTModel else: import sys sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure) """ init_no_feature_extractor = """ from typing import TYPE_CHECKING from ...utils import _LazyModule, is_flax_available, is_tf_available, is_torch_available _import_structure = { "configuration_vit": ["ViTConfig"], } try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_vit"] = ["ViTModel"] try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_tf_vit"] = ["TFViTModel"] try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_flax_vit"] = ["FlaxViTModel"] if TYPE_CHECKING: from .configuration_vit import ViTConfig try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_vit import ViTModel try: if not is_tf_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_tf_vit import TFViTModel try: if not is_flax_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_flax_vit import FlaxViTModel else: import sys sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure) """ init_pt_only = """ from typing import TYPE_CHECKING from ...utils import _LazyModule, is_torch_available, is_vision_available _import_structure = { "configuration_vit": ["ViTConfig"], } try: if not is_vision_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["image_processing_vit"] = ["ViTImageProcessor"] try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_vit"] = ["ViTModel"] if TYPE_CHECKING: from .configuration_vit import ViTConfig try: if not is_vision_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .image_processing_vit import ViTImageProcessor try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_vit import ViTModel else: import sys sys.modules[__name__] = 
_LazyModule(__name__, globals()["__file__"], _import_structure) """ init_pt_only_no_feature_extractor = """ from typing import TYPE_CHECKING from ...utils import _LazyModule, is_torch_available _import_structure = { "configuration_vit": ["ViTConfig"], } try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: _import_structure["modeling_vit"] = ["ViTModel"] if TYPE_CHECKING: from .configuration_vit import ViTConfig try: if not is_torch_available(): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: pass else: from .modeling_vit import ViTModel else: import sys sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure) """ with tempfile.TemporaryDirectory() as tmp_dir: file_name = os.path.join(tmp_dir, "../__init__.py") self.init_file(file_name, test_init) clean_frameworks_in_init(file_name, keep_processing=False) self.check_result(file_name, init_no_feature_extractor) self.init_file(file_name, test_init) clean_frameworks_in_init(file_name, frameworks=["pt"]) self.check_result(file_name, init_pt_only) self.init_file(file_name, test_init) clean_frameworks_in_init(file_name, frameworks=["pt"], keep_processing=False) self.check_result(file_name, init_pt_only_no_feature_extractor) def test_duplicate_doc_file(self): test_doc = """ # GPT2 ## Overview Overview of the model. ## GPT2Config [[autodoc]] GPT2Config ## GPT2Tokenizer [[autodoc]] GPT2Tokenizer - save_vocabulary ## GPT2TokenizerFast [[autodoc]] GPT2TokenizerFast ## GPT2 specific outputs [[autodoc]] models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput [[autodoc]] models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput ## GPT2Model [[autodoc]] GPT2Model - forward ## TFGPT2Model [[autodoc]] TFGPT2Model - call ## FlaxGPT2Model [[autodoc]] FlaxGPT2Model - __call__ """ test_new_doc = """ # GPT-New New ## Overview The GPT-New New model was proposed in [<INSERT PAPER NAME HERE>](<INSERT PAPER LINK HERE>) by <INSERT AUTHORS HERE>. <INSERT SHORT SUMMARY HERE> The abstract from the paper is the following: *<INSERT PAPER ABSTRACT HERE>* Tips: <INSERT TIPS ABOUT MODEL HERE> This model was contributed by [INSERT YOUR HF USERNAME HERE](https://huggingface.co/<INSERT YOUR HF USERNAME HERE>). The original code can be found [here](<INSERT LINK TO GITHUB REPO HERE>). 
## GPTNewNewConfig [[autodoc]] GPTNewNewConfig ## GPTNewNewTokenizer [[autodoc]] GPTNewNewTokenizer - save_vocabulary ## GPTNewNewTokenizerFast [[autodoc]] GPTNewNewTokenizerFast ## GPTNewNew specific outputs [[autodoc]] models.gpt_new_new.modeling_gpt_new_new.GPTNewNewDoubleHeadsModelOutput [[autodoc]] models.gpt_new_new.modeling_tf_gpt_new_new.TFGPTNewNewDoubleHeadsModelOutput ## GPTNewNewModel [[autodoc]] GPTNewNewModel - forward ## TFGPTNewNewModel [[autodoc]] TFGPTNewNewModel - call ## FlaxGPTNewNewModel [[autodoc]] FlaxGPTNewNewModel - __call__ """ with tempfile.TemporaryDirectory() as tmp_dir: doc_file = os.path.join(tmp_dir, "gpt2.md") new_doc_file = os.path.join(tmp_dir, "gpt-new-new.md") gpt2_model_patterns = ModelPatterns("GPT2", "gpt2", tokenizer_class="GPT2Tokenizer") new_model_patterns = ModelPatterns( "GPT-New New", "huggingface/gpt-new-new", tokenizer_class="GPTNewNewTokenizer" ) self.init_file(doc_file, test_doc) duplicate_doc_file(doc_file, gpt2_model_patterns, new_model_patterns) self.check_result(new_doc_file, test_new_doc) test_new_doc_pt_only = test_new_doc.replace( """ ## TFGPTNewNewModel [[autodoc]] TFGPTNewNewModel - call ## FlaxGPTNewNewModel [[autodoc]] FlaxGPTNewNewModel - __call__ """, "", ) self.init_file(doc_file, test_doc) duplicate_doc_file(doc_file, gpt2_model_patterns, new_model_patterns, frameworks=["pt"]) self.check_result(new_doc_file, test_new_doc_pt_only) test_new_doc_no_tok = test_new_doc.replace( """ ## GPTNewNewTokenizer [[autodoc]] GPTNewNewTokenizer - save_vocabulary ## GPTNewNewTokenizerFast [[autodoc]] GPTNewNewTokenizerFast """, "", ) new_model_patterns = ModelPatterns( "GPT-New New", "huggingface/gpt-new-new", tokenizer_class="GPT2Tokenizer" ) self.init_file(doc_file, test_doc) duplicate_doc_file(doc_file, gpt2_model_patterns, new_model_patterns) print(test_new_doc_no_tok) self.check_result(new_doc_file, test_new_doc_no_tok) test_new_doc_pt_only_no_tok = test_new_doc_no_tok.replace( """ ## TFGPTNewNewModel [[autodoc]] TFGPTNewNewModel - call ## FlaxGPTNewNewModel [[autodoc]] FlaxGPTNewNewModel - __call__ """, "", ) self.init_file(doc_file, test_doc) duplicate_doc_file(doc_file, gpt2_model_patterns, new_model_patterns, frameworks=["pt"]) self.check_result(new_doc_file, test_new_doc_pt_only_no_tok)
transformers/tests/utils/test_add_new_model_like.py/0
{ "file_path": "transformers/tests/utils/test_add_new_model_like.py", "repo_id": "transformers", "token_count": 25145 }
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import sys import tempfile import unittest import unittest.mock as mock from pathlib import Path from huggingface_hub import HfFolder from requests.exceptions import HTTPError from transformers import AutoImageProcessor, ViTImageProcessor from transformers.image_processing_utils import get_size_dict from transformers.testing_utils import TOKEN, TemporaryHubRepo, get_tests_dir, is_staging_test sys.path.append(str(Path(__file__).parent.parent.parent / "utils")) from test_module.custom_image_processing import CustomImageProcessor # noqa E402 SAMPLE_IMAGE_PROCESSING_CONFIG_DIR = get_tests_dir("fixtures") class ImageProcessorUtilTester(unittest.TestCase): def test_cached_files_are_used_when_internet_is_down(self): # A mock response for an HTTP head request to emulate server down response_mock = mock.Mock() response_mock.status_code = 500 response_mock.headers = {} response_mock.raise_for_status.side_effect = HTTPError response_mock.json.return_value = {} # Download this model to make sure it's in the cache. _ = ViTImageProcessor.from_pretrained("hf-internal-testing/tiny-random-vit") # Under the mock environment we get a 500 error when trying to reach the model. with mock.patch("requests.Session.request", return_value=response_mock) as mock_head: _ = ViTImageProcessor.from_pretrained("hf-internal-testing/tiny-random-vit") # This check we did call the fake head request mock_head.assert_called() def test_image_processor_from_pretrained_subfolder(self): with self.assertRaises(OSError): # config is in subfolder, the following should not work without specifying the subfolder _ = AutoImageProcessor.from_pretrained("hf-internal-testing/stable-diffusion-all-variants") config = AutoImageProcessor.from_pretrained( "hf-internal-testing/stable-diffusion-all-variants", subfolder="feature_extractor" ) self.assertIsNotNone(config) @is_staging_test class ImageProcessorPushToHubTester(unittest.TestCase): @classmethod def setUpClass(cls): cls._token = TOKEN HfFolder.save_token(TOKEN) def test_push_to_hub(self): with TemporaryHubRepo(token=self._token) as tmp_repo: image_processor = ViTImageProcessor.from_pretrained(SAMPLE_IMAGE_PROCESSING_CONFIG_DIR) image_processor.push_to_hub(tmp_repo.repo_id, token=self._token) new_image_processor = ViTImageProcessor.from_pretrained(tmp_repo.repo_id) for k, v in image_processor.__dict__.items(): self.assertEqual(v, getattr(new_image_processor, k)) def test_push_to_hub_via_save_pretrained(self): with TemporaryHubRepo(token=self._token) as tmp_repo: image_processor = ViTImageProcessor.from_pretrained(SAMPLE_IMAGE_PROCESSING_CONFIG_DIR) # Push to hub via save_pretrained with tempfile.TemporaryDirectory() as tmp_dir: image_processor.save_pretrained(tmp_dir, repo_id=tmp_repo.repo_id, push_to_hub=True, token=self._token) new_image_processor = ViTImageProcessor.from_pretrained(tmp_repo.repo_id) for k, v in image_processor.__dict__.items(): self.assertEqual(v, getattr(new_image_processor, k)) def 
test_push_to_hub_in_organization(self): with TemporaryHubRepo(namespace="valid_org", token=self._token) as tmp_repo: image_processor = ViTImageProcessor.from_pretrained(SAMPLE_IMAGE_PROCESSING_CONFIG_DIR) image_processor.push_to_hub(tmp_repo.repo_id, token=self._token) new_image_processor = ViTImageProcessor.from_pretrained(tmp_repo.repo_id) for k, v in image_processor.__dict__.items(): self.assertEqual(v, getattr(new_image_processor, k)) def test_push_to_hub_in_organization_via_save_pretrained(self): with TemporaryHubRepo(namespace="valid_org", token=self._token) as tmp_repo: image_processor = ViTImageProcessor.from_pretrained(SAMPLE_IMAGE_PROCESSING_CONFIG_DIR) # Push to hub via save_pretrained with tempfile.TemporaryDirectory() as tmp_dir: image_processor.save_pretrained(tmp_dir, repo_id=tmp_repo.repo_id, push_to_hub=True, token=self._token) new_image_processor = ViTImageProcessor.from_pretrained(tmp_repo.repo_id) for k, v in image_processor.__dict__.items(): self.assertEqual(v, getattr(new_image_processor, k)) def test_push_to_hub_dynamic_image_processor(self): with TemporaryHubRepo(token=self._token) as tmp_repo: CustomImageProcessor.register_for_auto_class() image_processor = CustomImageProcessor.from_pretrained(SAMPLE_IMAGE_PROCESSING_CONFIG_DIR) image_processor.push_to_hub(tmp_repo.repo_id, token=self._token) # This has added the proper auto_map field to the config self.assertDictEqual( image_processor.auto_map, {"AutoImageProcessor": "custom_image_processing.CustomImageProcessor"}, ) new_image_processor = AutoImageProcessor.from_pretrained(tmp_repo.repo_id, trust_remote_code=True) # Can't make an isinstance check because the new_image_processor is from the CustomImageProcessor class of a dynamic module self.assertEqual(new_image_processor.__class__.__name__, "CustomImageProcessor") class ImageProcessingUtilsTester(unittest.TestCase): def test_get_size_dict(self): # Test a dict with the wrong keys raises an error inputs = {"wrong_key": 224} with self.assertRaises(ValueError): get_size_dict(inputs) inputs = {"height": 224} with self.assertRaises(ValueError): get_size_dict(inputs) inputs = {"width": 224, "shortest_edge": 224} with self.assertRaises(ValueError): get_size_dict(inputs) # Test a dict with the correct keys is returned as is inputs = {"height": 224, "width": 224} outputs = get_size_dict(inputs) self.assertEqual(outputs, inputs) inputs = {"shortest_edge": 224} outputs = get_size_dict(inputs) self.assertEqual(outputs, {"shortest_edge": 224}) inputs = {"longest_edge": 224, "shortest_edge": 224} outputs = get_size_dict(inputs) self.assertEqual(outputs, {"longest_edge": 224, "shortest_edge": 224}) # Test a single int value which represents (size, size) outputs = get_size_dict(224) self.assertEqual(outputs, {"height": 224, "width": 224}) # Test a single int value which represents the shortest edge outputs = get_size_dict(224, default_to_square=False) self.assertEqual(outputs, {"shortest_edge": 224}) # Test a tuple of ints which represents (height, width) outputs = get_size_dict((150, 200)) self.assertEqual(outputs, {"height": 150, "width": 200}) # Test a tuple of ints which represents (width, height) outputs = get_size_dict((150, 200), height_width_order=False) self.assertEqual(outputs, {"height": 200, "width": 150}) # Test an int representing the shortest edge and max_size which represents the longest edge outputs = get_size_dict(224, max_size=256, default_to_square=False) self.assertEqual(outputs, {"shortest_edge": 224, "longest_edge": 256}) # Test int with 
default_to_square=True and max_size fails with self.assertRaises(ValueError): get_size_dict(224, max_size=256, default_to_square=True)
transformers/tests/utils/test_image_processing_utils.py/0
{ "file_path": "transformers/tests/utils/test_image_processing_utils.py", "repo_id": "transformers", "token_count": 3317 }
{ "ASTForAudioClassification": { "tokenizer_classes": [], "processor_classes": [ "ASTFeatureExtractor" ], "model_classes": [ "ASTForAudioClassification" ], "sha": "83d6e076db7768a3645401bad3204624985e1d08" }, "ASTModel": { "tokenizer_classes": [], "processor_classes": [ "ASTFeatureExtractor" ], "model_classes": [ "ASTModel" ], "sha": "75e68f956f6f2c0709b01e596e7a6aecb1b29dce" }, "AlbertForMaskedLM": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "AlbertForMaskedLM", "TFAlbertForMaskedLM" ], "sha": "d29de71ac29e1019c3a7762f7357f750730cb037" }, "AlbertForMultipleChoice": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "AlbertForMultipleChoice", "TFAlbertForMultipleChoice" ], "sha": "242aecce6a589a2964c0f695621fa22a83751579" }, "AlbertForPreTraining": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "AlbertForPreTraining", "TFAlbertForPreTraining" ], "sha": "41330be4b271687f4d88ddc96346c12aa11de983" }, "AlbertForQuestionAnswering": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "AlbertForQuestionAnswering", "TFAlbertForQuestionAnswering" ], "sha": "040b81c15f437f4722349dc5b41fccd17ebd7fdc" }, "AlbertForSequenceClassification": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "AlbertForSequenceClassification", "TFAlbertForSequenceClassification" ], "sha": "39c1a0e2c1c2623106d3211d751e9b32f23a91a0" }, "AlbertForTokenClassification": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "AlbertForTokenClassification", "TFAlbertForTokenClassification" ], "sha": "359c3f4a311a4053a6f6d6a880db5f82c8e3ff1f" }, "AlbertModel": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "AlbertModel", "TFAlbertModel" ], "sha": "34a63314686b64aaeb595ddb95006f1ff2ffda17" }, "AlignModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "EfficientNetImageProcessor" ], "model_classes": [ "AlignModel" ], "sha": "68a4f9d3f493f44efa7c1dde6fcca23350e2c92b" }, "AltCLIPModel": { "tokenizer_classes": [ "XLMRobertaTokenizerFast" ], "processor_classes": [ "CLIPImageProcessor" ], "model_classes": [ "AltCLIPModel" ], "sha": "3106af0fd503970717c05f27218e5cacf19ba872" }, "BarkModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BarkModel" ], "sha": "187e590fd87359cea47693e8cb11a604cd7b673c" }, "BartForCausalLM": { "tokenizer_classes": [ "BartTokenizer", "BartTokenizerFast" ], "processor_classes": [], "model_classes": [ "BartForCausalLM" ], "sha": "c25526ac67d2dbe79fe5462af4b7908ca2fbc3ff" }, "BartForConditionalGeneration": { "tokenizer_classes": [ "BartTokenizer", "BartTokenizerFast" ], "processor_classes": [], "model_classes": [ "BartForConditionalGeneration", "TFBartForConditionalGeneration" ], "sha": "3a489a21e4b04705f4a6047924b7616a67be7e37" }, "BartForQuestionAnswering": { "tokenizer_classes": [ "BartTokenizer", "BartTokenizerFast" ], "processor_classes": [], "model_classes": [ "BartForQuestionAnswering" ], "sha": "3ebf9aab39a57ceab55128d5fc6f61e4db0dadd4" }, "BartForSequenceClassification": { "tokenizer_classes": [ "BartTokenizer", "BartTokenizerFast" ], "processor_classes": [], 
"model_classes": [ "BartForSequenceClassification", "TFBartForSequenceClassification" ], "sha": "ea452fd9a928cfebd71723afa50feb20326917bc" }, "BartModel": { "tokenizer_classes": [ "BartTokenizer", "BartTokenizerFast" ], "processor_classes": [], "model_classes": [ "BartModel", "TFBartModel" ], "sha": "e5df6d1aa75f03833b2df328b9c35463f73a421b" }, "BeitForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "BeitImageProcessor" ], "model_classes": [ "BeitForImageClassification" ], "sha": "e997587bb890f82faad4bd25eb23d85ba21ecaaa" }, "BeitForSemanticSegmentation": { "tokenizer_classes": [], "processor_classes": [ "BeitImageProcessor" ], "model_classes": [ "BeitForSemanticSegmentation" ], "sha": "d4afa9e21e3fe5b087578ed68974d9b3ffc1fb22" }, "BeitModel": { "tokenizer_classes": [], "processor_classes": [ "BeitImageProcessor" ], "model_classes": [ "BeitModel" ], "sha": "5c4a051f0cca6f64d02c6168deb88413cae10d2c" }, "BertForMaskedLM": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BertForMaskedLM", "TFBertForMaskedLM" ], "sha": "3e32baa52ce044c75edfb5c28abd51ee8d051282" }, "BertForMultipleChoice": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BertForMultipleChoice", "TFBertForMultipleChoice" ], "sha": "0b8c3a6d411d1e19e5fd98d4d8631ae7616eeeaa" }, "BertForNextSentencePrediction": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BertForNextSentencePrediction", "TFBertForNextSentencePrediction" ], "sha": "628e70debf8864bd0b63aff7901d17d9c4f7612c" }, "BertForPreTraining": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BertForPreTraining", "TFBertForPreTraining" ], "sha": "c748ad37e6a200a6f64b2764191bfe13f976032f" }, "BertForQuestionAnswering": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BertForQuestionAnswering", "TFBertForQuestionAnswering" ], "sha": "4671ad0c21493b97c5eb2f0201192704c29876d5" }, "BertForSequenceClassification": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BertForSequenceClassification", "TFBertForSequenceClassification" ], "sha": "37a9d44022264c12bdf3ec257778f953b63d4aaf" }, "BertForTokenClassification": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BertForTokenClassification", "TFBertForTokenClassification" ], "sha": "d7dc3a0793ff6dfcb794b21130ee0f185d2c61a2" }, "BertLMHeadModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BertLMHeadModel", "TFBertLMHeadModel" ], "sha": "b4e3acc1990f3e365ffddbd54b620a26d9fb4b09" }, "BertModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BertModel", "TFBertModel" ], "sha": "3956d303d3cddf0708ff20660c1ea5f6ec30e434" }, "BigBirdForCausalLM": { "tokenizer_classes": [ "BigBirdTokenizer", "BigBirdTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdForCausalLM" ], "sha": "5c7a487af5248d9c01b45d5481b7d7bb9b36e1b5" }, "BigBirdForMaskedLM": { "tokenizer_classes": [ "BigBirdTokenizer", "BigBirdTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdForMaskedLM" ], "sha": "476ef8225c0f69270b577706ad4f1dda13e4dde5" }, "BigBirdForMultipleChoice": 
{ "tokenizer_classes": [ "BigBirdTokenizer", "BigBirdTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdForMultipleChoice" ], "sha": "cf93eaa1019987112c171a407745bc183a20513a" }, "BigBirdForPreTraining": { "tokenizer_classes": [ "BigBirdTokenizer", "BigBirdTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdForPreTraining" ], "sha": "5fb9efa13334431e7c186a9fa314b89c4a1eee72" }, "BigBirdForQuestionAnswering": { "tokenizer_classes": [ "BigBirdTokenizer", "BigBirdTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdForQuestionAnswering" ], "sha": "f82f88bd71fba819a8ffb0692915d3529e705417" }, "BigBirdForSequenceClassification": { "tokenizer_classes": [ "BigBirdTokenizer", "BigBirdTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdForSequenceClassification" ], "sha": "ea398090858f9af93b54fc9a8d65cfed78ac27ff" }, "BigBirdForTokenClassification": { "tokenizer_classes": [ "BigBirdTokenizer", "BigBirdTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdForTokenClassification" ], "sha": "2cdea118999fa58ba9fb0162d99e2ffa146c3df1" }, "BigBirdModel": { "tokenizer_classes": [ "BigBirdTokenizer", "BigBirdTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdModel" ], "sha": "9c55989f31df156194e6997606fb14d9897e0300" }, "BigBirdPegasusForCausalLM": { "tokenizer_classes": [ "PegasusTokenizer", "PegasusTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdPegasusForCausalLM" ], "sha": "49bc8816c666dee32e27cd8e00136b604eb85243" }, "BigBirdPegasusForConditionalGeneration": { "tokenizer_classes": [ "PegasusTokenizer", "PegasusTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdPegasusForConditionalGeneration" ], "sha": "e791aa6d1af5a76ca0926d95b1f28bd2d8adf376" }, "BigBirdPegasusForQuestionAnswering": { "tokenizer_classes": [ "PegasusTokenizer", "PegasusTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdPegasusForQuestionAnswering" ], "sha": "7650e076713ca707a37062adc8c9c1cd60dad7c7" }, "BigBirdPegasusForSequenceClassification": { "tokenizer_classes": [ "PegasusTokenizer", "PegasusTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdPegasusForSequenceClassification" ], "sha": "02500e8ebd9c53528750013fb963fbdc2be34034" }, "BigBirdPegasusModel": { "tokenizer_classes": [ "PegasusTokenizer", "PegasusTokenizerFast" ], "processor_classes": [], "model_classes": [ "BigBirdPegasusModel" ], "sha": "b07c5304dfba673cf8b9cf5cd1aa45fbfea1c2f3" }, "BioGptForCausalLM": { "tokenizer_classes": [ "BioGptTokenizer" ], "processor_classes": [], "model_classes": [ "BioGptForCausalLM" ], "sha": "07073b31da84054fd12226e3cae4cb3beb2547f9" }, "BioGptForSequenceClassification": { "tokenizer_classes": [ "BioGptTokenizer" ], "processor_classes": [], "model_classes": [ "BioGptForSequenceClassification" ], "sha": "8e18ad6218abd795e050dec324a8c827ccedacb4" }, "BioGptForTokenClassification": { "tokenizer_classes": [ "BioGptTokenizer" ], "processor_classes": [], "model_classes": [ "BioGptForTokenClassification" ], "sha": "67f8173c1a17273064d452a9031a51b67f327b6a" }, "BioGptModel": { "tokenizer_classes": [ "BioGptTokenizer" ], "processor_classes": [], "model_classes": [ "BioGptModel" ], "sha": "fe18551d0743538a990520b75707294ec57b4ebe" }, "BitBackbone": { "tokenizer_classes": [], "processor_classes": [ "BitImageProcessor" ], "model_classes": [ "BitBackbone" ], "sha": "2f06f6b4395b6dce2b00ac839ff757410e743cd7" }, "BitForImageClassification": { "tokenizer_classes": [], 
"processor_classes": [ "BitImageProcessor" ], "model_classes": [ "BitForImageClassification" ], "sha": "d0d8476f2d285ddda7c42c0d4a8e4bf6f5d2bfdf" }, "BitModel": { "tokenizer_classes": [], "processor_classes": [ "BitImageProcessor" ], "model_classes": [ "BitModel" ], "sha": "30a8a9b1a6b253cc500c01cf41bc1fc9581ea5e5" }, "BlenderbotForCausalLM": { "tokenizer_classes": [ "BlenderbotTokenizer", "BlenderbotTokenizerFast" ], "processor_classes": [], "model_classes": [ "BlenderbotForCausalLM" ], "sha": "8aad2e13e8920bca3cf988ba45f8a7b008b51a81" }, "BlenderbotForConditionalGeneration": { "tokenizer_classes": [ "BlenderbotTokenizer", "BlenderbotTokenizerFast" ], "processor_classes": [], "model_classes": [ "BlenderbotForConditionalGeneration", "TFBlenderbotForConditionalGeneration" ], "sha": "e8532878b9924fa02fb4b059b7f6e7fa372fff91" }, "BlenderbotModel": { "tokenizer_classes": [ "BlenderbotTokenizer", "BlenderbotTokenizerFast" ], "processor_classes": [], "model_classes": [ "BlenderbotModel", "TFBlenderbotModel" ], "sha": "ff848a40c30ca98eb7c6870bbb02677d5af9db55" }, "BlenderbotSmallForCausalLM": { "tokenizer_classes": [ "BlenderbotSmallTokenizer" ], "processor_classes": [], "model_classes": [ "BlenderbotSmallForCausalLM" ], "sha": "4c57c106630932eb9de4d76210a540d04616304d" }, "BlenderbotSmallForConditionalGeneration": { "tokenizer_classes": [ "BlenderbotSmallTokenizer" ], "processor_classes": [], "model_classes": [ "BlenderbotSmallForConditionalGeneration", "TFBlenderbotSmallForConditionalGeneration" ], "sha": "b8db01fcf3e37a5b369cd50e169bf383b8e905d8" }, "BlenderbotSmallModel": { "tokenizer_classes": [ "BlenderbotSmallTokenizer" ], "processor_classes": [], "model_classes": [ "BlenderbotSmallModel", "TFBlenderbotSmallModel" ], "sha": "0a10c70e225ec63278faffa8fabf759f063f0e55" }, "Blip2ForConditionalGeneration": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [ "BlipImageProcessor" ], "model_classes": [ "Blip2ForConditionalGeneration" ], "sha": "35e1ef43da3554af62eb29a7b3dbbef3f3bef48e" }, "Blip2Model": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [ "BlipImageProcessor" ], "model_classes": [ "Blip2Model" ], "sha": "c23378f225be31872fff33c103cf0ebc2454ffcc" }, "BlipForConditionalGeneration": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "BlipImageProcessor" ], "model_classes": [ "BlipForConditionalGeneration", "TFBlipForConditionalGeneration" ], "sha": "eaf32bc0369349deef0c777442fc185119171d1f" }, "BlipModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "BlipImageProcessor" ], "model_classes": [ "BlipModel", "TFBlipModel" ], "sha": "3d1d1c15eff22d6b2664a2d15757fa6f5d93827d" }, "BloomForCausalLM": { "tokenizer_classes": [ "BloomTokenizerFast" ], "processor_classes": [], "model_classes": [ "BloomForCausalLM" ], "sha": "0f4f06f162cd67d34d03ee156484e4001d468500" }, "BloomForQuestionAnswering": { "tokenizer_classes": [ "BloomTokenizerFast" ], "processor_classes": [], "model_classes": [ "BloomForQuestionAnswering" ], "sha": "23f369f163eef8c9c9685900440b0cbb0f3439fd" }, "BloomForSequenceClassification": { "tokenizer_classes": [ "BloomTokenizerFast" ], "processor_classes": [], "model_classes": [ "BloomForSequenceClassification" ], "sha": "b2280eef7172835f39b265eb0c46623257f67bbe" }, "BloomForTokenClassification": { "tokenizer_classes": [ "BloomTokenizerFast" ], "processor_classes": [], "model_classes": [ "BloomForTokenClassification" ], "sha": 
"9796aa45f99adff987c978089e11c0bd9d7b997f" }, "BloomModel": { "tokenizer_classes": [ "BloomTokenizerFast" ], "processor_classes": [], "model_classes": [ "BloomModel" ], "sha": "28b600fcfdc4f4938406fb518abf895620048cb2" }, "BrosForTokenClassification": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BrosForTokenClassification" ], "sha": "4ec2c91936f96b93667e8946fc7abbdeeb08a6d7" }, "BrosModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "BrosModel" ], "sha": "e2464830b1874eeaf9f4b425fbe0ce8e7c7643e9" }, "CLIPModel": { "tokenizer_classes": [ "CLIPTokenizer", "CLIPTokenizerFast" ], "processor_classes": [ "CLIPImageProcessor" ], "model_classes": [ "CLIPModel", "TFCLIPModel" ], "sha": "0452d344074485d0e7eb5d5c12447b7c9dbc9619" }, "CLIPSegModel": { "tokenizer_classes": [ "CLIPTokenizer", "CLIPTokenizerFast" ], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "CLIPSegModel" ], "sha": "7b1305214ccc85d29b776ffbee06748693852a04" }, "CTRLForSequenceClassification": { "tokenizer_classes": [ "CTRLTokenizer" ], "processor_classes": [], "model_classes": [ "CTRLForSequenceClassification", "TFCTRLForSequenceClassification" ], "sha": "280b5a3502d607c55c9f8d9f198fe9c2802d6f73" }, "CTRLLMHeadModel": { "tokenizer_classes": [ "CTRLTokenizer" ], "processor_classes": [], "model_classes": [ "CTRLLMHeadModel", "TFCTRLLMHeadModel" ], "sha": "662381663b216f1dd3c9cd30e2e83cb4c6fc9552" }, "CTRLModel": { "tokenizer_classes": [ "CTRLTokenizer" ], "processor_classes": [], "model_classes": [ "CTRLModel", "TFCTRLModel" ], "sha": "68b19b4f132d5a191a73acd78d983cbdcf068e9c" }, "CanineForMultipleChoice": { "tokenizer_classes": [ "CanineTokenizer" ], "processor_classes": [], "model_classes": [ "CanineForMultipleChoice" ], "sha": "fa0451453ed202f903ff7dcf6071aab6630fb89f" }, "CanineForQuestionAnswering": { "tokenizer_classes": [ "CanineTokenizer" ], "processor_classes": [], "model_classes": [ "CanineForQuestionAnswering" ], "sha": "5e1012bb086ac2e0b1497eeb7ed14eb2183d4ecb" }, "CanineForSequenceClassification": { "tokenizer_classes": [ "CanineTokenizer" ], "processor_classes": [], "model_classes": [ "CanineForSequenceClassification" ], "sha": "75336dc9179153869c38a8047ce4b1e02677a260" }, "CanineForTokenClassification": { "tokenizer_classes": [ "CanineTokenizer" ], "processor_classes": [], "model_classes": [ "CanineForTokenClassification" ], "sha": "65a622ea8e12597e12f45e59d46d8dbe8461fc10" }, "CanineModel": { "tokenizer_classes": [ "CanineTokenizer" ], "processor_classes": [], "model_classes": [ "CanineModel" ], "sha": "531ef67ad4f0b3dc7a9e5d722c774096b7401b1b" }, "ChineseCLIPModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "ChineseCLIPImageProcessor" ], "model_classes": [ "ChineseCLIPModel" ], "sha": "504271a3c5fd9c2e877f5b4c01848bc18778c7c3" }, "ClapModel": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [ "ClapFeatureExtractor" ], "model_classes": [ "ClapModel" ], "sha": "a7874595b900f9b2ddc79130dafc3ff48f4fbfb9" }, "ClvpModelForConditionalGeneration": { "tokenizer_classes": [ "ClvpTokenizer" ], "processor_classes": [ "ClvpFeatureExtractor" ], "model_classes": [], "sha": "45df7581535be337ff781707b6c20994ca221f05" }, "CodeGenForCausalLM": { "tokenizer_classes": [ "CodeGenTokenizer", "CodeGenTokenizerFast" ], "processor_classes": [], "model_classes": [ "CodeGenForCausalLM" ], "sha": 
"a3fc69d757fd1f0aa01bcbc4337f586651c7cb10" }, "CodeGenModel": { "tokenizer_classes": [ "CodeGenTokenizer", "CodeGenTokenizerFast" ], "processor_classes": [], "model_classes": [ "CodeGenModel" ], "sha": "dad4941a2b7429fc6e8206fcc4a04fc40f4a0beb" }, "ConditionalDetrForObjectDetection": { "tokenizer_classes": [], "processor_classes": [ "ConditionalDetrImageProcessor" ], "model_classes": [ "ConditionalDetrForObjectDetection" ], "sha": "762c213a0285edc84eb813a2ed90063cf971ca43" }, "ConditionalDetrModel": { "tokenizer_classes": [], "processor_classes": [ "ConditionalDetrImageProcessor" ], "model_classes": [ "ConditionalDetrModel" ], "sha": "18b75874158cac520c63605293b06e0b1327c263" }, "ConvBertForMaskedLM": { "tokenizer_classes": [ "ConvBertTokenizer", "ConvBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ConvBertForMaskedLM", "TFConvBertForMaskedLM" ], "sha": "307c70e32c3d3c18aeb45e0cbdc9fcd2957d9aba" }, "ConvBertForMultipleChoice": { "tokenizer_classes": [ "ConvBertTokenizer", "ConvBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ConvBertForMultipleChoice", "TFConvBertForMultipleChoice" ], "sha": "d6561a21ffdb82d03c1822af0510eb7482ce5026" }, "ConvBertForQuestionAnswering": { "tokenizer_classes": [ "ConvBertTokenizer", "ConvBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ConvBertForQuestionAnswering", "TFConvBertForQuestionAnswering" ], "sha": "8a056da5cc421415c2a24b9f644dd95ca279411d" }, "ConvBertForSequenceClassification": { "tokenizer_classes": [ "ConvBertTokenizer", "ConvBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ConvBertForSequenceClassification", "TFConvBertForSequenceClassification" ], "sha": "8bb8b20e51d282d777cc567cacadd97a35f0811e" }, "ConvBertForTokenClassification": { "tokenizer_classes": [ "ConvBertTokenizer", "ConvBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ConvBertForTokenClassification", "TFConvBertForTokenClassification" ], "sha": "8db0dd3c2b8ccc958fa9a84801f4f837b42fcf2c" }, "ConvBertModel": { "tokenizer_classes": [ "ConvBertTokenizer", "ConvBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ConvBertModel", "TFConvBertModel" ], "sha": "c9c5b1a74f0e468d8467473cabeaa67fcdbaddb7" }, "ConvNextBackbone": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "ConvNextBackbone" ], "sha": "499c7d6a97825b79e19663b70f3b60c4813b6bf2" }, "ConvNextForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "ConvNextForImageClassification", "TFConvNextForImageClassification" ], "sha": "0b490fd6b19cdbf721025dbd6ee45dcc5828e6e3" }, "ConvNextModel": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "ConvNextModel", "TFConvNextModel" ], "sha": "7b3b47a57b9a9120e022b91d6067daeac55b794f" }, "ConvNextV2Backbone": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "ConvNextV2Backbone" ], "sha": "c82fc526949dfd892a1fee3c34be6f8d80c4d3df" }, "ConvNextV2ForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "ConvNextV2ForImageClassification", "TFConvNextV2ForImageClassification" ], "sha": "ee22bae1cbb87d66fc7f62f7e15a43d6ff80d3cc" }, "ConvNextV2Model": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "ConvNextV2Model", "TFConvNextV2Model" ], "sha": 
"c4dd68ee1102cba05bcc483da2a88e39427b7249" }, "CvtForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "CvtForImageClassification", "TFCvtForImageClassification" ], "sha": "4b1938e252fdb26a06c1f5755e07fa8f6eed2d75" }, "CvtModel": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "CvtModel", "TFCvtModel" ], "sha": "27fed12c174f4f4f1fe27075d1c29602fe0669f0" }, "DPRQuestionEncoder": { "tokenizer_classes": [ "DPRQuestionEncoderTokenizer", "DPRQuestionEncoderTokenizerFast" ], "processor_classes": [], "model_classes": [ "DPRQuestionEncoder", "TFDPRQuestionEncoder" ], "sha": "09ae0269780271e0a4916f7bab1dbc4f8a76070d" }, "DPTForDepthEstimation": { "tokenizer_classes": [], "processor_classes": [ "DPTImageProcessor" ], "model_classes": [ "DPTForDepthEstimation" ], "sha": "11b7735d64d95b6599811631b012d2dec6eaa2c1" }, "DPTForSemanticSegmentation": { "tokenizer_classes": [], "processor_classes": [ "DPTImageProcessor" ], "model_classes": [ "DPTForSemanticSegmentation" ], "sha": "e140c3c716a4bf11dad875e5f5f0abd2bd4cbbcb" }, "DPTModel": { "tokenizer_classes": [], "processor_classes": [ "DPTImageProcessor" ], "model_classes": [ "DPTModel" ], "sha": "1d6ae6c0b60868dffbef0dddeda381c51c6dcba5" }, "Data2VecAudioForAudioFrameClassification": { "tokenizer_classes": [], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Data2VecAudioForAudioFrameClassification" ], "sha": "a64828b27e73fc8dd95aeb315108ca2f6a66b55f" }, "Data2VecAudioForCTC": { "tokenizer_classes": [], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Data2VecAudioForCTC" ], "sha": "bb161b6a181bd2c22cf30222f46fa6ef42225744" }, "Data2VecAudioForSequenceClassification": { "tokenizer_classes": [], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Data2VecAudioForSequenceClassification" ], "sha": "8de17e0a959eca5f72b2ea59a11bc1fa744785d9" }, "Data2VecAudioForXVector": { "tokenizer_classes": [], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Data2VecAudioForXVector" ], "sha": "dcb92484cf28fb4fe1dcf5d6e8d78e04382fdce9" }, "Data2VecAudioModel": { "tokenizer_classes": [], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Data2VecAudioModel" ], "sha": "73f503fdff73b7616154f64dbe38a685cc48e8eb" }, "Data2VecTextForCausalLM": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "Data2VecTextForCausalLM" ], "sha": "1f3658ce623653338cd31516551e8181aa08bb38" }, "Data2VecTextForMaskedLM": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "Data2VecTextForMaskedLM" ], "sha": "fb41ac30d0faa0899bf5afaa0986df8993395ca6" }, "Data2VecTextForMultipleChoice": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "Data2VecTextForMultipleChoice" ], "sha": "e7556d520ad90ebae5ad88554d45a37488d00040" }, "Data2VecTextForQuestionAnswering": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "Data2VecTextForQuestionAnswering" ], "sha": "9630833d76a1fd7e96b904d87bb11b7c00ccd021" }, "Data2VecTextForSequenceClassification": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "Data2VecTextForSequenceClassification" ], "sha": 
"156e4019c37d9592f193ba80553cd245cbccecb3" }, "Data2VecTextForTokenClassification": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "Data2VecTextForTokenClassification" ], "sha": "55b3a49fdbf22479d6eb939261d4b884ea288270" }, "Data2VecTextModel": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "Data2VecTextModel" ], "sha": "c21be3e4f88e8357bf33bfba8f8e05ae2e735124" }, "Data2VecVisionForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "BeitImageProcessor" ], "model_classes": [ "Data2VecVisionForImageClassification", "TFData2VecVisionForImageClassification" ], "sha": "d640e7ced7a3fbbb8c8661a4f67b934e55406172" }, "Data2VecVisionForSemanticSegmentation": { "tokenizer_classes": [], "processor_classes": [ "BeitImageProcessor" ], "model_classes": [ "Data2VecVisionForSemanticSegmentation", "TFData2VecVisionForSemanticSegmentation" ], "sha": "3eba3cd694fab6530b7e5da8f49d3951301c816a" }, "Data2VecVisionModel": { "tokenizer_classes": [], "processor_classes": [ "BeitImageProcessor" ], "model_classes": [ "Data2VecVisionModel", "TFData2VecVisionModel" ], "sha": "2a7ad25e4359970dc70494a2f3eb98e2a3c9806d" }, "DebertaForMaskedLM": { "tokenizer_classes": [ "DebertaTokenizer", "DebertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "DebertaForMaskedLM", "TFDebertaForMaskedLM" ], "sha": "e0f9ada9e0f6d4d7cc39d7cbd58369b0c84de33d" }, "DebertaForQuestionAnswering": { "tokenizer_classes": [ "DebertaTokenizer", "DebertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "DebertaForQuestionAnswering", "TFDebertaForQuestionAnswering" ], "sha": "a3eb69cdb0b52f7d0fb730e882f1a54b9a7442ea" }, "DebertaForSequenceClassification": { "tokenizer_classes": [ "DebertaTokenizer", "DebertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "DebertaForSequenceClassification", "TFDebertaForSequenceClassification" ], "sha": "32af91d12c4e9b6d62b420bee93311fd77d3c933" }, "DebertaForTokenClassification": { "tokenizer_classes": [ "DebertaTokenizer", "DebertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "DebertaForTokenClassification", "TFDebertaForTokenClassification" ], "sha": "ba62ba2726d813e60e512476fc1b178aa3858175" }, "DebertaModel": { "tokenizer_classes": [ "DebertaTokenizer", "DebertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "DebertaModel", "TFDebertaModel" ], "sha": "4273294e14cd04c0e2cd1dcff5cf7e5d4fe906ba" }, "DebertaV2ForMaskedLM": { "tokenizer_classes": [ "DebertaV2Tokenizer", "DebertaV2TokenizerFast" ], "processor_classes": [], "model_classes": [ "DebertaV2ForMaskedLM", "TFDebertaV2ForMaskedLM" ], "sha": "a053dedc2cdf32918a84277cb0c05186604496a5" }, "DebertaV2ForMultipleChoice": { "tokenizer_classes": [ "DebertaV2Tokenizer", "DebertaV2TokenizerFast" ], "processor_classes": [], "model_classes": [ "DebertaV2ForMultipleChoice", "TFDebertaV2ForMultipleChoice" ], "sha": "07e39f520ce239b39ef8cb24cd7874d06c791063" }, "DebertaV2ForQuestionAnswering": { "tokenizer_classes": [ "DebertaV2Tokenizer", "DebertaV2TokenizerFast" ], "processor_classes": [], "model_classes": [ "DebertaV2ForQuestionAnswering", "TFDebertaV2ForQuestionAnswering" ], "sha": "9cecb3a7fc6b95099122283644ea1f8ced287d1b" }, "DebertaV2ForSequenceClassification": { "tokenizer_classes": [ "DebertaV2Tokenizer", "DebertaV2TokenizerFast" ], "processor_classes": [], "model_classes": [ "DebertaV2ForSequenceClassification", 
"TFDebertaV2ForSequenceClassification" ], "sha": "df9ea1f5c0f2ccd139b21cfb3963a5a5ebfb5b81" }, "DebertaV2ForTokenClassification": { "tokenizer_classes": [ "DebertaV2Tokenizer", "DebertaV2TokenizerFast" ], "processor_classes": [], "model_classes": [ "DebertaV2ForTokenClassification", "TFDebertaV2ForTokenClassification" ], "sha": "51fe01989df38a540ac1abca5ee71a51365defd5" }, "DebertaV2Model": { "tokenizer_classes": [ "DebertaV2Tokenizer", "DebertaV2TokenizerFast" ], "processor_classes": [], "model_classes": [ "DebertaV2Model", "TFDebertaV2Model" ], "sha": "211df4bd1a4a9b66c97af3f9231a5d2af8de7b9f" }, "DeformableDetrForObjectDetection": { "tokenizer_classes": [], "processor_classes": [ "DeformableDetrImageProcessor" ], "model_classes": [ "DeformableDetrForObjectDetection" ], "sha": "8fa0db215c458f60ae4d455d6fb067c1c5e39fdc" }, "DeformableDetrModel": { "tokenizer_classes": [], "processor_classes": [ "DeformableDetrImageProcessor" ], "model_classes": [ "DeformableDetrModel" ], "sha": "0faac5624696b03edd14694642f9804f2cd8f3da" }, "DeiTForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "DeiTImageProcessor" ], "model_classes": [ "DeiTForImageClassification", "TFDeiTForImageClassification" ], "sha": "21fc864199dafa0130f16a45769c6b6ca22c7784" }, "DeiTForImageClassificationWithTeacher": { "tokenizer_classes": [], "processor_classes": [ "DeiTImageProcessor" ], "model_classes": [ "DeiTForImageClassificationWithTeacher", "TFDeiTForImageClassificationWithTeacher" ], "sha": "5a5738a109e27f3d4b78a0db4cb1d3331140c10e" }, "DeiTForMaskedImageModeling": { "tokenizer_classes": [], "processor_classes": [ "DeiTImageProcessor" ], "model_classes": [ "DeiTForMaskedImageModeling", "TFDeiTForMaskedImageModeling" ], "sha": "d5df5c538fe1efb8d668a3893d1691d505a0de06" }, "DeiTModel": { "tokenizer_classes": [], "processor_classes": [ "DeiTImageProcessor" ], "model_classes": [ "DeiTModel", "TFDeiTModel" ], "sha": "0fdbff6f44b7c6933c2027fec1d7f87bec06b590" }, "DetaForObjectDetection": { "tokenizer_classes": [], "processor_classes": [ "DetaImageProcessor" ], "model_classes": [ "DetaForObjectDetection" ], "sha": "a15ad6ce64fbcb5021b2b99e9587c4011ef3341d" }, "DetaModel": { "tokenizer_classes": [], "processor_classes": [ "DetaImageProcessor" ], "model_classes": [ "DetaModel" ], "sha": "8820f2297ec0dec8f1875054559c8b7a162098e3" }, "DetrForObjectDetection": { "tokenizer_classes": [], "processor_classes": [ "DetrImageProcessor" ], "model_classes": [ "DetrForObjectDetection" ], "sha": "7dc967c53f4b3f07904c42b255346b744d0ad84e" }, "DetrForSegmentation": { "tokenizer_classes": [], "processor_classes": [ "DetrImageProcessor" ], "model_classes": [ "DetrForSegmentation" ], "sha": "e34330acdae359588ef853e961a78d419dc4e8eb" }, "DetrModel": { "tokenizer_classes": [], "processor_classes": [ "DetrImageProcessor" ], "model_classes": [ "DetrModel" ], "sha": "f15ce38a10c7447e8048b1681e4811322a005722" }, "DinatBackbone": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "DinatBackbone" ], "sha": "3ba13790a0796d90104c207f75bb3d5d79723d51" }, "DinatForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "DinatForImageClassification" ], "sha": "624cf2d864a7ea2f90e24014a213e34597e8bd76" }, "DinatModel": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "DinatModel" ], "sha": "d6c75bc51196f0a683afb12de6310fdda13efefd" }, "Dinov2Backbone": { "tokenizer_classes": [], "processor_classes": 
[ "BitImageProcessor" ], "model_classes": [ "Dinov2Backbone" ], "sha": "dbf8d2ff3092ac53c11e6525e6cbae7ace84769a" }, "Dinov2ForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "BitImageProcessor" ], "model_classes": [ "Dinov2ForImageClassification" ], "sha": "ae44840966456aae33641df2c8c8a4af5b457b24" }, "Dinov2Model": { "tokenizer_classes": [], "processor_classes": [ "BitImageProcessor" ], "model_classes": [ "Dinov2Model" ], "sha": "6f560b1cc9806bcf84fe0b0c60b5faf9c29be959" }, "DistilBertForMaskedLM": { "tokenizer_classes": [ "DistilBertTokenizer", "DistilBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "DistilBertForMaskedLM", "TFDistilBertForMaskedLM" ], "sha": "b2dfda30b012821996e6e603729562d9c900bc0f" }, "DistilBertForMultipleChoice": { "tokenizer_classes": [ "DistilBertTokenizer", "DistilBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "DistilBertForMultipleChoice", "TFDistilBertForMultipleChoice" ], "sha": "ec6b83129a7d1be2a6b8d58303abcca5541a5cb3" }, "DistilBertForQuestionAnswering": { "tokenizer_classes": [ "DistilBertTokenizer", "DistilBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "DistilBertForQuestionAnswering", "TFDistilBertForQuestionAnswering" ], "sha": "812406b226415044469b0e0a84c4fe0ff338c5d3" }, "DistilBertForSequenceClassification": { "tokenizer_classes": [ "DistilBertTokenizer", "DistilBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "DistilBertForSequenceClassification", "TFDistilBertForSequenceClassification" ], "sha": "6f427ce7b3e5aaa596938fbd98437d3875581b7b" }, "DistilBertForTokenClassification": { "tokenizer_classes": [ "DistilBertTokenizer", "DistilBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "DistilBertForTokenClassification", "TFDistilBertForTokenClassification" ], "sha": "166dbe3f5d6ecd871762567069454d6ec65234b4" }, "DistilBertModel": { "tokenizer_classes": [ "DistilBertTokenizer", "DistilBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "DistilBertModel", "TFDistilBertModel" ], "sha": "cc4425ad0676f3ec00e8bffe485fe83cae61041a" }, "DonutSwinModel": { "tokenizer_classes": [], "processor_classes": [ "DonutImageProcessor" ], "model_classes": [ "DonutSwinModel" ], "sha": "1b10654fbfe2f2ea410a672ab605bd5c60d3f284" }, "EfficientFormerForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "EfficientFormerImageProcessor" ], "model_classes": [ "EfficientFormerForImageClassification", "TFEfficientFormerForImageClassification" ], "sha": "ebadb628e12f268e321fcc756fa4606f7b5b3178" }, "EfficientFormerForImageClassificationWithTeacher": { "tokenizer_classes": [], "processor_classes": [ "EfficientFormerImageProcessor" ], "model_classes": [ "EfficientFormerForImageClassificationWithTeacher", "TFEfficientFormerForImageClassificationWithTeacher" ], "sha": "1beabce6da9cb4ebbeafcd1ef23fac36b4a269e2" }, "EfficientFormerModel": { "tokenizer_classes": [], "processor_classes": [ "EfficientFormerImageProcessor" ], "model_classes": [ "EfficientFormerModel", "TFEfficientFormerModel" ], "sha": "200fae5b875844d09c8a91d1c155b72b06a517f6" }, "EfficientNetForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "EfficientNetImageProcessor" ], "model_classes": [ "EfficientNetForImageClassification" ], "sha": "993d088cf937b8a90b61f68677cd8f261321c745" }, "EfficientNetModel": { "tokenizer_classes": [], "processor_classes": [ "EfficientNetImageProcessor" ], "model_classes": [ "EfficientNetModel" ], "sha": 
"eb03c90d4aaad98af0f19e0dfbdc41106297ffff" }, "ElectraForCausalLM": { "tokenizer_classes": [ "ElectraTokenizer", "ElectraTokenizerFast" ], "processor_classes": [], "model_classes": [ "ElectraForCausalLM" ], "sha": "c78396bc8cdd8db247892339de8da80d691d1d04" }, "ElectraForMaskedLM": { "tokenizer_classes": [ "ElectraTokenizer", "ElectraTokenizerFast" ], "processor_classes": [], "model_classes": [ "ElectraForMaskedLM", "TFElectraForMaskedLM" ], "sha": "631337703dbd8d41904c39891a41c6f1edd31813" }, "ElectraForMultipleChoice": { "tokenizer_classes": [ "ElectraTokenizer", "ElectraTokenizerFast" ], "processor_classes": [], "model_classes": [ "ElectraForMultipleChoice", "TFElectraForMultipleChoice" ], "sha": "66fdea6e22cfcbd3caa49ea82f31871c460612fa" }, "ElectraForPreTraining": { "tokenizer_classes": [ "ElectraTokenizer", "ElectraTokenizerFast" ], "processor_classes": [], "model_classes": [ "ElectraForPreTraining", "TFElectraForPreTraining" ], "sha": "7b2d0fa8726b1180c7d6cde4f4afc3800eba7e6f" }, "ElectraForQuestionAnswering": { "tokenizer_classes": [ "ElectraTokenizer", "ElectraTokenizerFast" ], "processor_classes": [], "model_classes": [ "ElectraForQuestionAnswering", "TFElectraForQuestionAnswering" ], "sha": "c6b127fd9f3019462e4ca2373762836207e39ce2" }, "ElectraForSequenceClassification": { "tokenizer_classes": [ "ElectraTokenizer", "ElectraTokenizerFast" ], "processor_classes": [], "model_classes": [ "ElectraForSequenceClassification", "TFElectraForSequenceClassification" ], "sha": "41f0089ab7876abe0e28dbbd565144acb31f8127" }, "ElectraForTokenClassification": { "tokenizer_classes": [ "ElectraTokenizer", "ElectraTokenizerFast" ], "processor_classes": [], "model_classes": [ "ElectraForTokenClassification", "TFElectraForTokenClassification" ], "sha": "1fdbbe70c1ddd16503820a1443d6a379a15ed777" }, "ElectraModel": { "tokenizer_classes": [ "ElectraTokenizer", "ElectraTokenizerFast" ], "processor_classes": [], "model_classes": [ "ElectraModel", "TFElectraModel" ], "sha": "312b532cbef26610d80f2bd008650160cae4f7a1" }, "EncodecModel": { "tokenizer_classes": [], "processor_classes": [ "EncodecFeatureExtractor" ], "model_classes": [ "EncodecModel" ], "sha": "e14c5a2fd6529c85cd4ac5a05ee9e550ced6a006" }, "EncoderDecoderModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "EncoderDecoderModel", "TFEncoderDecoderModel" ], "sha": "1038be9fd1b87b2e0a8f33721ff8e4612d34b3b6" }, "ErnieForCausalLM": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ErnieForCausalLM" ], "sha": "b49e00112ff06c2f0a0e54499921dddcf8c3c6a8" }, "ErnieForMaskedLM": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ErnieForMaskedLM" ], "sha": "30429830d1997222d885dcfdbd36d5e02d0d34b1" }, "ErnieForMultipleChoice": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ErnieForMultipleChoice" ], "sha": "5a21144bf35dfb60560ff8249116ad4459c0069a" }, "ErnieForNextSentencePrediction": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ErnieForNextSentencePrediction" ], "sha": "ed5868efb39bf6afb29f0cf444deafcf1e50b5bc" }, "ErnieForPreTraining": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ErnieForPreTraining" ], "sha": "e4ad30d291c310fea25e6f91f91393f993513b42" }, "ErnieForQuestionAnswering": { 
"tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ErnieForQuestionAnswering" ], "sha": "fe7c74b763f63a9fd864dad325385075df7c80c8" }, "ErnieForSequenceClassification": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ErnieForSequenceClassification" ], "sha": "84e0be05fcd52f54e96a69f67a2481323a58a9db" }, "ErnieForTokenClassification": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ErnieForTokenClassification" ], "sha": "91cf62c43a5a83332552ffa2d8e5e44d63a224ea" }, "ErnieMForMultipleChoice": { "tokenizer_classes": [ "ErnieMTokenizer" ], "processor_classes": [], "model_classes": [ "ErnieMForMultipleChoice" ], "sha": "c42ee7fcb132a323ace314c32e63c8a7d36ce18f" }, "ErnieMForQuestionAnswering": { "tokenizer_classes": [ "ErnieMTokenizer" ], "processor_classes": [], "model_classes": [ "ErnieMForQuestionAnswering" ], "sha": "2b90dee75ca87b214f96db00002aa18244ec8e84" }, "ErnieMForSequenceClassification": { "tokenizer_classes": [ "ErnieMTokenizer" ], "processor_classes": [], "model_classes": [ "ErnieMForSequenceClassification" ], "sha": "d8368646d8b1c67b1460af9c6ec13fd9d894cae6" }, "ErnieMForTokenClassification": { "tokenizer_classes": [ "ErnieMTokenizer" ], "processor_classes": [], "model_classes": [ "ErnieMForTokenClassification" ], "sha": "a9e29ba60fa0b7bedc2ed26a6b9911427df1ca6b" }, "ErnieMModel": { "tokenizer_classes": [ "ErnieMTokenizer" ], "processor_classes": [], "model_classes": [ "ErnieMModel" ], "sha": "7306eac3f38c3cf6211f0e741fdb81c6cc92bc09" }, "ErnieModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "ErnieModel" ], "sha": "b51478a9f40e353c41be3a29ccef103dcfe22b4b" }, "EsmForMaskedLM": { "tokenizer_classes": [ "EsmTokenizer" ], "processor_classes": [], "model_classes": [ "EsmForMaskedLM", "TFEsmForMaskedLM" ], "sha": "b56297b6cd64b9ba7c613d0cd146f1ecbea8115e" }, "EsmForSequenceClassification": { "tokenizer_classes": [ "EsmTokenizer" ], "processor_classes": [], "model_classes": [ "EsmForSequenceClassification", "TFEsmForSequenceClassification" ], "sha": "cc6d7ef0a4763540d67b7a4fb31bede9a7d3f245" }, "EsmForTokenClassification": { "tokenizer_classes": [ "EsmTokenizer" ], "processor_classes": [], "model_classes": [ "EsmForTokenClassification", "TFEsmForTokenClassification" ], "sha": "498953f66e260b974c504abbc863ee266d6c84a9" }, "EsmModel": { "tokenizer_classes": [ "EsmTokenizer" ], "processor_classes": [], "model_classes": [ "EsmModel", "TFEsmModel" ], "sha": "183838263b70809310117a0761542501acf64c21" }, "FNetForMaskedLM": { "tokenizer_classes": [ "FNetTokenizer", "FNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "FNetForMaskedLM" ], "sha": "91eaae1eac894af5d96c0221ec9bcef7f1af41c8" }, "FNetForMultipleChoice": { "tokenizer_classes": [ "FNetTokenizer", "FNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "FNetForMultipleChoice" ], "sha": "c15d98d5f7a6f3ef3099b1257949bee208d5466e" }, "FNetForNextSentencePrediction": { "tokenizer_classes": [ "FNetTokenizer", "FNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "FNetForNextSentencePrediction" ], "sha": "c59440b44d07d61fc45a90ded7fc11d6f25b143d" }, "FNetForPreTraining": { "tokenizer_classes": [ "FNetTokenizer", "FNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "FNetForPreTraining" ], "sha": "c05f55ccfb2f2533babd3c6e99de7749bc8081da" }, 
"FNetForQuestionAnswering": { "tokenizer_classes": [ "FNetTokenizer", "FNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "FNetForQuestionAnswering" ], "sha": "47788e49dd435653fa2aa4b3ccae3572a870758e" }, "FNetForSequenceClassification": { "tokenizer_classes": [ "FNetTokenizer", "FNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "FNetForSequenceClassification" ], "sha": "a3049b896ea6c5a32c364989c3afe604ee58b9fc" }, "FNetForTokenClassification": { "tokenizer_classes": [ "FNetTokenizer", "FNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "FNetForTokenClassification" ], "sha": "3bcdafca57d544bb81e2f7eead1e512c168582fc" }, "FNetModel": { "tokenizer_classes": [ "FNetTokenizer", "FNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "FNetModel" ], "sha": "48fa66de37df126504db3b658806135eb877f505" }, "FSMTForConditionalGeneration": { "tokenizer_classes": [ "FSMTTokenizer" ], "processor_classes": [], "model_classes": [ "FSMTForConditionalGeneration" ], "sha": "6a1a981b29c8a98c1fd31bd0ad809f5575ca6c7a" }, "FSMTModel": { "tokenizer_classes": [ "FSMTTokenizer" ], "processor_classes": [], "model_classes": [ "FSMTModel" ], "sha": "683f6f73a2ab87801f1695a72d1af63cf173ab7c" }, "FalconForCausalLM": { "tokenizer_classes": [ "PreTrainedTokenizerFast" ], "processor_classes": [], "model_classes": [ "FalconForCausalLM" ], "sha": "60076d5dafc5e33ba9c90dcd05e7c0834e44049a" }, "FalconForQuestionAnswering": { "tokenizer_classes": [ "PreTrainedTokenizerFast" ], "processor_classes": [], "model_classes": [ "FalconForQuestionAnswering" ], "sha": "b1ee9cd5fad2d177ea5a46df4611cd02f66ae788" }, "FalconForSequenceClassification": { "tokenizer_classes": [ "PreTrainedTokenizerFast" ], "processor_classes": [], "model_classes": [ "FalconForSequenceClassification" ], "sha": "007838c0991c2b6a87dc49a8a5c20f29149a00fa" }, "FalconForTokenClassification": { "tokenizer_classes": [ "PreTrainedTokenizerFast" ], "processor_classes": [], "model_classes": [ "FalconForTokenClassification" ], "sha": "0ea6ae548773daa6e3317fddc058957e956eebf4" }, "FalconModel": { "tokenizer_classes": [ "PreTrainedTokenizerFast" ], "processor_classes": [], "model_classes": [ "FalconModel" ], "sha": "ca15a579c946eb00c5b39cc8e0ea63d0c1460f84" }, "FlaubertForMultipleChoice": { "tokenizer_classes": [ "FlaubertTokenizer" ], "processor_classes": [], "model_classes": [ "FlaubertForMultipleChoice", "TFFlaubertForMultipleChoice" ], "sha": "8b12bd87a63f2e86c3482431742f6d8abf6ec4fd" }, "FlaubertForQuestionAnsweringSimple": { "tokenizer_classes": [ "FlaubertTokenizer" ], "processor_classes": [], "model_classes": [ "FlaubertForQuestionAnsweringSimple", "TFFlaubertForQuestionAnsweringSimple" ], "sha": "5c0e7ad1efae7e3497f5cd6d2d9519403df49d37" }, "FlaubertForSequenceClassification": { "tokenizer_classes": [ "FlaubertTokenizer" ], "processor_classes": [], "model_classes": [ "FlaubertForSequenceClassification", "TFFlaubertForSequenceClassification" ], "sha": "762f12a8c99690be8ed2663b7af3011660174a7c" }, "FlaubertForTokenClassification": { "tokenizer_classes": [ "FlaubertTokenizer" ], "processor_classes": [], "model_classes": [ "FlaubertForTokenClassification", "TFFlaubertForTokenClassification" ], "sha": "d2ab741c937bb69ef27c89e4c86a8c9d444874ca" }, "FlaubertModel": { "tokenizer_classes": [ "FlaubertTokenizer" ], "processor_classes": [], "model_classes": [ "FlaubertModel", "TFFlaubertModel" ], "sha": "bdc2f8e17bb869393053429ec8c1c842bfeabb07" }, "FlaubertWithLMHeadModel": { "tokenizer_classes": [ "FlaubertTokenizer" 
], "processor_classes": [], "model_classes": [ "FlaubertWithLMHeadModel", "TFFlaubertWithLMHeadModel" ], "sha": "f20eb0932c90061003c9cc4e109c6ea22559c4f2" }, "FlavaForPreTraining": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "FlavaImageProcessor" ], "model_classes": [ "FlavaForPreTraining" ], "sha": "6e9b2094060a5fa27984c7b49e5d0e820a88b487" }, "FlavaModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "FlavaImageProcessor" ], "model_classes": [ "FlavaModel" ], "sha": "31ebf1b7a0ef1fd5059b98e28e5ab1c366d2c482" }, "FocalNetBackbone": { "tokenizer_classes": [], "processor_classes": [ "BitImageProcessor" ], "model_classes": [ "FocalNetBackbone" ], "sha": "eb8c580969443cb87de7dd9a256deaface03692f" }, "FocalNetForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "BitImageProcessor" ], "model_classes": [ "FocalNetForImageClassification" ], "sha": "28d30ded26a3213e8fb7011a455afc3aa98b0a95" }, "FocalNetForMaskedImageModeling": { "tokenizer_classes": [], "processor_classes": [ "BitImageProcessor" ], "model_classes": [ "FocalNetForMaskedImageModeling" ], "sha": "0ea7626d19c9dd2f3113d977f643a1babc720bd3" }, "FocalNetModel": { "tokenizer_classes": [], "processor_classes": [ "BitImageProcessor" ], "model_classes": [ "FocalNetModel" ], "sha": "107b004e6aa14108a359b7d22bdb9aa141ec05d5" }, "FunnelBaseModel": { "tokenizer_classes": [ "FunnelTokenizer", "FunnelTokenizerFast" ], "processor_classes": [], "model_classes": [ "FunnelBaseModel", "TFFunnelBaseModel" ], "sha": "87fed4252812df23315a56531625333e315681c6" }, "FunnelForMaskedLM": { "tokenizer_classes": [ "FunnelTokenizer", "FunnelTokenizerFast" ], "processor_classes": [], "model_classes": [ "FunnelForMaskedLM", "TFFunnelForMaskedLM" ], "sha": "5543daf29f185cd45f2599bd6f38c96064c9c8de" }, "FunnelForMultipleChoice": { "tokenizer_classes": [ "FunnelTokenizer", "FunnelTokenizerFast" ], "processor_classes": [], "model_classes": [ "FunnelForMultipleChoice", "TFFunnelForMultipleChoice" ], "sha": "a8bf597e37dbefb1ac5c97c4cb162c3d522a33a1" }, "FunnelForPreTraining": { "tokenizer_classes": [ "FunnelTokenizer", "FunnelTokenizerFast" ], "processor_classes": [], "model_classes": [ "FunnelForPreTraining", "TFFunnelForPreTraining" ], "sha": "cbcb300d60aacd5950a45409b6e3f0f240c9082e" }, "FunnelForQuestionAnswering": { "tokenizer_classes": [ "FunnelTokenizer", "FunnelTokenizerFast" ], "processor_classes": [], "model_classes": [ "FunnelForQuestionAnswering", "TFFunnelForQuestionAnswering" ], "sha": "6a5675305e096434e818486a13892cb55daffd13" }, "FunnelForSequenceClassification": { "tokenizer_classes": [ "FunnelTokenizer", "FunnelTokenizerFast" ], "processor_classes": [], "model_classes": [ "FunnelForSequenceClassification", "TFFunnelForSequenceClassification" ], "sha": "1bc557a1e4314da21a44dee57b799e95a7025e5c" }, "FunnelForTokenClassification": { "tokenizer_classes": [ "FunnelTokenizer", "FunnelTokenizerFast" ], "processor_classes": [], "model_classes": [ "FunnelForTokenClassification", "TFFunnelForTokenClassification" ], "sha": "693bc1217a224efd558f410ddc8ffc63739bebc3" }, "FunnelModel": { "tokenizer_classes": [ "FunnelTokenizer", "FunnelTokenizerFast" ], "processor_classes": [], "model_classes": [ "FunnelModel", "TFFunnelModel" ], "sha": "bfbaa8fa21c3abf80b94e7168b5ecff8ec5b5f76" }, "FuyuForCausalLM": { "tokenizer_classes": [ "LlamaTokenizerFast" ], "processor_classes": [ "FuyuImageProcessor" ], "model_classes": [ "FuyuForCausalLM" ], "sha": 
"685d78258ea95c5c82e0e4555d0d4a2270ab8bff" }, "GLPNForDepthEstimation": { "tokenizer_classes": [], "processor_classes": [ "GLPNImageProcessor" ], "model_classes": [ "GLPNForDepthEstimation" ], "sha": "32ca1c1ef5d33242e5e7c0433bcd773c082f0260" }, "GLPNModel": { "tokenizer_classes": [], "processor_classes": [ "GLPNImageProcessor" ], "model_classes": [ "GLPNModel" ], "sha": "24a8dbb48b1aa0ba2eba44324fcd0c78cca64dd4" }, "GPT2ForQuestionAnswering": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPT2ForQuestionAnswering" ], "sha": "a5bdd6bd4d79feece85ea9a8bd4ee5fe54c1d45b" }, "GPT2ForSequenceClassification": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPT2ForSequenceClassification", "TFGPT2ForSequenceClassification" ], "sha": "90a2d78e5c7f288152f8456c3d58a43b40a58449" }, "GPT2ForTokenClassification": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPT2ForTokenClassification" ], "sha": "da78bc95b45fab2da9d43f2ca27164996e31ade1" }, "GPT2LMHeadModel": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPT2LMHeadModel", "TFGPT2LMHeadModel" ], "sha": "78f56535d4ce19e9d7c0992e390085c5a4196b37" }, "GPT2Model": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPT2Model", "TFGPT2Model" ], "sha": "d6694b0d8fe17978761c9305dc151780506b192e" }, "GPTBigCodeForCausalLM": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTBigCodeForCausalLM" ], "sha": "99f7aaadf9c29669c63ef6c16f6bc5c07dbb9126" }, "GPTBigCodeForSequenceClassification": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTBigCodeForSequenceClassification" ], "sha": "64a7398d5763161037b818314c60dd83d93d03e9" }, "GPTBigCodeForTokenClassification": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTBigCodeForTokenClassification" ], "sha": "310537ecd22d45f71bf594b17922cf2abc338eaf" }, "GPTBigCodeModel": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTBigCodeModel" ], "sha": "3069419084a9dc36802d47de9df3d314ccfc2f28" }, "GPTJForCausalLM": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTJForCausalLM", "TFGPTJForCausalLM" ], "sha": "1fff390baa45cb187903ebdd269c975bb9ed7386" }, "GPTJForQuestionAnswering": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTJForQuestionAnswering", "TFGPTJForQuestionAnswering" ], "sha": "3d4ec61dbed01f844d4c309971eeb5ad722c6c84" }, "GPTJForSequenceClassification": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTJForSequenceClassification", "TFGPTJForSequenceClassification" ], "sha": "4b5db259cd16ca84ae2cd79aa4851cdd14479128" }, "GPTJModel": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTJModel", "TFGPTJModel" ], "sha": "d8e1db30d08fbf57da6fc139aea3ffd63ab6226e" }, "GPTNeoForCausalLM": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTNeoForCausalLM" ], "sha": 
"e88934e402c15195dd99b2947632415dd7645268" }, "GPTNeoForQuestionAnswering": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTNeoForQuestionAnswering" ], "sha": "623883e94bd08caf9b3f839b98debeea72d5bc2b" }, "GPTNeoForSequenceClassification": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTNeoForSequenceClassification" ], "sha": "bf2090d5d91a70eb37ba51fbdcf23afc7031fea8" }, "GPTNeoForTokenClassification": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTNeoForTokenClassification" ], "sha": "d5208e73e24a1671219776b50fe5f96e0e4cd218" }, "GPTNeoModel": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTNeoModel" ], "sha": "72a7cd49da613c3125a90884df4763545c594e56" }, "GPTNeoXForCausalLM": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTNeoXForCausalLM" ], "sha": "0229cfaaa843c6b492ac2abffabb00f1ff1936f8" }, "GPTNeoXForQuestionAnswering": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTNeoXForQuestionAnswering" ], "sha": "7d2f08c959c211129952ee03b5562add09fe6864" }, "GPTNeoXForSequenceClassification": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTNeoXForSequenceClassification" ], "sha": "17c4b845ee2e0bb780ca2dea2d59a3d9d5d3c651" }, "GPTNeoXForTokenClassification": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTNeoXForTokenClassification" ], "sha": "3aa4fe8a562f32230041d6d3616aa5ecc3f30192" }, "GPTNeoXJapaneseForCausalLM": { "tokenizer_classes": [ "GPTNeoXJapaneseTokenizer" ], "processor_classes": [], "model_classes": [ "GPTNeoXJapaneseForCausalLM" ], "sha": "5fca2479f1064fd22e17f944c8fcc14f7e73f1d5" }, "GPTNeoXJapaneseModel": { "tokenizer_classes": [ "GPTNeoXJapaneseTokenizer" ], "processor_classes": [], "model_classes": [ "GPTNeoXJapaneseModel" ], "sha": "5c6ed124150df845cfc701d70b97fdcde687be52" }, "GPTNeoXModel": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], "processor_classes": [], "model_classes": [ "GPTNeoXModel" ], "sha": "33114ba2f72189d5a2bd63f0cdb78551189242ff" }, "GPTSanJapaneseForConditionalGeneration": { "tokenizer_classes": [ "GPTSanJapaneseTokenizer" ], "processor_classes": [], "model_classes": [ "GPTSanJapaneseForConditionalGeneration" ], "sha": "ff6a41faaa713c7fbd5d9a1a50539745f9e1178e" }, "GitForCausalLM": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "CLIPImageProcessor" ], "model_classes": [ "GitForCausalLM" ], "sha": "60f9c50466ae0beeb11776ca5bfeb6473f441554" }, "GitModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "CLIPImageProcessor" ], "model_classes": [ "GitModel" ], "sha": "3d2eb6bddf95bb4a4e59b045d4e464c730c07f41" }, "GroupViTModel": { "tokenizer_classes": [ "CLIPTokenizer", "CLIPTokenizerFast" ], "processor_classes": [ "CLIPImageProcessor" ], "model_classes": [ "GroupViTModel", "TFGroupViTModel" ], "sha": "05a3a02dd46cb9eb078608dec98f633c0cf559ef" }, "HubertForCTC": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "HubertForCTC" ], "sha": "13431b76106f993eedcff48a75bae590a09b14f7" }, "HubertForSequenceClassification": { "tokenizer_classes": [ 
"Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "HubertForSequenceClassification" ], "sha": "d23f46607a900b1a55dfee4b7ed205a6823035b1" }, "HubertModel": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "HubertModel", "TFHubertModel" ], "sha": "3224562c86c4669db65ae7defdc5fb555b113e95" }, "IBertForMaskedLM": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "IBertForMaskedLM" ], "sha": "e333a9c9d375f4d839b7e9e21d1a1c8dad58d7d1" }, "IBertForMultipleChoice": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "IBertForMultipleChoice" ], "sha": "a81f7d64cd7ce5fe6cd726b23d9d14ac5d17bf53" }, "IBertForQuestionAnswering": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "IBertForQuestionAnswering" ], "sha": "7b66d13d4d6801a82cbeb7f9fd853ca1630d1f8b" }, "IBertForSequenceClassification": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "IBertForSequenceClassification" ], "sha": "309d57145c40f889222fe5df62f14dddf4496b38" }, "IBertForTokenClassification": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "IBertForTokenClassification" ], "sha": "b032e9bff4b081b78c098b2d8bc610ac035c6ddf" }, "IBertModel": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "IBertModel" ], "sha": "6749164c678d4883d455f98b1dfc98c62da8f08b" }, "IdeficsForVisionText2Text": { "tokenizer_classes": [ "LlamaTokenizerFast" ], "processor_classes": [ "IdeficsImageProcessor" ], "model_classes": [ "IdeficsForVisionText2Text" ], "sha": "a6be81294ff7a3d44f3aef0ed18e42b97c426831" }, "IdeficsModel": { "tokenizer_classes": [ "LlamaTokenizerFast" ], "processor_classes": [ "IdeficsImageProcessor" ], "model_classes": [ "IdeficsModel" ], "sha": "649df2e35e067efd573ff2d083784a5cf876545e" }, "ImageGPTForCausalImageModeling": { "tokenizer_classes": [], "processor_classes": [ "ImageGPTImageProcessor" ], "model_classes": [ "ImageGPTForCausalImageModeling" ], "sha": "9a7d1fc04439ab1d9d690de9c3e7673f08568cdf" }, "ImageGPTForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ImageGPTImageProcessor" ], "model_classes": [ "ImageGPTForImageClassification" ], "sha": "d92c7aed4ba5de74a1f542b736010090e4a58b42" }, "ImageGPTModel": { "tokenizer_classes": [], "processor_classes": [ "ImageGPTImageProcessor" ], "model_classes": [ "ImageGPTModel" ], "sha": "5a7983e48d5841704733dd0756177680ed50c074" }, "Kosmos2ForConditionalGeneration": { "tokenizer_classes": [ "XLMRobertaTokenizerFast" ], "processor_classes": [ "CLIPImageProcessor" ], "model_classes": [ "Kosmos2ForConditionalGeneration" ], "sha": "d1d4607782b911411676f1ee79997dee645def58" }, "Kosmos2Model": { "tokenizer_classes": [ "XLMRobertaTokenizerFast" ], "processor_classes": [ "CLIPImageProcessor" ], "model_classes": [ "Kosmos2Model" ], "sha": "379d8944a65312094d9ab1c4b8a82058a2d3274e" }, "LEDForConditionalGeneration": { "tokenizer_classes": [ "LEDTokenizer", "LEDTokenizerFast" ], "processor_classes": [], "model_classes": [ "LEDForConditionalGeneration", "TFLEDForConditionalGeneration" ], "sha": "a354b49a79351f3ea8ae7776d9f8352ae26cfc14" }, "LEDForQuestionAnswering": { 
"tokenizer_classes": [ "LEDTokenizer", "LEDTokenizerFast" ], "processor_classes": [], "model_classes": [ "LEDForQuestionAnswering" ], "sha": "47c7a75a1e650dae60ff6e9bbab0f2386946670c" }, "LEDForSequenceClassification": { "tokenizer_classes": [ "LEDTokenizer", "LEDTokenizerFast" ], "processor_classes": [], "model_classes": [ "LEDForSequenceClassification" ], "sha": "3571e2c9d9f2f2ec0b8fe47090330b128be05126" }, "LEDModel": { "tokenizer_classes": [ "LEDTokenizer", "LEDTokenizerFast" ], "processor_classes": [], "model_classes": [ "LEDModel", "TFLEDModel" ], "sha": "3c3f6eb142545afc570187bfdabfe65d43dafbe4" }, "LayoutLMForMaskedLM": { "tokenizer_classes": [ "LayoutLMTokenizer", "LayoutLMTokenizerFast" ], "processor_classes": [], "model_classes": [ "LayoutLMForMaskedLM", "TFLayoutLMForMaskedLM" ], "sha": "0368bd9bd8fd3eb43b8a3b38962b5345b8765514" }, "LayoutLMForQuestionAnswering": { "tokenizer_classes": [ "LayoutLMTokenizer", "LayoutLMTokenizerFast" ], "processor_classes": [], "model_classes": [ "LayoutLMForQuestionAnswering", "TFLayoutLMForQuestionAnswering" ], "sha": "0d6a4bc614fccfa313c1fb6d132a250929518f85" }, "LayoutLMForSequenceClassification": { "tokenizer_classes": [ "LayoutLMTokenizer", "LayoutLMTokenizerFast" ], "processor_classes": [], "model_classes": [ "LayoutLMForSequenceClassification", "TFLayoutLMForSequenceClassification" ], "sha": "1bd68c73dbf6c8c0526d24fbe2831be82998c440" }, "LayoutLMForTokenClassification": { "tokenizer_classes": [ "LayoutLMTokenizer", "LayoutLMTokenizerFast" ], "processor_classes": [], "model_classes": [ "LayoutLMForTokenClassification", "TFLayoutLMForTokenClassification" ], "sha": "155e7da3f1d786aa39d957b16080c52de4a7efd7" }, "LayoutLMModel": { "tokenizer_classes": [ "LayoutLMTokenizer", "LayoutLMTokenizerFast" ], "processor_classes": [], "model_classes": [ "LayoutLMModel", "TFLayoutLMModel" ], "sha": "14f77b30d267910f11f0fd532a91a6b85ab3a4de" }, "LayoutLMv2ForQuestionAnswering": { "tokenizer_classes": [ "LayoutLMv2Tokenizer", "LayoutLMv2TokenizerFast" ], "processor_classes": [ "LayoutLMv2ImageProcessor" ], "model_classes": [ "LayoutLMv2ForQuestionAnswering" ], "sha": "f452e28dd34d3c38cce046b1cc7b0ada69f587b1" }, "LayoutLMv2ForSequenceClassification": { "tokenizer_classes": [ "LayoutLMv2Tokenizer", "LayoutLMv2TokenizerFast" ], "processor_classes": [ "LayoutLMv2ImageProcessor" ], "model_classes": [ "LayoutLMv2ForSequenceClassification" ], "sha": "b483e08fd143113629ecda3dbfd57e69bfeb5f11" }, "LayoutLMv2ForTokenClassification": { "tokenizer_classes": [ "LayoutLMv2Tokenizer", "LayoutLMv2TokenizerFast" ], "processor_classes": [ "LayoutLMv2ImageProcessor" ], "model_classes": [ "LayoutLMv2ForTokenClassification" ], "sha": "0721ae69bff00ecfff1b3d1521a475cde0253299" }, "LayoutLMv2Model": { "tokenizer_classes": [ "LayoutLMv2Tokenizer", "LayoutLMv2TokenizerFast" ], "processor_classes": [ "LayoutLMv2ImageProcessor" ], "model_classes": [ "LayoutLMv2Model" ], "sha": "6a1b510769b344979a910a7d0bade613a9ec2dfc" }, "LayoutLMv3ForQuestionAnswering": { "tokenizer_classes": [ "LayoutLMv3Tokenizer", "LayoutLMv3TokenizerFast" ], "processor_classes": [ "LayoutLMv3ImageProcessor" ], "model_classes": [ "LayoutLMv3ForQuestionAnswering", "TFLayoutLMv3ForQuestionAnswering" ], "sha": "4640242388e69cf77ea2dd3ac36ec6f1b26628c8" }, "LayoutLMv3ForSequenceClassification": { "tokenizer_classes": [ "LayoutLMv3Tokenizer", "LayoutLMv3TokenizerFast" ], "processor_classes": [ "LayoutLMv3ImageProcessor" ], "model_classes": [ "LayoutLMv3ForSequenceClassification", 
"TFLayoutLMv3ForSequenceClassification" ], "sha": "96515f699874cfbfbec7a64c539ae92419e4c6dc" }, "LayoutLMv3ForTokenClassification": { "tokenizer_classes": [ "LayoutLMv3Tokenizer", "LayoutLMv3TokenizerFast" ], "processor_classes": [ "LayoutLMv3ImageProcessor" ], "model_classes": [ "LayoutLMv3ForTokenClassification", "TFLayoutLMv3ForTokenClassification" ], "sha": "ed4ffc464f2028fe50dfc6823f4eda78d34be7e6" }, "LayoutLMv3Model": { "tokenizer_classes": [ "LayoutLMv3Tokenizer", "LayoutLMv3TokenizerFast" ], "processor_classes": [ "LayoutLMv3ImageProcessor" ], "model_classes": [ "LayoutLMv3Model", "TFLayoutLMv3Model" ], "sha": "69725e5e2445e5c1c3aa8a2aa49cfd72e0a44565" }, "LevitForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "LevitImageProcessor" ], "model_classes": [ "LevitForImageClassification" ], "sha": "5ae8ccaa1fe1c947cb8ae6499e4a150c668bb9f0" }, "LevitForImageClassificationWithTeacher": { "tokenizer_classes": [], "processor_classes": [ "LevitImageProcessor" ], "model_classes": [ "LevitForImageClassificationWithTeacher" ], "sha": "568cc0d965b9bd293f240e7724314db6d50f6722" }, "LevitModel": { "tokenizer_classes": [], "processor_classes": [ "LevitImageProcessor" ], "model_classes": [ "LevitModel" ], "sha": "172efa52b50c75c3b3e498fa638f55e65b2ebf87" }, "LiltForQuestionAnswering": { "tokenizer_classes": [ "LayoutLMv3Tokenizer", "LayoutLMv3TokenizerFast" ], "processor_classes": [], "model_classes": [ "LiltForQuestionAnswering" ], "sha": "0a348441999e98ec003b29fc4d5a67ad22ee6ca2" }, "LiltForSequenceClassification": { "tokenizer_classes": [ "LayoutLMv3Tokenizer", "LayoutLMv3TokenizerFast" ], "processor_classes": [], "model_classes": [ "LiltForSequenceClassification" ], "sha": "c53ab0ba33536fe564a4a1e4f1674d990c01b83a" }, "LiltForTokenClassification": { "tokenizer_classes": [ "LayoutLMv3Tokenizer", "LayoutLMv3TokenizerFast" ], "processor_classes": [], "model_classes": [ "LiltForTokenClassification" ], "sha": "14f85076f9b3f7016917e324d51ebd22511a2ae5" }, "LiltModel": { "tokenizer_classes": [ "LayoutLMv3Tokenizer", "LayoutLMv3TokenizerFast" ], "processor_classes": [], "model_classes": [ "LiltModel" ], "sha": "3f1166cc14c532388df7e82336a8e575a813bd3f" }, "LongT5ForConditionalGeneration": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "LongT5ForConditionalGeneration" ], "sha": "c685cbbe706ad5c9a28689631765726a1874dcc7" }, "LongT5Model": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "LongT5Model" ], "sha": "6b468e55e2490565e6155690201086ac00c72062" }, "LongformerForMaskedLM": { "tokenizer_classes": [ "LongformerTokenizer", "LongformerTokenizerFast" ], "processor_classes": [], "model_classes": [ "LongformerForMaskedLM", "TFLongformerForMaskedLM" ], "sha": "929d3bda9a1485d9bae41f9dbfc1d149c1c4e78e" }, "LongformerForMultipleChoice": { "tokenizer_classes": [ "LongformerTokenizer", "LongformerTokenizerFast" ], "processor_classes": [], "model_classes": [ "LongformerForMultipleChoice", "TFLongformerForMultipleChoice" ], "sha": "60b1ecac6b9385ce18c7e6978ab161cce8e7f9d4" }, "LongformerForQuestionAnswering": { "tokenizer_classes": [ "LongformerTokenizer", "LongformerTokenizerFast" ], "processor_classes": [], "model_classes": [ "LongformerForQuestionAnswering", "TFLongformerForQuestionAnswering" ], "sha": "be45ab1321b703f2200cbbcae560aaf2e2afef88" }, "LongformerForSequenceClassification": { "tokenizer_classes": [ "LongformerTokenizer", "LongformerTokenizerFast" ], 
"processor_classes": [], "model_classes": [ "LongformerForSequenceClassification", "TFLongformerForSequenceClassification" ], "sha": "8bc0de0b0f740bf397eb2770ec3ce3a24f3d7af9" }, "LongformerForTokenClassification": { "tokenizer_classes": [ "LongformerTokenizer", "LongformerTokenizerFast" ], "processor_classes": [], "model_classes": [ "LongformerForTokenClassification", "TFLongformerForTokenClassification" ], "sha": "efa33a9b6f47f0f7979af08ae8d04a5a7363a14b" }, "LongformerModel": { "tokenizer_classes": [ "LongformerTokenizer", "LongformerTokenizerFast" ], "processor_classes": [], "model_classes": [ "LongformerModel", "TFLongformerModel" ], "sha": "b023d531688e8655fc09300ac36742588efb3240" }, "LukeForMaskedLM": { "tokenizer_classes": [ "LukeTokenizer" ], "processor_classes": [], "model_classes": [ "LukeForMaskedLM" ], "sha": "954cf6cd2bf1f298a3956b10c36656c57387506d" }, "LukeForMultipleChoice": { "tokenizer_classes": [ "LukeTokenizer" ], "processor_classes": [], "model_classes": [ "LukeForMultipleChoice" ], "sha": "d1310a9174ad50d60b30ad6049e165deb2539034" }, "LukeForQuestionAnswering": { "tokenizer_classes": [ "LukeTokenizer" ], "processor_classes": [], "model_classes": [ "LukeForQuestionAnswering" ], "sha": "3ea38da4e32cb4e45bea82b2e81a8639aeba2c35" }, "LukeForSequenceClassification": { "tokenizer_classes": [ "LukeTokenizer" ], "processor_classes": [], "model_classes": [ "LukeForSequenceClassification" ], "sha": "b5b11248aeb4f5976379d15a977aeb2677e0c0f9" }, "LukeForTokenClassification": { "tokenizer_classes": [ "LukeTokenizer" ], "processor_classes": [], "model_classes": [ "LukeForTokenClassification" ], "sha": "8aab1a33ad26a344a6f4dfd68630e9661e174471" }, "LukeModel": { "tokenizer_classes": [ "LukeTokenizer" ], "processor_classes": [], "model_classes": [ "LukeModel" ], "sha": "ae23a674e7297d41f33c9af86e039757dfd2d531" }, "LxmertForPreTraining": { "tokenizer_classes": [ "LxmertTokenizer", "LxmertTokenizerFast" ], "processor_classes": [], "model_classes": [ "LxmertForPreTraining", "TFLxmertForPreTraining" ], "sha": "7b0843403c187aef00f20d5087086468d9613d2c" }, "LxmertForQuestionAnswering": { "tokenizer_classes": [ "LxmertTokenizer", "LxmertTokenizerFast" ], "processor_classes": [], "model_classes": [ "LxmertForQuestionAnswering" ], "sha": "27a74bd2cd156e46656c43ceb432c4deda0df5c1" }, "LxmertModel": { "tokenizer_classes": [ "LxmertTokenizer", "LxmertTokenizerFast" ], "processor_classes": [], "model_classes": [ "LxmertModel", "TFLxmertModel" ], "sha": "97612a0d6b14406ea9bfd7672e6974e0961cbef1" }, "M2M100ForConditionalGeneration": { "tokenizer_classes": [ "M2M100Tokenizer" ], "processor_classes": [], "model_classes": [ "M2M100ForConditionalGeneration" ], "sha": "32ac347092d51f658b41ffc111b67d49acdeab46" }, "M2M100Model": { "tokenizer_classes": [ "M2M100Tokenizer" ], "processor_classes": [], "model_classes": [ "M2M100Model" ], "sha": "e95c2ae168c7ba19f8114def40e1b1edd953b2f5" }, "MBartForCausalLM": { "tokenizer_classes": [ "MBartTokenizer", "MBartTokenizerFast" ], "processor_classes": [], "model_classes": [ "MBartForCausalLM" ], "sha": "a45044f8056328d20a764356eca3d0746a7a195e" }, "MBartForConditionalGeneration": { "tokenizer_classes": [ "MBartTokenizer", "MBartTokenizerFast" ], "processor_classes": [], "model_classes": [ "MBartForConditionalGeneration", "TFMBartForConditionalGeneration" ], "sha": "171e918962d6c0ee56c6b070858e19e16c8dd09f" }, "MBartForQuestionAnswering": { "tokenizer_classes": [ "MBartTokenizer", "MBartTokenizerFast" ], "processor_classes": [], "model_classes": [ 
"MBartForQuestionAnswering" ], "sha": "1ee08565d24777335595e0d2940e454abdcff731" }, "MBartForSequenceClassification": { "tokenizer_classes": [ "MBartTokenizer", "MBartTokenizerFast" ], "processor_classes": [], "model_classes": [ "MBartForSequenceClassification" ], "sha": "53e9c88ecfa2475d27afe099ffa7a8bcdb7ef7e4" }, "MBartModel": { "tokenizer_classes": [ "MBartTokenizer", "MBartTokenizerFast" ], "processor_classes": [], "model_classes": [ "MBartModel", "TFMBartModel" ], "sha": "2d492b34d69dd63b411990d5c8bb692fd637e91c" }, "MCTCTForCTC": { "tokenizer_classes": [], "processor_classes": [ "MCTCTFeatureExtractor" ], "model_classes": [ "MCTCTForCTC" ], "sha": "895a3d74f87b344b1f0a71eae4f085941d51b5cf" }, "MCTCTModel": { "tokenizer_classes": [], "processor_classes": [ "MCTCTFeatureExtractor" ], "model_classes": [ "MCTCTModel" ], "sha": "ce73d5c2b6fe163de778697d7b0543bf00d7ffa8" }, "MPNetForMaskedLM": { "tokenizer_classes": [ "MPNetTokenizer", "MPNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "MPNetForMaskedLM", "TFMPNetForMaskedLM" ], "sha": "50af96e7d0202aef86e396c136e4c4fde8afe183" }, "MPNetForMultipleChoice": { "tokenizer_classes": [ "MPNetTokenizer", "MPNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "MPNetForMultipleChoice", "TFMPNetForMultipleChoice" ], "sha": "af4ff8bf296a3a51f5ab6cd9f56741e4c732487c" }, "MPNetForQuestionAnswering": { "tokenizer_classes": [ "MPNetTokenizer", "MPNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "MPNetForQuestionAnswering", "TFMPNetForQuestionAnswering" ], "sha": "3e1a25c0d3243f78f81580c312ada3b39c06b428" }, "MPNetForSequenceClassification": { "tokenizer_classes": [ "MPNetTokenizer", "MPNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "MPNetForSequenceClassification", "TFMPNetForSequenceClassification" ], "sha": "43da45c0a0d73c5a5567b4c7ec512ec5023e52dd" }, "MPNetForTokenClassification": { "tokenizer_classes": [ "MPNetTokenizer", "MPNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "MPNetForTokenClassification", "TFMPNetForTokenClassification" ], "sha": "4e825eff24df533321ebab823eb66ce67e4ab3d9" }, "MPNetModel": { "tokenizer_classes": [ "MPNetTokenizer", "MPNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "MPNetModel", "TFMPNetModel" ], "sha": "847c68344c2922e9a71fa8835b87a0f6f72b9f47" }, "MarianForCausalLM": { "tokenizer_classes": [ "MarianTokenizer" ], "processor_classes": [], "model_classes": [], "sha": "5fb205e6db8e18e3c6cdd4e4709be292ba4599f3" }, "MarianMTModel": { "tokenizer_classes": [ "MarianTokenizer" ], "processor_classes": [], "model_classes": [ "MarianMTModel", "TFMarianMTModel" ], "sha": "0405f542b31561592231a86e3009d05256cbf49f" }, "MarianModel": { "tokenizer_classes": [ "MarianTokenizer" ], "processor_classes": [], "model_classes": [ "MarianModel", "TFMarianModel" ], "sha": "3649748c0286c6d5179a7013a716f7314db182a8" }, "MarkupLMForQuestionAnswering": { "tokenizer_classes": [ "MarkupLMTokenizer", "MarkupLMTokenizerFast" ], "processor_classes": [ "MarkupLMFeatureExtractor" ], "model_classes": [ "MarkupLMForQuestionAnswering" ], "sha": "c8bb9f93591d980362547b0bdca9f23ace2f383e" }, "MarkupLMForSequenceClassification": { "tokenizer_classes": [ "MarkupLMTokenizer", "MarkupLMTokenizerFast" ], "processor_classes": [ "MarkupLMFeatureExtractor" ], "model_classes": [ "MarkupLMForSequenceClassification" ], "sha": "c2cb7245d68d76e0a5f993fc8a3de099ecebc68b" }, "MarkupLMForTokenClassification": { "tokenizer_classes": [ "MarkupLMTokenizer", 
"MarkupLMTokenizerFast" ], "processor_classes": [ "MarkupLMFeatureExtractor" ], "model_classes": [ "MarkupLMForTokenClassification" ], "sha": "b9f924e82f400de0b34b46ee4ba276d686bd4890" }, "MarkupLMModel": { "tokenizer_classes": [ "MarkupLMTokenizer", "MarkupLMTokenizerFast" ], "processor_classes": [ "MarkupLMFeatureExtractor" ], "model_classes": [ "MarkupLMModel" ], "sha": "9687ba29f1c59d978e3d4b0fa702031f88eff53b" }, "Mask2FormerForUniversalSegmentation": { "tokenizer_classes": [], "processor_classes": [ "Mask2FormerImageProcessor" ], "model_classes": [ "Mask2FormerForUniversalSegmentation" ], "sha": "6429a7349527c9ef140ae691b83c47702cce1bc0" }, "Mask2FormerModel": { "tokenizer_classes": [], "processor_classes": [ "Mask2FormerImageProcessor" ], "model_classes": [ "Mask2FormerModel" ], "sha": "9bee8709204024b3669d503cdfe8890182f2a075" }, "MaskFormerForInstanceSegmentation": { "tokenizer_classes": [], "processor_classes": [ "MaskFormerImageProcessor" ], "model_classes": [ "MaskFormerForInstanceSegmentation" ], "sha": "f844aaa81f55cb199c115f1bf95c217a70685570" }, "MaskFormerModel": { "tokenizer_classes": [], "processor_classes": [ "MaskFormerImageProcessor" ], "model_classes": [ "MaskFormerModel" ], "sha": "473b54a464bc0ccee29bc23b4f6610f32eec05af" }, "MegaForCausalLM": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegaForCausalLM" ], "sha": "6642b9da860f8b62abcfb0660feabcebf6698418" }, "MegaForMaskedLM": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegaForMaskedLM" ], "sha": "6b2d47ba03bec9e6f7eefdd4a67351fa191aae6f" }, "MegaForMultipleChoice": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegaForMultipleChoice" ], "sha": "2b1e751da36a4410473eef07a62b09227a26d504" }, "MegaForQuestionAnswering": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegaForQuestionAnswering" ], "sha": "612acd9a53c351c42514adb3c04f2057d2870be7" }, "MegaForSequenceClassification": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegaForSequenceClassification" ], "sha": "4871572da1613b7e9cfd3640c6d1129af004eefb" }, "MegaForTokenClassification": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegaForTokenClassification" ], "sha": "450d3722c3b995215d06b9c12544c99f958581c7" }, "MegaModel": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegaModel" ], "sha": "ca0862db27428893fe22f9bb5d2eb0875c2156f3" }, "MegatronBertForCausalLM": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegatronBertForCausalLM" ], "sha": "ff08d05ef8f98fdccf1f01560ec6ec4adbc8a3e3" }, "MegatronBertForMaskedLM": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegatronBertForMaskedLM" ], "sha": "2ed25e2681d26b51b404ef1347a385c5f2c86a9a" }, "MegatronBertForMultipleChoice": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegatronBertForMultipleChoice" ], "sha": "1485af4b75f8f234d2b4b5aea50ab2ec55223a15" }, "MegatronBertForNextSentencePrediction": { "tokenizer_classes": [ "BertTokenizer", 
"BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegatronBertForNextSentencePrediction" ], "sha": "52bc9ee1d5145344f66b088ed278f07ed3d90584" }, "MegatronBertForPreTraining": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegatronBertForPreTraining" ], "sha": "e580d0efd54e1c92789e39b32929234e36ee427f" }, "MegatronBertForQuestionAnswering": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegatronBertForQuestionAnswering" ], "sha": "7342ba042a3c30c15382d00fcb0521533fc43841" }, "MegatronBertForSequenceClassification": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegatronBertForSequenceClassification" ], "sha": "6a7cd480511d817a1e221c8f7558c55a93baed1b" }, "MegatronBertForTokenClassification": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegatronBertForTokenClassification" ], "sha": "8b5334b6ec5f025293ca861de474b57ca84bc005" }, "MegatronBertModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MegatronBertModel" ], "sha": "f2457fbe535ba97ea13db049f53618b42e13f047" }, "MgpstrForSceneTextRecognition": { "tokenizer_classes": [], "processor_classes": [ "MgpstrProcessor" ], "model_classes": [ "MgpstrForSceneTextRecognition" ], "sha": "f197d5bfa1fe27b5f28a6e6d4e3ad229b753450a" }, "MistralForCausalLM": { "tokenizer_classes": [ "LlamaTokenizer", "LlamaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MistralForCausalLM" ], "sha": "f7e06aeedbba8f4f665b438b868ed932d451f64b" }, "MistralForSequenceClassification": { "tokenizer_classes": [ "LlamaTokenizer", "LlamaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MistralForSequenceClassification" ], "sha": "65045444ea1933309270d8b08b21d3fa94a84290" }, "MistralModel": { "tokenizer_classes": [ "LlamaTokenizer", "LlamaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MistralModel" ], "sha": "becd727ad72b1e8a7c0fa0ea39b61904fa68aeac" }, "MobileBertForMaskedLM": { "tokenizer_classes": [ "MobileBertTokenizer", "MobileBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MobileBertForMaskedLM", "TFMobileBertForMaskedLM" ], "sha": "d689e737d73ad23aed3aabd3177591fc827d1c62" }, "MobileBertForMultipleChoice": { "tokenizer_classes": [ "MobileBertTokenizer", "MobileBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MobileBertForMultipleChoice", "TFMobileBertForMultipleChoice" ], "sha": "403d1f88be7eb0c769ff3a8e57eab21cc3e75afb" }, "MobileBertForNextSentencePrediction": { "tokenizer_classes": [ "MobileBertTokenizer", "MobileBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MobileBertForNextSentencePrediction", "TFMobileBertForNextSentencePrediction" ], "sha": "b4d8836a0f259ee3bca9f230093836c9117c5e4d" }, "MobileBertForPreTraining": { "tokenizer_classes": [ "MobileBertTokenizer", "MobileBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MobileBertForPreTraining", "TFMobileBertForPreTraining" ], "sha": "fbaa13ea6f9fcebb9fde620dd009d12510440d17" }, "MobileBertForQuestionAnswering": { "tokenizer_classes": [ "MobileBertTokenizer", "MobileBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MobileBertForQuestionAnswering", "TFMobileBertForQuestionAnswering" ], "sha": "ba6a55cf2daec55bfb220c9bab0bc4ad96510087" }, 
"MobileBertForSequenceClassification": { "tokenizer_classes": [ "MobileBertTokenizer", "MobileBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MobileBertForSequenceClassification", "TFMobileBertForSequenceClassification" ], "sha": "17ab35603bec351457e035eef2d0426538071f72" }, "MobileBertForTokenClassification": { "tokenizer_classes": [ "MobileBertTokenizer", "MobileBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MobileBertForTokenClassification", "TFMobileBertForTokenClassification" ], "sha": "dee83e820e6c4f069886a5d1875bf6775897313e" }, "MobileBertModel": { "tokenizer_classes": [ "MobileBertTokenizer", "MobileBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "MobileBertModel", "TFMobileBertModel" ], "sha": "09b2db33ea798a762eeaf7e727e95f9ea8a6d14f" }, "MobileNetV1ForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "MobileNetV1ImageProcessor" ], "model_classes": [ "MobileNetV1ForImageClassification" ], "sha": "55023dbd0935f147bf1bccf960cea01ca07e0f0c" }, "MobileNetV1Model": { "tokenizer_classes": [], "processor_classes": [ "MobileNetV1ImageProcessor" ], "model_classes": [ "MobileNetV1Model" ], "sha": "178bd24528147a028938d6ee5c7e65c969ea37b0" }, "MobileNetV2ForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "MobileNetV2ImageProcessor" ], "model_classes": [ "MobileNetV2ForImageClassification" ], "sha": "ff907f740cf9ea91bc3cdf403a94ae28fbb2548a" }, "MobileNetV2ForSemanticSegmentation": { "tokenizer_classes": [], "processor_classes": [ "MobileNetV2ImageProcessor" ], "model_classes": [ "MobileNetV2ForSemanticSegmentation" ], "sha": "48adbc340e42882f52b54d4f5dd045e16e9ef2d6" }, "MobileNetV2Model": { "tokenizer_classes": [], "processor_classes": [ "MobileNetV2ImageProcessor" ], "model_classes": [ "MobileNetV2Model" ], "sha": "e876885828825472a80ef1796d89d60b901813ba" }, "MobileViTForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "MobileViTImageProcessor" ], "model_classes": [ "MobileViTForImageClassification", "TFMobileViTForImageClassification" ], "sha": "7d0b31864f856e00f9e34e8c6781dcc7a8cdaf1e" }, "MobileViTForSemanticSegmentation": { "tokenizer_classes": [], "processor_classes": [ "MobileViTImageProcessor" ], "model_classes": [ "MobileViTForSemanticSegmentation", "TFMobileViTForSemanticSegmentation" ], "sha": "215f727caa3c3fc94fa4df486aa706e5d99d4194" }, "MobileViTModel": { "tokenizer_classes": [], "processor_classes": [ "MobileViTImageProcessor" ], "model_classes": [ "MobileViTModel", "TFMobileViTModel" ], "sha": "b3a1452e7cb44b600b21ee14f3d5382366855a46" }, "MobileViTV2ForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "MobileViTImageProcessor" ], "model_classes": [ "MobileViTV2ForImageClassification" ], "sha": "25752b0967ad594341d1b685401450d7f698433c" }, "MobileViTV2ForSemanticSegmentation": { "tokenizer_classes": [], "processor_classes": [ "MobileViTImageProcessor" ], "model_classes": [ "MobileViTV2ForSemanticSegmentation" ], "sha": "13b953f50be33219d55a12f1098be38b88000897" }, "MobileViTV2Model": { "tokenizer_classes": [], "processor_classes": [ "MobileViTImageProcessor" ], "model_classes": [ "MobileViTV2Model" ], "sha": "2f46357659db2d6d54d870e28073deeea1c8cb64" }, "MptForCausalLM": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], "processor_classes": [], "model_classes": [ "MptForCausalLM" ], "sha": "500c869b956c65f6b1a7b4867727f124c6f5728a" }, "MptForQuestionAnswering": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], 
"processor_classes": [], "model_classes": [ "MptForQuestionAnswering" ], "sha": "6ee46572bf61eb5e7dbbdaf00b73c4d37efc42d9" }, "MptForSequenceClassification": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], "processor_classes": [], "model_classes": [ "MptForSequenceClassification" ], "sha": "f0b9153413b5dfceeb96b67d4b0f22c94bbaf64a" }, "MptForTokenClassification": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], "processor_classes": [], "model_classes": [ "MptForTokenClassification" ], "sha": "3f7c3ccd67cd0b2aae56d37613429a64ef813246" }, "MptModel": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], "processor_classes": [], "model_classes": [ "MptModel" ], "sha": "ea747f234556661b0c8b84a626f267066ce586bf" }, "MraForMaskedLM": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MraForMaskedLM" ], "sha": "c00ee46cfd2b8fed29cc37f0a4ead40ad51a439c" }, "MraForMultipleChoice": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MraForMultipleChoice" ], "sha": "f397469ba8109f64dab2d75335ea7bf0c2dbeb74" }, "MraForQuestionAnswering": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MraForQuestionAnswering" ], "sha": "c2ed75acd20e5440a76d6504d9a3ebc2513011f0" }, "MraForSequenceClassification": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MraForSequenceClassification" ], "sha": "f47672d3708508bda7774215bee44a92ec16ab2f" }, "MraForTokenClassification": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MraForTokenClassification" ], "sha": "f0961ab5818bca473607fb94b391c186dc1d3492" }, "MraModel": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "MraModel" ], "sha": "315f34f30bcc4b0b66b11987726df2a80c50e271" }, "MusicgenForCausalLM": { "tokenizer_classes": [ "T5TokenizerFast" ], "processor_classes": [], "model_classes": [], "sha": "f67d387eaaa7c71ddf88af95eda4bf14ace08d49" }, "MusicgenForConditionalGeneration": { "tokenizer_classes": [ "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "MusicgenForConditionalGeneration" ], "sha": "16102cdf580e70cf0b4e0e2cda5bc75b934da92c" }, "MvpForCausalLM": { "tokenizer_classes": [ "MvpTokenizer", "MvpTokenizerFast" ], "processor_classes": [], "model_classes": [ "MvpForCausalLM" ], "sha": "105e5f2c8a0f20d404cb71795539cda5dd49716d" }, "MvpForConditionalGeneration": { "tokenizer_classes": [ "MvpTokenizer", "MvpTokenizerFast" ], "processor_classes": [], "model_classes": [ "MvpForConditionalGeneration" ], "sha": "b0b706f14b2f8aae288cba30ae0064e0be7e888b" }, "MvpForQuestionAnswering": { "tokenizer_classes": [ "MvpTokenizer", "MvpTokenizerFast" ], "processor_classes": [], "model_classes": [ "MvpForQuestionAnswering" ], "sha": "82f152b36a40a4c22edcb146e6eaec636d84fa2d" }, "MvpForSequenceClassification": { "tokenizer_classes": [ "MvpTokenizer", "MvpTokenizerFast" ], "processor_classes": [], "model_classes": [ "MvpForSequenceClassification" ], "sha": "506b68544d064001929ee9e6db3752e62972a6aa" }, "MvpModel": { "tokenizer_classes": [ "MvpTokenizer", "MvpTokenizerFast" ], "processor_classes": [], "model_classes": [ "MvpModel" ], "sha": "3f4653184721a2bc029b27706d335ef7ddd219d5" }, "NatBackbone": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], 
"model_classes": [ "NatBackbone" ], "sha": "d5cc5eccba4da609c82e9f5c649301b9f9fee9fb" }, "NatForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "NatForImageClassification" ], "sha": "2ff4c9e73c49c392c02a467e87b5511fd924242a" }, "NatModel": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "NatModel" ], "sha": "75e9756bb94d0ccdce98a8e963eeecbc66f9d573" }, "NezhaForMaskedLM": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NezhaForMaskedLM" ], "sha": "5991cca4b78f0ed7299259a71f3eeed3f3452b72" }, "NezhaForMultipleChoice": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NezhaForMultipleChoice" ], "sha": "0f6e9ec791d85ad4503acdec50b3a120f984016b" }, "NezhaForNextSentencePrediction": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NezhaForNextSentencePrediction" ], "sha": "9a34316c14ec8ecc98ff08e46760915c80098a57" }, "NezhaForPreTraining": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NezhaForPreTraining" ], "sha": "6259db427a0073061de352ea819d38a74798edd7" }, "NezhaForQuestionAnswering": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NezhaForQuestionAnswering" ], "sha": "31c6a34e85ae8c41294e0f4ef25044e00e511c4d" }, "NezhaForSequenceClassification": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NezhaForSequenceClassification" ], "sha": "db057c308ba2e05f223404de11e1816ce4bd62a9" }, "NezhaForTokenClassification": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NezhaForTokenClassification" ], "sha": "235f4e10b4a59709650c2bece3e342ec153d9cfc" }, "NezhaModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NezhaModel" ], "sha": "80e05ba7c55bcdd7f4d1387ef9a09a7a8e95b5ac" }, "NllbMoeForConditionalGeneration": { "tokenizer_classes": [ "NllbTokenizer", "NllbTokenizerFast" ], "processor_classes": [], "model_classes": [ "NllbMoeForConditionalGeneration" ], "sha": "2a7f87dffe826af3d52086888f3f3773246e5528" }, "NllbMoeModel": { "tokenizer_classes": [ "NllbTokenizer", "NllbTokenizerFast" ], "processor_classes": [], "model_classes": [ "NllbMoeModel" ], "sha": "9f7a2261eed4658e1aa5623be4672ba64bee7da5" }, "NystromformerForMaskedLM": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NystromformerForMaskedLM" ], "sha": "37036847783f1e65e81ecd43803270a1ecb276f3" }, "NystromformerForMultipleChoice": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NystromformerForMultipleChoice" ], "sha": "42a077d5ab6830e20560466eaccc525eff10c3ae" }, "NystromformerForQuestionAnswering": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NystromformerForQuestionAnswering" ], "sha": "1cfaf79051731824db4f09989f093f87f4fceec5" }, "NystromformerForSequenceClassification": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NystromformerForSequenceClassification" ], "sha": 
"d75231203066df41e9b6b25dbee9ad40e8515c18" }, "NystromformerForTokenClassification": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NystromformerForTokenClassification" ], "sha": "5a499dc96e106bf41fc9166f2ad06527ec7ca14e" }, "NystromformerModel": { "tokenizer_classes": [ "AlbertTokenizer", "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "NystromformerModel" ], "sha": "2b6adb37ec473b15d71e2eb459acea08df6940ce" }, "OPTForCausalLM": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "OPTForCausalLM", "TFOPTForCausalLM" ], "sha": "190d1f4fc0011d2eaeaa05282e0fbd2445e4b11f" }, "OPTForQuestionAnswering": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "OPTForQuestionAnswering" ], "sha": "0fa9277ce10dbc3d0922b354befb684a136af00b" }, "OPTForSequenceClassification": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "OPTForSequenceClassification" ], "sha": "784ab288ab7280b1853ee400ef10ee2a965df352" }, "OPTModel": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [], "model_classes": [ "OPTModel", "TFOPTModel" ], "sha": "901d92b8f51edb0ec9614cb185fb66a8b5d364c3" }, "OneFormerForUniversalSegmentation": { "tokenizer_classes": [ "CLIPTokenizer", "CLIPTokenizerFast" ], "processor_classes": [ "OneFormerImageProcessor" ], "model_classes": [ "OneFormerForUniversalSegmentation" ], "sha": "fee1cfd676acc40f09017702ddac6504f3090d14" }, "OneFormerModel": { "tokenizer_classes": [ "CLIPTokenizer", "CLIPTokenizerFast" ], "processor_classes": [ "OneFormerImageProcessor" ], "model_classes": [ "OneFormerModel" ], "sha": "4163a79328c78f93ec57942598698a138c19a577" }, "OpenAIGPTForSequenceClassification": { "tokenizer_classes": [ "OpenAIGPTTokenizer", "OpenAIGPTTokenizerFast" ], "processor_classes": [], "model_classes": [ "OpenAIGPTForSequenceClassification", "TFOpenAIGPTForSequenceClassification" ], "sha": "c513f7f952935085f7573bf70a1ac3ad8f33434c" }, "OpenAIGPTLMHeadModel": { "tokenizer_classes": [ "OpenAIGPTTokenizer", "OpenAIGPTTokenizerFast" ], "processor_classes": [], "model_classes": [ "OpenAIGPTLMHeadModel", "TFOpenAIGPTLMHeadModel" ], "sha": "33f59ecd860f7a998483ec7631fe32d257235461" }, "OpenAIGPTModel": { "tokenizer_classes": [ "OpenAIGPTTokenizer", "OpenAIGPTTokenizerFast" ], "processor_classes": [], "model_classes": [ "OpenAIGPTModel", "TFOpenAIGPTModel" ], "sha": "00f6ec0a3a5276af71d08a26199e0ccbf2556fc9" }, "OwlViTForObjectDetection": { "tokenizer_classes": [ "CLIPTokenizer", "CLIPTokenizerFast" ], "processor_classes": [ "OwlViTImageProcessor" ], "model_classes": [ "OwlViTForObjectDetection" ], "sha": "af958c9164f23d0f12921a8edf687f9aaa6af90e" }, "OwlViTModel": { "tokenizer_classes": [ "CLIPTokenizer", "CLIPTokenizerFast" ], "processor_classes": [ "OwlViTImageProcessor" ], "model_classes": [ "OwlViTModel" ], "sha": "f0e27b2b4e53ba70e05d13dcfea8e85272b292a5" }, "Owlv2ForObjectDetection": { "tokenizer_classes": [ "CLIPTokenizer", "CLIPTokenizerFast" ], "processor_classes": [ "Owlv2ImageProcessor" ], "model_classes": [ "Owlv2ForObjectDetection" ], "sha": "30439c0b2749726468dc13a755261e8101170052" }, "Owlv2Model": { "tokenizer_classes": [ "CLIPTokenizer", "CLIPTokenizerFast" ], "processor_classes": [ "Owlv2ImageProcessor" ], "model_classes": [ "Owlv2Model" ], "sha": "7aeebdad5f72b36cb07c74355afad8e6052e2377" }, 
"PLBartForCausalLM": { "tokenizer_classes": [ "PLBartTokenizer" ], "processor_classes": [], "model_classes": [ "PLBartForCausalLM" ], "sha": "6ee51133246dbdb18fc3681ebd62d21e421b9bb4" }, "PLBartForConditionalGeneration": { "tokenizer_classes": [ "PLBartTokenizer" ], "processor_classes": [], "model_classes": [ "PLBartForConditionalGeneration" ], "sha": "ba191d28f4678d20b4dfed5fca5944018282cf20" }, "PLBartForSequenceClassification": { "tokenizer_classes": [ "PLBartTokenizer" ], "processor_classes": [], "model_classes": [ "PLBartForSequenceClassification" ], "sha": "02063b3d9707fcff619a4e37a0d6e58f76e39b18" }, "PLBartModel": { "tokenizer_classes": [ "PLBartTokenizer" ], "processor_classes": [], "model_classes": [ "PLBartModel" ], "sha": "cfbba29169b3f40d800403fc1b53982e1f88c5f8" }, "PegasusForCausalLM": { "tokenizer_classes": [ "PegasusTokenizer", "PegasusTokenizerFast" ], "processor_classes": [], "model_classes": [ "PegasusForCausalLM" ], "sha": "6e685a698302a3ba33e5379d3a37eb0bc1ae2f70" }, "PegasusForConditionalGeneration": { "tokenizer_classes": [ "PegasusTokenizer", "PegasusTokenizerFast" ], "processor_classes": [], "model_classes": [ "PegasusForConditionalGeneration", "TFPegasusForConditionalGeneration" ], "sha": "15e58ee2ebc14b6e80ef2891259057ee5f049be2" }, "PegasusModel": { "tokenizer_classes": [ "PegasusTokenizer", "PegasusTokenizerFast" ], "processor_classes": [], "model_classes": [ "PegasusModel", "TFPegasusModel" ], "sha": "fa36b24523db411ef77903453346b8be81ef73fe" }, "PegasusXForConditionalGeneration": { "tokenizer_classes": [ "PegasusTokenizer", "PegasusTokenizerFast" ], "processor_classes": [], "model_classes": [ "PegasusXForConditionalGeneration" ], "sha": "7588a8120f26a36c1687c14bdf1e9f9656891c1a" }, "PegasusXModel": { "tokenizer_classes": [ "PegasusTokenizer", "PegasusTokenizerFast" ], "processor_classes": [], "model_classes": [ "PegasusXModel" ], "sha": "a0bdff627416ac3c39c22d081f5d88d8b8fd99cc" }, "PerceiverForImageClassificationConvProcessing": { "tokenizer_classes": [ "PerceiverTokenizer" ], "processor_classes": [ "PerceiverImageProcessor" ], "model_classes": [ "PerceiverForImageClassificationConvProcessing" ], "sha": "2c1e5e62ebc9d0c931adc8c665fb05bde6c1c1f1" }, "PerceiverForImageClassificationFourier": { "tokenizer_classes": [ "PerceiverTokenizer" ], "processor_classes": [ "PerceiverImageProcessor" ], "model_classes": [ "PerceiverForImageClassificationFourier" ], "sha": "88da41b8851b76b8be0dacdb3de023db02bb031a" }, "PerceiverForImageClassificationLearned": { "tokenizer_classes": [ "PerceiverTokenizer" ], "processor_classes": [ "PerceiverImageProcessor" ], "model_classes": [ "PerceiverForImageClassificationLearned" ], "sha": "879bd1fa38d3baddb027bb2cacba2d160a741375" }, "PerceiverForMaskedLM": { "tokenizer_classes": [ "PerceiverTokenizer" ], "processor_classes": [ "PerceiverImageProcessor" ], "model_classes": [ "PerceiverForMaskedLM" ], "sha": "1d2459cbd281ef72da5682e65102aaca96183045" }, "PerceiverForSequenceClassification": { "tokenizer_classes": [ "PerceiverTokenizer" ], "processor_classes": [ "PerceiverImageProcessor" ], "model_classes": [ "PerceiverForSequenceClassification" ], "sha": "576f1f96348f0343458499fbf53d4102b5c0f2ff" }, "PerceiverModel": { "tokenizer_classes": [ "PerceiverTokenizer" ], "processor_classes": [ "PerceiverImageProcessor" ], "model_classes": [ "PerceiverModel" ], "sha": "83ec4d2d61ed62525ee033e13d144817beb29d19" }, "PersimmonForCausalLM": { "tokenizer_classes": [ "LlamaTokenizer", "LlamaTokenizerFast" ], "processor_classes": [], 
"model_classes": [ "PersimmonForCausalLM" ], "sha": "454234d6496c3857f5bf3eafb784616e2cd3ea82" }, "PersimmonForSequenceClassification": { "tokenizer_classes": [ "LlamaTokenizer", "LlamaTokenizerFast" ], "processor_classes": [], "model_classes": [ "PersimmonForSequenceClassification" ], "sha": "1d2674846543a181ca67bafa8b8f3a48bd2eefd1" }, "PersimmonModel": { "tokenizer_classes": [ "LlamaTokenizer", "LlamaTokenizerFast" ], "processor_classes": [], "model_classes": [ "PersimmonModel" ], "sha": "b8c8d479e29e9ee048e2d0b05b001ac835ad8859" }, "Pix2StructForConditionalGeneration": { "tokenizer_classes": [ "T5TokenizerFast" ], "processor_classes": [ "Pix2StructImageProcessor", "Pix2StructProcessor" ], "model_classes": [ "Pix2StructForConditionalGeneration" ], "sha": "42b3de00ad535076c4893e4ac5ae2d2748cc4ccb" }, "PoolFormerForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "PoolFormerImageProcessor" ], "model_classes": [ "PoolFormerForImageClassification" ], "sha": "ef04de5a6896100d457fb9553dd9789c09cca98e" }, "PoolFormerModel": { "tokenizer_classes": [], "processor_classes": [ "PoolFormerImageProcessor" ], "model_classes": [ "PoolFormerModel" ], "sha": "e8037215ebdbf795329ef6525cdc6aa547f04ace" }, "ProphetNetForCausalLM": { "tokenizer_classes": [ "ProphetNetTokenizer" ], "processor_classes": [], "model_classes": [ "ProphetNetForCausalLM" ], "sha": "d40b1e75bbc5ea0839563457aff6eee5bc0bb03e" }, "ProphetNetForConditionalGeneration": { "tokenizer_classes": [ "ProphetNetTokenizer" ], "processor_classes": [], "model_classes": [ "ProphetNetForConditionalGeneration" ], "sha": "d842875c41278032af39c03c66902786bb5ff2c7" }, "ProphetNetModel": { "tokenizer_classes": [ "ProphetNetTokenizer" ], "processor_classes": [], "model_classes": [ "ProphetNetModel" ], "sha": "f1ddbbcc768c7ba54c4d75b319540c1635e65937" }, "PvtForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "PvtImageProcessor" ], "model_classes": [ "PvtForImageClassification" ], "sha": "589b37bd6941aff6dd248259f9eee3c422a41fde" }, "PvtModel": { "tokenizer_classes": [], "processor_classes": [ "PvtImageProcessor" ], "model_classes": [ "PvtModel" ], "sha": "c40765c382515ae627652d60e9077b6478448d48" }, "ReformerForMaskedLM": { "tokenizer_classes": [ "ReformerTokenizer", "ReformerTokenizerFast" ], "processor_classes": [], "model_classes": [ "ReformerForMaskedLM" ], "sha": "1e6431e42c676b525e3215e9e3cc8f1404f9f82b" }, "ReformerForQuestionAnswering": { "tokenizer_classes": [ "ReformerTokenizer", "ReformerTokenizerFast" ], "processor_classes": [], "model_classes": [ "ReformerForQuestionAnswering" ], "sha": "62b43977f244474bd6982c6327d0c57310258fcd" }, "ReformerForSequenceClassification": { "tokenizer_classes": [ "ReformerTokenizer", "ReformerTokenizerFast" ], "processor_classes": [], "model_classes": [ "ReformerForSequenceClassification" ], "sha": "67bd534a990a7dcfa02406987e7f066caa2a30e8" }, "ReformerModel": { "tokenizer_classes": [ "ReformerTokenizer", "ReformerTokenizerFast" ], "processor_classes": [], "model_classes": [ "ReformerModel" ], "sha": "a34ddb1389067448e9bc1323de674951cfb4cff1" }, "ReformerModelWithLMHead": { "tokenizer_classes": [ "ReformerTokenizer", "ReformerTokenizerFast" ], "processor_classes": [], "model_classes": [], "sha": "e7a8addaea8407d4c55e144e48aee04be6cca618" }, "RegNetForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "RegNetForImageClassification", "TFRegNetForImageClassification" ], "sha": 
"5ec67c84fc7944c0c5b386bd26820bc4d1f3b32a" }, "RegNetModel": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "RegNetModel", "TFRegNetModel" ], "sha": "72375e1401dc8271d4abb6295c9cee376f7b8f1a" }, "RemBertForCausalLM": { "tokenizer_classes": [ "RemBertTokenizer", "RemBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "RemBertForCausalLM", "TFRemBertForCausalLM" ], "sha": "8d9ae3d74a0e0a8958b4ee8c9dca3632abf52ef9" }, "RemBertForMaskedLM": { "tokenizer_classes": [ "RemBertTokenizer", "RemBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "RemBertForMaskedLM", "TFRemBertForMaskedLM" ], "sha": "b7c27d01e1cc3bef9ddd6a78627d700b3bffd759" }, "RemBertForMultipleChoice": { "tokenizer_classes": [ "RemBertTokenizer", "RemBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "RemBertForMultipleChoice", "TFRemBertForMultipleChoice" ], "sha": "2fe192677b9740cf24dd559339d46925e8ac23d4" }, "RemBertForQuestionAnswering": { "tokenizer_classes": [ "RemBertTokenizer", "RemBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "RemBertForQuestionAnswering", "TFRemBertForQuestionAnswering" ], "sha": "22b8ba44681b96292a1cf7f6df4ba6bb7937ec6e" }, "RemBertForSequenceClassification": { "tokenizer_classes": [ "RemBertTokenizer", "RemBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "RemBertForSequenceClassification", "TFRemBertForSequenceClassification" ], "sha": "20f3e89341ea15266d2685a8798142fba03c3f98" }, "RemBertForTokenClassification": { "tokenizer_classes": [ "RemBertTokenizer", "RemBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "RemBertForTokenClassification", "TFRemBertForTokenClassification" ], "sha": "15712ff753708da3cf0550e76e73a5d0bba7784e" }, "RemBertModel": { "tokenizer_classes": [ "RemBertTokenizer", "RemBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "RemBertModel", "TFRemBertModel" ], "sha": "59cc6d099b1ded0aaead8684457415b129f79e86" }, "ResNetBackbone": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "ResNetBackbone" ], "sha": "c84a6bcf8af4b6a3403dea3cf4c55965ac39f239" }, "ResNetForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "ResNetForImageClassification", "TFResNetForImageClassification" ], "sha": "34a180ad24d80811d420d7aa4fbec4a17751aaf8" }, "ResNetModel": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "ResNetModel", "TFResNetModel" ], "sha": "fafa6cdf9986c6cfbae360596b3574162430bcd3" }, "RoCBertForCausalLM": { "tokenizer_classes": [ "RoCBertTokenizer" ], "processor_classes": [], "model_classes": [ "RoCBertForCausalLM" ], "sha": "194d8dafc4f4142f8d31e6b4be14b55d812f923b" }, "RoCBertForMaskedLM": { "tokenizer_classes": [ "RoCBertTokenizer" ], "processor_classes": [], "model_classes": [ "RoCBertForMaskedLM" ], "sha": "8bc285f32f3b932dbd56ddf91b1170734d638eeb" }, "RoCBertForMultipleChoice": { "tokenizer_classes": [ "RoCBertTokenizer" ], "processor_classes": [], "model_classes": [ "RoCBertForMultipleChoice" ], "sha": "bb54e5ae021d728022d34b12fee3f087d9486af9" }, "RoCBertForPreTraining": { "tokenizer_classes": [ "RoCBertTokenizer" ], "processor_classes": [], "model_classes": [ "RoCBertForPreTraining" ], "sha": "86ebbd5b0bc84660ad7f505082eff19b86c137c8" }, "RoCBertForQuestionAnswering": { "tokenizer_classes": [ "RoCBertTokenizer" ], "processor_classes": [], "model_classes": [ 
"RoCBertForQuestionAnswering" ], "sha": "1bfc2dc3d6e76170e6dca1ff32a54a0887ff28a3" }, "RoCBertForSequenceClassification": { "tokenizer_classes": [ "RoCBertTokenizer" ], "processor_classes": [], "model_classes": [ "RoCBertForSequenceClassification" ], "sha": "c329038802241f454273894128fea38b60f7c739" }, "RoCBertForTokenClassification": { "tokenizer_classes": [ "RoCBertTokenizer" ], "processor_classes": [], "model_classes": [ "RoCBertForTokenClassification" ], "sha": "afe5ec22c2ad1d9ff6e3e64c87eb7555faaa936d" }, "RoCBertModel": { "tokenizer_classes": [ "RoCBertTokenizer" ], "processor_classes": [], "model_classes": [ "RoCBertModel" ], "sha": "29de5580d5f5d3461a88673e7b4c492a9d8a67a4" }, "RoFormerForCausalLM": { "tokenizer_classes": [ "RoFormerTokenizer", "RoFormerTokenizerFast" ], "processor_classes": [], "model_classes": [ "RoFormerForCausalLM", "TFRoFormerForCausalLM" ], "sha": "6e074219c6dd8f8b221bbfda64fba100f729f88d" }, "RoFormerForMaskedLM": { "tokenizer_classes": [ "RoFormerTokenizer", "RoFormerTokenizerFast" ], "processor_classes": [], "model_classes": [ "RoFormerForMaskedLM", "TFRoFormerForMaskedLM" ], "sha": "a3a4d05f9b29601553a77244f2adcf8194f9367c" }, "RoFormerForMultipleChoice": { "tokenizer_classes": [ "RoFormerTokenizer", "RoFormerTokenizerFast" ], "processor_classes": [], "model_classes": [ "RoFormerForMultipleChoice", "TFRoFormerForMultipleChoice" ], "sha": "aca3999a1d14f09644faed44e2cdfb28ed68a3d3" }, "RoFormerForQuestionAnswering": { "tokenizer_classes": [ "RoFormerTokenizer", "RoFormerTokenizerFast" ], "processor_classes": [], "model_classes": [ "RoFormerForQuestionAnswering", "TFRoFormerForQuestionAnswering" ], "sha": "b8a20b3a788f178b9ef64e2eb9587f693dca1b69" }, "RoFormerForSequenceClassification": { "tokenizer_classes": [ "RoFormerTokenizer", "RoFormerTokenizerFast" ], "processor_classes": [], "model_classes": [ "RoFormerForSequenceClassification", "TFRoFormerForSequenceClassification" ], "sha": "d092e2d5e62012bf4ec921e763b37865d6189216" }, "RoFormerForTokenClassification": { "tokenizer_classes": [ "RoFormerTokenizer", "RoFormerTokenizerFast" ], "processor_classes": [], "model_classes": [ "RoFormerForTokenClassification", "TFRoFormerForTokenClassification" ], "sha": "85d3a17062e1f3e0539abfe738a88203e25349b6" }, "RoFormerModel": { "tokenizer_classes": [ "RoFormerTokenizer", "RoFormerTokenizerFast" ], "processor_classes": [], "model_classes": [ "RoFormerModel", "TFRoFormerModel" ], "sha": "22e7df2f4cd66caf449f2342f63d176005afccc9" }, "RobertaForCausalLM": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaForCausalLM", "TFRobertaForCausalLM" ], "sha": "5d1d24d56f9735402e50a2ea513ffde44487733e" }, "RobertaForMaskedLM": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaForMaskedLM", "TFRobertaForMaskedLM" ], "sha": "b21c9daf0b3b66530bf5d45d67df5ec392b5059c" }, "RobertaForMultipleChoice": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaForMultipleChoice", "TFRobertaForMultipleChoice" ], "sha": "10020d9546d4d7318f4d514fe13daaad07e6269f" }, "RobertaForQuestionAnswering": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaForQuestionAnswering", "TFRobertaForQuestionAnswering" ], "sha": "eea4a81306891746bac9e7715f805a2d9dbf4be7" }, "RobertaForSequenceClassification": { "tokenizer_classes": [ 
"RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaForSequenceClassification", "TFRobertaForSequenceClassification" ], "sha": "6a6f53fc6ab98e29ed539e76b1cb76d25a2cd720" }, "RobertaForTokenClassification": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaForTokenClassification", "TFRobertaForTokenClassification" ], "sha": "9190044c4091eb0d98ae7638c453e24846bca5d7" }, "RobertaModel": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaModel", "TFRobertaModel" ], "sha": "181a0b8a7ad24500ec327ad07ddb225f0680ac0a" }, "RobertaPreLayerNormForCausalLM": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaPreLayerNormForCausalLM", "TFRobertaPreLayerNormForCausalLM" ], "sha": "73b6d4531b41f295a5d310d7aa44736004a59865" }, "RobertaPreLayerNormForMaskedLM": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaPreLayerNormForMaskedLM", "TFRobertaPreLayerNormForMaskedLM" ], "sha": "a61723c77e5ab7adc95285e7823a0a49b99af395" }, "RobertaPreLayerNormForMultipleChoice": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaPreLayerNormForMultipleChoice", "TFRobertaPreLayerNormForMultipleChoice" ], "sha": "3dcfa62e0771358c60232a18135bfe7c7f6d715e" }, "RobertaPreLayerNormForQuestionAnswering": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaPreLayerNormForQuestionAnswering", "TFRobertaPreLayerNormForQuestionAnswering" ], "sha": "a8e76a5a50f7df60055e5ed6a1c3af2e7d34cf01" }, "RobertaPreLayerNormForSequenceClassification": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaPreLayerNormForSequenceClassification", "TFRobertaPreLayerNormForSequenceClassification" ], "sha": "7509cb0286d146ef2fc6beb8867ae31b92fb1b16" }, "RobertaPreLayerNormForTokenClassification": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaPreLayerNormForTokenClassification", "TFRobertaPreLayerNormForTokenClassification" ], "sha": "3ad5814ba126b41e18c1978c970e396fab6da9bf" }, "RobertaPreLayerNormModel": { "tokenizer_classes": [ "RobertaTokenizer", "RobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "RobertaPreLayerNormModel", "TFRobertaPreLayerNormModel" ], "sha": "4830db38fd310404c5ab70bd00684eca0bc06ca8" }, "RwkvForCausalLM": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], "processor_classes": [], "model_classes": [ "RwkvForCausalLM" ], "sha": "2f452fd46b39e39b1a6a95fa1d8232405bbb3e96" }, "RwkvModel": { "tokenizer_classes": [ "GPTNeoXTokenizerFast" ], "processor_classes": [], "model_classes": [ "RwkvModel" ], "sha": "88a52c9437dc3c06f65a8252490be7eb91197804" }, "SEWDForCTC": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "SEWDForCTC" ], "sha": "5c7495c77ae9e0f12c0de05d3a5fb95bdcd91768" }, "SEWDForSequenceClassification": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "SEWDForSequenceClassification" ], "sha": "d6cbf1164ce1999fdaf3deeb7a6eba19a3b1f873" }, 
"SEWDModel": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "SEWDModel" ], "sha": "dde4e02219449f149bb3403bbeae127cafaf9c79" }, "SEWForCTC": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "SEWForCTC" ], "sha": "4477c7a277059fba08772acf91cf3e3dd3cb073b" }, "SEWForSequenceClassification": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "SEWForSequenceClassification" ], "sha": "3b90fbb1c0c3848fed18f91a0169bb297a3e6619" }, "SEWModel": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "SEWModel" ], "sha": "0a0fbb844eeefa0dce62bd05db30a2bb91e5dc88" }, "SamModel": { "tokenizer_classes": [], "processor_classes": [ "SamImageProcessor" ], "model_classes": [ "SamModel", "TFSamModel" ], "sha": "eca8651bc84e5ac3b1b62e784b744a6bd1b82575" }, "SegformerForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "SegformerImageProcessor" ], "model_classes": [ "SegformerForImageClassification", "TFSegformerForImageClassification" ], "sha": "c566ae0ed382be4ed61ed6dacffa2ba663e9cc19" }, "SegformerForSemanticSegmentation": { "tokenizer_classes": [], "processor_classes": [ "SegformerImageProcessor" ], "model_classes": [ "SegformerForSemanticSegmentation", "TFSegformerForSemanticSegmentation" ], "sha": "b73798972cdf24daafa858994713aca60e2bf90d" }, "SegformerModel": { "tokenizer_classes": [], "processor_classes": [ "SegformerImageProcessor" ], "model_classes": [ "SegformerModel", "TFSegformerModel" ], "sha": "3d4ba8ed2bdf801e6afa855b9d77893f2b7f9e10" }, "Speech2TextForConditionalGeneration": { "tokenizer_classes": [ "Speech2TextTokenizer" ], "processor_classes": [ "Speech2TextFeatureExtractor" ], "model_classes": [ "Speech2TextForConditionalGeneration", "TFSpeech2TextForConditionalGeneration" ], "sha": "1da80293ec78762e136cf6dd64b652693f9ab364" }, "Speech2TextModel": { "tokenizer_classes": [ "Speech2TextTokenizer" ], "processor_classes": [ "Speech2TextFeatureExtractor" ], "model_classes": [ "Speech2TextModel", "TFSpeech2TextModel" ], "sha": "7c6e63bd0c15dd99ef01573d4c43f90e4920cc91" }, "SpeechEncoderDecoderModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "SpeechEncoderDecoderModel" ], "sha": "78602ae0857728e95de4042bdca8a31ef818890a" }, "SpeechT5ForSpeechToText": { "tokenizer_classes": [ "SpeechT5Tokenizer" ], "processor_classes": [ "SpeechT5FeatureExtractor" ], "model_classes": [ "SpeechT5ForSpeechToText" ], "sha": "d46f0a83324e5865420a27a738ef203292de3479" }, "SpeechT5ForTextToSpeech": { "tokenizer_classes": [ "SpeechT5Tokenizer" ], "processor_classes": [ "SpeechT5FeatureExtractor" ], "model_classes": [ "SpeechT5ForTextToSpeech" ], "sha": "922e748d9e1ea256a8d9259782021cd3820d5924" }, "SpeechT5Model": { "tokenizer_classes": [ "SpeechT5Tokenizer" ], "processor_classes": [ "SpeechT5FeatureExtractor" ], "model_classes": [ "SpeechT5Model" ], "sha": "7b248f77ca88ffddcdb538e772f6de63a86a4f9b" }, "SplinterForPreTraining": { "tokenizer_classes": [ "SplinterTokenizer" ], "processor_classes": [], "model_classes": [ "SplinterForPreTraining" ], "sha": "e8a94efa740f1d685fa553f49132c6f022de5389" }, "SplinterForQuestionAnswering": { "tokenizer_classes": [ "SplinterTokenizer" ], "processor_classes": [], "model_classes": [ 
"SplinterForQuestionAnswering" ], "sha": "d038b7b683face4a361ab0f474d8a5b111c44c4d" }, "SplinterModel": { "tokenizer_classes": [ "SplinterTokenizer" ], "processor_classes": [], "model_classes": [ "SplinterModel" ], "sha": "a35b13cbb7faba46dc265761bb839267eb53d248" }, "SqueezeBertForMaskedLM": { "tokenizer_classes": [ "SqueezeBertTokenizer", "SqueezeBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "SqueezeBertForMaskedLM" ], "sha": "33ce239408c22d2c98be63c9ab4607ef9ceb6d49" }, "SqueezeBertForMultipleChoice": { "tokenizer_classes": [ "SqueezeBertTokenizer", "SqueezeBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "SqueezeBertForMultipleChoice" ], "sha": "7e9e666896420c7839e27dcb280981d034ba4da5" }, "SqueezeBertForQuestionAnswering": { "tokenizer_classes": [ "SqueezeBertTokenizer", "SqueezeBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "SqueezeBertForQuestionAnswering" ], "sha": "bceb045a9ac6eb2ded7d358ed577c6dc28ea487a" }, "SqueezeBertForSequenceClassification": { "tokenizer_classes": [ "SqueezeBertTokenizer", "SqueezeBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "SqueezeBertForSequenceClassification" ], "sha": "c5aeb1f454a1d059d41a5f8dacaf784b9de0b899" }, "SqueezeBertForTokenClassification": { "tokenizer_classes": [ "SqueezeBertTokenizer", "SqueezeBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "SqueezeBertForTokenClassification" ], "sha": "70ba60ca44a380e6aa983a37b163c57217219df7" }, "SqueezeBertModel": { "tokenizer_classes": [ "SqueezeBertTokenizer", "SqueezeBertTokenizerFast" ], "processor_classes": [], "model_classes": [ "SqueezeBertModel" ], "sha": "e0a3ac56a4047da3f921638252ead5e44438bbdb" }, "SwiftFormerForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "SwiftFormerForImageClassification" ], "sha": "a249b14a525d29e675b6e4af4baacd9ba7df7598" }, "SwiftFormerModel": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "SwiftFormerModel" ], "sha": "25ba2d88c770533f8c69811d2a454a00c1d09f5d" }, "Swin2SRForImageSuperResolution": { "tokenizer_classes": [], "processor_classes": [ "Swin2SRImageProcessor" ], "model_classes": [ "Swin2SRForImageSuperResolution" ], "sha": "3a2780de0b455084c018ac8a62b56040969e26ec" }, "Swin2SRModel": { "tokenizer_classes": [], "processor_classes": [ "Swin2SRImageProcessor" ], "model_classes": [ "Swin2SRModel" ], "sha": "c67f6ecff9ef8675c3869c987277b0a1e040f4be" }, "SwinBackbone": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "SwinBackbone" ], "sha": "89b28b8ec05a7b3357be75a77eb7809e6fd5cfef" }, "SwinForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "SwinForImageClassification", "TFSwinForImageClassification" ], "sha": "e3c2e80f380ef79781313981da1a993dd8b8d34d" }, "SwinForMaskedImageModeling": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "SwinForMaskedImageModeling", "TFSwinForMaskedImageModeling" ], "sha": "d84b061fbace1bc6e697e3253e222de42053f978" }, "SwinModel": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "SwinModel", "TFSwinModel" ], "sha": "23ff641295660ec4fea399be8aa1bc14565961f8" }, "Swinv2ForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "Swinv2ForImageClassification" ], "sha": 
"3fd755cdf4cf611db83f72f9c9b00eb9257a38ca" }, "Swinv2ForMaskedImageModeling": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "Swinv2ForMaskedImageModeling" ], "sha": "8375c31eb6231fde36ec6533a34ba5b28e296163" }, "Swinv2Model": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "Swinv2Model" ], "sha": "70aeb72e8a266f668c8b51a517ec01003b8d6804" }, "SwitchTransformersForConditionalGeneration": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "SwitchTransformersForConditionalGeneration" ], "sha": "c8fcd2bb735894c78db7f1e5b51afc78aced7adb" }, "SwitchTransformersModel": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "SwitchTransformersModel" ], "sha": "275bbf6d389bfd0540b9f824c609c6b22a577328" }, "T5EncoderModel": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "T5EncoderModel", "TFT5EncoderModel" ], "sha": "1c75090036a2b3740dfe2d570b889332ad8e59e8" }, "T5ForConditionalGeneration": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "T5ForConditionalGeneration", "TFT5ForConditionalGeneration" ], "sha": "593fd6072a4e265f5cc73b1973cd8af76b261f29" }, "T5ForQuestionAnswering": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "T5ForQuestionAnswering" ], "sha": "b9edf2de494244ff032f67d2d7bdf6c591000c94" }, "T5ForSequenceClassification": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "T5ForSequenceClassification" ], "sha": "105b5c4c8e1efe927444108f1388c4f102ebad15" }, "T5Model": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "T5Model", "TFT5Model" ], "sha": "eb3d20dda0ba77c1de618d78116a1a0c784c515c" }, "TableTransformerForObjectDetection": { "tokenizer_classes": [], "processor_classes": [ "DetrImageProcessor" ], "model_classes": [ "TableTransformerForObjectDetection" ], "sha": "9cf1e3f5c3555a727672a32b49f8b96c5aa20be6" }, "TableTransformerModel": { "tokenizer_classes": [], "processor_classes": [ "DetrImageProcessor" ], "model_classes": [ "TableTransformerModel" ], "sha": "7b446244d8739b0c29d98f7d537b15ad578577d5" }, "TapasForMaskedLM": { "tokenizer_classes": [ "TapasTokenizer" ], "processor_classes": [], "model_classes": [ "TFTapasForMaskedLM", "TapasForMaskedLM" ], "sha": "2cedb92dd9a3dc37ffb7d35ad5190b110992577c" }, "TapasForQuestionAnswering": { "tokenizer_classes": [ "TapasTokenizer" ], "processor_classes": [], "model_classes": [ "TFTapasForQuestionAnswering", "TapasForQuestionAnswering" ], "sha": "4cc91b9e5db662e6e392d8052587ae419896d72b" }, "TapasForSequenceClassification": { "tokenizer_classes": [ "TapasTokenizer" ], "processor_classes": [], "model_classes": [ "TFTapasForSequenceClassification", "TapasForSequenceClassification" ], "sha": "7c37bfb87a6fce2f8604bb3cab2a14e09a285e14" }, "TapasModel": { "tokenizer_classes": [ "TapasTokenizer" ], "processor_classes": [], "model_classes": [ "TFTapasModel", "TapasModel" ], "sha": "bc004af0a415afe1f566c3afe8dd4d48d08c1ce0" }, "TimesformerForVideoClassification": { "tokenizer_classes": [], "processor_classes": [ "VideoMAEImageProcessor" ], "model_classes": [ "TimesformerForVideoClassification" ], "sha": "0b3b8e314618d7af34fb44477745491b44bf556d" }, "TimesformerModel": { 
"tokenizer_classes": [], "processor_classes": [ "VideoMAEImageProcessor" ], "model_classes": [ "TimesformerModel" ], "sha": "ea51f7ebb6426ad2b1fa1396e83f8e8ad5bc3b44" }, "TransfoXLForSequenceClassification": { "tokenizer_classes": [ "TransfoXLTokenizer" ], "processor_classes": [], "model_classes": [ "TFTransfoXLForSequenceClassification", "TransfoXLForSequenceClassification" ], "sha": "f3d370184350667d74056b979081b0bf5b0083c1" }, "TransfoXLLMHeadModel": { "tokenizer_classes": [ "TransfoXLTokenizer" ], "processor_classes": [], "model_classes": [ "TFTransfoXLLMHeadModel", "TransfoXLLMHeadModel" ], "sha": "e0d4cebcdde52d8d4c81782a1edc606830bd6afd" }, "TransfoXLModel": { "tokenizer_classes": [ "TransfoXLTokenizer" ], "processor_classes": [], "model_classes": [ "TFTransfoXLModel", "TransfoXLModel" ], "sha": "6938eeae35662a862accb01412dfc486454bdc8f" }, "TvltForPreTraining": { "tokenizer_classes": [], "processor_classes": [ "TvltProcessor" ], "model_classes": [ "TvltForPreTraining" ], "sha": "f7bd2833764eb6d55a921aaed81d3f21119016ae" }, "TvltModel": { "tokenizer_classes": [], "processor_classes": [ "TvltProcessor" ], "model_classes": [ "TvltModel" ], "sha": "c3cbf7a6159c038f333ce7adda2480ea3396b2b3" }, "UMT5EncoderModel": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "UMT5EncoderModel" ], "sha": "2894e49c9fbd17ea4b3dab56ec388be354c1a5f0" }, "UMT5ForQuestionAnswering": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "UMT5ForQuestionAnswering" ], "sha": "b381aa068a44200db539f2f48f4e34a5ed1cb093" }, "UMT5ForSequenceClassification": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "UMT5ForSequenceClassification" ], "sha": "aa9f77b7b3cff21425b7512e7c0f478af7b5db14" }, "UMT5Model": { "tokenizer_classes": [ "T5Tokenizer", "T5TokenizerFast" ], "processor_classes": [], "model_classes": [ "UMT5Model" ], "sha": "9180d850b24e5494442a4f7a8ca1a4c102f9babd" }, "UniSpeechForCTC": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "UniSpeechForCTC" ], "sha": "102b56d76f4d74cface309801c0ad80892583751" }, "UniSpeechForPreTraining": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "UniSpeechForPreTraining" ], "sha": "830be5b3e85aaae7bcc961218e417c29743d6042" }, "UniSpeechForSequenceClassification": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "UniSpeechForSequenceClassification" ], "sha": "a30ac1516944757ccd8efcbcf94033a03f8708bf" }, "UniSpeechModel": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "UniSpeechModel" ], "sha": "18e170eb1091715b74ace28c8c380b6bf2b6202d" }, "UniSpeechSatForAudioFrameClassification": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "UniSpeechSatForAudioFrameClassification" ], "sha": "7eba5a1c6cd610928b27ecb217bb17c729a07a57" }, "UniSpeechSatForCTC": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "UniSpeechSatForCTC" ], "sha": "a8617538d3a2ae990f022bb0c36b8428a4870822" }, "UniSpeechSatForPreTraining": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ 
"Wav2Vec2FeatureExtractor" ], "model_classes": [ "UniSpeechSatForPreTraining" ], "sha": "a772f66db0ab49e1050e524d7fcbe5106ebdaf96" }, "UniSpeechSatForSequenceClassification": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "UniSpeechSatForSequenceClassification" ], "sha": "f1c16567bd829a6d8a7a2d167d22e9653149e625" }, "UniSpeechSatForXVector": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "UniSpeechSatForXVector" ], "sha": "71cb3780cf3678f74fba00e19df82df76dca6133" }, "UniSpeechSatModel": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "UniSpeechSatModel" ], "sha": "ea755bbc7c6c6aa649c58b4b000f243acbbd6b5a" }, "UperNetForSemanticSegmentation": { "tokenizer_classes": [], "processor_classes": [ "SegformerImageProcessor" ], "model_classes": [ "UperNetForSemanticSegmentation" ], "sha": "f1871cb388bc0b203f5397bfc06a373736c2fb9c" }, "VanForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "VanForImageClassification" ], "sha": "694eb147bc4768aeabeffbfb97732281b71a621d" }, "VanModel": { "tokenizer_classes": [], "processor_classes": [ "ConvNextImageProcessor" ], "model_classes": [ "VanModel" ], "sha": "d8ac60ce952020f2b0355fc566d634b2c5ba635d" }, "ViTForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "TFViTForImageClassification", "ViTForImageClassification" ], "sha": "5b3b44a3ed492070c273e481e30ecf4deddc5ec3" }, "ViTForMaskedImageModeling": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "ViTForMaskedImageModeling" ], "sha": "d984e0b432fe195c2c26952d4f249031e7b1e2ea" }, "ViTHybridForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ViTHybridImageProcessor" ], "model_classes": [ "ViTHybridForImageClassification" ], "sha": "69c7c396032ffe60d54953b584394899fb95ccc1" }, "ViTHybridModel": { "tokenizer_classes": [], "processor_classes": [ "ViTHybridImageProcessor" ], "model_classes": [ "ViTHybridModel" ], "sha": "077443bfefe40d625314dbd274d2ff8089624797" }, "ViTMAEForPreTraining": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "TFViTMAEForPreTraining", "ViTMAEForPreTraining" ], "sha": "2d98d80d9c45eef0d5b6f5426d7196bb546fe9fc" }, "ViTMAEModel": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "TFViTMAEModel", "ViTMAEModel" ], "sha": "c7c2f12c19d2dbec08851a9dac7485909629a5fd" }, "ViTMSNForImageClassification": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "ViTMSNForImageClassification" ], "sha": "feda819aa7dbb55d850130f4cf1d210858d7eb89" }, "ViTMSNModel": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "ViTMSNModel" ], "sha": "0733abf168cb47a149821fdd2113d546e15c47de" }, "ViTModel": { "tokenizer_classes": [], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "TFViTModel", "ViTModel" ], "sha": "31817b7a64ebc3333fcd4801dfbb356ab07b13dd" }, "VideoMAEForPreTraining": { "tokenizer_classes": [], "processor_classes": [ "VideoMAEImageProcessor" ], "model_classes": [ "VideoMAEForPreTraining" ], "sha": "9de66c4bb759dc7269a7af17bf70b3194550acaa" }, "VideoMAEForVideoClassification": { "tokenizer_classes": 
[], "processor_classes": [ "VideoMAEImageProcessor" ], "model_classes": [ "VideoMAEForVideoClassification" ], "sha": "d3f743408386bc0ffe2d979de35335e87bc34aec" }, "VideoMAEModel": { "tokenizer_classes": [], "processor_classes": [ "VideoMAEImageProcessor" ], "model_classes": [ "VideoMAEModel" ], "sha": "a2be96beba888817d92b67525601569d830342ff" }, "ViltForQuestionAnswering": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "ViltImageProcessor" ], "model_classes": [ "ViltForQuestionAnswering" ], "sha": "faeffbf43da6621717d8b13e7ebe87d58d750cb2" }, "ViltModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "ViltImageProcessor" ], "model_classes": [ "ViltModel" ], "sha": "3a89b7b5782947c4f4125162ffe1c9cc18c9c800" }, "VisionEncoderDecoderModel": { "tokenizer_classes": [ "GPT2Tokenizer", "GPT2TokenizerFast" ], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "TFVisionEncoderDecoderModel", "VisionEncoderDecoderModel" ], "sha": "23917761070cf16b26a6d033b6bff9100bbc618b" }, "VisionTextDualEncoderModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [ "ViTImageProcessor" ], "model_classes": [ "TFVisionTextDualEncoderModel", "VisionTextDualEncoderModel" ], "sha": "c3569ef17f66acbacb76f7ceb6f71e02d075dd6c" }, "VisualBertForPreTraining": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "VisualBertForPreTraining" ], "sha": "ce5a4d93ce762971cd216cda9aef8b9ce3f0450b" }, "VisualBertModel": { "tokenizer_classes": [ "BertTokenizer", "BertTokenizerFast" ], "processor_classes": [], "model_classes": [ "VisualBertModel" ], "sha": "85020189fb7bf1217eb9370b09bca8ec5bcfdafa" }, "VitsModel": { "tokenizer_classes": [ "VitsTokenizer" ], "processor_classes": [], "model_classes": [ "VitsModel" ], "sha": "b9a20ca5b6a7874576e485850260578895587dd2" }, "Wav2Vec2ConformerForAudioFrameClassification": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Wav2Vec2ConformerForAudioFrameClassification" ], "sha": "e316a18a1d165b4cb51a7f28f8e8dab676da4b56" }, "Wav2Vec2ConformerForCTC": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Wav2Vec2ConformerForCTC" ], "sha": "a2ecb2985fcbb9f3ed000c12c1af6da36f5eaa3a" }, "Wav2Vec2ConformerForPreTraining": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Wav2Vec2ConformerForPreTraining" ], "sha": "099279b69e5da19efb05589804ccee210a0e57ae" }, "Wav2Vec2ConformerForSequenceClassification": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Wav2Vec2ConformerForSequenceClassification" ], "sha": "e8c1bca543c54bf15a6c026cb3761993b52cf617" }, "Wav2Vec2ConformerForXVector": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Wav2Vec2ConformerForXVector" ], "sha": "ba206a55998f16e134960728bd02006eaf39114f" }, "Wav2Vec2ConformerModel": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Wav2Vec2ConformerModel" ], "sha": "ef2fe3aa8c23e6f8696e6612061aaddecae49994" }, "Wav2Vec2ForAudioFrameClassification": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ 
"Wav2Vec2FeatureExtractor" ], "model_classes": [ "Wav2Vec2ForAudioFrameClassification" ], "sha": "ab219f119e10f56e1059966c66d23f0df3c2c343" }, "Wav2Vec2ForCTC": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Wav2Vec2ForCTC" ], "sha": "6245fbb1cb99cea5c4de1e73f81fba978fb275ac" }, "Wav2Vec2ForMaskedLM": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Wav2Vec2ForMaskedLM" ], "sha": "e083cf4fefec4df3c241dbbe5e17a84a794a89bd" }, "Wav2Vec2ForPreTraining": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Wav2Vec2ForPreTraining" ], "sha": "a8d71e216334260353ccbf5ce84cd6924f7457da" }, "Wav2Vec2ForSequenceClassification": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "TFWav2Vec2ForSequenceClassification", "Wav2Vec2ForSequenceClassification" ], "sha": "2000b2022abcc37100241485f5872126b70164c9" }, "Wav2Vec2ForXVector": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "Wav2Vec2ForXVector" ], "sha": "f4c422db53aae061ea609f4407af7cd5b33c8942" }, "Wav2Vec2Model": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "TFWav2Vec2Model", "Wav2Vec2Model" ], "sha": "7a998ee3ee0619a52828a79c3eed6872fd053f37" }, "WavLMForAudioFrameClassification": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "WavLMForAudioFrameClassification" ], "sha": "b135610f8d5de0b1a5bf5ed7212966135c63d6ec" }, "WavLMForCTC": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "WavLMForCTC" ], "sha": "f1139c5ddf34d2327ae1f6917edd7da180b06971" }, "WavLMForSequenceClassification": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "WavLMForSequenceClassification" ], "sha": "4ba5f2019b46866ce2011c993194ebda60afc028" }, "WavLMForXVector": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "WavLMForXVector" ], "sha": "faf9264eac56a56d5510a0984d7e1146e4c8cf62" }, "WavLMModel": { "tokenizer_classes": [ "Wav2Vec2CTCTokenizer" ], "processor_classes": [ "Wav2Vec2FeatureExtractor" ], "model_classes": [ "WavLMModel" ], "sha": "e932275e37cb643be271f655bd1d649f4f4b4bd5" }, "WhisperForAudioClassification": { "tokenizer_classes": [ "WhisperTokenizer" ], "processor_classes": [ "WhisperFeatureExtractor" ], "model_classes": [ "WhisperForAudioClassification" ], "sha": "d71b13674b1a67443cd19d0594a3b5b1e5968f0d" }, "WhisperForCausalLM": { "tokenizer_classes": [ "WhisperTokenizer" ], "processor_classes": [ "WhisperFeatureExtractor" ], "model_classes": [ "WhisperForCausalLM" ], "sha": "e7febfd7f4512e029293c677e6d2633e23fc459a" }, "WhisperForConditionalGeneration": { "tokenizer_classes": [ "WhisperTokenizer", "WhisperTokenizerFast" ], "processor_classes": [ "WhisperFeatureExtractor" ], "model_classes": [ "TFWhisperForConditionalGeneration", "WhisperForConditionalGeneration" ], "sha": "598101b885b24508042d9292e54aa04bff96318e" }, "WhisperModel": { "tokenizer_classes": [ "WhisperTokenizer", "WhisperTokenizerFast" ], "processor_classes": [ 
"WhisperFeatureExtractor" ], "model_classes": [ "TFWhisperModel", "WhisperModel" ], "sha": "c04c50216bb6b0a8f4d55f2fa9f9f4cf61c8a77c" }, "XCLIPModel": { "tokenizer_classes": [ "CLIPTokenizer", "CLIPTokenizerFast" ], "processor_classes": [ "VideoMAEImageProcessor" ], "model_classes": [ "XCLIPModel" ], "sha": "299ffffc6b94c3558bf7dbc38e24074c99490046" }, "XGLMForCausalLM": { "tokenizer_classes": [ "XGLMTokenizer", "XGLMTokenizerFast" ], "processor_classes": [], "model_classes": [ "TFXGLMForCausalLM", "XGLMForCausalLM" ], "sha": "d5381ce297c249d559937c6bb6316cf1fdad2613" }, "XGLMModel": { "tokenizer_classes": [ "XGLMTokenizer", "XGLMTokenizerFast" ], "processor_classes": [], "model_classes": [ "TFXGLMModel", "XGLMModel" ], "sha": "2b5cef167822cfaa558d259af1722e2f785cd3d5" }, "XLMForMultipleChoice": { "tokenizer_classes": [ "XLMTokenizer" ], "processor_classes": [], "model_classes": [ "TFXLMForMultipleChoice", "XLMForMultipleChoice" ], "sha": "f0c8cc6462449ac9eb9b4158e433bd3c923db3af" }, "XLMForQuestionAnsweringSimple": { "tokenizer_classes": [ "XLMTokenizer" ], "processor_classes": [], "model_classes": [ "TFXLMForQuestionAnsweringSimple", "XLMForQuestionAnsweringSimple" ], "sha": "82e93a2653cf3646eaaf02d8cc5f8ff9a4551523" }, "XLMForSequenceClassification": { "tokenizer_classes": [ "XLMTokenizer" ], "processor_classes": [], "model_classes": [ "TFXLMForSequenceClassification", "XLMForSequenceClassification" ], "sha": "2d6892f5f703be9b481bca91477032bd0e36dbe5" }, "XLMForTokenClassification": { "tokenizer_classes": [ "XLMTokenizer" ], "processor_classes": [], "model_classes": [ "TFXLMForTokenClassification", "XLMForTokenClassification" ], "sha": "9a591395e7a0643a03f5d2debb98caa3966e021c" }, "XLMModel": { "tokenizer_classes": [ "XLMTokenizer" ], "processor_classes": [], "model_classes": [ "TFXLMModel", "XLMModel" ], "sha": "022b86df246414ff712475d9ca55db690ff1d3bf" }, "XLMRobertaXLForCausalLM": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XLMRobertaXLForCausalLM" ], "sha": "fc05408e5b33a31638476ef337719dfbb7615ef3" }, "XLMRobertaXLForMaskedLM": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XLMRobertaXLForMaskedLM" ], "sha": "e96f198eede757e5ae2c87632fdcfb341073ef6e" }, "XLMRobertaXLForMultipleChoice": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XLMRobertaXLForMultipleChoice" ], "sha": "52732625f1bfbbb7cb4ba1cf0963de596d81822d" }, "XLMRobertaXLForQuestionAnswering": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XLMRobertaXLForQuestionAnswering" ], "sha": "da388fdd2d28e0757eb0c2b2c612a8ff03af2223" }, "XLMRobertaXLForSequenceClassification": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XLMRobertaXLForSequenceClassification" ], "sha": "980721187633bcf21ac0b8edbed933527f4611df" }, "XLMRobertaXLForTokenClassification": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XLMRobertaXLForTokenClassification" ], "sha": "37a97280faf6fef0bd946d3934d77a1b60fbf473" }, "XLMRobertaXLModel": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XLMRobertaXLModel" ], "sha": 
"8fbeb39a984912e47f5d24a31be61639031a0fc3" }, "XLMWithLMHeadModel": { "tokenizer_classes": [ "XLMTokenizer" ], "processor_classes": [], "model_classes": [ "TFXLMWithLMHeadModel", "XLMWithLMHeadModel" ], "sha": "db70bdefbaf095e88b8097e4b601d9105a511afa" }, "XLNetForMultipleChoice": { "tokenizer_classes": [ "XLNetTokenizer", "XLNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "TFXLNetForMultipleChoice", "XLNetForMultipleChoice" ], "sha": "8bb7e28d0cd1e93154d3232baf5e9c79acaf9f1a" }, "XLNetForQuestionAnsweringSimple": { "tokenizer_classes": [ "XLNetTokenizer", "XLNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "TFXLNetForQuestionAnsweringSimple", "XLNetForQuestionAnsweringSimple" ], "sha": "fabd06a45d947f3d46f1b8dce2186cf3b27776dc" }, "XLNetForSequenceClassification": { "tokenizer_classes": [ "XLNetTokenizer", "XLNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "TFXLNetForSequenceClassification", "XLNetForSequenceClassification" ], "sha": "e3c194f24537ebf2c474ade60becb9397696edec" }, "XLNetForTokenClassification": { "tokenizer_classes": [ "XLNetTokenizer", "XLNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "TFXLNetForTokenClassification", "XLNetForTokenClassification" ], "sha": "16aa15029aa667046d504c4a88ceddfdd5b5fb40" }, "XLNetLMHeadModel": { "tokenizer_classes": [ "XLNetTokenizer", "XLNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "TFXLNetLMHeadModel", "XLNetLMHeadModel" ], "sha": "c9a98cc982a16ca162832a8cbea25116479bb938" }, "XLNetModel": { "tokenizer_classes": [ "XLNetTokenizer", "XLNetTokenizerFast" ], "processor_classes": [], "model_classes": [ "TFXLNetModel", "XLNetModel" ], "sha": "1d6e231942135faf32b8d9a97773d8f6c85ca561" }, "XmodForCausalLM": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XmodForCausalLM" ], "sha": "c6b746071f2f067099a8fb4f57ce3c27a7e4b67d" }, "XmodForMaskedLM": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XmodForMaskedLM" ], "sha": "e1085818f4ed3c6073b2038635e5f3061208923d" }, "XmodForMultipleChoice": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XmodForMultipleChoice" ], "sha": "c63042cdf196be3fed846421b345d439b2483f69" }, "XmodForQuestionAnswering": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XmodForQuestionAnswering" ], "sha": "75acd3071fae9978c82618cd0f090c87aabc1f23" }, "XmodForSequenceClassification": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XmodForSequenceClassification" ], "sha": "523a16570be048618913ac17ccd00d343bcb5e99" }, "XmodForTokenClassification": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XmodForTokenClassification" ], "sha": "a0f0a02732b4579670dad11a69ae244ebd777b49" }, "XmodModel": { "tokenizer_classes": [ "XLMRobertaTokenizer", "XLMRobertaTokenizerFast" ], "processor_classes": [], "model_classes": [ "XmodModel" ], "sha": "bc286de0035450e7dcd6bcce78098a967b9c2b6c" }, "YolosForObjectDetection": { "tokenizer_classes": [], "processor_classes": [ "YolosImageProcessor" ], "model_classes": [ "YolosForObjectDetection" ], "sha": "0a4aae25bfbe8b5edd4815cb00d697a6ba7d2126" }, "YolosModel": { "tokenizer_classes": [], 
"processor_classes": [ "YolosImageProcessor" ], "model_classes": [ "YolosModel" ], "sha": "339bc51f1914f031a550e5f95095ed4a4c22a7de" }, "YosoForMaskedLM": { "tokenizer_classes": [ "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "YosoForMaskedLM" ], "sha": "cb291bedcbec199ea195f086e3ebea6fab026bba" }, "YosoForMultipleChoice": { "tokenizer_classes": [ "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "YosoForMultipleChoice" ], "sha": "cf2d3a3f0628bc9d0da68ea8de26b12016453fee" }, "YosoForQuestionAnswering": { "tokenizer_classes": [ "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "YosoForQuestionAnswering" ], "sha": "e8c3091f674588adfa3371b3de0427a9b39dd03f" }, "YosoForSequenceClassification": { "tokenizer_classes": [ "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "YosoForSequenceClassification" ], "sha": "88132cbaa1a9a87f65b6f9813c388011377f18cf" }, "YosoForTokenClassification": { "tokenizer_classes": [ "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "YosoForTokenClassification" ], "sha": "fd2219856608d3dba70dc7b1a06af629903dec31" }, "YosoModel": { "tokenizer_classes": [ "AlbertTokenizerFast" ], "processor_classes": [], "model_classes": [ "YosoModel" ], "sha": "e144d9f1fe39c21eda1177702640e126892605ce" } }
transformers/tests/utils/tiny_model_summary.json/0
{ "file_path": "transformers/tests/utils/tiny_model_summary.json", "repo_id": "transformers", "token_count": 116207 }
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that checks that support for 3rd party libraries is listed in the documentation file. Currently, this includes:
- flash attention support
- SDPA support

Use from the root of the repo with (as used in `make repo-consistency`):

```bash
python utils/check_support_list.py
```

It has no auto-fix mode.
"""

import os
from glob import glob


# All paths are set with the intent that you should run this script from the root of the repo with the command
# python utils/check_support_list.py
REPO_PATH = "."


def check_flash_support_list():
    with open(os.path.join(REPO_PATH, "docs/source/en/perf_infer_gpu_one.md"), "r") as f:
        doctext = f.read()

        doctext = doctext.split("FlashAttention-2 is currently supported for the following architectures:")[1]
        doctext = doctext.split("You can request to add FlashAttention-2 support")[0]

    patterns = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_*.py"))
    patterns_tf = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_tf_*.py"))
    patterns_flax = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_flax_*.py"))
    patterns = list(set(patterns) - set(patterns_tf) - set(patterns_flax))
    archs_supporting_fa2 = []
    for filename in patterns:
        with open(filename, "r") as f:
            text = f.read()

        if "_supports_flash_attn_2 = True" in text:
            model_name = os.path.basename(filename).replace(".py", "").replace("modeling_", "")
            archs_supporting_fa2.append(model_name)

    for arch in archs_supporting_fa2:
        if arch not in doctext:
            raise ValueError(
                f"{arch} should be listed in the flash attention documentation but is not. Please update the documentation."
            )


def check_sdpa_support_list():
    with open(os.path.join(REPO_PATH, "docs/source/en/perf_infer_gpu_one.md"), "r") as f:
        doctext = f.read()

        doctext = doctext.split(
            "For now, Transformers supports SDPA inference and training for the following architectures:"
        )[1]
        doctext = doctext.split("Note that FlashAttention can only be used for models using the")[0]
        doctext = doctext.lower()

    patterns = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_*.py"))
    patterns_tf = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_tf_*.py"))
    patterns_flax = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_flax_*.py"))
    patterns = list(set(patterns) - set(patterns_tf) - set(patterns_flax))
    archs_supporting_sdpa = []
    for filename in patterns:
        with open(filename, "r") as f:
            text = f.read()

        if "_supports_sdpa = True" in text:
            model_name = os.path.basename(filename).replace(".py", "").replace("modeling_", "")
            archs_supporting_sdpa.append(model_name)

    for arch in archs_supporting_sdpa:
        if not any(term in doctext for term in [arch, arch.replace("_", "-"), arch.replace("_", " ")]):
            raise ValueError(
                f"{arch} should be listed in the SDPA documentation but is not. Please update the documentation."
            )


if __name__ == "__main__":
    check_flash_support_list()
    check_sdpa_support_list()
transformers/utils/check_support_list.py/0
{ "file_path": "transformers/utils/check_support_list.py", "repo_id": "transformers", "token_count": 1485 }
# coding=utf-8 # Copyright 2024 the HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import glob import importlib import os import re from abc import ABC, abstractmethod from collections import Counter, defaultdict, deque from typing import Dict, Optional, Set, Union import libcst as cst from check_copies import run_ruff from create_dependency_mapping import find_priority_list from libcst import ClassDef, CSTVisitor from libcst import matchers as m from libcst.metadata import MetadataWrapper, ParentNodeProvider, PositionProvider, ScopeProvider from transformers import logging from transformers.models.auto.configuration_auto import CONFIG_MAPPING_NAMES logger = logging.get_logger(__name__) AUTO_GENERATED_MESSAGE = """# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨 # This file was automatically generated from {relative_path}. # Do NOT edit this file manually as any edits will be overwritten by the generation of # the file from the modular. If any change should be done, please apply the change to the # {short_name} file directly. One of our CI enforces this. # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨 """ def get_module_source_from_name(module_name: str) -> str: # Extract the source code from the module name spec = importlib.util.find_spec(module_name) if spec is None or spec.origin is None: raise ValueError(f"Cannot open file associated with {module_name} module.") with open(spec.origin, "r", encoding="utf-8") as file: source_code = file.read() return source_code def preserve_case_replace(text, patterns: dict, default_name: str): # Create a regex pattern to match all variations regex_pattern = "|".join(re.escape(key) for key in patterns.keys()) compiled_regex = re.compile(f"(?<![a-z0-9])({regex_pattern})(.|$)", re.IGNORECASE | re.DOTALL) def replace(match): matched_pattern = match.group(1) next_char = match.group(2) new_pattern = patterns.get(matched_pattern, default_name) # In this case, the cased old model did not respect CamelCase and was all UPPERCASE, so we need to rely on next char # The heuristic is: if next char is not a letter, then it is not part of a model name and result should be `new_name`.upper() if len(patterns) == 2 and matched_pattern.isupper(): if not next_char.isalpha(): # `new_name.upper()` is just the other entry for `matched_pattern.lower()`, uppercased new_pattern = patterns[matched_pattern.lower()].upper() return new_pattern + next_char return compiled_regex.sub(replace, text) def get_cased_name(lowercase_name: str) -> str: """From a model name in lowercase in the format `my_model`, return the cased name in the format `MyModel`.""" if lowercase_name in CONFIG_MAPPING_NAMES: return CONFIG_MAPPING_NAMES[lowercase_name].replace("Config", "") else: return "".join(x.title() for x in lowercase_name.split("_")) def get_lowercase_name(cased_name: str) -> str: """From a model name in Camelcase in the format `MyModel`, return the lowercase name in the format `my_model`.""" inverse_mapping = {value: key for key, value in 
CONFIG_MAPPING_NAMES.items()} if cased_name + "Config" in inverse_mapping: return inverse_mapping[cased_name + "Config"] else: return "_".join([s.lower() for s in re.findall(r"[A-Z][^A-Z]*", cased_name)]) class ReplaceNameTransformer(m.MatcherDecoratableTransformer): """A transformer that replaces `old_name` with `new_name` in comments, string and any references. It should take into account name like `MyNewModel`, or `my_new_model`. Without using the AUTO_MAPPING. Supported renaming patterns: - llama -> my_new_model and my_new_model -> llama - Llama -> MyNewModel and MyNewModel -> Llama - LLAMA -> MY_NEW_MODEL and MY_NEW_MODEL -> LLAMA - LLaMa -> MyNewModel abd MyNewModel -> Llama """ def __init__(self, old_name: str, new_name: str, original_new_model_name: str = "", only_doc: bool = False): super().__init__() self.old_name = old_name self.new_name = new_name self.cased_new_name = get_cased_name(self.new_name) self.cased_old_name = get_cased_name(self.old_name) self.patterns = { old_name: new_name, old_name.upper(): new_name.upper(), # For some old models, `self.cased_old_name` == `old_name.upper()` in which case this overwrite previous entry self.cased_old_name: self.cased_new_name, } # In case new_name is a prefix alias, and not the original new model name self.original_new_model_name = original_new_model_name self.only_doc = only_doc def _replace_name(self, original_node, updated_node): if re.findall(r"# Copied from", updated_node.value): return cst.RemoveFromParent() update = preserve_case_replace(updated_node.value, self.patterns, self.cased_new_name) return updated_node.with_changes(value=update) @m.leave(m.SimpleString() | m.Comment()) def replace_name(self, original_node, updated_node): return self._replace_name(original_node, updated_node) def leave_Name(self, original_node, updated_node): if not self.only_doc: return self._replace_name(original_node, updated_node) return updated_node def leave_ImportFrom(self, original_node, updated_node): """The imports from other file types (configuration, processing etc) should use original model name.""" if self.original_new_model_name != self.new_name and m.matches(updated_node.module, m.Name()): patterns = "|".join(ALL_FILE_TYPES) regex = rf"({patterns})_{self.new_name}" new_source = re.sub( regex, lambda m: f"{m.group(1)}_{self.original_new_model_name}", updated_node.module.value ) updated_node = updated_node.with_changes(module=updated_node.module.with_changes(value=new_source)) return updated_node DOCSTRING_NODE = m.SimpleStatementLine( body=[ m.Expr( value=m.SimpleString( # match anything between """ """ value=m.MatchIfTrue(lambda value: re.search(r"\"\"\"[\s\S]*\"\"\"", value) is not None) ) ) ] ) def SUPER_CALL_NODE(func_name): return m.Call(func=m.Attribute(value=m.Call(func=m.Name("super")), attr=m.Name(func_name))) def is_call_to_super(node, func_name): return m.matches( node, m.SimpleStatementLine(body=[m.Return(SUPER_CALL_NODE(func_name)) | m.Expr(SUPER_CALL_NODE(func_name))]) ) def get_full_attribute_name(node: Union[cst.Attribute, cst.Name]) -> Optional[str]: """Get the full name of an Attribute or Name node (e.g. `"nn.Module"` for an Attribute representing it). 
If the successive value of an Attribute are not Name nodes, return `None`.""" if m.matches(node, m.Name()): return node.value elif m.matches(node, m.Attribute()): if not m.matches(node.attr, m.Name()): return None name = node.attr.value new_node = node.value while m.matches(new_node, m.Attribute()): if not m.matches(new_node.attr, m.Name()): return None name = new_node.attr.value + "." + name new_node = new_node.value if not m.matches(new_node, m.Name()): return None return new_node.value + "." + name return None # Transformer class to replace ClassB.call_to_method and ClassB().call_to_method with super().call_to_method class ReplaceMethodCallTransformer(cst.CSTTransformer): def __init__(self, all_bases: Set[str]): self.all_bases = all_bases def leave_Attribute(self, original_node: cst.Attribute, updated_node: cst.Attribute) -> cst.CSTNode: # Handle ClassB.call_to_method or module.classB.call_to_method if ( m.matches(original_node.value, m.Name() | m.Attribute()) and get_full_attribute_name(original_node.value) in self.all_bases and m.matches(original_node.attr, m.Name()) ): # Replace with super().call_to_method return updated_node.with_changes( value=cst.Call(cst.Name("super")), ) # Handle ClassB().call_to_method or module.ClassB().call_to_method elif ( m.matches(original_node.value, m.Call()) and m.matches(original_node.value.func, m.Name() | m.Attribute()) and get_full_attribute_name(original_node.value.func) in self.all_bases and m.matches(original_node.attr, m.Name()) ): # Replace with super().call_to_method return updated_node.with_changes(value=cst.Call(cst.Name("super"))) return updated_node def leave_Call(self, original_node: cst.Call, updated_node: cst.Call) -> cst.CSTNode: # Check if the function being called is of the form ClassB().func_a or ClassB.func_a if m.matches(original_node.func, m.Attribute()) and ( # Match ClassB().func_a(...) or module ( m.matches(original_node.func.value, m.Call()) and m.matches(original_node.func.value.func, m.Name() | m.Attribute()) and get_full_attribute_name(original_node.func.value.func) in self.all_bases ) or # Match ClassB.func_a(...) 
( m.matches(original_node.func.value, m.Name() | m.Attribute()) and get_full_attribute_name(original_node.func.value) in self.all_bases ) ): # Check if the first argument is 'self', and remove it if len(original_node.args) > 0 and m.matches(original_node.args[0].value, m.Name("self")): # Create the new argument list without 'self' new_args = updated_node.args[1:] else: new_args = updated_node.args return updated_node.with_changes(args=new_args) return updated_node def get_docstring_indent(docstring): # Match the first line after the opening triple quotes match = re.search(r'(?:"""|\'\'\'|```)\n(\s+)', docstring) if match: # Return the indentation spaces captured return len(match.group(1)) return 0 def merge_docstrings(original_docstring, updated_docstring): # indent_level = get_docstring_indent(updated_docstring) original_level = get_docstring_indent(original_docstring) if not re.findall(r"\n\s*Args:\n", updated_docstring): # Split the docstring at the example section, assuming `"""` is used to define the docstring parts = original_docstring.split("```") if "```" in updated_docstring and len(parts) > 1: updated_docstring = updated_docstring.lstrip('r"') new_parts = updated_docstring.split("```") if len(new_parts) != 3: raise ValueError("There should only be one example, and it should have opening and closing '```'") parts[1] = new_parts[1] updated_docstring = "".join( [ parts[0].rstrip(" \n") + new_parts[0], f"\n{original_level * ' '}```", parts[1], "```", parts[2], ] ) elif updated_docstring not in original_docstring: # add tabulation if we are at the lowest level. if re.search(r"\n\s*.*\(.*\)\:\n\s*\w", updated_docstring): updated_docstring = updated_docstring.replace("\n ", "\n ") updated_docstring = original_docstring.rstrip('"') + "\n" + updated_docstring.lstrip('r"\n') return updated_docstring class SuperTransformer(cst.CSTTransformer): METADATA_DEPENDENCIES = (ParentNodeProvider,) def __init__(self, python_module: cst.Module, original_methods, updated_methods, all_bases=None): self.python_module = python_module self.original_methods = original_methods self.updated_methods = updated_methods self.all_assign_target = {} self.deleted_targets = {} # child node can delete some arguments self.all_bases = all_bases or [] self.transformer = ReplaceMethodCallTransformer(set(self.all_bases)) def update_body(self, existing_body, new_statements): """ Helper method to update the body by removing duplicates before adding new statements. 
`existing_body` is the body of the original method, the parent class `new_statements` are the additional statements """ deduplicated_new_body = [] existing_nodes = set() for node in new_statements: if m.matches(node, m.SimpleStatementLine(body=[m.Assign()])): target = self.python_module.code_for_node(node.body[0].targets[0].target) self.all_assign_target[target] = node if m.matches(node, m.SimpleStatementLine(body=[m.Del()])): target = self.python_module.code_for_node(node.body[0].target) self.deleted_targets[target] = node for stmt in existing_body: if m.matches(stmt, m.SimpleStatementLine(body=[m.Assign()])): target = self.python_module.code_for_node(stmt.body[0].targets[0].target) if target in self.deleted_targets: continue if target in self.all_assign_target: stmt = self.all_assign_target[target] # Skip the docstring (will be added later on, at the beginning) elif m.matches(stmt, DOCSTRING_NODE): continue comment_less_code = re.sub(r"#.*", "", self.python_module.code_for_node(stmt)).strip() comment_less_code = re.sub(r"\ *\n", "\n", comment_less_code).strip() deduplicated_new_body.append(stmt) existing_nodes.add(comment_less_code) for node in new_statements: code = self.python_module.code_for_node(node) comment_less_code = re.sub(r"#.*", "", code).strip() comment_less_code = re.sub(r"\ *\n", "\n", comment_less_code).strip() if node not in deduplicated_new_body and comment_less_code not in existing_nodes: if not m.matches(node, m.SimpleStatementLine(body=[m.Del()])): deduplicated_new_body.append(node) existing_nodes.add(comment_less_code) deduplicated_new_body = self._fix_post_init_location(deduplicated_new_body) return deduplicated_new_body def _fix_post_init_location(self, new_body: list[cst.CSTNode]): """Fix the location of the `post_init()` in the new body, if we added statements after the call to `super()` (it needs to be the very last statement called)""" # Fix the post_init() that has to be last for i, node in enumerate(new_body): code = self.python_module.code_for_node(node) comment_less_code = re.sub(r"#.*", "", code).strip() comment_less_code = re.sub(r"\ *\n", "\n", comment_less_code).strip() if "self.post_init(" in comment_less_code and i < len(new_body) - 1: # Remove it and add it again at the end new_body.pop(i) new_body.append(node) break return new_body def _fix_init_location(self, new_body): """Fix the location of the `super().__init__()` in the new body, if we had new statements before it.""" start_index = 0 for i, node in enumerate(new_body): if m.matches(node, DOCSTRING_NODE) and i == start_index: start_index += 1 continue code = self.python_module.code_for_node(node) comment_less_code = re.sub(r"#.*", "", code).strip() comment_less_code = re.sub(r"\ *\n", "\n", comment_less_code).strip() if "super().__init__" in comment_less_code and i > start_index: # Remove it and add it again at the top after the docstrings node = new_body.pop(i) new_body = new_body[:start_index] + [node] + new_body[start_index:] break return new_body def replace_super_calls(self, node: cst.IndentedBlock, func_name: str) -> cst.CSTNode: """Updates the body of the input `node`'s `func_name` function by replacing calls to super().func_name() with the source code of the parent class' `func_name`. It keeps everything that is defined before `super().func_name()`. 
""" self.has_docstring = False parent_has_docstring = False if func_name in self.original_methods: parent_has_docstring = m.matches(self.original_methods[func_name].body.body[0], DOCSTRING_NODE) new_body = [] has_super_call = False for i, expr in enumerate(node.body): if is_call_to_super(expr, func_name): has_super_call = True new_body.extend(self.update_body(self.original_methods[func_name].body.body, node.body[i + 1 :])) new_body = self._fix_init_location(new_body) else: expr = expr.visit(self.transformer) if m.matches(expr, DOCSTRING_NODE): self.has_docstring = True if parent_has_docstring: # actually here we ought to de-duplicate? original_docstring = self.original_methods[func_name].body.body[0].body[0].value.value updated_docstring = expr.body[0].value.value merged_doc = merge_docstrings(original_docstring, updated_docstring) new_node = [expr.with_changes(body=[cst.Expr(value=cst.SimpleString(value=merged_doc))])] else: new_node = [expr] new_body.extend(new_node) elif not m.matches(expr, m.SimpleStatementLine(body=[m.Del()])) and not has_super_call: new_body.append(expr) if not self.has_docstring and parent_has_docstring: new_body = [self.original_methods[func_name].body.body[0]] + new_body return node.with_changes(body=new_body) def leave_FunctionDef(self, original_node: cst.Call, updated_node: cst.Call) -> cst.CSTNode: if updated_node.name.value in self.updated_methods: name = updated_node.name.value new_body = self.replace_super_calls(updated_node.body, name) return updated_node.with_changes(body=new_body, params=updated_node.params) return updated_node def leave_Return(self, original_node: cst.Return, updated_node: cst.Return) -> cst.CSTNode: """ "When a return statement is reached, it is replaced with the unrolled super code""" if m.matches(updated_node.value, m.Call(func=m.Attribute(attr=m.Name("super")))): func_def = self.get_metadata(ParentNodeProvider, original_node) if m.matched(func_def, m.FunctionDef()) and func_def.name.value in self.original_methods: updated_return_value = updated_node.value.with_changes( args=[ cst.Arg( value=cst.Call(func=cst.Name("super"), args=[cst.Arg(value=cst.Name(func_def.name.value))]) ) ] ) return updated_node.with_changes(value=updated_return_value) return updated_node def find_all_dependencies( dependency_mapping: Dict[str, set], start_entity: Optional[str] = None, initial_dependencies: Optional[set] = None, initial_checked_dependencies: Optional[set] = None, return_parent: bool = False, ) -> Union[list, set]: """Return all the dependencies of the given `start_entity` or `initial_dependencies`. This is basically some kind of BFS traversal algorithm. It can either start from `start_entity`, or `initial_dependencies`. Args: dependency_mapping (`Dict[str, set]`): A mapping from entities (usually function/assignment names), to immediate dependencies. That is, for function names, a mapping {"foo": {"bar", "test"}} would indicate that functions `bar` and `test` are immediately called in `foo`'s definition. start_entity (str | None, *optional*): A key of `dependency_mapping`, indicating from which entity to start the search. initial_dependencies (set | None, *optional*): If `start_entity` is not provided, this can be used as an alternative. In this case, the search will continue from all the entities in `initial_dependencies`, if they are in `dependency_mapping`. initial_checked_dependencies (set | None, *optional*): If provided, entities already present in `initial_checked_dependencies` will not be part of the returned dependencies. 
return_parent (bool, *optional*): If `True`, will return a list consisting of tuples (dependency, parent) instead of a simple set of dependencies. Note that the order of the items in the list reflects the traversal order. Thus, no parent can ever appear before childs. Returns: A set of all the dependencies, or a list of tuples `(dependency, parent)` if `return_parent=True`. Example: Given the following structure in the `modular_xxx.py` file: ``` def foo1(): pass def foo2(): pass def bar(): foo1() def foobar(): bar() foo2() class MyLayer(SomeOtherModelLayer): def forward(...): foobar() ``` and the `dependency_mapping` created when visiting the `modular_xxx.py` file, we get: ``` dependency_mapping = {'bar': {'foo1'}, 'foobar': {'bar', 'foo2'}} find_all_dependencies(dependency_mapping, start_entity='foobar', return_parent=True) >>> [('bar', 'foobar'), ('foo2', 'foobar'), ('foo1', 'bar')] ``` That is, all the functions needed (and potentially their immediate parent) so that the function to be added in MyLayer (`foobar`) can work correctly. """ if initial_dependencies is None and start_entity is not None: initial_dependencies = dependency_mapping[start_entity] if initial_checked_dependencies is None: initial_checked_dependencies = set() dependency_queue = deque(initial_dependencies) all_dependencies = set() all_dependencies_with_parent = [] checked_dependencies = set(initial_checked_dependencies) parents = {initial_dep: start_entity for initial_dep in initial_dependencies} while len(dependency_queue) > 0: # Pick element to visit current = dependency_queue.popleft() if current not in checked_dependencies: # Add the dependencies all_dependencies.add(current) all_dependencies_with_parent += [(current, parents[current])] if current in dependency_mapping.keys(): # Update dependency queue dependency_queue.extend(dependency_mapping[current]) parents.update({dep: current for dep in dependency_mapping[current]}) # add visited node to the list checked_dependencies.add(current) if not return_parent: return all_dependencies # no child can ever appear before its parent thanks to the queue (needed to add them at the correct location in the body later) return all_dependencies_with_parent # Top-level variables that match the following patterns will always use the value in the `modular_xxx.py` file ASSIGNMENTS_REGEX_TO_KEEP = [r"_CHECKPOINT", r"_EXPECTED", r"_FOR_DOC"] class ClassDependencyMapper(CSTVisitor): """A visitor which is designed to analyze a single class node to get all its dependencies that are shared with the set of `global_names`. 
""" def __init__( self, class_name: str, global_names: set[str], objects_imported_from_modeling: Optional[set[str]] = None ): super().__init__() self.class_name = class_name self.dependencies = set() self.global_names = global_names self.objects_imported_from_modeling = ( set() if objects_imported_from_modeling is None else objects_imported_from_modeling ) def visit_Name(self, node): if ( node.value != self.class_name and node.value in self.global_names and node.value not in self.objects_imported_from_modeling ): self.dependencies.add(node.value) def dependencies_for_class_node(node: cst.ClassDef, global_names: set[str]) -> set: """Create immediate dependencies for a class node based on the `global_names`.""" temp_module = cst.Module(body=[node]) visitor = ClassDependencyMapper(node.name.value, global_names) temp_module.visit(visitor) return visitor.dependencies def augmented_dependencies_for_class_node( node: cst.ClassDef, mapper: "ModuleMapper", objects_imported_from_modeling: Optional[set[str]] = None ) -> set: """Create augmented dependencies for a class node based on a `mapper`. Augmented dependencies means immediate dependencies + recursive function and assignments dependencies. """ temp_module = cst.Module(body=[node]) visitor = ClassDependencyMapper(node.name.value, set(mapper.global_nodes.keys()), objects_imported_from_modeling) temp_module.visit(visitor) return mapper.augment_dependencies(visitor.dependencies) # All the potential file types to create ALL_FILE_TYPES = ( "modeling", "configuration", "tokenization", "processing", "image_processing", "feature_extractor", ) class ModuleMapper(CSTVisitor, ABC): """An abstract visitor class which analyses a module, creating a mapping of dependencies for classes, functions and assignments. Class dependencies are computed with `compute_class_dependencies()`, while function and assignment dependencies are stored in `self.object_recursive_dependency_mapping` (can be computed by `_compute_recursive_object_dependencies()`). It defines common visiting patterns (i.e. common visit_xxx/leave_xxx functions) between the modular file and the modeling files that will be visited. """ METADATA_DEPENDENCIES = (ParentNodeProvider, PositionProvider) def __init__(self, python_module: cst.Module): # fmt: off self.python_module: cst.Module = python_module # original cst.Module being visited self.classes: Dict[str, cst.ClassDef] = {} # mapping from class names to Nodes (it will be ordered by default!!) self.imports = [] # stores all import statements self.functions: Dict[str, cst.FunctionDef] = {} # mapping of global scope function names to Nodes self.object_dependency_mapping = defaultdict(set) # immediate function/assignment dependency mapping (i.e. dependencies immediately in the function/assignment definition) self.assignments: Dict[str, cst.SimpleStatementLine] = {} # mapping of global assignments names to Nodes self.current_function = None # this keeps track of the current module-scope function self.current_assignment = None # this keeps track of the current module-scope assignment # this keeps track of objects imported from modeling files (`from .configuration import Config`) -> `Config` should not be a dependency self.objects_imported_from_modeling = set() # regex pattern joining every possible file type self.match_patterns = "|".join(ALL_FILE_TYPES) # fmt: on def visit_ImportFrom(self, node): """This keeps track of objects imported from neighbor modeling files (e.g. 
in `modeling_xxx.py, we have `from .configuration_xxx import Config`, then `Config` should be recorded as it is not a dependency that needs to be added (because it will be part of the imports)""" import_module = self.python_module.code_for_node(node.module) import_statement = "." * len(node.relative) + import_module if re.search(rf"^\.({self.match_patterns})_.*", import_statement): for imported_object in node.names: # If an alias is present, we record it and not the original name if imported_object.evaluated_alias is not None: self.objects_imported_from_modeling.add(imported_object.evaluated_alias) else: self.objects_imported_from_modeling.add(imported_object.evaluated_name) def visit_SimpleStatementLine(self, node): """ Global Assigns like `GEMMA_INPUT_DOCSTRING = 'THIS IS THE INPUT'` and all import statements are extracted and saved in their corresponding dict. They are then used when updating dependency mappings. """ parent_node = self.get_metadata(cst.metadata.ParentNodeProvider, node) simple_top_level_assign_structure = m.SimpleStatementLine( body=[m.Assign(targets=[m.AssignTarget(target=m.Name())])] ) if m.matches(parent_node, m.Module()): if m.matches(node, simple_top_level_assign_structure): left_hand_side = node.body[0].targets[0].target.value self.current_assignment = left_hand_side self.assignments[left_hand_side] = node elif m.matches(node, m.SimpleStatementLine(body=[m.Import() | m.ImportFrom()])): self.imports.append(node) def leave_SimpleStatementLine(self, node): # No need to check for the parent here -> everytime we exit one, it should be None anyway independently of where the # SimpleStatement is located self.current_assignment = None def visit_FunctionDef(self, node): parent_node = self.get_metadata(cst.metadata.ParentNodeProvider, node) if m.matches(parent_node, m.Module()): self.current_function = node.name.value self.functions[node.name.value] = node def leave_FunctionDef(self, node): parent_node = self.get_metadata(cst.metadata.ParentNodeProvider, node) if m.matches(parent_node, m.Module()): self.current_function = None def visit_If(self, node): for stmt in node.body.body: if m.matches(stmt, m.SimpleStatementLine(body=[m.ImportFrom() | m.Import()])): self.imports.append(node) def visit_ClassDef(self, node: ClassDef) -> None: """Record class nodes to create their dependencies at the end.""" self.classes[node.name.value] = node def visit_Name(self, node: cst.Call): """This is used to create a mapping from module-scope functions and assignments to objects used inside them.""" if self.current_function is not None: self.object_dependency_mapping[self.current_function].add(node.value) if self.current_assignment is not None: self.object_dependency_mapping[self.current_assignment].add(node.value) def leave_Module(self, node): """When leaving the module, we store the position of each global scoped node to allow sorting the dependencies based on their position in the code later. We use the PositionProvider metadata wrapper for this. We also make sure to update `self.object_dependency_mapping` so that it contains only names recorded in `self.global_nodes`. 
""" # assign all nodes self.global_nodes = {**self.assignments, **self.classes, **self.functions} # now sort the class dependency_mapping based on the position of the nodes self.start_lines = {} for id, node in self.global_nodes.items(): self.start_lines[id] = self.get_metadata(cst.metadata.PositionProvider, node).start.line def _restrict_dependencies_to_known_entities(self): """Since we added every Name as part of `self.object_dependency_mapping`, we need to remove those that are not part of the recorded objects in `self.global_nodes` (i.e. built-in variables, imports, etc). This should be called only after all merging operations have been finalized!!""" global_objects = set(self.global_nodes.keys()) for object_name, dependencies in self.object_dependency_mapping.items(): self.object_dependency_mapping[object_name] = {dep for dep in dependencies if dep in global_objects} def _compute_recursive_object_dependencies(self) -> dict[str, set]: """Based on immediate dependency mapping, create the recursive dependency mapping. For example, given the following file: ``` def foo(): pass def bar(): foo() def test(): bar() ``` this visitor can only record immediate dependencies, i.e. it will record the following `self.object_dependency_mapping = {"test": {"bar"}, "bar": {"foo}}`. This function is used to create the recursive mapping, i.e. `recursive_dependencies = {"test": {"bar", "foo"}, "bar": {"foo}}`. """ recursive_dependencies = {} for object_name in self.object_dependency_mapping.keys(): all_dependencies = find_all_dependencies(self.object_dependency_mapping, start_entity=object_name) recursive_dependencies[object_name] = all_dependencies return recursive_dependencies def augment_dependencies(self, dependencies: set[str]) -> set[str]: """For a set of `dependencies`, augment them by adding all potential dependencies of the **functions** and **assignments** present in the `dependencies`. """ new_dependencies = dependencies.copy() # Go through the set of dependencies for dep in tuple(dependencies): if dep in self.object_recursive_dependency_mapping.keys(): new_dependencies.update(self.object_recursive_dependency_mapping[dep]) return new_dependencies def compute_class_dependencies(self): """For each visited class, find its dependencies based on visiting the current file + potential merged dependencies.""" self.class_dependency_mapping = {} for class_name, class_node in self.classes.items(): dependencies = dependencies_for_class_node(class_node, set(self.global_nodes.keys())) # Correctly augment class dependencies with all needed objects self.class_dependency_mapping[class_name] = self.augment_dependencies(dependencies) @abstractmethod def compute_relative_order(self, missing_dependencies: set) -> dict[str, int]: raise NotImplementedError class ModelFileMapper(ModuleMapper): """A mapper designed to parse modeling files (like `modeling_llama.py`). When encountering such a file in the `modular_xxx.py` file, we need to correctly visit it and merge the dependencies of the modular and current file. For this reason, this class should only be instantiated from the class method `visit_and_merge_dependencies`, which takes care of correctly merging dependencies, then finalizes all dependency graph computations. Note that we only merge functions and assignments here, as classes will be treated later on as they may be modified. For example, if you redefine `apply_rotary_pos_emb()` in the modular, the new node should be used in the dependencies of the modeling files as well. 
""" def __init__(self, python_module: cst.Module): super().__init__(python_module) def compute_relative_order(self, missing_dependencies: set[str]) -> dict[str, int]: """Compute in which relative order the `missing_dependencies` should appear when the nodes are added to the final file that will be created based on the modular. """ relative_order = {} idx = 0 classes = sorted( [dep for dep in tuple(missing_dependencies) if dep in self.classes], key=lambda x: self.start_lines[x] ) # This is because for merged dependencies, we only have relative order in the other visited file, so we need # to track dependency order relative to a given class if len(classes) > 0 and not hasattr(self, "class_dependency_mapping"): raise ValueError("Cannot correctly find the relative order of the dependencies.") remaining_dependencies = missing_dependencies.copy() # Start by tracking relative order class by class for class_name in classes: class_dependencies = tuple(self.class_dependency_mapping[class_name] & remaining_dependencies) original_dependencies = [] merged_dependencies = [] # We need to differentiate between nodes that were already present (we can get relative order globally) and # nodes that were merged (we can get relative order only relative to the class the dependencies relate to) for class_dep in class_dependencies: if class_dep in self.start_lines: original_dependencies.append(class_dep) else: merged_dependencies.append(class_dep) # We need to sort deterministically before actual sorting, so that entries missing (i.e. with value 1e10) # will always get the same order independently of the system (they come from a set, which has no deterministic order) original_dependencies = sorted(original_dependencies, reverse=True) # Sort both list according to the order in their respective file original_dependencies = sorted(original_dependencies, key=lambda x: self.start_lines.get(x, 1e10)) merged_dependencies = sorted(merged_dependencies, key=lambda x: self.modular_file_start_lines[x]) # Add all original node first, then merged ones for dep in original_dependencies + merged_dependencies: remaining_dependencies.remove(dep) relative_order[dep] = idx idx += 1 # Add the class itself (it can sometimes already be present if the order of classes in the source file # does not make sense, i.e. a class is used somewhere before being defined like in `rt_detr`...) if class_name in remaining_dependencies: remaining_dependencies.remove(class_name) relative_order[class_name] = idx idx += 1 # Now add what still remains remaining_dependencies = tuple(remaining_dependencies) original_dependencies = [] merged_dependencies = [] for dep in remaining_dependencies: if dep in self.modular_file_start_lines: merged_dependencies.append(dep) else: original_dependencies.append(dep) # We need to sort deterministically before actual sorting, so that entries missing (i.e. 
with value 1e10) # will always get the same order independently of the system (they come from a set, which has no deterministic order) original_dependencies = sorted(original_dependencies, reverse=True) # Sort both list according to the order in their respective file original_dependencies = sorted(original_dependencies, key=lambda x: self.start_lines.get(x, 1e10)) merged_dependencies = sorted(merged_dependencies, key=lambda x: self.modular_file_start_lines[x]) # Add all original node first, then merged ones for dep in original_dependencies + merged_dependencies: relative_order[dep] = idx idx += 1 return relative_order def _merge_functions(self, functions: dict[str, cst.CSTNode], object_mapping: dict[str, set]): """Update the global nodes and function dependency mapping with those from the modular file. Merging rule: if any function with the same name was redefined in the modular, use it and its dependencies instead of the original ones (this may mean to add new functions as well, if any redefined function uses a new one). """ # Add/overwrite all needed function nodes and dependencies self.functions.update(functions) self.object_dependency_mapping.update( {obj: dep for obj, dep in object_mapping.items() if obj in functions.keys()} ) # Add them to global nodes self.global_nodes.update(self.functions) def _merge_assignments(self, assignments: dict[str, cst.CSTNode], object_mapping: dict[str, set]): """Update the global nodes with the assignment from the modular file. Merging rule: if any assignment with the same name was redefined in the modular, we use it and its dependencies ONLY if it matches a pattern in `ASSIGNMENTS_REGEX_TO_KEEP`. Otherwise, we use the original value and dependencies. This rule was chosen to avoid having to rewrite the big docstrings. """ for assignment, node in assignments.items(): should_keep = any(re.search(pattern, assignment) for pattern in ASSIGNMENTS_REGEX_TO_KEEP) if should_keep or assignment not in self.assignments: self.assignments[assignment] = node if assignment in object_mapping: self.object_dependency_mapping[assignment] = object_mapping[assignment] # Add them to global nodes self.global_nodes.update(self.assignments) def _merge_classes(self, classes: dict[str, cst.CSTNode]): """Update the global nodes with the new classes from the modular (i.e. classes which do not exist in current file, and are not imported). We do NOT update any dependency mapping here. This is because we only need the names of newly defined classes in the modular to be discoverable when computing dependencies for new nodes later on. For this reason, we do not add the new classes to `self.classes`, but only to `global_nodes`. """ # Add/overwrite all needed function nodes and dependencies self.global_nodes.update( { name: node for name, node in classes.items() if name not in self.classes and name not in self.objects_imported_from_modeling } ) def merge_modular_dependencies(self, classes, functions, assignments, object_mapping, start_lines): """Merge classes, functions and assignments from the modular definitions into the current module file, then record the relative order of all nodes. Note: This function takes care of updating `global_nodes` and `object_recursive_dependency_mapping` as well after the merge with other files dependencies. 
""" self._merge_functions(functions, object_mapping) self._merge_assignments(assignments, object_mapping) self._merge_classes(classes) self.modular_file_start_lines = start_lines # Restrict the dependency mappings to the known entities to avoid Python's built-ins and imports self._restrict_dependencies_to_known_entities() # Create the global mapping of recursive dependencies for functions and assignments self.object_recursive_dependency_mapping = self._compute_recursive_object_dependencies() @classmethod def visit_and_merge_dependencies( cls, module: cst.Module, classes, functions, assignments, object_mapping, start_lines ) -> "ModelFileMapper": wrapper = MetadataWrapper(module) mapper = cls(module) wrapper.visit(mapper) # Merge dependencies mapper.merge_modular_dependencies(classes, functions, assignments, object_mapping, start_lines) # Create the class dependencies graph mapper.compute_class_dependencies() return mapper def common_partial_suffix(str1: str, str2: str) -> str: """Return the biggest common suffix between 2 strings. If one string is a full suffix of the other string, we do not consider it a common suffix and return `""`""" common_suffix = "" for i in range(1, min(len(str1), len(str2)) + 1): if str1[-i] == str2[-i]: common_suffix = str1[-i] + common_suffix else: break # We do not allow full string suffix if common_suffix == str1 or common_suffix == str2: common_suffix = "" return common_suffix def replace_class_node( mapper: ModelFileMapper, class_node: cst.ClassDef, renamed_super_class: str, original_super_class: str ): """ Replace a class node which inherits from another modeling class. This function works in the following way: - start from the base class node of the inherited class (a cst.Node) - replace all methods of the base node with the methods defined in the child class - append all new methods defined in the child class - replace all calls to super() with the unravelled code | ```python | | ```python | class GemmaModel(LlamaModel): | | class GemmaModel(nn.Module): | def __init__(self): | | def __init__(self): Going from: | super().__init__() | to: | super().__init__(config) | self.dropout = 0.2 | | self.dropout = 0.2 | ``` | | self.padding_idx = config.pad_token_id | self.vocab_size = config.vocab_size | self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) | self.layers = nn.ModuleList( | [LlamaDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)] | ) | self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) | self.gradient_checkpointing = False | # Initialize weights and apply final processing | self.post_init() | ``` """ all_bases = [get_full_attribute_name(k.value) for k in class_node.bases] if any(base is None for base in all_bases): raise ValueError(f"Could not parse the name of the bases for {class_node.name.value}") original_node = mapper.classes[renamed_super_class] # Always use the new name of the class (in case we use e.g. 
`ColPaliForRetrieval` inheriting from `PaliGemmaForConditionalGeneration`) new_name = class_node.name # If the new class name is different from the renamed super class name, we need to update the docstrings/comments accordingly if new_name.value != renamed_super_class: common_suffix = common_partial_suffix(new_name.value, renamed_super_class) # Note that this works even without common prefix, in which case it does not replace anything old, new = renamed_super_class.replace(common_suffix, ""), new_name.value.replace(common_suffix, "") temp_module = cst.Module(body=[original_node]) original_node = temp_module.visit( ReplaceNameTransformer(get_lowercase_name(old), get_lowercase_name(new), only_doc=True) ).body[0] # If we explicitly passed a new base with common suffix to an old base, it is for switching the prefix # e.g. if the "natural" parent class is `PreTrainedModel` but we wanted to rename it to `PreTrainedVisionModel` additional_bases = [base for base in all_bases if base != original_super_class] new_bases = [] for original_base in original_node.bases: new_base = original_base # we only potentially switch base for Name-based bases, not Attribute if m.matches(original_base.value, m.Name()): original_base_name = original_base.value.value for additional_base_name in additional_bases: suffix = common_partial_suffix(original_base_name, additional_base_name) if len(suffix) > 0 and suffix[0].isupper(): new_name_node = original_base.value.with_changes(value=additional_base_name) new_base = original_base.with_changes(value=new_name_node) break new_bases.append(new_base) original_methods = { f.name.value if hasattr(f, "name") else mapper.python_module.code_for_node(f): f for f in original_node.body.body } updated_methods = { f.name.value if hasattr(f, "name") else mapper.python_module.code_for_node(f): f for f in class_node.body.body } end_meth = [] assign_targets = {} docstring_node = [] # Iterate directly from node.body as there can be property/setters with same names which are overwritten when we use a dict for func in original_node.body.body: name = func.name.value if hasattr(func, "name") else mapper.python_module.code_for_node(func) if m.matches(func, m.FunctionDef()) and name in updated_methods and updated_methods[name] is not None: new_params = updated_methods[name].params # Replace the method in the replacement class, preserving decorators kwarg_name = getattr(updated_methods[name].params, "star_kwarg", None) if kwarg_name and kwarg_name.name.value == "super_kwargs": parent_params = {k.name.value: k for k in func.params.params} parent_params.update({k.name.value: k for k in new_params.params[1:]}) new_params = new_params.with_changes( params=list(parent_params.values()), star_kwarg=func.params.star_kwarg ) # Keep decorators in `modular_xxx.py` if any, else original decorators new_decorators = ( updated_methods[name].decorators if len(updated_methods[name].decorators) > 0 else func.decorators ) if not re.match( r"\ndef .*\(.*\):\n raise.*Error\(.*", mapper.python_module.code_for_node(updated_methods[name]), ): func = func.with_changes(body=updated_methods[name].body, params=new_params, decorators=new_decorators) else: continue if m.matches(func, m.SimpleStatementLine(body=[m.Assign()])): target = mapper.python_module.code_for_node(func.body[0].targets[0]) assign_targets[target] = func elif m.matches(func, m.SimpleStatementLine(body=[m.AnnAssign()])): target = mapper.python_module.code_for_node(func.body[0].target) assign_targets[target] = func elif m.matches(func, DOCSTRING_NODE): 
docstring_node = [func] else: end_meth.append(func) # Port new methods that are defined only in modular-file and append at the end for func in class_node.body.body: name = func.name.value if hasattr(func, "name") else mapper.python_module.code_for_node(func) if m.matches(func, DOCSTRING_NODE): # This processes the docstring of the class! # Extract the original docstring updated_docstring = func.body[0].value.value if len(docstring_node) == 0: # If the original docstring is empty, just create one from the updated. docstring_node = [ cst.SimpleStatementLine(body=[cst.Expr(value=cst.SimpleString(value=updated_docstring))]) ] else: original_docstring = docstring_node[0].body[0].value.value merged_doc = merge_docstrings(original_docstring, updated_docstring) # Update the docstring in the original function docstring_node = [ docstring_node[0].with_changes(body=[cst.Expr(value=cst.SimpleString(value=merged_doc))]) ] if name not in original_methods and func is not None and isinstance(func, cst.FunctionDef): end_meth.append(func) if m.matches(func, m.SimpleStatementLine(body=[m.Assign()])): # TODO we only use single assign might cause issues target = mapper.python_module.code_for_node(func.body[0].targets[0]) assign_targets[target] = func if m.matches(func, m.SimpleStatementLine(body=[m.AnnAssign()])): target = mapper.python_module.code_for_node(func.body[0].target) assign_targets[target] = func end_meth = docstring_node + list(assign_targets.values()) + end_meth # Replace the calls to `super()` with the unrolled code result_node = original_node.with_changes(body=cst.IndentedBlock(body=end_meth)) temp_module = cst.Module(body=[result_node]) new_module = MetadataWrapper(temp_module) new_replacement_class = new_module.visit( SuperTransformer(temp_module, original_methods, updated_methods, all_bases) ) new_replacement_body = new_replacement_class.body[0].body # get the indented block # Use decorators redefined in `modular_xxx.py` if any new_decorators = class_node.decorators if len(class_node.decorators) > 0 else original_node.decorators return original_node.with_changes( body=new_replacement_body, decorators=new_decorators, bases=new_bases, name=new_name ) TYPE_TO_FILE_TYPE = { "Config": "configuration", "Tokenizer": "tokenization", "Processor": "processing", "ImageProcessor": "image_processing", "ImageProcessorFast": "image_processing*_fast", # "*" indicates where to insert the model name before the "_fast" suffix "FastImageProcessorInitKwargs": "image_processing*_fast", "FastImageProcessorPreprocessKwargs": "image_processing*_fast", "FeatureExtractor": "feature_extractor", "ProcessorKwargs": "processing", "ImagesKwargs": "processing", "TextKwargs": "processing", } def find_file_type(class_name: str) -> str: """Based on a class name, find the file type corresponding to the class. If the class name is `LlamaConfig` it will return `configuration`. The list of suffixes is in `TYPE_TO_FILE_TYPE`. 
If there are no match, we match by default to `modeling` """ match_pattern = "|".join(TYPE_TO_FILE_TYPE.keys()) match = re.search(rf"({match_pattern})$", class_name) if match: file_type = TYPE_TO_FILE_TYPE[match.group(1)] else: file_type = "modeling" return file_type # These top-level variables will always appear at the very beginning of the file, in the order they are defined in # this list (this is to avoid having variables at weird places, even if they are not used before) VARIABLES_AT_THE_BEGINNING = ( "logger", "_CHECKPOINT_FOR_DOC", "_CONFIG_FOR_DOC", ) # These specific modeling imports should not be visited as other modeling files IMPORTS_TO_SKIP_IN_MODULAR = ("auto.modeling_auto",) def append_new_import_node( node: cst.CSTNode, unused_imports: set[str], added_names: set, imports_to_keep: list[cst.CSTNode] ): """Insert the new `node` to the list of `imports_to_keep` in-place, if it is not part of the `unused_imports` or `added_names`. Also modifies `added_names` in-place accordingly.""" import_node = node.body[0] names_to_keep = [] for name in import_node.names: name_value = name.evaluated_name if name_value not in unused_imports and name_value not in added_names: names_to_keep.append(name.with_changes(comma=cst.MaybeSentinel.DEFAULT)) added_names.add(name_value) if len(names_to_keep) > 0: new_node = node.with_changes(body=[import_node.with_changes(names=names_to_keep)]) imports_to_keep.append(new_node) def get_needed_imports(body: dict[str, dict], all_imports: list[cst.CSTNode]) -> list[cst.CSTNode]: """Get all the imports needed in the `body`, from the list of `all_imports`. `body` is a dict with the following structure `{str: {"insert_idx": int, "node": cst.CSTNode}}`. Note: we need to use `isinstance` on scope assignements, m.matches apparently does not work here yet! 
""" new_body = [k[1]["node"] for k in sorted(body.items(), key=lambda x: x[1]["insert_idx"])] wrapper = MetadataWrapper(cst.Module(body=all_imports + new_body)) scopes = set(wrapper.resolve(ScopeProvider).values()) unused_imports = set() import_ref_count = defaultdict(lambda: 0) for scope in scopes: for assignment in scope.assignments: node = assignment.node if isinstance(assignment, cst.metadata.Assignment) and isinstance(node, (cst.Import, cst.ImportFrom)): ref_count = len(assignment.references) name = assignment.name import_ref_count[name] = max(ref_count, import_ref_count[name]) # Similar imports may be redefined, and only used between their 1st and 2nd definition so if we already have # a ref count > 0 at any point, the imports is actually used unused_imports = {name for name, count in import_ref_count.items() if count <= 0 or name in body.keys()} imports_to_keep = [] # We need to keep track of which names were already imported, because some import may be duplicated from multiple sources # or be both protected and unprotected due to inconsistency between models added_names = set() existing_protected_statements = set() # str repr of the import nodes - does not work with the nodes directly for node in all_imports: if m.matches(node, m.If()): # handle safe imports new_statements = [] for stmt_node in node.body.body: append_new_import_node(stmt_node, unused_imports, added_names, new_statements) new_statements = [stmt for stmt in new_statements if str(stmt) not in existing_protected_statements] if len(new_statements) > 0: new_node = node.with_changes(body=node.body.with_changes(body=new_statements)) imports_to_keep.append(new_node) existing_protected_statements.update({str(stmt) for stmt in new_statements}) else: append_new_import_node(node, unused_imports, added_names, imports_to_keep) protected_import_nodes = [node for node in imports_to_keep if m.matches(node, m.If())] usual_import_nodes = [node for node in imports_to_keep if not m.matches(node, m.If())] # Protected imports always appear at the end of all imports return usual_import_nodes + protected_import_nodes def split_all_assignment(node: cst.CSTNode) -> dict[str, cst.CSTNode]: """Split the `__all__` assignment found in the modular between each corresponding files.""" all_all_per_file = {} assign_node = node.body[0] if isinstance(assign_node.value, cst.List): # Extract the elements from the list all_all_to_add = defaultdict(list) for element in assign_node.value.elements: if isinstance(element.value, cst.SimpleString): # Remove quotes and add the string to the elements list class_name = element.value.value file = find_file_type(element.value.evaluated_value) all_all_to_add[file] += [class_name] for file, new_alls in all_all_to_add.items(): new_node = assign_node.with_changes( value=cst.List(elements=[cst.Element(value=cst.SimpleString(value=k)) for k in new_alls]) ) all_all_per_file[file] = node.with_changes(body=[new_node]) return all_all_per_file class ModularFileMapper(ModuleMapper): """This is a Mapper to visit a modular file (like `modular_llama.py`). It visits the whole file, recording dependency, then visits all imported modeling files (like `modeling_llama.py`), and manages their mutual dependencies. Calling the method `create_modules()` after visit will create all modules based on this modular file. """ def __init__(self, python_module, new_name): super().__init__(python_module) # fmt: off self.model_name = new_name # name of the model being defined. 
Should be in the format of `llama` or `layout_xlm` or `phi3` self.model_specific_imported_objects: Dict[str, str] = {} # e.g. {"LlamaModel": "transformers.models.llama.modeling_llama"} self.model_specific_modules: Dict[str, cst.Module] = {} # e.g. {"transformers.models.llama.modeling_llama": cst.Module} self.all_all_to_add = {} # fmt: on def visit_ImportFrom(self, node: cst.ImportFrom) -> None: """When visiting imports from modeling files (i.e. `transformers.models.xxx`) we get the code, parse it, and save it in `self.model_specific_modules` to later visit. The imported objects are saved in `self.model_specific_imported_objects`. """ import_module = self.python_module.code_for_node(node.module) import_statement = "." * len(node.relative) + import_module if any(import_to_skip in import_statement for import_to_skip in IMPORTS_TO_SKIP_IN_MODULAR): return if m.matches(node.module, m.Attribute()): for imported_ in node.names: _import = re.search( rf"(?:transformers\.models\.)|(?:\.\.)\w+\.({self.match_patterns})_.*", import_statement ) if _import: source = _import.group(1) if source == "modeling" and "Config" in self.python_module.code_for_node(imported_): raise ValueError( f"You are importing {self.python_module.code_for_node(imported_)} from the modeling file. Import from the `configuration_xxxx.py` file instead" ) if import_module not in self.model_specific_modules: if "models" not in import_module: import_module = "models." + import_module if "transformers" not in import_module: import_module = "transformers." + import_module source_code = get_module_source_from_name(import_module) tree = cst.parse_module(source_code) self.model_specific_modules[import_module] = tree imported_object = self.python_module.code_for_node(imported_.name) self.model_specific_imported_objects[imported_object] = import_module if m.matches(node.module, m.Name()): if "transformers" == import_module: raise ValueError( f"You are importing from {import_module} directly using global imports. Import from the correct local path" ) def visit_SimpleStatementLine(self, node): """If we visit an import statement not previously visited, record it. If we visit a module-scope assignment, simply record it or, if it is `__all__`, split it between files where we should dispatch it. """ parent_node = self.get_metadata(cst.metadata.ParentNodeProvider, node) simple_top_level_assign_structure = m.SimpleStatementLine( body=[m.Assign(targets=[m.AssignTarget(target=m.Name())])] ) if m.matches(parent_node, m.Module()): if m.matches(node, m.SimpleStatementLine(body=[m.Import()])): self.imports.append(node) elif m.matches(node, m.SimpleStatementLine(body=[m.ImportFrom()])): import_module = self.python_module.code_for_node(node.body[0].module) import_statement = "." * len(node.body[0].relative) + import_module if not ( re.search(rf"(?:transformers\.models\.)|(?:\.\.)\w+\.({self.match_patterns})_.*", import_statement) and not any(import_to_skip in import_statement for import_to_skip in IMPORTS_TO_SKIP_IN_MODULAR) ): self.imports.append(node) elif m.matches(node, simple_top_level_assign_structure): assigned_variable = node.body[0].targets[0].target.value # __all__ is treated differently and not added to general assignments if assigned_variable == "__all__": self.all_all_to_add = split_all_assignment(node) else: self.current_assignment = assigned_variable self.assignments[assigned_variable] = node def leave_Module(self, node): """When we leave the modular file, we do the following in order: 1. 
for each modeling file found in the imports, rename it with the new model name, visit it, and update its dependency graph with the new function and assignment definitions found in the modular 2. update the modular dependency graph with the imported functions and assignments (found when visiting the matching files) 3. compute the nested (recursive) function and assignment dependencies """ # Takes care of finalizing our visit super().leave_Module(node) # 1. for each modeling file found in the imports, rename it with the new model name, visit it, and update dependencies self.visited_modules = {} self.renamers = {} name_prefixes = self.infer_new_model_name() for file, module in self.model_specific_modules.items(): file_model_name = file.split(".")[-2] new_name = name_prefixes[file] renamer = ReplaceNameTransformer(file_model_name, new_name, self.model_name) renamed_module = module.visit(renamer) self.visited_modules[file] = ModelFileMapper.visit_and_merge_dependencies( renamed_module, self.classes, self.functions, self.assignments, self.object_dependency_mapping, self.start_lines, ) # We record it so that we can rename classes later the exact same way self.renamers[file] = renamer # 2. in turn, we need to add the imported functions/assignments to the dependencies of the modular mapper, using the # definitions found in the visited files self.merge_model_specific_imports(self.visited_modules) # 3. compute the nested (recursive) function and assignment dependencies self.object_recursive_dependency_mapping = self._compute_recursive_object_dependencies() # We need to keep track of which objects were imported directly into which modeling file to not add them wrongly later # Note that we may visit several of the same file types, thus we save them per file type, not file self.imported_objects_per_file = defaultdict(set) for file, mapper in self.visited_modules.items(): file_type = re.search(rf"^transformers\.models\.\w+\.({self.match_patterns})_.*", file).group(1) self.imported_objects_per_file[file_type].update(mapper.objects_imported_from_modeling) def merge_model_specific_imports(self, visited_modules): """Merge the functions and assignments imported from the modeling files to the modular nodes and dependency graph, based on the visited files.""" self.start_lines_file_mapping = {} self.added_objects_file_mapping = {} for object_name, file in self.model_specific_imported_objects.items(): visited_module = visited_modules[file] self.start_lines_file_mapping[file] = visited_module.start_lines # Add functions and their dependencies if object_name in visited_module.functions and object_name not in self.functions: self.functions[object_name] = visited_module.functions[object_name] self.added_objects_file_mapping[object_name] = file dependencies = visited_module.object_dependency_mapping.get(object_name, None) if dependencies is not None: self.object_dependency_mapping[object_name] = dependencies for dep in dependencies: if dep not in self.global_nodes: self.added_objects_file_mapping[dep] = file self.functions[dep] = visited_module.global_nodes[dep] # Add/overwrite the imported functions to other visited modules as well, in case it is absent/different # in he modeling source file of the inherited class. 
See `examples/modular-tranformers/modular_switch_function.py` # and `examples/modular-tranformers/modular_add_function.py` for examples recursive_dependencies = visited_module.object_recursive_dependency_mapping.get(object_name, set()) node_recursive_dependencies_mapping = { dep: visited_module.global_nodes[dep] for dep in recursive_dependencies } for filename, module_mapper in self.visited_modules.items(): if filename != file: module_mapper.global_nodes[object_name] = visited_module.functions[object_name] if len(recursive_dependencies) > 0: module_mapper.object_recursive_dependency_mapping[object_name] = recursive_dependencies module_mapper.global_nodes.update(node_recursive_dependencies_mapping) # Add assignments and their dependencies elif object_name in visited_module.assignments and object_name not in self.assignments: self.assignments[object_name] = visited_module.assignments[object_name] self.added_objects_file_mapping[object_name] = file dependencies = visited_module.object_dependency_mapping.get(object_name, None) if dependencies is not None: self.object_dependency_mapping[object_name] = dependencies for dep in dependencies: if dep not in self.global_nodes: self.added_objects_file_mapping[dep] = file self.assignments[dep] = visited_module.global_nodes[dep] # Do not forget to re-assign all nodes after the merge self.global_nodes = {**self.assignments, **self.classes, **self.functions} # And restric dependencies to those nodes only self._restrict_dependencies_to_known_entities() def compute_relative_order(self, missing_dependencies: set) -> dict[str, int]: """Compute in which relative order the `missing_dependencies` should appear when the nodes are added to the final file that will be created based on the modular. """ relative_order = {} idx = 0 original_dependencies = [] other_files_dependencies = defaultdict(list) for dep in tuple(missing_dependencies): if dep in self.added_objects_file_mapping: file = self.added_objects_file_mapping[dep] other_files_dependencies[file].append(dep) else: original_dependencies.append(dep) # Sort all lists according to the order in their respective file all_dependencies = [] for file, dependencies in other_files_dependencies.items(): sorted_dependencies = sorted(dependencies, key=lambda x: self.start_lines_file_mapping[file][x]) all_dependencies += sorted_dependencies all_dependencies += sorted(original_dependencies, key=lambda x: self.start_lines[x]) # Add all original node first, then merged ones (one file at a time) for dep in all_dependencies: relative_order[dep] = idx idx += 1 return relative_order def infer_new_model_name(self) -> dict: """Infer whether we are using a model name prefix different from the usual model name as defined from the filename. This is useful e.g. when we define a new multi-modal model, and only the text part inherits from `LlamaModel`, so we have something like: ```python class NewModelNameTextDecoderLayer(LlamaDecoderLayer): pass ``` with the `Text` prefix added to the model name. However, in case of multiple prefix used, we raise a warning and use the most frequent prefix, to avoid parsing the same file multiple times and inconsistencies in the objects added from dependencies. If the new prefix collides with a prefix of another class in the file where we are importing from, then we also raise a warning, and use the default prefix (model name) to avoid collisions in dependencies. 
""" prefix_model_name_mapping = defaultdict(Counter) cased_default_name = get_cased_name(self.model_name) # Iterate over all new classes to get modeling super classes for class_name, class_node in self.classes.items(): modeling_bases = [ k.value.value for k in class_node.bases if k.value.value in self.model_specific_imported_objects ] if len(modeling_bases) > 1: raise ValueError( f"{class_name} was defined with more than 1 model-specific super class. This is unsupported. We found {(*modeling_bases,)}." ) if len(modeling_bases) == 1: filename = self.model_specific_imported_objects[modeling_bases[0]] cased_model_name = cased_default_name # the default name prefix suffix = common_partial_suffix(class_name, modeling_bases[0]) if len(suffix) > 0 and suffix[0].isupper(): cased_model_name = class_name.replace(suffix, "") prefix_model_name_mapping[filename].update([cased_model_name]) # Check if we found multiple prefixes for some modeling files final_name_mapping = {} for file, prefixes_counter in prefix_model_name_mapping.items(): if len(prefixes_counter) > 1: _, total = prefixes_counter.most_common(1)[0] most_used_entities = [name for name, count in prefixes_counter.most_common() if count == total] # if the default name is in the pool of equally used prefixes, use it, otherwise last encountered final_name = cased_default_name if cased_default_name in most_used_entities else most_used_entities[-1] else: final_name = list(prefixes_counter)[0] # Check if the prefix can be used without collisions in the names old_cased_model_name = get_cased_name(file.split(".")[-2]) old_model_name_prefix = final_name.replace(cased_default_name, old_cased_model_name) # Raise adequate warning depending on the situation has_prefix_collision = f"\nclass {old_model_name_prefix}" in get_module_source_from_name(file) if final_name != cased_default_name and has_prefix_collision: if len(prefixes_counter) > 1: logger.warning( f"We detected multiple prefix names when inheriting from {file}: {(*set(prefixes_counter),)}. However, the " f"most used one, '{final_name}', is already present in the source file and will likely cause consistency " f"issues. For this reason we fallback to the default prefix '{cased_default_name}' when grabbing args " "and dependencies. Make sure to subclass the intermediate classes with the prefix you want (if different " f"from '{cased_default_name}') or use a single prefix in all the modular (best)." ) else: logger.warning( f"We detected the use of the new default prefix {final_name} when inheriting from {file}. However, it is " "already present in the source file and will likely cause consistency issues. For this reason we fallback " f"to the default prefix '{cased_default_name}' when grabbing args and dependencies. Make sure to subclass " f"the intermediate classes with the prefix you want (if different from '{cased_default_name}')" ) final_name = cased_default_name elif len(prefixes_counter) > 1: logger.warning( f"We detected multiple prefix names when inheriting from {file}: {(*set(prefixes_counter),)}. We will only " f"use the most used '{final_name}' prefix when grabbing args and dependencies. Make sure to subclass the " f"intermediate classes with the prefix you want (if different from '{final_name}') or use a single prefix " "in all the modular (best)." 
) final_name_mapping[file] = get_lowercase_name(final_name) # Check we are not missing imported files for file in self.model_specific_modules.keys(): if file not in final_name_mapping.keys(): final_name_mapping[file] = self.model_name return final_name_mapping def check_dependencies_and_create_import_node( file_type: str, new_dependencies: set[str], mapper: ModuleMapper, new_name: str ) -> tuple[set[str], dict[str, cst.CSTNode]]: """Check that all class nodes in the `new_dependencies` belong to the correct `file_type`. If this is not the case, we need to remove it from the dependencies, and create a new import to it instead. This scenario may appear in the following case: If a new class in the `modular_xxx.py` file does not belong to `type_xxx.py`, but is used somewhere in `other_type_xxx.py` (e.g. as a type hint), but none of the visited files had a similar class, then it would be imported in `type_xxx.py` as part of the standard dependency graph (because we never encountered an import towards this new class in any file). For example imagine the following `modular.py`: ``` from ..llama.modeling_llama import LlamaModel class NewNameTextConfig(PretrainedConfig): ... class NewNameConfig(PretrainedConfig): ... class NewNameModel(LlamaModel): config = NewNameConfig() text_config = NewNameTextConfig() ... ``` then without the help of this function, `NewNameTextConfig` would be imported in the `modeling_newname.py` as well as `configuration_newname.py`, because `modeling_llama.py` tells us to not import `NewNameConfig`, but has no knowledge of `NewNameTextConfig`. """ class_dependencies = {dep for dep in new_dependencies if m.matches(mapper.global_nodes[dep], m.ClassDef())} corrected_dependencies = new_dependencies.copy() new_imports = {} for class_name in class_dependencies: class_file_type = find_file_type(class_name) # In this case, we need to remove it from the dependencies and create a new import instead if class_file_type != file_type: corrected_dependencies.remove(class_name) import_statement = f"from .{class_file_type}_{new_name} import {class_name}" new_imports[class_name] = cst.parse_statement(import_statement) return corrected_dependencies, new_imports def get_class_node_and_dependencies( modular_mapper: ModularFileMapper, class_name: str, node: cst.CSTNode, files: dict[str, dict] ) -> tuple[dict, str, dict]: """Return a single class node (and all its dependency nodes), to be added to the `files`. It creates the new class node based on the inherited classes if needed. Also returns any new imports of a new class defined in the modular that we nay need. 
""" # An exception was already raised if this has len > 1 model_specific_bases = [ k.value.value for k in node.bases if k.value.value in modular_mapper.model_specific_imported_objects ] super_class = model_specific_bases[0] if len(model_specific_bases) == 1 else None file_type = find_file_type(class_name) file_to_update = files[file_type] model_name = modular_mapper.model_name # This is used to avoid adding objects to the dependencies graph if they will be imported already imported_objects = modular_mapper.imported_objects_per_file[file_type] # We need to replace the class node with the transformers (modeling file) super class node if super_class is not None: super_file_name = modular_mapper.model_specific_imported_objects[super_class] # Get the mapper corresponding to the inherited class mapper = modular_mapper.visited_modules[super_file_name] # Rename the super class according to the exact same rule we used when renaming the whole module renamer = modular_mapper.renamers[super_file_name] renamed_super_class = preserve_case_replace(super_class, renamer.patterns, renamer.cased_new_name) # Create the new class node updated_node = replace_class_node(mapper, node, renamed_super_class, super_class) # Grab all immediate dependencies of the new node new_node_dependencies = augmented_dependencies_for_class_node(updated_node, mapper, imported_objects) # At this point, if any class dependency is found, but belongs to another file, it means that we need to remove # it from the dependencies, and add a new import of it instead new_node_dependencies, new_imports = check_dependencies_and_create_import_node( file_type, new_node_dependencies, mapper, model_name ) # The node was modified -> look for all recursive dependencies of the new node all_dependencies_to_add = find_all_dependencies( dependency_mapping=mapper.class_dependency_mapping, initial_dependencies=new_node_dependencies, initial_checked_dependencies=set(file_to_update.keys()), ) relative_dependency_order = mapper.compute_relative_order(all_dependencies_to_add) nodes_to_add = { dep: (relative_dependency_order[dep], mapper.global_nodes[dep]) for dep in all_dependencies_to_add } # No transformers (modeling file) super class, just check functions and assignments dependencies else: updated_node = node # The node was NOT modified -> no need to look recursively for other class dependencies. 
Indeed, even if they are not # already defined (which would mean a weird order of the code in the modular...), they will be in the future all_dependencies_to_add = augmented_dependencies_for_class_node(updated_node, modular_mapper, imported_objects) # At this point, if any class dependency is found, but belongs to another file, it means that we need to remove # it from the dependencies, and add a new import of it instead all_dependencies_to_add, new_imports = check_dependencies_and_create_import_node( file_type, all_dependencies_to_add, modular_mapper, model_name ) relative_dependency_order = modular_mapper.compute_relative_order(all_dependencies_to_add) nodes_to_add = { dep: (relative_dependency_order[dep], modular_mapper.global_nodes[dep]) for dep in all_dependencies_to_add if dep not in file_to_update.keys() } # Add the class node itself to the nodes to add class_idx = max(relative_dependency_order.values()) + 1 if len(relative_dependency_order) > 0 else 0 nodes_to_add[class_name] = (class_idx, updated_node) return nodes_to_add, file_type, new_imports def create_modules(modular_mapper: ModularFileMapper) -> dict[str, cst.Module]: """Create all the new modules based on visiting the modular file. It replaces all classes as necesary.""" files = defaultdict(dict) current_file_indices = defaultdict(lambda: 0) # For each class defined in modular, potentially replace the node and add it with its dependencies for class_name, node in modular_mapper.classes.items(): nodes_to_add, file_type, new_imports = get_class_node_and_dependencies(modular_mapper, class_name, node, files) # Add the new potential new imports that we may need to the `modular_mapper` variable modular_mapper.imported_objects_per_file[file_type].update(new_imports.keys()) modular_mapper.imports.extend(list(new_imports.values())) # Sort the nodes according to their relative order nodes_to_add = sorted(nodes_to_add.items(), key=lambda x: x[1][0]) # Write all nodes to file for dependency, (_, node) in nodes_to_add: # This is used to keep certain variables at the beginning of the file try: # The -1000 is arbitrary -> just keep it bigger than the list idx = -1000 + VARIABLES_AT_THE_BEGINNING.index(dependency) except ValueError: idx = current_file_indices[file_type] current_file_indices[file_type] += 1 files[file_type][dependency] = {"insert_idx": idx, "node": node} # Add the __all__ statement to files at the end for file_type, node in modular_mapper.all_all_to_add.items(): idx = current_file_indices[file_type] files[file_type]["__all__"] = {"insert_idx": idx, "node": node} # Aggregate all the imports statements (we look for duplicates with the code_for_node, not the nodes themselves because # they are wrapped in SimpleStatementLine or If which could have different newlines, blanks etc) all_imports = modular_mapper.imports.copy() all_imports_code = {modular_mapper.python_module.code_for_node(node).strip() for node in all_imports} for file, mapper in modular_mapper.visited_modules.items(): new_imports = [ node for node in mapper.imports if mapper.python_module.code_for_node(node).strip() not in all_imports_code ] new_imports_code = {mapper.python_module.code_for_node(node).strip() for node in new_imports} all_imports.extend(new_imports) all_imports_code.update(new_imports_code) # Find the correct imports, and write the new modules for file, body in files.items(): new_body = [k[1]["node"] for k in sorted(body.items(), key=lambda x: x[1]["insert_idx"])] needed_imports = get_needed_imports(body, all_imports) full_module = needed_imports + 
new_body new_module = cst.Module(body=full_module, header=modular_mapper.python_module.header) files[file] = new_module return files def convert_modular_file(modular_file): pattern = re.search(r"modular_(.*)(?=\.py$)", modular_file) output = {} if pattern is not None: model_name = pattern.groups()[0] # Parse the Python file with open(modular_file, "r", encoding="utf-8") as file: code = file.read() module = cst.parse_module(code) wrapper = MetadataWrapper(module) cst_transformers = ModularFileMapper(module, model_name) wrapper.visit(cst_transformers) for file, module in create_modules(cst_transformers).items(): if module != {}: # Get relative path starting from src/transformers/ relative_path = re.search( r"(src/transformers/.*|examples/.*)", os.path.abspath(modular_file).replace("\\", "/") ).group(1) header = AUTO_GENERATED_MESSAGE.format( relative_path=relative_path, short_name=os.path.basename(relative_path) ) ruffed_code = run_ruff(header + module.code, True) formatted_code = run_ruff(ruffed_code, False) output[file] = [formatted_code, ruffed_code] return output else: print(f"modular pattern not found in {modular_file}, exiting") return {} def save_modeling_file(modular_file, converted_file): for file_type in converted_file.keys(): file_name_prefix = file_type.split("*")[0] file_name_suffix = file_type.split("*")[-1] if "*" in file_type else "" new_file_name = modular_file.replace("modular_", f"{file_name_prefix}_").replace( ".py", f"{file_name_suffix}.py" ) non_comment_lines = len( [line for line in converted_file[file_type][0].strip().split("\n") if not line.strip().startswith("#")] ) if len(converted_file[file_type][0].strip()) > 0 and non_comment_lines > 0: with open(new_file_name, "w", encoding="utf-8") as f: f.write(converted_file[file_type][0]) else: non_comment_lines = len( [line for line in converted_file[file_type][0].strip().split("\n") if not line.strip().startswith("#")] ) if len(converted_file[file_type][1].strip()) > 0 and non_comment_lines > 0: logger.warning("The modeling code contains errors, it's written without formatting") with open(new_file_name, "w", encoding="utf-8") as f: f.write(converted_file[file_type][1]) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--files_to_parse", default=["all"], nargs="+", help="A list of `modular_xxxx` files that should be converted to single model file", ) args = parser.parse_args() if args.files_to_parse == ["all"]: args.files_to_parse = glob.glob("src/transformers/models/**/modular_*.py", recursive=True) if args.files_to_parse == ["examples"]: args.files_to_parse = glob.glob("examples/**/modular_*.py", recursive=True) priority_list = find_priority_list(args.files_to_parse) assert len(priority_list) == len(args.files_to_parse), "Some files will not be converted" for file_name in priority_list: print(f"Converting {file_name} to a single model single file format") module_path = file_name.replace("/", ".").replace(".py", "").replace("src.", "") converted_files = convert_modular_file(file_name) converter = save_modeling_file(file_name, converted_files)
transformers/utils/modular_model_converter.py/0
{ "file_path": "transformers/utils/modular_model_converter.py", "repo_id": "transformers", "token_count": 38069 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ This script is used to get the files against which we will run doc testing. This uses `tests_fetcher.get_all_doctest_files` then groups the test files by their directory paths. The files in `docs/source/en/model_doc` or `docs/source/en/tasks` are **NOT** grouped together with other files in the same directory: the objective is to run doctest against them in independent GitHub Actions jobs. Assume we are under `transformers` root directory: To get a map (dictionary) between directory (or file) paths and the corresponding files ```bash python utils/split_doctest_jobs.py ``` or to get a list of lists of directory (or file) paths ```bash python utils/split_doctest_jobs.py --only_return_keys --num_splits 4 ``` (this is used to allow GitHub Actions to generate more than 256 jobs using matrix) """ import argparse from collections import defaultdict from pathlib import Path from tests_fetcher import get_all_doctest_files if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--only_return_keys", action="store_true", help="if to only return the keys (which is a list of list of files' directory or file paths).", ) parser.add_argument( "--num_splits", type=int, default=1, help="the number of splits into which the (flat) list of direcotry/file paths will be split. This has effect only if `only_return_keys` is `True`.", ) args = parser.parse_args() all_doctest_files = get_all_doctest_files() raw_test_collection_map = defaultdict(list) for file in all_doctest_files: file_dir = "/".join(Path(file).parents[0].parts) raw_test_collection_map[file_dir].append(file) refined_test_collection_map = {} for file_dir in raw_test_collection_map.keys(): if file_dir in ["docs/source/en/model_doc", "docs/source/en/tasks"]: for file in raw_test_collection_map[file_dir]: refined_test_collection_map[file] = file else: refined_test_collection_map[file_dir] = " ".join(sorted(raw_test_collection_map[file_dir])) sorted_file_dirs = sorted(refined_test_collection_map.keys()) test_collection_map = {} for file_dir in sorted_file_dirs: test_collection_map[file_dir] = refined_test_collection_map[file_dir] num_jobs = len(test_collection_map) num_jobs_per_splits = num_jobs // args.num_splits file_directory_splits = [] end = 0 for idx in range(args.num_splits): start = end end = start + num_jobs_per_splits + (1 if idx < num_jobs % args.num_splits else 0) file_directory_splits.append(sorted_file_dirs[start:end]) if args.only_return_keys: print(file_directory_splits) else: print(dict(test_collection_map))
transformers/utils/split_doctest_jobs.py/0
{ "file_path": "transformers/utils/split_doctest_jobs.py", "repo_id": "transformers", "token_count": 1208 }
# TRL - Transformer Reinforcement Learning <div style="text-align: center"> <img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/trl_banner_dark.png" alt="TRL Banner"> </div> <hr> <br> <h3 align="center"> <p>A comprehensive library to post-train foundation models</p> </h3> <p align="center"> <a href="https://github.com/huggingface/trl/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/huggingface/trl.svg?color=blue"></a> <a href="https://huggingface.co/docs/trl/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/trl/index.svg?down_color=red&down_message=offline&up_color=blue&up_message=online"></a> <a href="https://github.com/huggingface/trl/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/trl.svg"></a> </p> ## Overview TRL is a cutting-edge library designed for post-training foundation models using advanced techniques like Supervised Fine-Tuning (SFT), Proximal Policy Optimization (PPO), and Direct Preference Optimization (DPO). Built on top of the [🤗 Transformers](https://github.com/huggingface/transformers) ecosystem, TRL supports a variety of model architectures and modalities, and can be scaled-up across various hardware setups. ## Highlights - **Efficient and scalable**: - Leverages [🤗 Accelerate](https://github.com/huggingface/accelerate) to scale from single GPU to multi-node clusters using methods like DDP and DeepSpeed. - Full integration with [`PEFT`](https://github.com/huggingface/peft) enables training on large models with modest hardware via quantization and LoRA/QLoRA. - Integrates [Unsloth](https://github.com/unslothai/unsloth) for accelerating training using optimized kernels. - **Command Line Interface (CLI)**: A simple interface lets you fine-tune and interact with models without needing to write code. - **Trainers**: Various fine-tuning methods are easily accessible via trainers like [`SFTTrainer`](https://huggingface.co/docs/trl/sft_trainer), [`DPOTrainer`](https://huggingface.co/docs/trl/dpo_trainer), [`RewardTrainer`](https://huggingface.co/docs/trl/reward_trainer), [`ORPOTrainer`](https://huggingface.co/docs/trl/orpo_trainer) and more. - **AutoModels**: Use pre-defined model classes like [`AutoModelForCausalLMWithValueHead`](https://huggingface.co/docs/trl/models#trl.AutoModelForCausalLMWithValueHead) to simplify reinforcement learning (RL) with LLMs. 
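For a quick illustration of the last point, the value-head classes load like any other Transformers model (a minimal sketch; the checkpoint below is only an example):

```python
from trl import AutoModelForCausalLMWithValueHead

# Loads a causal LM and attaches a scalar value head on top of it
model = AutoModelForCausalLMWithValueHead.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
```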
## Installation ### Python Package Install the library using `pip`: ```bash pip install trl ``` ### From source If you want to use the latest features before an official release, you can install TRL from source: ```bash pip install git+https://github.com/huggingface/trl.git ``` ### Repository If you want to use the examples you can clone the repository with the following command: ```bash git clone https://github.com/huggingface/trl.git ``` ## Command Line Interface (CLI) You can use the TRL Command Line Interface (CLI) to quickly get started with Supervised Fine-tuning (SFT) and Direct Preference Optimization (DPO), or vibe check your model with the chat CLI: **SFT:** ```bash trl sft --model_name_or_path Qwen/Qwen2.5-0.5B \ --dataset_name trl-lib/Capybara \ --output_dir Qwen2.5-0.5B-SFT ``` **DPO:** ```bash trl dpo --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct \ --dataset_name argilla/Capybara-Preferences \ --output_dir Qwen2.5-0.5B-DPO ``` **Chat:** ```bash trl chat --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct ``` Read more about CLI in the [relevant documentation section](https://huggingface.co/docs/trl/main/en/clis) or use `--help` for more details. ## How to use For more flexibility and control over training, TRL provides dedicated trainer classes to post-train language models or PEFT adapters on a custom dataset. Each trainer in TRL is a light wrapper around the 🤗 Transformers trainer and natively supports distributed training methods like DDP, DeepSpeed ZeRO, and FSDP. ### `SFTTrainer` Here is a basic example of how to use the `SFTTrainer`: ```python from trl import SFTConfig, SFTTrainer from datasets import load_dataset dataset = load_dataset("trl-lib/Capybara", split="train") training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT") trainer = SFTTrainer( args=training_args, model="Qwen/Qwen2.5-0.5B", train_dataset=dataset, ) trainer.train() ``` ### `RewardTrainer` Here is a basic example of how to use the `RewardTrainer`: ```python from trl import RewardConfig, RewardTrainer from datasets import load_dataset from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct") model = AutoModelForSequenceClassification.from_pretrained( "Qwen/Qwen2.5-0.5B-Instruct", num_labels=1 ) model.config.pad_token_id = tokenizer.pad_token_id dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train") training_args = RewardConfig(output_dir="Qwen2.5-0.5B-Reward", per_device_train_batch_size=2) trainer = RewardTrainer( args=training_args, model=model, processing_class=tokenizer, train_dataset=dataset, ) trainer.train() ``` ### `GRPOTrainer` `GRPOTrainer` implements the [Group Relative Policy Optimization (GRPO) algorithm](https://huggingface.co/papers/2402.03300) that is more memory-efficient than PPO and was used to train [Deepseek AI's R1](https://huggingface.co/deepseek-ai/DeepSeek-R1). 
```python from datasets import load_dataset from trl import GRPOConfig, GRPOTrainer dataset = load_dataset("trl-lib/tldr", split="train") # Dummy reward function: rewards completions that are close to 20 characters def reward_len(completions, **kwargs): return [-abs(20 - len(completion)) for completion in completions] training_args = GRPOConfig(output_dir="Qwen2-0.5B-GRPO", logging_steps=10) trainer = GRPOTrainer( model="Qwen/Qwen2-0.5B-Instruct", reward_funcs=reward_len, args=training_args, train_dataset=dataset, ) trainer.train() ``` ### `DPOTrainer` `DPOTrainer` implements the popular [Direct Preference Optimization (DPO) algorithm](https://huggingface.co/papers/2305.18290) that was used to post-train Llama 3 and many other models. Here is a basic example of how to use the `DPOTrainer`: ```python from datasets import load_dataset from transformers import AutoModelForCausalLM, AutoTokenizer from trl import DPOConfig, DPOTrainer model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct") tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct") dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train") training_args = DPOConfig(output_dir="Qwen2.5-0.5B-DPO") trainer = DPOTrainer(model=model, args=training_args, train_dataset=dataset, processing_class=tokenizer) trainer.train() ``` ## Development If you want to contribute to `trl` or customize it to your needs make sure to read the [contribution guide](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md) and make sure you make a dev install: ```bash git clone https://github.com/huggingface/trl.git cd trl/ pip install -e .[dev] ``` ## Citation ```bibtex @misc{vonwerra2022trl, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, title = {TRL: Transformer Reinforcement Learning}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/huggingface/trl}} } ``` ## License This repository's source code is available under the [Apache-2.0 License](LICENSE).
trl/README.md/0
{ "file_path": "trl/README.md", "repo_id": "trl", "token_count": 2668 }
# Denoising Diffusion Policy Optimization

[![](https://img.shields.io/badge/All_models-DDPO-blue)](https://huggingface.co/models?other=ddpo,trl)

## The why

| Before | After DDPO finetuning |
| --- | --- |
| <div style="text-align: center"><img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/pre_squirrel.png"/></div> | <div style="text-align: center"><img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/post_squirrel.png"/></div> |
| <div style="text-align: center"><img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/pre_crab.png"/></div> | <div style="text-align: center"><img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/post_crab.png"/></div> |
| <div style="text-align: center"><img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/pre_starfish.png"/></div> | <div style="text-align: center"><img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/post_starfish.png"/></div> |

## Getting started with Stable Diffusion finetuning with reinforcement learning

The machinery for finetuning Stable Diffusion models with reinforcement learning makes heavy use of HuggingFace's `diffusers` library, so getting started requires some familiarity with two of its core concepts: pipelines and schedulers. Out of the box, `diffusers` provides neither a `Pipeline` nor a `Scheduler` instance that is suitable for finetuning with reinforcement learning, so some adjustments need to be made.

This library provides a pipeline interface that must be implemented in order to work with the `DDPOTrainer`, which is the main machinery for fine-tuning Stable Diffusion with reinforcement learning. **Note: Only the StableDiffusion architecture is supported at this point.** A default implementation of this interface is available out of the box. If the default implementation is sufficient, or you just want to get things moving, refer to the training example alongside this guide.

The point of the interface is to fuse the pipeline and the scheduler into one object, which keeps all the constraints in a single place. The interface was designed in the hope of catering to pipelines and schedulers beyond the examples in this repository at the time of writing. The scheduler step is also a method of this pipeline interface; this may seem redundant given that the raw scheduler is accessible through the interface, but it is the only way to constrain the scheduler step output to an output type suited to the algorithm at hand (DDPO).

For a more detailed look into the interface and the associated default implementation, go [here](https://github.com/lvwerra/trl/tree/main/trl/models/modeling_sd_base.py).

Note that the default implementation has a LoRA implementation path and a non-LoRA implementation path. LoRA is enabled by default and can be turned off by passing the corresponding flag. LoRA-based training is faster, and the LoRA-specific hyperparameters responsible for model convergence aren't as finicky as those of non-LoRA training.

In addition, you are expected to provide a reward function and a prompt function, sketched below.
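As a rough sketch (the function names are illustrative, and the signatures mirror those used in the `ddpo.py` example script), these two callables could look like this:

```python
# Prompt function: returns a (prompt, prompt_metadata) pair
def prompt_fn():
    return "a photo of a squirrel", {}


# Reward function: maps a batch of generated images (plus prompts and metadata)
# to (rewards, reward_metadata). This toy reward simply prefers brighter images;
# a real setup would use e.g. a learned aesthetic scorer.
def reward_fn(images, prompts, prompt_metadata):
    rewards = images.float().mean(dim=(1, 2, 3))
    return rewards, {}
```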
The reward function evaluates the generated images, and the prompt function generates the prompts used to produce those images.

## Getting started with `examples/scripts/ddpo.py`

The `ddpo.py` script is a working example of using the `DDPO` trainer to finetune a Stable Diffusion model. This example explicitly configures a small subset of the overall parameters associated with the config object (`DDPOConfig`).

**Note:** one A100 GPU is recommended to get this running. Anything below an A100 will not be able to run this example script, and even if it does with smaller parameter settings, the results will most likely be poor.

Almost every configuration parameter has a default. Only one command-line flag is required from the user to get things up and running. The user is expected to have a [huggingface user access token](https://huggingface.co/docs/hub/security-tokens) that will be used to upload the model after finetuning to the HuggingFace Hub. Enter the following bash command to get things running:

```bash
python ddpo.py --hf_user_access_token <token>
```

To obtain the documentation of `ddpo.py`, run `python ddpo.py --help`.

The following points should be kept in mind when configuring the trainer (the code checks this for you as well), beyond the use case of the example script:

- The configurable sample batch size (`--ddpo_config.sample_batch_size=6`) should be greater than or equal to the configurable training batch size (`--ddpo_config.train_batch_size=3`)
- The configurable sample batch size (`--ddpo_config.sample_batch_size=6`) must be divisible by the configurable train batch size (`--ddpo_config.train_batch_size=3`)
- The configurable sample batch size (`--ddpo_config.sample_batch_size=6`) must be divisible by both the configurable gradient accumulation steps (`--ddpo_config.train_gradient_accumulation_steps=1`) and the configurable accelerator processes count

## Setting up the image logging hook function

Expect the function to be given a list of lists of the form

```python
[[image, prompt, prompt_metadata, rewards, reward_metadata], ...]
```

where `image`, `prompt`, `prompt_metadata`, `rewards`, and `reward_metadata` are batched. The last list in the list of lists represents the last sample batch, which is usually the one you want to log. While you are free to log however you want, the use of `wandb` or `tensorboard` is recommended.

### Key terms

- `rewards` : The reward/score is a numerical value associated with the generated image and is key to steering the RL process
- `reward_metadata` : The reward metadata is the metadata associated with the reward. Think of this as extra information payload delivered alongside the reward
- `prompt` : The prompt is the text that is used to generate the image
- `prompt_metadata` : The prompt metadata is the metadata associated with the prompt. A situation where this will not be empty is when the reward model consists of a [`FLAVA`](https://huggingface.co/docs/transformers/model_doc/flava) setup where questions and ground truth answers (linked to the generated image) are expected alongside the generated image (see here: https://github.com/kvablack/ddpo-pytorch/blob/main/ddpo_pytorch/rewards.py#L45)
- `image` : The image generated by the Stable Diffusion model

Example code for logging sampled images with `wandb` is given below.
```python
# for logging these images to wandb
import numpy as np
from PIL import Image


def image_outputs_hook(image_data, global_step, accelerate_logger):
    # For the sake of this example, we only care about the last batch
    # hence we extract the last element of the list
    result = {}
    images, prompts, _, rewards, _ = image_data[-1]
    for i, image in enumerate(images):
        pil = Image.fromarray(
            (image.cpu().numpy().transpose(1, 2, 0) * 255).astype(np.uint8)
        )
        pil = pil.resize((256, 256))
        result[f"{prompts[i]:.25} | {rewards[i]:.2f}"] = [pil]
    accelerate_logger.log_images(
        result,
        step=global_step,
    )
```

### Using the finetuned model

Assuming you're done with all the epochs and have pushed your model up to the hub, you can use the finetuned model as follows:

```python
import torch
from trl import DefaultDDPOStableDiffusionPipeline

pipeline = DefaultDDPOStableDiffusionPipeline("metric-space/ddpo-finetuned-sd-model")

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

# memory optimization
pipeline.vae.to(device, torch.float16)
pipeline.text_encoder.to(device, torch.float16)
pipeline.unet.to(device, torch.float16)

prompts = ["squirrel", "crab", "starfish", "whale", "sponge", "plankton"]
results = pipeline(prompts)

for prompt, image in zip(prompts, results.images):
    image.save(f"{prompt}.png")
```

## Credits

This work is heavily influenced by the repo [here](https://github.com/kvablack/ddpo-pytorch) and the associated paper [Training Diffusion Models with Reinforcement Learning by Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, Sergey Levine](https://huggingface.co/papers/2305.13301).

## DDPOTrainer

[[autodoc]] DDPOTrainer

## DDPOConfig

[[autodoc]] DDPOConfig
trl/docs/source/ddpo_trainer.md/0
{ "file_path": "trl/docs/source/ddpo_trainer.md", "repo_id": "trl", "token_count": 2475 }
# Models With the `AutoModelForCausalLMWithValueHead` class TRL supports all decoder model architectures in transformers such as GPT-2, OPT, and GPT-Neo. In addition, with `AutoModelForSeq2SeqLMWithValueHead` you can use encoder-decoder architectures such as T5. TRL also requires reference models which are frozen copies of the model that is trained. With `create_reference_model` you can easily create a frozen copy and also share layers between the two models to save memory. ## PreTrainedModelWrapper [[autodoc]] PreTrainedModelWrapper ## AutoModelForCausalLMWithValueHead [[autodoc]] AutoModelForCausalLMWithValueHead - __init__ - forward - generate - _init_weights ## AutoModelForSeq2SeqLMWithValueHead [[autodoc]] AutoModelForSeq2SeqLMWithValueHead - __init__ - forward - generate - _init_weights ## create_reference_model [[autodoc]] create_reference_model
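As a minimal usage sketch of the above (the checkpoint name is only an illustration):

```python
from trl import AutoModelForCausalLMWithValueHead, create_reference_model

model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")

# Frozen copy of the model, used as the reference model
ref_model = create_reference_model(model)

# Optionally share the first layers between the two models to save memory
ref_model_shared = create_reference_model(model, num_shared_layers=6)
```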
trl/docs/source/models.md/0
{ "file_path": "trl/docs/source/models.md", "repo_id": "trl", "token_count": 283 }
# Text Environments

Text environments provide a learning ground for language agents. They allow a language model to use tools to accomplish tasks such as using a Python interpreter to answer math questions or using a search index for trivia questions. Having access to tools allows language models to solve tasks that would be very hard for the model itself but are trivial for the appropriate tool. A good example is arithmetic with large numbers, which becomes a simple copy-paste task once you have access to a calculator.

<div style="text-align: center">
<img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/textenv.png">
</div>

Let's dive into how text environments work and start with tools!

## Tools

One of the core building blocks of text environments is the set of tools that the model can use to solve tasks. In general, a tool can be any Python function that takes a string as input and returns a string. The `TextEnvironment` offers two options for tools: either go with predefined tools from `transformers.Tool` or define your own function or class with a `__call__` method. Let's have a look at both!

### `transformers.Tool`

Text environments fully support tools of the class `transformers.Tool`. The advantage of building tools in that framework is that they can easily be shared:

```Python
from transformers import load_tool

# simple calculator tool that runs +-/* operations
calc_tool = load_tool("ybelkada/simple-calculator")

# python interpreter that executes program and returns outputs
py_tool = load_tool("lvwerra/python-interpreter")

# wikipedia search index that returns best search match
wiki_tool = load_tool("vwxyzjn/pyserini-wikipedia-kilt-doc")
```

These tools are either loaded from the hub or from a local folder. Using a tool is as simple as calling it with a text query:

```Python
calc_tool("1/2")
>>> "0.5"
```

Note that both input and return values are strings to enable easy usage with a language model.

### Custom Tools

The following is an example of a tool that adds two integers:

```Python
def add(text):
    int_1, int_2 = text.split("+")
    result = int(int_1) + int(int_2)
    return str(result)

print(add("1+1"))
>>> "2"
```

We looked at basic examples such as a calculator, but the principle holds for more complex tools as well, such as a web search tool where you input the query and get the search results in return. Now let's look at how the model can use the tools with the call syntax.

### Call syntax

In order to have a unified way for the model to call a tool, we created a simple syntax that looks as follows:

```python
"<request><TOOL_NAME>QUERY<call>TOOL_RESPONSE<response>"
```

There are a few special tokens involved, so let's decompose it: First, the model can signal that it wants to use a tool by emitting the `<request>` token. After that, we want to know the name of the tool to call, which is done by enclosing the tool name in `<>` brackets. Once we know which tool to call, the tool query follows in free-text form. The `<call>` token signifies the end of the query and stops the model generation. At this point the model output is parsed and the query is sent to the tool. The environment appends the tool response to the string, followed by the `<response>` token to mark the end of the tool output.
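As an illustration of that parsing step, a small helper along these lines (hypothetical, not the library's internal implementation) can extract the tool name and query from a request string:

```python
import re

# Splits a request string of the form "<request><TOOL_NAME>QUERY<call>"
# into the tool name and the query sent to that tool.
def parse_tool_call(text):
    match = re.search(r"<request><(?P<tool>[^>]+)>(?P<query>.*?)<call>", text, re.DOTALL)
    if match is None:
        return None
    return match.group("tool"), match.group("query")

print(parse_tool_call("<request><Calculator>1/2<call>"))
# ('Calculator', '1/2')
```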
Let's look at the concrete example of the calculator and assume its name is `Calculator` (more on how the name of a tool is inferred later):

```python
"<request><Calculator>1/2<call>0.5<response>"
```

Finally, the episode ends and generation stops when the model generates `<submit>`, which marks the interaction as completed.

Now let's have a look at how we can create a new text environment!

## Create a `TextEnvironment`

```python
prompt = """\
What is 13-3?
<request><SimpleCalculatorTool>13-3<call>10.0<response>
Result=10<submit>
"""

def reward_fn(result, answer):
    """Simplified reward function returning 1 if result matches answer and 0 otherwise."""
    result_parsed = result.split("=")[1].split("<")[0]
    return int(result_parsed == answer)

text_env = TextEnvironment(
    model=model,
    tokenizer=tokenizer,
    tools={"SimpleCalculatorTool": load_tool("ybelkada/simple-calculator")},
    reward_fn=reward_fn,
    prompt=prompt,
    max_turns=1,
    max_tool_response=100,
    generation_kwargs={"do_sample": True},
)
```

Let's decompose the settings:

| Argument | Description |
|:-------------------|:----------------|
| `model` | Language model to interact with the environment and generate requests. |
| `tokenizer` | Tokenizer of the language model handling tokenization of strings. |
| `tools` | A `list` or `dict` of tools. If a `list`, the name of each tool is inferred from its class name; if a `dict`, the keys are used as tool names.|
| `reward_fn` | A function that takes a string as input and returns a reward. It can have extra arguments that are passed to `.run()`, such as the ground truth.|
| `prompt` | Prompt to prepend to every task. Usually a few examples to demonstrate to the model how to use the tools in a few-shot fashion. |
| `max_turns` | Maximum number of interactions between model and tools before the episode ends.|
| `max_tool_response`| The tool response is truncated to this length to avoid running out of model context.|
| `max_length` | The maximum number of tokens to allow in an episode. |
| `generation_kwargs`| Generation settings used by the language model. |

You can customize the environment to your needs and add custom tools and settings. Let's see how you can use the environment to have the model interact with the available tools!

## Run an Episode

To run a set of queries through the text environment, simply use the `run` method:

```python
queries = ["What is 1/2?"]
answers = ["0.5"]

queries, responses, masks, rewards, histories = text_env.run(queries, answers=answers)
```

This will execute the model/tool feedback loop for each query until either no tool is called anymore, the maximum number of turns is reached, or the maximum number of tokens in an episode is exceeded. The extra `kwargs` (e.g. `answers=answers` above) passed to `run` will be passed on to the reward function.

There are five objects that are returned by `run`:

- `queries`: a list of the tokenized queries
- `responses`: all tokens that have been generated within the environment, including model and tool tokens
- `masks`: a mask that indicates which tokens were generated by the model and which were generated by the tool
- `rewards`: a list of rewards, one for each query/response
- `histories`: a list of `TextHistory` objects, which conveniently bundle all of the above together with the text equivalents

The masks are crucial for training: we don't want to optimize tokens that the model did not generate, i.e. the tokens produced by the tools.

Next, we'll train a PPO step with the generated responses!
### Train

Training on episodes from the `TextEnvironment` is straightforward and simply requires forwarding all the returned variables except the `TextHistory` objects to the `step` method:

```python
train_stats = ppo_trainer.step(queries, responses, rewards, masks)
```

## `TextHistory`

The `TextHistory` object stores the interactions between the model and the text environment. It stores the tokens and text generated in each turn, their source (model or system), as well as the rewards. Let's go through the class attributes and methods.

### Attributes

The following table summarises the available attributes of the `TextHistory` class:

| Attribute | Description |
|:-------------------|:----------------|
| `text` | The full string of the text generated in the text environment with both model and system generated text. |
| `text_spans` | A list of tuples with the spans for each model or system generated text segment. |
| `system_spans` | A list of boolean values indicating if the segment is model or system generated. |
| `tokens` | All tokens generated in the text environment with both model and system generated tokens. |
| `token_spans` | Similar to `text_spans`, the `token_spans` indicate the boundaries of model and system generated tokens. |
| `token_masks` | The token masks can be used to ignore system generated tokens by masking them. |
| `completed` | Indicates if the interaction with the environment has completed. |
| `truncated` | Indicates if the interaction with the environment has completed because the max length was reached. |

With these attributes you can reconstruct every interaction of the model with the `TextEnvironment`. The `TextHistory` also lets you visualize the text history. Let's have a look!

### Visualization

When the model interacts inside the `TextEnvironment`, it can be useful to visualize and separate which parts of the text outputs were generated by the model and which parts come from the system and tools. For that purpose there are two methods: [`TextHistory.show_text`] and [`TextHistory.show_tokens`]. They print the text and tokens respectively and highlight the various segments using the [`rich` library](https://github.com/Textualize/rich) (make sure to install it before using these methods).

You can see that the prompt is highlighted in gray, whereas system segments such as the query and tool responses are highlighted in green. All segments generated by the model are highlighted in blue, and in addition to the pure text output the reward is displayed as additional text in plum.

Here is an example of `show_text`:

<div style="text-align: center">
<img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/textenv_show_text.png" width=600>
</div>

Sometimes there can be tricky tokenization-related issues that are hidden when showing the decoded text. Thus `TextHistory` also offers an option to display the same highlighting on the tokens directly with `show_tokens`:

<div style="text-align: center">
<img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/textenv_show_tokens.png" width=800>
</div>

Note that you can turn on the colour legend by passing `show_legend=True`.

## API Documentation

[[autodoc]] TextEnvironment

[[autodoc]] TextHistory
trl/docs/source/text_environments.md/0
{ "file_path": "trl/docs/source/text_environments.md", "repo_id": "trl", "token_count": 2816 }
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import re from dataclasses import dataclass, field from itertools import chain from typing import Optional from datasets import load_dataset from huggingface_hub import ModelCard from transformers import HfArgumentParser @dataclass class ScriptArguments: r""" Arguments for the script. Args: push_to_hub (`bool`, *optional*, defaults to `False`): Whether to push the dataset to the Hugging Face Hub. repo_id (`str`, *optional*, defaults to `"trl-lib/math_shepherd"`): Hugging Face repository ID to push the dataset to. dataset_num_proc (`int` or `None`, *optional*, defaults to `None`): Number of workers to use for dataset processing. """ push_to_hub: bool = field( default=False, metadata={"help": "Whether to push the dataset to the Hugging Face Hub."}, ) repo_id: str = field( default="trl-lib/math_shepherd", metadata={"help": "Hugging Face repository ID to push the dataset to."}, ) dataset_num_proc: Optional[int] = field( default=None, metadata={"help": "Number of workers to use for dataset processing."}, ) def process_example(example): # Replace "ки" with "ⶻ" so that the size of the "input" matches the size of the "label" inputs = example["input"].replace("ки", "ⶻ") # Find the indices of the "ⶻ" characters (that should match with the indexes of the "+" or "-" in the label) indexes = [m.start() for m in re.finditer("ⶻ", inputs)] # Sanity that all indexes are either "+" or "-" assert all(example["label"][idx] in ["+", "-"] for idx in indexes) # Get the labels labels = [example["label"][idx] == "+" for idx in indexes] # Split the inputs into steps (caution, the first step is missing here, it is the prompt) steps = [inputs[i:j] for i, j in zip(chain([0], indexes), chain(indexes, [None]))] # Remove the last step (single ⶻ) steps = steps[:-1] # Get the prompt (first part) and completions (rest) prompt = steps[0] completions = steps[1:] # Remove the heading "ⶻ" and the final whitespace from the completions assert all(completion.startswith("ⶻ") for completion in completions) completions = [completion[1:].strip() for completion in completions] # At this point, we need to retrieve the first step from the prompt. # First, we handle particular cases (annotation error) where we have a first label before the end of the prompt. if prompt.startswith( ( "Mr. Rocky", "Parker", "What is the smallest positive", " The Myth", "Let $\\mathbf{a}$", "Find the arithmetic", "Determine an ordered pair", "Determine the ordered pair", "At the Quill and Scroll stationery", "Round to the nearest", r"Calculate $\sqrt{10p}", r"Simplify $\sqrt{28x}", ) ): # Some spotted datasets errors where there is an annotation in the prompt: we remove it labels = labels[1:] # Then we handle the general case: we get the first step from the prompt by looking for "Step 1:" or "step 1:" or # (less common) "?". 
elif "Step 1:" in prompt: prompt, first_step = prompt.split("Step 1:") first_step = "Step 1:" + first_step completions = [first_step.strip()] + completions elif "step 1:" in prompt: prompt, first_step = prompt.split("step 1:") first_step = "step 1:" + first_step completions = [first_step.strip()] + completions elif "?" in prompt: prompt, first_step = prompt.split("?") prompt = prompt + "?" completions = [first_step.strip()] + completions else: raise ValueError(f"Prompt can't be processed: {prompt}") # Strip the prompt prompt = prompt.strip() # Sanity check that the length of the completions is the same as the length of the labels assert len(completions) == len(labels) return {"prompt": prompt, "completions": completions, "labels": labels} model_card = ModelCard(""" --- tags: [trl] --- # Math-Shepherd Dataset ## Summary The Math-Shepherd dataset is a processed version of [Math-Shepherd dataset](peiyi9979/Math-Shepherd), designed to train models using the [TRL library](https://github.com/huggingface/trl) for stepwise supervision tasks. It provides step-by-step solutions to mathematical problems, enabling models to learn and verify each step of a solution, thereby enhancing their reasoning capabilities. ## Data Structure - **Format**: [Standard](https://huggingface.co/docs/trl/main/dataset_formats#standard) - **Type**: [Stepwise supervision](https://huggingface.co/docs/trl/main/dataset_formats#stepwise-supervision) Columns: - `"prompt"`: The problem statement. - `"completions"`: A list of reasoning steps generated to solve the problem. - `"labels"`: A list of booleans or floats indicating the correctness of each corresponding reasoning step. This structure allows models to learn the correctness of each step in a solution, facilitating improved reasoning and problem-solving abilities. ## Generation script The script used to generate this dataset can be found [here](https://github.com/huggingface/trl/blob/main/examples/datasets/math_shepherd.py). """) if __name__ == "__main__": parser = HfArgumentParser(ScriptArguments) script_args = parser.parse_args_into_dataclasses()[0] dataset = load_dataset("peiyi9979/Math-Shepherd", split="train") dataset = dataset.map( process_example, remove_columns=["input", "label", "task"], num_proc=script_args.dataset_num_proc, ) dataset = dataset.train_test_split(test_size=0.05, seed=42) if script_args.push_to_hub: dataset.push_to_hub(script_args.repo_id) model_card.push_to_hub(script_args.repo_id, repo_type="dataset")
trl/examples/datasets/math_shepherd.py/0
{ "file_path": "trl/examples/datasets/math_shepherd.py", "repo_id": "trl", "token_count": 2324 }
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import os from accelerate import Accelerator from datasets import load_dataset from peft import LoraConfig from tqdm import tqdm from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, logging, set_seed from trl import SFTTrainer from trl.trainer import ConstantLengthDataset """ Fine-Tune Llama-7b on SE paired dataset """ def get_args(): parser = argparse.ArgumentParser() parser.add_argument("--model_path", type=str, default="") parser.add_argument("--dataset_name", type=str, default="lvwerra/stack-exchange-paired") parser.add_argument("--subset", type=str, default="data/finetune") parser.add_argument("--split", type=str, default="train") parser.add_argument("--size_valid_set", type=int, default=4000) parser.add_argument("--streaming", action="store_true") parser.add_argument("--shuffle_buffer", type=int, default=5000) parser.add_argument("--seq_length", type=int, default=1024) parser.add_argument("--max_steps", type=int, default=10000) parser.add_argument("--batch_size", type=int, default=4) parser.add_argument("--gradient_accumulation_steps", type=int, default=1) parser.add_argument("--eos_token_id", type=int, default=49152) parser.add_argument("--learning_rate", type=float, default=1e-4) parser.add_argument("--lr_scheduler_type", type=str, default="cosine") parser.add_argument("--num_warmup_steps", type=int, default=100) parser.add_argument("--weight_decay", type=float, default=0.05) parser.add_argument("--local_rank", type=int, default=0) parser.add_argument("--fp16", action="store_true", default=False) parser.add_argument("--bf16", action="store_true", default=False) parser.add_argument("--gradient_checkpointing", action="store_true", default=False) parser.add_argument("--seed", type=int, default=0) parser.add_argument("--num_workers", type=int, default=None) parser.add_argument("--output_dir", type=str, default="./checkpoints") parser.add_argument("--log_freq", default=1, type=int) parser.add_argument("--eval_freq", default=1000, type=int) parser.add_argument("--save_freq", default=1000, type=int) return parser.parse_args() def chars_token_ratio(dataset, tokenizer, nb_examples=400): """ Estimate the average number of characters per token in the dataset. """ total_characters, total_tokens = 0, 0 for _, example in tqdm(zip(range(nb_examples), iter(dataset)), total=nb_examples): text = prepare_sample_text(example) total_characters += len(text) if tokenizer.is_fast: total_tokens += len(tokenizer(text).tokens()) else: total_tokens += len(tokenizer.tokenize(text)) return total_characters / total_tokens def print_trainable_parameters(model): """ Prints the number of trainable parameters in the model. 
""" trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print( f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" ) def prepare_sample_text(example): """Prepare the text from a sample of the dataset.""" text = f"Question: {example['question']}\n\nAnswer: {example['response_j']}" return text def create_datasets(tokenizer, args): dataset = load_dataset( args.dataset_name, data_dir=args.subset, split=args.split, use_auth_token=True, num_proc=args.num_workers if not args.streaming else None, streaming=args.streaming, ) if args.streaming: print("Loading the dataset in streaming mode") valid_data = dataset.take(args.size_valid_set) train_data = dataset.skip(args.size_valid_set) train_data = train_data.shuffle(buffer_size=args.shuffle_buffer, seed=args.seed) else: dataset = dataset.train_test_split(test_size=0.005, seed=args.seed) train_data = dataset["train"] valid_data = dataset["test"] print(f"Size of the train set: {len(train_data)}. Size of the validation set: {len(valid_data)}") chars_per_token = chars_token_ratio(train_data, tokenizer) print(f"The character to token ratio of the dataset is: {chars_per_token:.2f}") train_dataset = ConstantLengthDataset( tokenizer, train_data, formatting_func=prepare_sample_text, infinite=True, seq_length=args.seq_length, chars_per_token=chars_per_token, ) valid_dataset = ConstantLengthDataset( tokenizer, valid_data, formatting_func=prepare_sample_text, infinite=False, seq_length=args.seq_length, chars_per_token=chars_per_token, ) return train_dataset, valid_dataset def run_training(args, train_data, val_data): print("Loading the model") lora_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) train_data.start_iteration = 0 print("Starting main loop") training_args = TrainingArguments( output_dir=args.output_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=args.max_steps, eval_steps=args.eval_freq, save_steps=args.save_freq, logging_steps=args.log_freq, per_device_train_batch_size=args.batch_size, per_device_eval_batch_size=args.batch_size, learning_rate=args.learning_rate, lr_scheduler_type=args.lr_scheduler_type, warmup_steps=args.num_warmup_steps, gradient_accumulation_steps=args.gradient_accumulation_steps, gradient_checkpointing=args.gradient_checkpointing, fp16=args.fp16, bf16=args.bf16, weight_decay=args.weight_decay, run_name="llama-7b-finetuned", report_to="wandb", ddp_find_unused_parameters=False, ) model = AutoModelForCausalLM.from_pretrained( args.model_path, load_in_8bit=True, device_map={"": Accelerator().process_index} ) trainer = SFTTrainer( model=model, args=training_args, train_dataset=train_data, eval_dataset=val_data, peft_config=lora_config, packing=True, ) print_trainable_parameters(trainer.model) print("Training...") trainer.train() print("Saving last checkpoint of the model") trainer.model.save_pretrained(os.path.join(args.output_dir, "final_checkpoint/")) def main(args): tokenizer = AutoTokenizer.from_pretrained(args.model_path) train_dataset, eval_dataset = create_datasets(tokenizer, args) run_training(args, train_dataset, eval_dataset) if __name__ == "__main__": args = get_args() assert args.model_path != "", "Please provide the llama model path" set_seed(args.seed) os.makedirs(args.output_dir, exist_ok=True) logging.set_verbosity_error() main(args)
trl/examples/research_projects/stack_llama/scripts/supervised_finetuning.py/0
{ "file_path": "trl/examples/research_projects/stack_llama/scripts/supervised_finetuning.py", "repo_id": "trl", "token_count": 3065 }
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. MODELS_TO_TEST = [ "trl-internal-testing/tiny-LlamaForCausalLM-3.2", "trl-internal-testing/tiny-MistralForCausalLM-0.2", ] # We could have also not declared these variables but let's be verbose PACKING_OPTIONS = [True, False] GRADIENT_CHECKPOINTING_KWARGS = [None, {"use_reentrant": False}, {"use_reentrant": True}] DEVICE_MAP_OPTIONS = [{"": 0}, "auto"] DPO_LOSS_TYPES = ["sigmoid", "ipo"] DPO_PRECOMPUTE_LOGITS = [True, False]
trl/tests/slow/testing_constants.py/0
{ "file_path": "trl/tests/slow/testing_constants.py", "repo_id": "trl", "token_count": 341 }
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import tempfile import unittest import torch import torch.nn.functional as F from datasets import load_dataset from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig from trl import GKDConfig, GKDTrainer from trl.trainer.utils import SIMPLE_CHAT_TEMPLATE class TestGKDTrainer(unittest.TestCase): @classmethod def setUpClass(cls): model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5" cls.tokenizer = AutoTokenizer.from_pretrained(model_id) cls.tokenizer.pad_token = cls.tokenizer.eos_token cls.model = AutoModelForCausalLM.from_pretrained(model_id) cls.generation_config = GenerationConfig( max_new_tokens=20, num_return_sequences=1, pad_token_id=cls.tokenizer.pad_token_id, eos_token_id=cls.tokenizer.eos_token_id, ) def test_generate_on_policy_outputs_deterministic(self): prompts = ["Hello, how are you?", "What's the weather like today?"] tokenized_prompts = self.tokenizer(prompts, return_tensors="pt", padding=True) inputs = { "prompts": tokenized_prompts["input_ids"], "prompt_attention_mask": tokenized_prompts["attention_mask"], } # Set temperature to 0 for deterministic output deterministic_generation_config = GenerationConfig( max_new_tokens=30, num_return_sequences=1, pad_token_id=self.tokenizer.pad_token_id, eos_token_id=self.tokenizer.eos_token_id, temperature=0.0, ) outputs = GKDTrainer.generate_on_policy_outputs( self.model, inputs, deterministic_generation_config, self.tokenizer.pad_token_id ) new_input_ids, new_attention_mask, new_labels = outputs # Decode the generated outputs generated_texts = self.tokenizer.batch_decode(new_input_ids, skip_special_tokens=True) # Check if the generated texts start with the original prompts for prompt, generated_text in zip(prompts, generated_texts): self.assertTrue( generated_text.startswith(prompt), f"Generated text '{generated_text}' does not start with prompt '{prompt}'", ) # Run the generation twice and check if the outputs are identical outputs2 = GKDTrainer.generate_on_policy_outputs( self.model, inputs, deterministic_generation_config, self.tokenizer.pad_token_id ) new_input_ids2, new_attention_mask2, new_labels2 = outputs2 # Check if the two generations are identical self.assertTrue(torch.all(new_input_ids.eq(new_input_ids2)), "Deterministic generations are not identical") self.assertTrue( torch.all(new_attention_mask.eq(new_attention_mask2)), "Attention masks for deterministic generations are not identical", ) self.assertTrue( torch.all(new_labels.eq(new_labels2)), "Labels for deterministic generations are not identical", ) def test_generate_on_policy_outputs(self): prompts = ["Hello, how are you?", "What's the weather like today?"] tokenized_prompts = self.tokenizer(prompts, return_tensors="pt", padding=True) inputs = { "prompts": tokenized_prompts["input_ids"], "attention_mask": tokenized_prompts["attention_mask"], } outputs = GKDTrainer.generate_on_policy_outputs( self.model, inputs, self.generation_config, 
self.tokenizer.pad_token_id ) # Check that outputs is a tuple of three tensors self.assertIsInstance(outputs, tuple) self.assertEqual(len(outputs), 3) new_input_ids, new_attention_mask, new_labels = outputs # Check shapes batch_size = len(prompts) self.assertEqual(new_input_ids.shape[0], batch_size) self.assertEqual(new_attention_mask.shape[0], batch_size) self.assertEqual(new_labels.shape[0], batch_size) # Check types self.assertIsInstance(new_input_ids, torch.Tensor) self.assertIsInstance(new_attention_mask, torch.Tensor) self.assertIsInstance(new_labels, torch.Tensor) # Check that new_input_ids and new_attention_mask have the same shape self.assertEqual(new_input_ids.shape, new_attention_mask.shape) self.assertEqual(new_labels.shape, new_attention_mask.shape) class TestGeneralizedJSDLoss(unittest.TestCase): def setUp(self): self.batch_size = 2 self.seq_length = 3 self.vocab_size = 5 self.student_logits = torch.randn(self.batch_size, self.seq_length, self.vocab_size) self.teacher_logits = torch.randn(self.batch_size, self.seq_length, self.vocab_size) def test_uniform_distribution(self): logits = torch.ones(1, 1, self.vocab_size) loss = GKDTrainer.generalized_jsd_loss(logits, logits) self.assertAlmostEqual(loss.item(), 0, places=5) def test_generalized_jsd_loss_edge_cases(self): # Setup student_logits = torch.log(torch.tensor([[0.1, 0.9]])).unsqueeze(0) teacher_logits = torch.log(torch.tensor([[0.9, 0.1]])).unsqueeze(0) # Case 1: beta = 1 (should be equivalent to KL(student || teacher)) loss_beta_1 = GKDTrainer.generalized_jsd_loss(student_logits, teacher_logits, beta=1) expected_loss_beta_1 = F.kl_div( F.log_softmax(student_logits, dim=-1), F.softmax(teacher_logits, dim=-1), reduction="batchmean" ) self.assertAlmostEqual(loss_beta_1.item(), expected_loss_beta_1.item(), places=5) # Case 2: beta = 0 (should be equivalent to KL(teacher || student)) loss_beta_0 = GKDTrainer.generalized_jsd_loss(student_logits, teacher_logits, beta=0) expected_loss_beta_0 = F.kl_div( F.log_softmax(teacher_logits, dim=-1), F.softmax(student_logits, dim=-1), reduction="batchmean" ) self.assertAlmostEqual(loss_beta_0.item(), expected_loss_beta_0.item(), places=5) def test_output_shape(self): loss = GKDTrainer.generalized_jsd_loss(self.student_logits, self.teacher_logits) self.assertTrue(torch.is_tensor(loss)) self.assertEqual(loss.shape, torch.Size([])) def test_beta_values(self): loss_beta_0 = GKDTrainer.generalized_jsd_loss(self.student_logits, self.teacher_logits, beta=0) loss_beta_1 = GKDTrainer.generalized_jsd_loss(self.student_logits, self.teacher_logits, beta=1) self.assertNotEqual(loss_beta_0, loss_beta_1) def test_temperature_scaling(self): loss_temp_1 = GKDTrainer.generalized_jsd_loss(self.student_logits, self.teacher_logits, temperature=1) loss_temp_2 = GKDTrainer.generalized_jsd_loss(self.student_logits, self.teacher_logits, temperature=2) self.assertNotEqual(loss_temp_1, loss_temp_2) def test_reduction_methods(self): loss_batchmean = GKDTrainer.generalized_jsd_loss( self.student_logits, self.teacher_logits, reduction="batchmean" ) loss_sum = GKDTrainer.generalized_jsd_loss(self.student_logits, self.teacher_logits, reduction="sum") loss_mean = GKDTrainer.generalized_jsd_loss(self.student_logits, self.teacher_logits, reduction="mean") loss_none = GKDTrainer.generalized_jsd_loss(self.student_logits, self.teacher_logits, reduction="none") self.assertEqual(loss_batchmean.shape, torch.Size([])) self.assertEqual(loss_sum.shape, torch.Size([])) self.assertEqual(loss_mean.shape, torch.Size([])) 
self.assertEqual(loss_none.shape, self.student_logits.shape) def test_symmetry(self): student_teacher = GKDTrainer.generalized_jsd_loss(self.student_logits, self.teacher_logits, beta=0.1) teacher_student = GKDTrainer.generalized_jsd_loss(self.teacher_logits, self.student_logits, beta=0.1) self.assertNotEqual(student_teacher, teacher_student) student_teacher = GKDTrainer.generalized_jsd_loss(self.student_logits, self.teacher_logits, beta=0.5) teacher_student = GKDTrainer.generalized_jsd_loss(self.teacher_logits, self.student_logits, beta=0.5) self.assertEqual(student_teacher, teacher_student) def test_zero_loss_for_identical_inputs(self): identical_logits = torch.randn(self.batch_size, self.seq_length, self.vocab_size) loss = GKDTrainer.generalized_jsd_loss(identical_logits, identical_logits) self.assertAlmostEqual(loss.item(), 0, places=6) class GKDTrainerTester(unittest.TestCase): def setUp(self): self.model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5" self.model = AutoModelForCausalLM.from_pretrained(self.model_id) self.teacher_model = AutoModelForCausalLM.from_pretrained(self.model_id) self.tokenizer = AutoTokenizer.from_pretrained(self.model_id) self.tokenizer.pad_token = self.tokenizer.eos_token # Ensure the tokenizer has a chat template if not hasattr(self.tokenizer, "chat_template") or self.tokenizer.chat_template is None: self.tokenizer.chat_template = SIMPLE_CHAT_TEMPLATE def test_gkd_trainer(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = GKDConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, per_device_eval_batch_size=2, report_to="none", ) dummy_dataset = load_dataset("trl-internal-testing/zen", "conversational_language_modeling") trainer = GKDTrainer( model=self.model_id, teacher_model=self.model_id, args=training_args, train_dataset=dummy_dataset["train"], eval_dataset=dummy_dataset["test"], processing_class=self.tokenizer, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIsNotNone(trainer.state.log_history[0]["eval_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) def test_generation_config_init(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = GKDConfig(output_dir=tmp_dir) dummy_dataset = load_dataset("trl-internal-testing/zen", "conversational_language_modeling") trainer = GKDTrainer( model=self.model_id, teacher_model=self.model_id, args=training_args, train_dataset=dummy_dataset["train"], eval_dataset=dummy_dataset["test"], processing_class=self.tokenizer, ) self.assertEqual(trainer.generation_config.pad_token_id, self.tokenizer.eos_token_id) self.assertEqual(trainer.generation_config.eos_token_id, self.model.generation_config.eos_token_id) self.assertEqual(trainer.generation_config.max_new_tokens, training_args.max_new_tokens) self.assertEqual(trainer.generation_config.temperature, training_args.temperature) self.assertEqual(trainer.generation_config.top_k, 0)
trl/tests/test_gkd_trainer.py/0
{ "file_path": "trl/tests/test_gkd_trainer.py", "repo_id": "trl", "token_count": 5292 }
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import os import tempfile import unittest import numpy as np import torch from datasets import Dataset, Image, Sequence, load_dataset from transformers import ( AutoModelForCausalLM, AutoProcessor, AutoTokenizer, LlavaForConditionalGeneration, TrainingArguments, is_vision_available, ) from transformers.testing_utils import require_peft, require_vision from transformers.utils import is_peft_available from trl import SFTConfig, SFTTrainer from trl.trainer import ConstantLengthDataset, DataCollatorForCompletionOnlyLM def formatting_prompts_func(example): text = f"### Question: {example['question']}\n ### Answer: {example['answer']}" return text def formatting_func_for_pretokenized(example): return example["input_ids"] def formatting_prompts_func_batched(example): output_text = [] for i, question in enumerate(example["question"]): text = f"### Question: {question}\n ### Answer: {example['answer'][i]}" output_text.append(text) return output_text if is_peft_available(): from peft import LoraConfig, PeftModel, get_peft_model if is_vision_available(): from PIL import Image as PILImage class SFTTrainerTester(unittest.TestCase): r""" """ def setUp(self): self.model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5" self.model = AutoModelForCausalLM.from_pretrained(self.model_id) self.tokenizer = AutoTokenizer.from_pretrained(self.model_id) self.tokenizer.pad_token = self.tokenizer.eos_token self.dummy_dataset = Dataset.from_dict( { "question": [ "Does llamas know how to code?", "Does llamas know how to fly?", "Does llamas know how to talk?", "Does llamas know how to code?", "Does llamas know how to fly?", "Does llamas know how to talk?", "Does llamas know how to swim?", ], "answer": [ "Yes, llamas are very good at coding.", "No, llamas can't fly.", "Yes, llamas are very good at talking.", "Yes, llamas are very good at coding.", "No, llamas can't fly.", "Yes, llamas are very good at talking.", "No, llamas can't swim.", ], "text": [ "### Question: Does llamas know how to code?\n ### Answer: Yes, llamas are very good at coding.", "### Question: Does llamas know how to fly?\n ### Answer: No, llamas can't fly.", "### Question: Does llamas know how to talk?\n ### Answer: Yes, llamas are very good at talking.", "### Question: Does llamas know how to code?\n ### Answer: Yes, llamas are very good at coding.", "### Question: Does llamas know how to fly?\n ### Answer: No, llamas can't fly.", "### Question: Does llamas know how to talk?\n ### Answer: Yes, llamas are very good at talking.", "### Question: Does llamas know how to swim?\n ### Answer: No, llamas can't swim.", ], } ) self.dummy_tokenized_dataset = Dataset.from_dict( { "input_ids": [ self.tokenizer.encode( "TRL is a library to post-train LLMs and diffusion models with methods such as Supervised Fine-tuning (SFT), Proximal Policy Optimization (PPO), and Direct Preference Optimization (DPO)." 
) ] * 10 } ) self.conversational_lm_dataset = load_dataset("trl-internal-testing/zen", "conversational_language_modeling") self.standard_prompt_completion_dataset = load_dataset( "trl-internal-testing/zen", "standard_prompt_completion" ) if is_vision_available(): self.dummy_vsft_instruction_dataset = Dataset.from_dict( { "messages": [ [ { "role": "user", "content": [{"type": "text", "text": "What is in this image?"}, {"type": "image"}], }, { "role": "assistant", "content": [{"type": "text", "text": "It is random noise."}], }, { "role": "user", "content": [{"type": "text", "text": "Oh ye, you are right, what is 1+1"}], }, { "role": "assistant", "content": [{"type": "text", "text": "2"}], }, ], [ { "role": "user", "content": [{"type": "text", "text": "What is in this image?"}, {"type": "image"}], }, { "role": "assistant", "content": [{"type": "text", "text": "It is random noise."}], }, ], ], "images": [ [PILImage.fromarray((np.random.rand(40, 50, 3) * 255).astype("uint8")).convert("RGBA")], [PILImage.fromarray((np.random.rand(50, 60, 3) * 255).astype("uint8")).convert("RGBA")], ], } ) self.dummy_vsft_instruction_dataset.cast_column("images", Sequence(Image())) self.dummy_vsft_instruction_dataset = self.dummy_vsft_instruction_dataset.cast_column( "images", Sequence(Image()) ) self.train_dataset = ConstantLengthDataset( self.tokenizer, self.dummy_dataset, formatting_func=formatting_prompts_func, seq_length=16, num_of_sequences=16, ) self.eval_dataset = ConstantLengthDataset( self.tokenizer, self.dummy_dataset, formatting_func=formatting_prompts_func, seq_length=16, num_of_sequences=16, ) self.train_dataset_from_pretokenized = ConstantLengthDataset( self.tokenizer, self.dummy_tokenized_dataset, seq_length=16, num_of_sequences=16, formatting_func=formatting_func_for_pretokenized, ) self.eval_dataset_from_pretokenized = ConstantLengthDataset( self.tokenizer, self.dummy_tokenized_dataset, seq_length=16, num_of_sequences=16, formatting_func=formatting_func_for_pretokenized, ) def test_constant_length_dataset_with_pretokenized_data(self): constant_len_dataset = ConstantLengthDataset( self.tokenizer, self.dummy_tokenized_dataset, formatting_func=formatting_func_for_pretokenized, ) assert len(constant_len_dataset) == len(self.dummy_tokenized_dataset) assert len(constant_len_dataset) > 0 for example in constant_len_dataset: assert "input_ids" in example assert "labels" in example assert len(example["input_ids"]) == constant_len_dataset.seq_length assert len(example["labels"]) == constant_len_dataset.seq_length decoded_text = self.tokenizer.decode(example["input_ids"]) assert ("TRL" in decoded_text) and ("(DPO)" in decoded_text) def test_constant_length_dataset(self): formatted_dataset = ConstantLengthDataset( self.tokenizer, self.dummy_dataset, formatting_func=formatting_prompts_func, ) self.assertEqual(len(formatted_dataset), len(self.dummy_dataset)) self.assertGreater(len(formatted_dataset), 0) for example in formatted_dataset: self.assertIn("input_ids", example) self.assertIn("labels", example) self.assertEqual(len(example["input_ids"]), formatted_dataset.seq_length) self.assertEqual(len(example["labels"]), formatted_dataset.seq_length) decoded_text = self.tokenizer.decode(example["input_ids"]) self.assertTrue(("Question" in decoded_text) and ("Answer" in decoded_text)) def test_sft_trainer_backward_compatibility(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = TrainingArguments( output_dir=tmp_dir, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, 
per_device_train_batch_size=2, hub_token="not_a_real_token", report_to="none", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, formatting_func=formatting_prompts_func, ) self.assertEqual(trainer.args.hub_token, training_args.hub_token) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIsNotNone(trainer.state.log_history[0]["eval_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) def test_sft_trainer(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, packing=True, report_to="none", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIsNotNone(trainer.state.log_history[0]["eval_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) def test_sft_trainer_with_pretokenzied_data_packing(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, packing=True, report_to="none", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset_from_pretokenized, eval_dataset=self.eval_dataset_from_pretokenized, ) trainer.train() assert trainer.state.log_history[(-1)]["train_loss"] is not None assert trainer.state.log_history[0]["eval_loss"] is not None assert "model.safetensors" in os.listdir(tmp_dir + "/checkpoint-2") def test_sft_trainer_uncorrect_data(self): with tempfile.TemporaryDirectory() as tmp_dir: # Shoud work as SFTTrainer natively supports conversational lm dataset training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, eval_steps=1, save_steps=1, per_device_train_batch_size=2, max_seq_length=32, # make sure there is at least 1 packed sequence packing=True, report_to="none", ) _ = SFTTrainer( model=self.model, args=training_args, train_dataset=self.conversational_lm_dataset["train"], ) # Same, but without packing training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, eval_steps=1, save_steps=1, per_device_train_batch_size=2, packing=False, report_to="none", ) _ = SFTTrainer( model=self.model, args=training_args, train_dataset=self.conversational_lm_dataset["train"], ) # Same, but with packing with `max_seq_length` training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, eval_steps=1, save_steps=1, per_device_train_batch_size=2, max_seq_length=16, # make sure there is at least 1 packed sequence packing=True, report_to="none", ) _ = SFTTrainer( model=self.model, args=training_args, train_dataset=self.standard_prompt_completion_dataset["train"], ) # Same but with prompt completion dataset training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, eval_steps=1, save_steps=1, per_device_train_batch_size=2, packing=False, report_to="none", ) _ = SFTTrainer( model=self.model, args=training_args, train_dataset=self.standard_prompt_completion_dataset["train"], ) # Should work as dummy dataset are supported with a formatting function training_args = SFTConfig( 
output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, eval_steps=1, save_steps=1, per_device_train_batch_size=2, max_seq_length=32, # make sure there is at least 1 packed sequence packing=True, report_to="none", ) _ = SFTTrainer( model=self.model, args=training_args, train_dataset=self.dummy_dataset, formatting_func=formatting_prompts_func, ) # but this should work training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, eval_steps=1, save_steps=1, per_device_train_batch_size=2, packing=False, report_to="none", ) _ = SFTTrainer( model=self.model, args=training_args, train_dataset=self.dummy_dataset, formatting_func=formatting_prompts_func_batched, ) def test_sft_trainer_with_model_num_train_epochs(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=2, eval_steps=1, save_steps=1, num_train_epochs=2, per_device_train_batch_size=2, packing=True, report_to="none", ) trainer = SFTTrainer( model=self.model, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIsNotNone(trainer.state.log_history[0]["eval_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, save_steps=1, num_train_epochs=2, per_device_train_batch_size=2, max_seq_length=16, packing=True, report_to="none", ) trainer = SFTTrainer( model=self.model, args=training_args, train_dataset=self.dummy_dataset, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, save_steps=1, num_train_epochs=2, per_device_train_batch_size=2, max_seq_length=16, report_to="none", ) trainer = SFTTrainer( model=self.model, args=training_args, train_dataset=self.dummy_dataset, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-1")) def test_sft_trainer_with_model(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=2, eval_steps=1, save_steps=1, per_device_train_batch_size=2, packing=True, report_to="none", ) trainer = SFTTrainer( model=self.model, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIsNotNone(trainer.state.log_history[0]["eval_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, save_steps=1, per_device_train_batch_size=2, max_seq_length=16, packing=True, report_to="none", ) trainer = SFTTrainer( model=self.model, args=training_args, train_dataset=self.dummy_dataset, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) # with formatting_func + packed with 
tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, save_steps=1, per_device_train_batch_size=2, max_seq_length=16, packing=True, report_to="none", ) trainer = SFTTrainer( model=self.model, args=training_args, train_dataset=self.dummy_dataset, formatting_func=formatting_prompts_func, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) # with formatting_func + packed with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, save_steps=1, per_device_train_batch_size=2, max_seq_length=16, report_to="none", ) trainer = SFTTrainer( model=self.model, args=training_args, train_dataset=self.dummy_dataset, formatting_func=formatting_prompts_func_batched, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=2, save_steps=1, per_device_train_batch_size=2, max_seq_length=16, report_to="none", ) trainer = SFTTrainer( model=self.model, args=training_args, train_dataset=self.dummy_dataset, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-1")) def test_sft_trainer_with_multiple_eval_datasets(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=1, eval_steps=1, save_steps=1, per_device_train_batch_size=2, packing=True, report_to="none", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset, eval_dataset={ "data1": self.eval_dataset, "data2": self.eval_dataset, }, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIsNotNone(trainer.state.log_history[0]["eval_data1_loss"]) self.assertIsNotNone(trainer.state.log_history[1]["eval_data2_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-1")) def test_data_collator_completion_lm(self): response_template = "### Response:\n" data_collator = DataCollatorForCompletionOnlyLM(response_template, tokenizer=self.tokenizer, mlm=False) text = """\n\n### Instructions:\nHello all this should be masked\n\n### Response:\nI have not been masked correctly.""" encoded_text = self.tokenizer(text) examples = [encoded_text] batch = data_collator(examples) labels = batch["labels"] last_pad_idx = np.where(labels == -100)[1][-1] result_text = self.tokenizer.decode(batch["input_ids"][0, last_pad_idx + 1 :]) self.assertEqual(result_text, "I have not been masked correctly.") def test_data_collator_completion_lm_with_multiple_text(self): tokenizer = copy.deepcopy(self.tokenizer) tokenizer.padding_side = "left" response_template = "### Response:\n" data_collator = DataCollatorForCompletionOnlyLM(response_template, tokenizer=tokenizer, mlm=False) text1 = """\n\n### Instructions:\nHello all this should be masked\n\n### Response:\nI have not been masked correctly.""" text2 = """\n\n### Instructions:\nThis is another longer text that should also be masked. 
This text is significantly longer than the previous one.\n\n### Response:\nI have not been masked correctly.""" encoded_text1 = tokenizer(text1) encoded_text2 = tokenizer(text2) examples = [encoded_text1, encoded_text2] batch = data_collator(examples) for i in range(2): labels = batch["labels"][i] last_pad_idx = np.where(labels == -100)[0][-1] result_text = tokenizer.decode(batch["input_ids"][i, last_pad_idx + 1 :]) self.assertEqual(result_text, "I have not been masked correctly.") def test_data_collator_chat_completion_lm(self): instruction_template = "### Human:" assistant_template = "### Assistant:" data_collator = DataCollatorForCompletionOnlyLM( response_template=assistant_template, instruction_template=instruction_template, tokenizer=self.tokenizer, mlm=False, ) text = """### Human: Hello all this should be masked.### Assistant: I should not be masked.### Human: All this should be masked too.### Assistant: I should not be masked too.""" encoded_text = self.tokenizer(text) examples = [encoded_text] batch = data_collator(examples) labels = batch["labels"] non_masked_tokens = batch["input_ids"][labels != -100] result_text = self.tokenizer.decode(non_masked_tokens) self.assertEqual(result_text, " I should not be masked. I should not be masked too.") def test_data_collator_chat_completion_lm_with_multiple_text(self): tokenizer = copy.deepcopy(self.tokenizer) tokenizer.padding_side = "left" instruction_template = "### Human:" assistant_template = "### Assistant:" data_collator = DataCollatorForCompletionOnlyLM( response_template=assistant_template, instruction_template=instruction_template, tokenizer=tokenizer, mlm=False, ) text1 = """### Human: Hello all this should be masked.### Assistant: I should not be masked.""" text2 = """### Human: Hello all this should be masked.### Assistant: I should not be masked.### Human: All this should be masked too.### Assistant: I should not be masked too.""" encoded_text1 = tokenizer(text1) encoded_text2 = tokenizer(text2) examples = [encoded_text1, encoded_text2] batch = data_collator(examples) labels = batch["labels"] input_ids = batch["input_ids"] non_masked_tokens1 = input_ids[0][labels[0] != -100] result_text1 = tokenizer.decode(non_masked_tokens1) self.assertEqual(result_text1, " I should not be masked.") non_masked_tokens2 = input_ids[1][labels[1] != -100] result_text2 = tokenizer.decode(non_masked_tokens2) self.assertEqual(result_text2, " I should not be masked. 
I should not be masked too.") def test_sft_trainer_infinite_with_model(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=5, eval_steps=1, save_steps=1, per_device_train_batch_size=2, packing=True, max_seq_length=500, report_to="none", ) trainer = SFTTrainer( model=self.model, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIsNotNone(trainer.state.log_history[0]["eval_loss"]) # make sure the trainer did 5 steps self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-5")) def test_sft_trainer_infinite_with_model_epochs(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, num_train_epochs=1, per_device_train_batch_size=2, save_strategy="epoch", packing=True, max_seq_length=500, report_to="none", ) trainer = SFTTrainer( model=self.model, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) # make sure the trainer did 5 steps self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-4")) def test_sft_trainer_with_model_neftune(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=2, eval_steps=1, save_steps=1, per_device_train_batch_size=2, neftune_noise_alpha=5, packing=True, report_to="none", ) trainer = SFTTrainer( model=self.model, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, ) trainer.model = trainer._activate_neftune(trainer.model) device = trainer.model.get_input_embeddings().weight.device trainer.model.train() torch.random.manual_seed(42) embeds_neftune = trainer.model.get_input_embeddings()(torch.LongTensor([[1, 0, 1]]).to(device)) torch.random.manual_seed(24) embeds_neftune_2 = trainer.model.get_input_embeddings()(torch.LongTensor([[1, 0, 1]]).to(device)) self.assertFalse(torch.allclose(embeds_neftune, embeds_neftune_2)) self.assertGreater(len(trainer.model.get_input_embeddings()._forward_hooks), 0) trainer.neftune_hook_handle.remove() trainer.train() # Make sure forward pass works fine _ = trainer.model(torch.LongTensor([[1, 0, 1]]).to(device)) self.assertEqual(len(trainer.model.get_input_embeddings()._forward_hooks), 0) @require_peft def test_peft_sft_trainer_str(self): with tempfile.TemporaryDirectory() as tmp_dir: peft_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) training_args = SFTConfig( packing=True, output_dir=tmp_dir, report_to="none", ) _ = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, peft_config=peft_config, ) @require_peft def test_peft_sft_trainer(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, packing=True, report_to="none", ) peft_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, 
peft_config=peft_config, ) self.assertTrue(isinstance(trainer.model, PeftModel)) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIsNotNone(trainer.state.log_history[0]["eval_loss"]) self.assertIn("adapter_model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) self.assertIn("adapter_config.json", os.listdir(tmp_dir + "/checkpoint-2")) self.assertNotIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) @require_peft def test_peft_sft_trainer_gc(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, gradient_checkpointing=True, packing=True, report_to="none", ) peft_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, peft_config=peft_config, ) self.assertIsInstance(trainer.model, PeftModel) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIsNotNone(trainer.state.log_history[0]["eval_loss"]) self.assertIn("adapter_model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) self.assertIn("adapter_config.json", os.listdir(tmp_dir + "/checkpoint-2")) self.assertNotIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) @require_peft def test_peft_sft_trainer_neftune(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, neftune_noise_alpha=5, packing=True, report_to="none", ) peft_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, peft_config=peft_config, ) trainer.model = trainer._activate_neftune(trainer.model) self.assertIsInstance(trainer.model, PeftModel) device = trainer.model.get_input_embeddings().weight.device trainer.model.train() torch.random.manual_seed(42) embeds_neftune = trainer.model.get_input_embeddings()(torch.LongTensor([[1, 0, 1]]).to(device)) torch.random.manual_seed(24) embeds_neftune_2 = trainer.model.get_input_embeddings()(torch.LongTensor([[1, 0, 1]]).to(device)) self.assertFalse(torch.allclose(embeds_neftune, embeds_neftune_2)) self.assertGreater(len(trainer.model.get_input_embeddings()._forward_hooks), 0) trainer.neftune_hook_handle.remove() trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIsNotNone(trainer.state.log_history[0]["eval_loss"]) self.assertIn("adapter_model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) self.assertIn("adapter_config.json", os.listdir(tmp_dir + "/checkpoint-2")) self.assertNotIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) # Make sure forward pass works fine to check if embeddings forward is not broken. 
_ = trainer.model(torch.LongTensor([[1, 0, 1]]).to(device)) self.assertEqual(len(trainer.model.get_input_embeddings()._forward_hooks), 0) @require_peft def test_peft_sft_trainer_tag(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, gradient_checkpointing=True, packing=True, report_to="none", ) peft_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, peft_config=peft_config, ) for tag in ["sft", "trl"]: self.assertIn(tag, trainer.model.model_tags) @require_peft def test_sft_trainer_tag(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, gradient_checkpointing=True, packing=True, report_to="none", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, ) for tag in ["sft", "trl"]: self.assertIn(tag, trainer.model.model_tags) def test_sft_trainer_only_train_packing(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, gradient_checkpointing=True, packing=True, max_seq_length=16, # make sure there is at least 1 packed sequence eval_packing=False, report_to="none", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.conversational_lm_dataset["train"], eval_dataset=self.conversational_lm_dataset["test"], ) self.assertEqual(len(trainer.train_dataset["input_ids"]), 46) # w/ this dataset, we end up with 46 seqs self.assertEqual(len(trainer.eval_dataset["input_ids"]), len(self.conversational_lm_dataset["test"])) def test_sft_trainer_eval_packing(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, gradient_checkpointing=True, max_seq_length=16, # make sure there is at least 1 packed sequence packing=True, report_to="none", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.conversational_lm_dataset["train"], eval_dataset=self.conversational_lm_dataset["test"], ) self.assertEqual(len(trainer.train_dataset["input_ids"]), 46) # w/ this dataset, we end up with 46 seqs self.assertEqual(len(trainer.eval_dataset["input_ids"]), 6) # w/ this dataset, we end up with 6 seqs def test_sft_trainer_no_packing(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, gradient_checkpointing=True, max_seq_length=16, # make sure there is at least 1 packed sequence packing=False, report_to="none", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.conversational_lm_dataset["train"], eval_dataset=self.conversational_lm_dataset["test"], ) self.assertEqual(len(trainer.train_dataset["input_ids"]), len(self.conversational_lm_dataset["train"])) 
self.assertEqual(len(trainer.eval_dataset["input_ids"]), len(self.conversational_lm_dataset["test"])) @require_vision def test_sft_trainer_skip_prepare_dataset(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, gradient_checkpointing=True, remove_unused_columns=False, dataset_kwargs={"skip_prepare_dataset": True}, report_to="none", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.dummy_vsft_instruction_dataset, eval_dataset=self.dummy_vsft_instruction_dataset, ) self.assertEqual(trainer.train_dataset.features, self.dummy_vsft_instruction_dataset.features) self.assertEqual(trainer.eval_dataset.features, self.dummy_vsft_instruction_dataset.features) def test_sft_trainer_skip_prepare_dataset_with_no_packing(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, gradient_checkpointing=True, remove_unused_columns=False, packing=False, dataset_kwargs={"skip_prepare_dataset": True}, report_to="none", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.dummy_dataset, ) self.assertEqual(trainer.train_dataset.features, self.dummy_dataset.features) @require_vision def test_sft_trainer_llava(self): with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, dataloader_drop_last=True, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, per_device_eval_batch_size=2, remove_unused_columns=False, dataset_kwargs={"skip_prepare_dataset": True}, report_to="none", ) tiny_llava = LlavaForConditionalGeneration.from_pretrained( "trl-internal-testing/tiny-LlavaForConditionalGeneration" ) processor = AutoProcessor.from_pretrained("trl-internal-testing/tiny-LlavaForConditionalGeneration") processor.chat_template = """{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. 
{% for message in messages %}{% if message['role'] == 'user' %}USER: {% else %}ASSISTANT: {% endif %}{% for item in message['content'] %}{% if item['type'] == 'text' %}{{ item['text'] }}{% elif item['type'] == 'image' %}<image>{% endif %}{% endfor %}{% if message['role'] == 'user' %} {% else %}{{eos_token}}{% endif %}{% endfor %}{% if add_generation_prompt %}ASSISTANT: {% endif %}""" def collate_fn(examples): # Get the texts and images, and apply the chat template texts = [processor.apply_chat_template(example["messages"], tokenize=False) for example in examples] images = [example["images"][0] for example in examples] # Tokenize the texts and process the images batch = processor(texts, images, return_tensors="pt", padding=True) # The labels are the input_ids, and we mask the padding tokens in the loss computation labels = batch["input_ids"].clone() labels[labels == processor.tokenizer.pad_token_id] = -100 batch["labels"] = labels return batch trainer = SFTTrainer( model=tiny_llava, args=training_args, data_collator=collate_fn, train_dataset=self.dummy_vsft_instruction_dataset, eval_dataset=self.dummy_vsft_instruction_dataset, ) trainer.train() self.assertIsNotNone(trainer.state.log_history[(-1)]["train_loss"]) self.assertIsNotNone(trainer.state.log_history[0]["eval_loss"]) self.assertIn("model.safetensors", os.listdir(tmp_dir + "/checkpoint-2")) def test_sft_trainer_torch_dtype(self): # See https://github.com/huggingface/trl/issues/1751 with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, model_init_kwargs={"torch_dtype": torch.float16}, report_to="none", ) trainer = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, formatting_func=formatting_prompts_func, ) self.assertEqual(trainer.model.config.torch_dtype, torch.float16) # Now test when `torch_dtype` is provided but is wrong with tempfile.TemporaryDirectory() as tmp_dir: training_args = SFTConfig( output_dir=tmp_dir, eval_strategy="steps", max_steps=4, eval_steps=2, save_steps=2, per_device_train_batch_size=2, model_init_kwargs={"torch_dtype": -1}, report_to="none", ) with self.assertRaises(ValueError) as context: _ = SFTTrainer( model=self.model_id, args=training_args, train_dataset=self.train_dataset, eval_dataset=self.eval_dataset, ) self.assertIn( "Invalid `torch_dtype` passed to `SFTConfig`. 
Expected either 'auto' or a string representing " "a `torch.dtype` (e.g., 'float32'), but got -1.", str(context.exception), ) # This new tester aims to replace the first one at some point class SFTTrainerTester2(unittest.TestCase): def test_train(self): # Get the model and dataset model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5" model = AutoModelForCausalLM.from_pretrained(model_id) dataset = load_dataset("trl-internal-testing/zen", "standard_language_modeling", split="train") with tempfile.TemporaryDirectory() as tmp_dir: # Initialize the trainer training_args = SFTConfig(output_dir=tmp_dir, report_to="none") trainer = SFTTrainer(args=training_args, model=model, train_dataset=dataset) # Save the initial parameters to compare them later previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} # Train the model trainer.train() # Check that the training loss is not None self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.allclose(param, new_param), f"Parameter {n} has not changed") @require_peft def test_train_peft_model(self): # Get the base model model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5" model = AutoModelForCausalLM.from_pretrained(model_id) # Get the base model parameter names base_param_names = [f"base_model.model.{n}" for n, _ in model.named_parameters()] # Turn the model into a peft model lora_config = LoraConfig() model = get_peft_model(model, lora_config) # Get the dataset dataset = load_dataset("trl-internal-testing/zen", "standard_language_modeling", split="train") with tempfile.TemporaryDirectory() as tmp_dir: # Initialize the trainer training_args = SFTConfig(output_dir=tmp_dir, report_to="none") trainer = SFTTrainer(args=training_args, model=model, train_dataset=dataset) # Save the initial parameters to compare them later previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} # Train the model trainer.train() # Check that the training loss is not None self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check the peft params have changed and the base model params have not changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) if n in base_param_names: # We expect the base model parameters to be the same self.assertTrue(torch.allclose(param, new_param), f"Parameter {n} has changed") elif ( "base_layer" not in n ): # We expect the peft parameters to be different (except for the base layer) self.assertFalse(torch.allclose(param, new_param), f"Parameter {n} has not changed")
trl/tests/test_sft_trainer.py/0
{ "file_path": "trl/tests/test_sft_trainer.py", "repo_id": "trl", "token_count": 28339 }
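The packing tests above revolve around three `SFTConfig` knobs: `packing`, `eval_packing`, and `max_seq_length`. As a quick illustration outside the test harness, here is a minimal sketch of that pattern; the tiny model ID and the `trl-internal-testing/zen` dataset configuration are stand-ins borrowed from the test suite and should be treated as assumptions rather than a verified recipe.

```python
import tempfile

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Stand-ins: the tiny model is only good for smoke tests, and the dataset name
# mirrors what the test suite appears to load in its setUp (an assumption here).
model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5"
dataset = load_dataset("trl-internal-testing/zen", "conversational_language_modeling")

with tempfile.TemporaryDirectory() as tmp_dir:
    training_args = SFTConfig(
        output_dir=tmp_dir,
        max_steps=4,
        eval_strategy="steps",
        eval_steps=2,
        per_device_train_batch_size=2,
        max_seq_length=16,   # small enough that several samples end up in one packed sequence
        packing=True,        # pack the training split...
        eval_packing=False,  # ...but leave the evaluation split unpacked
        report_to="none",
    )
    trainer = SFTTrainer(
        model=model_id,
        args=training_args,
        train_dataset=dataset["train"],
        eval_dataset=dataset["test"],
    )
    # As in `test_sft_trainer_only_train_packing`, the train split shrinks to the number
    # of packed sequences while the eval split keeps its original length.
    trainer.train()
```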
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch from huggingface_hub import HfApi from trl.import_utils import is_mergekit_available if is_mergekit_available(): from mergekit.config import MergeConfiguration from mergekit.merge import MergeOptions, run_merge def upload_model_to_hf(folder_path: str, repo_id: str): api = HfApi() # Create the repository if it doesn't exist repo = api.create_repo(repo_id, repo_type="model") # Upload the folder to the specified repository api.upload_folder( folder_path=folder_path, repo_id=repo.repo_id, repo_type=repo.repo_type, ) class MergeConfig: r""" Configuration class for merging two models using `mergekit`. This class provides a structured way to configure and generate merge configurations for various merge methods, such as `linear`, `ties`, `dare_ties`, and `slerp`. Args: method (`str`, *optional*, defaults to `"linear"`): Merge method to use. Supported methods include: - `"linear"`: Linearly combines two models with specified weights. - `"ties"`: Combines two models using the TIES method with density parameters. - `"dare_ties"`: A variant of TIES for domain adaptation. - `"slerp"`: Combines models using spherical linear interpolation. Note: For more details about the merge methods and how they are implemented, see the [MergeKit GitHub repository](https://github.com/arcee-ai/mergekit?tab=readme-ov-file#merge-methods). Attributes: method (`str`): The merge method to use. policy_model_path (`str` or `None`): Path to the policy model. target_model_path (`str` or `None`): Path to the target model. policy_model_weight (`float`): Weight for the policy model (for `linear` and `ties` methods). target_model_weight (`float`): Weight for the target model (for `linear` and `ties` methods). policy_model_density (`list[float]`): Density parameters for the policy model (for `ties` and `dare_ties`). target_model_density (`list[float]`): Density parameters for the target model (for `ties` and `dare_ties`). normalize (`float` or `None`): Normalization factor for the TIES method. t_values (`float` or `None`): Interpolation factor for the SLERP method. dtype (`str`): Data type to use for merging, e.g., `"float16"`. """ def __init__(self, method: str = "linear"): if not is_mergekit_available(): raise ImportError( "MergeConfig requires the `mergekit` extra. To install, run `pip install trl[mergekit]`." 
) self.method = method self.policy_model_path = None self.target_model_path = None # Initialize relevant parameters based on the method if method == "linear": self.policy_model_weight = 0.5 self.target_model_weight = 0.5 self.dtype = "float16" elif method == "ties": self.policy_model_weight = 1.0 self.policy_model_density = [1.0, 0.7, 0.1] self.target_model_weight = 1.0 self.target_model_density = [1.0] self.normalize = 1.0 self.dtype = "float16" elif method == "dare_ties": self.policy_model_weight = 1.0 self.policy_model_density = [1.0, 0.7, 0.1] self.target_model_weight = 1.0 self.target_model_density = [1.0] self.normalize = 1.0 self.dtype = "float16" elif method == "slerp": self.t_values = 0.5 self.dtype = "float16" else: raise ValueError(f"Unsupported merge method: {method}") def create_merge_config_linear(self) -> "MergeConfiguration": """ Creates a merge configuration for a linear merge of two models with specified weights. """ # Create the merge configuration dictionary merge_config_dict = { "dtype": self.dtype, "merge_method": "linear", "models": [ {"model": self.policy_model_path, "parameters": {"weight": self.policy_model_weight}}, {"model": self.target_model_path, "parameters": {"weight": self.target_model_weight}}, ], } # Create the MergeConfiguration from the dictionary merge_config = MergeConfiguration.model_validate(merge_config_dict) return merge_config def create_merge_config_ties(self) -> "MergeConfiguration": """ Creates a merge configuration for a TIES merge of two models, with specified weights and densities. """ # Create the TIES merge configuration dictionary merge_config_dict = { "merge_method": "ties", "slices": None, # Optional slices if needed "models": [ { "model": { "model": {"path": self.target_model_path, "revision": None}, "lora": None, "override_architecture": None, }, "parameters": {"density": self.target_model_density, "weight": self.target_model_weight}, }, { "model": { "model": {"path": self.policy_model_path, "revision": None}, "lora": None, "override_architecture": None, }, "parameters": {"density": self.policy_model_density, "weight": self.policy_model_weight}, }, ], "parameters": {"normalize": self.normalize}, "base_model": { "model": {"path": self.policy_model_path, "revision": None}, "lora": None, "override_architecture": None, }, "dtype": self.dtype, "tokenizer_source": None, "tokenizer": None, "chat_template": None, "out_dtype": None, } # Create the MergeConfiguration from the dictionary merge_config = MergeConfiguration.model_validate(merge_config_dict) return merge_config def create_merge_config_dare_ties(self) -> "MergeConfiguration": """ Creates a merge configuration for a DARE TIES merge of two models, with specified weights and densities. 
""" # Create the DARE TIES merge configuration dictionary merge_config_dict = { "merge_method": "dare_ties", "slices": None, # Optional slices if needed "models": [ { "model": { "model": {"path": self.target_model_path, "revision": None}, "lora": None, "override_architecture": None, }, "parameters": {"density": self.target_model_density, "weight": self.target_model_weight}, }, { "model": { "model": {"path": self.policy_model_path, "revision": None}, "lora": None, "override_architecture": None, }, "parameters": {"density": self.policy_model_density, "weight": self.policy_model_weight}, }, ], "parameters": {"normalize": self.normalize}, "base_model": { "model": {"path": self.policy_model_path, "revision": None}, "lora": None, "override_architecture": None, }, "dtype": self.dtype, "tokenizer_source": None, "tokenizer": None, "chat_template": None, "out_dtype": None, } # Create the MergeConfiguration from the dictionary merge_config = MergeConfiguration.model_validate(merge_config_dict) return merge_config def create_merge_config_slerp(self) -> "MergeConfiguration": """ Creates a merge configuration for a SLERP merge of a model with a base model. """ # Create the SLERP merge configuration dictionary merge_config_dict = { "merge_method": "slerp", "slices": None, # Optional slices if needed "models": [ { "model": { "model": {"path": self.target_model_path, "revision": None}, "lora": None, "override_architecture": None, }, "parameters": None, # No specific parameters for SLERP model } ], "parameters": { "t": self.t_values # Set the t values for SLERP }, "base_model": { "model": {"path": self.policy_model_path, "revision": None}, "lora": None, "override_architecture": None, }, "dtype": self.dtype, "tokenizer_source": None, "tokenizer": None, "chat_template": None, "out_dtype": None, } # Create the MergeConfiguration from the dictionary merge_config = MergeConfiguration.model_validate(merge_config_dict) return merge_config def create(self) -> "MergeConfiguration": if self.method == "linear": return self.create_merge_config_linear() elif self.method == "ties": return self.create_merge_config_ties() elif self.method == "dare_ties": return self.create_merge_config_dare_ties() elif self.method == "slerp": return self.create_merge_config_slerp() def merge_models(config: MergeConfig, out_path: str): """ Merge two models using mergekit Args: config (`MergeConfig`): The merge configuration. out_path (`str`): The output path for the merged model. """ if not is_mergekit_available(): raise ImportError("merge_models requires the `mergekit` extra. To install, run `pip install trl[mergekit]`.") run_merge( config, out_path=out_path, options=MergeOptions( cuda=torch.cuda.is_available(), copy_tokenizer=True, lazy_unpickle=False, low_cpu_memory=False, ), )
trl/trl/mergekit_utils.py/0
{ "file_path": "trl/trl/mergekit_utils.py", "repo_id": "trl", "token_count": 5141 }
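To show how the pieces of `mergekit_utils` above compose, here is a hedged sketch of the intended flow: configure a merge, run it locally, then optionally push the result to the Hub. The model paths and repo ID are placeholders, the `mergekit` extra must be installed, and passing `config.create()` to `merge_models` (rather than the `MergeConfig` object itself) is an assumption based on how the returned `MergeConfiguration` is forwarded to `run_merge`.

```python
from trl.mergekit_utils import MergeConfig, merge_models, upload_model_to_hf

# Placeholder paths: local checkpoints or Hub model IDs for the two models to merge.
config = MergeConfig(method="linear")
config.policy_model_path = "path/to/policy-model"
config.target_model_path = "path/to/target-model"
config.policy_model_weight = 0.5
config.target_model_weight = 0.5

# `create()` builds the mergekit `MergeConfiguration`, which is what `run_merge` consumes.
merge_models(config.create(), out_path="merged-model")

# Optionally upload the merged checkpoint (placeholder repo ID).
upload_model_to_hf("merged-model", repo_id="your-username/merged-model")
```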
--- {{ card_data }} --- # Model Card for {{ model_name }} This model is a fine-tuned version of [{{ base_model }}](https://huggingface.co/{{ base_model }}){% if dataset_name %} on the [{{ dataset_name }}](https://huggingface.co/datasets/{{ dataset_name }}) dataset{% endif %}. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="{{ hub_model_id }}", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure {% if wandb_url %}[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>]({{ wandb_url }}){% endif %} {% if comet_url %}[<img src="https://raw.githubusercontent.com/comet-ml/comet-examples/master/logo/comet_badge.png" alt="Visualize in Comet" width="135" height="20"/>]({{ comet_url }}){% endif %} This model was trained with {{ trainer_name }}{% if paper_id %}, a method introduced in [{{ paper_title }}](https://huggingface.co/papers/{{ paper_id }}){% endif %}. ### Framework versions - TRL: {{ trl_version }} - Transformers: {{ transformers_version }} - Pytorch: {{ pytorch_version }} - Datasets: {{ datasets_version }} - Tokenizers: {{ tokenizers_version }} ## Citations {% if trainer_citation %}Cite {{ trainer_name }} as: ```bibtex {{ trainer_citation }} ```{% endif %} Cite TRL as: ```bibtex {% raw %}@misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} }{% endraw %} ```
trl/trl/templates/lm_model_card.md/0
{ "file_path": "trl/trl/templates/lm_model_card.md", "repo_id": "trl", "token_count": 750 }
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import textwrap import warnings from collections import defaultdict from typing import Any, Callable, Optional, Sized, Union from unittest.mock import patch import torch import torch.utils.data import transformers from accelerate.utils import broadcast_object_list, gather, gather_object from accelerate.utils.other import is_compiled_module from datasets import Dataset, IterableDataset from packaging import version from torch import nn from torch.utils.data import Sampler from transformers import ( AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer, GenerationConfig, PreTrainedModel, PreTrainedTokenizerBase, Trainer, TrainerCallback, is_wandb_available, ) from transformers.integrations.deepspeed import is_deepspeed_zero3_enabled from transformers.utils import is_peft_available from ..data_utils import apply_chat_template, is_conversational, maybe_apply_chat_template from ..import_utils import is_vllm_available from ..models import create_reference_model, prepare_deepspeed, unwrap_model_for_generation from .callbacks import SyncRefModelCallback from .grpo_config import GRPOConfig from .utils import generate_model_card, get_comet_experiment_url, pad, selective_log_softmax if is_peft_available(): from peft import PeftConfig, get_peft_model if is_vllm_available(): from vllm import LLM, SamplingParams if is_wandb_available(): import wandb # What we call a reward function is a callable that takes a list of prompts and completions and returns a list of # rewards. When it's a string, it's a model ID, so it's loaded as a pretrained model. RewardFunc = Union[str, PreTrainedModel, Callable[[list, list], list[float]]] class RepeatRandomSampler(Sampler): """ Sampler that repeats the indices of a dataset N times. Args: data_source (`Sized`): Dataset to sample from. repeat_count (`int`): Number of times to repeat each index. Example: ```python >>> sampler = RepeatRandomSampler(["a", "b", "c", "d"], repeat_count=2) >>> list(sampler) [2, 2, 0, 0, 3, 3, 1, 1] ``` """ def __init__(self, data_source: Sized, repeat_count: int): self.data_source = data_source self.repeat_count = repeat_count self.num_samples = len(data_source) def __iter__(self): indexes = [idx for idx in torch.randperm(self.num_samples).tolist() for _ in range(self.repeat_count)] return iter(indexes) def __len__(self): return self.num_samples * self.repeat_count class GRPOTrainer(Trainer): """ Trainer for the Group Relative Policy Optimization (GRPO) method. This algorithm was initially proposed in the paper [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). 
    Example:

    ```python
    from datasets import load_dataset
    from trl import GRPOTrainer

    dataset = load_dataset("trl-lib/tldr", split="train")

    trainer = GRPOTrainer(
        model="Qwen/Qwen2-0.5B-Instruct",
        reward_funcs="weqweasdas/RM-Gemma-2B",
        train_dataset=dataset,
    )

    trainer.train()
    ```

    Args:
        model (`Union[str, PreTrainedModel]`):
            Model to be trained. Can be either:

            - A string, being the *model id* of a pretrained model hosted inside a model repo on huggingface.co, or
              a path to a *directory* containing model weights saved using
              [`~transformers.PreTrainedModel.save_pretrained`], e.g., `'./my_model_directory/'`. The model is
              loaded using [`~transformers.AutoModelForCausalLM.from_pretrained`] with the keyword arguments in
              `args.model_init_kwargs`.
            - A [`~transformers.PreTrainedModel`] object. Only causal language models are supported.
        reward_funcs (`Union[RewardFunc, list[RewardFunc]]`):
            Reward functions to be used for computing the rewards. To compute the rewards, we call all the reward
            functions with the prompts and completions and sum the rewards. Can be either:

            - A single reward function, such as:
                - A string: The *model ID* of a pretrained model hosted inside a model repo on huggingface.co, or a
                  path to a *directory* containing model weights saved using
                  [`~transformers.PreTrainedModel.save_pretrained`], e.g., `'./my_model_directory/'`. The model is
                  loaded using [`~transformers.AutoModelForSequenceClassification.from_pretrained`] with
                  `num_labels=1` and the keyword arguments in `args.model_init_kwargs`.
                - A [`~transformers.PreTrainedModel`] object: Only sequence classification models are supported.
                - A custom reward function: The function is provided with the prompts and the generated completions,
                  plus any additional columns in the dataset. It should return a list of rewards. For more details,
                  see [Using a custom reward function](#using-a-custom-reward-function).
            - A list of reward functions, where each item can independently be any of the above types. Mixing
              different types within the list (e.g., a string model ID and a custom reward function) is allowed.
        args ([`GRPOConfig`], *optional*, defaults to `None`):
            Configuration for this trainer. If `None`, a default configuration is used.
        train_dataset ([`~datasets.Dataset`] or [`~datasets.IterableDataset`]):
            Dataset to use for training. It must include a column `"prompt"`. Any additional columns in the dataset
            are ignored. The format of the samples can be either:

            - [Standard](dataset_formats#standard): Each sample contains plain text.
            - [Conversational](dataset_formats#conversational): Each sample contains structured messages (e.g., role
              and content).
        eval_dataset ([`~datasets.Dataset`], [`~datasets.IterableDataset`] or `dict[str, Union[Dataset, IterableDataset]]`):
            Dataset to use for evaluation. It must meet the same requirements as `train_dataset`.
        processing_class ([`~transformers.PreTrainedTokenizerBase`], *optional*, defaults to `None`):
            Processing class used to process the data. The padding side must be set to "left". If `None`, the
            processing class is loaded from the model's name with [`~transformers.AutoTokenizer.from_pretrained`].
        reward_processing_classes (`Union[PreTrainedTokenizerBase, list[PreTrainedTokenizerBase]]`, *optional*, defaults to `None`):
            Processing classes corresponding to the reward functions specified in `reward_funcs`. Can be either:

            - A single processing class: Used when `reward_funcs` contains only one reward function.
            - A list of processing classes: Must match the order and length of the reward functions in
              `reward_funcs`.
If set to `None`, or if an element of the list corresponding to a [`~transformers.PreTrainedModel`] is `None`, the tokenizer for the model is automatically loaded using [`~transformers.AutoTokenizer.from_pretrained`]. For elements in `reward_funcs` that are custom reward functions (not [`~transformers.PreTrainedModel`]), the corresponding entries in `reward_processing_classes` are ignored. callbacks (list of [`~transformers.TrainerCallback`], *optional*, defaults to `None`): List of callbacks to customize the training loop. Will add those to the list of default callbacks detailed in [here](https://huggingface.co/docs/transformers/main_classes/callback). If you want to remove one of the default callbacks used, use the [`~transformers.Trainer.remove_callback`] method. optimizers (`tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]`, *optional*, defaults to `(None, None)`): A tuple containing the optimizer and the scheduler to use. Will default to an instance of [`AdamW`] on your model and a scheduler given by [`get_linear_schedule_with_warmup`] controlled by `args`. peft_config ([`~peft.PeftConfig`], *optional*, defaults to `None`): PEFT configuration used to wrap the model. If `None`, the model is not wrapped. """ _tag_names = ["trl", "grpo"] def __init__( self, model: Union[str, PreTrainedModel], reward_funcs: Union[RewardFunc, list[RewardFunc]], args: GRPOConfig = None, train_dataset: Optional[Union[Dataset, IterableDataset]] = None, eval_dataset: Optional[Union[Dataset, IterableDataset, dict[str, Union[Dataset, IterableDataset]]]] = None, processing_class: Optional[PreTrainedTokenizerBase] = None, reward_processing_classes: Optional[Union[PreTrainedTokenizerBase, list[PreTrainedTokenizerBase]]] = None, callbacks: Optional[list[TrainerCallback]] = None, optimizers: tuple[Optional[torch.optim.Optimizer], Optional[torch.optim.lr_scheduler.LambdaLR]] = (None, None), peft_config: Optional["PeftConfig"] = None, ): # Args if args is None: model_name = model if isinstance(model, str) else model.config._name_or_path model_name = model_name.split("/")[-1] args = GRPOConfig(f"{model_name}-GRPO") # Models # Trained model model_init_kwargs = args.model_init_kwargs or {} if isinstance(model, str): model_id = model torch_dtype = model_init_kwargs.get("torch_dtype") if isinstance(torch_dtype, torch.dtype) or torch_dtype == "auto" or torch_dtype is None: pass # torch_dtype is already a torch.dtype or "auto" or None elif isinstance(torch_dtype, str): # it's a str, but not "auto" torch_dtype = getattr(torch, torch_dtype) model_init_kwargs["torch_dtype"] = torch_dtype else: raise ValueError( "Invalid `torch_dtype` passed to `GRPOConfig`. Expected either 'auto' or a string representing " f"a `torch.dtype` (e.g., 'float32'), but got {torch_dtype}." ) # Disable caching if gradient checkpointing is enabled (not supported) model_init_kwargs["use_cache"] = ( False if args.gradient_checkpointing else model_init_kwargs.get("use_cache") ) model = AutoModelForCausalLM.from_pretrained(model, **model_init_kwargs) else: model_id = model.config._name_or_path if args.model_init_kwargs is not None: raise ValueError( "You passed `model_init_kwargs` to the `GRPOConfig`, but your model is already instantiated. " "This argument can only be used when the `model` argument is a string." 
) if peft_config is not None: model = get_peft_model(model, peft_config) # Reference model if is_deepspeed_zero3_enabled(): self.ref_model = AutoModelForCausalLM.from_pretrained(model_id, **model_init_kwargs) elif peft_config is None: # If PEFT configuration is not provided, create a reference model based on the initial model. self.ref_model = create_reference_model(model) else: # If PEFT is used, the reference model is not needed since the adapter can be disabled # to revert to the initial model. self.ref_model = None # Processing class if processing_class is None: processing_class = AutoTokenizer.from_pretrained(model.config._name_or_path, padding_side="left") # Reward functions if not isinstance(reward_funcs, list): reward_funcs = [reward_funcs] for i, reward_func in enumerate(reward_funcs): if isinstance(reward_func, str): reward_funcs[i] = AutoModelForSequenceClassification.from_pretrained( reward_func, num_labels=1, **model_init_kwargs ) self.reward_funcs = reward_funcs # Reward processing class if reward_processing_classes is None: reward_processing_classes = [None] * len(reward_funcs) elif not isinstance(reward_processing_classes, list): reward_processing_classes = [reward_processing_classes] else: if len(reward_processing_classes) != len(reward_funcs): raise ValueError("The number of reward processing classes must match the number of reward functions.") for i, (reward_processing_class, reward_func) in enumerate(zip(reward_processing_classes, reward_funcs)): if isinstance(reward_func, PreTrainedModel): if reward_processing_class is None: reward_processing_class = AutoTokenizer.from_pretrained(reward_func.config._name_or_path) if reward_processing_class.pad_token_id is None: reward_processing_class.pad_token = reward_processing_class.eos_token # The reward model computes the reward for the latest non-padded token in the input sequence. # So it's important to set the pad token ID to the padding token ID of the processing class. reward_func.config.pad_token_id = reward_processing_class.pad_token_id reward_processing_classes[i] = reward_processing_class self.reward_processing_classes = reward_processing_classes # Data collator def data_collator(features): # No data collation is needed in GRPO return features # Training arguments self.max_prompt_length = args.max_prompt_length self.max_completion_length = args.max_completion_length # = |o_i| in the GRPO paper self.num_generations = args.num_generations # = G in the GRPO paper self.use_vllm = args.use_vllm self.beta = args.beta # The trainer estimates the number of FLOPs (floating-point operations) using the number of elements in the # input tensor associated with the key "input_ids". However, in GRPO, the sampled data does not include the # "input_ids" key. Instead, the available keys is "prompt". As a result, the trainer issues the warning: # "Could not estimate the number of tokens of the input, floating-point operations will not be computed." To # suppress this warning, we set the "estimate_tokens" key in the model's "warnings_issued" dictionary to True. # This acts as a flag to indicate that the warning has already been issued. 
model.warnings_issued["estimate_tokens"] = True # Initialize the metrics self._metrics = defaultdict(list) self.log_completions = args.log_completions super().__init__( model=model, args=args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset, processing_class=processing_class, callbacks=callbacks, optimizers=optimizers, ) # Check if the per_device_train/eval_batch_size * num processes can be divided by the number of generations num_processes = self.accelerator.num_processes global_batch_size = args.per_device_train_batch_size * num_processes possible_values = [n_gen for n_gen in range(2, global_batch_size + 1) if (global_batch_size) % n_gen == 0] if self.num_generations not in possible_values: raise ValueError( f"The global train batch size ({num_processes} x {args.per_device_train_batch_size}) must be evenly " f"divisible by the number of generations per prompt ({self.num_generations}). Given the current train " f"batch size, the valid values for the number of generations are: {possible_values}." ) if self.args.eval_strategy != "no": global_batch_size = args.per_device_eval_batch_size * num_processes possible_values = [n_gen for n_gen in range(2, global_batch_size + 1) if (global_batch_size) % n_gen == 0] if self.num_generations not in possible_values: raise ValueError( f"The global eval batch size ({num_processes} x {args.per_device_eval_batch_size}) must be evenly " f"divisible by the number of generations per prompt ({self.num_generations}). Given the current " f"eval batch size, the valid values for the number of generations are: {possible_values}." ) if self.use_vllm: if not is_vllm_available(): raise ImportError( "vLLM is not available and `use_vllm` is set to True. Please install vLLM with " "`pip install vllm` to use it." ) if self.accelerator.is_main_process: vllm_device = self.args.vllm_device if vllm_device == "auto": vllm_device = f"cuda:{self.accelerator.num_processes}" # take the next GPU idx # Check that the requested device is available if vllm_device.split(":")[0] == "cuda" and int(vllm_device.split(":")[1]) >= torch.cuda.device_count(): raise ValueError( f"The requested device for vllm ({vllm_device}) is not available. You are likely using vLLM " "without restricting the number of GPUs for training. Set the `--num_processes` argument to a " "value lower than the number of GPUs available on your machine—typically, reducing it by one " f"is sufficient. In your case: `--num_processes {torch.cuda.device_count() - 1}`." ) # Check that the requested device is not also used for training if vllm_device in {f"cuda:{idx}" for idx in range(self.accelerator.num_processes)}: warnings.warn( f"The requested device {vllm_device} is also used for training. This may lead to unexpected " "behavior. It is recommended to use a dedicated device for vLLM." ) # vLLM is not compatible with accelerate. So we need to patch it to make sure we can (1) place the vLLM # model on the desired device (world_size_patch) and (2) avoid a test that is not designed for our # setting (profiling_patch). 
world_size_patch = patch("torch.distributed.get_world_size", return_value=1) profiling_patch = patch( "vllm.worker.worker.Worker._assert_memory_footprint_increased_during_profiling", return_value=None ) with world_size_patch, profiling_patch: self.llm = LLM( model=model.name_or_path, device=vllm_device, gpu_memory_utilization=self.args.vllm_gpu_memory_utilization, dtype=self.args.vllm_dtype, # Automatic Prefix Caching caches the KV cache of existing queries, so that a new query can # directly reuse the KV cache if it shares the same prefix with one of the existing queries. # This is particularly useful here because we generate completions from the same prompts. enable_prefix_caching=True, max_model_len=self.args.vllm_max_model_len, ) self.sampling_params = SamplingParams( temperature=args.temperature, max_tokens=self.max_completion_length, ) self._last_loaded_step = 0 # tag to avoid useless loading during grad accumulation # When using vLLM, the main process is responsible for loading the model weights. This can cause process # desynchronization and seems to lead to DeepSpeed hanging during initialization. To prevent this, we # synchronize all processes after vLLM has been fully initialized. self.accelerator.wait_for_everyone() else: self.generation_config = GenerationConfig( max_new_tokens=self.max_completion_length, do_sample=True, temperature=args.temperature, pad_token_id=processing_class.pad_token_id, ) # Gradient accumulation requires scaled loss. Normally, loss scaling in the parent class depends on whether the # model accepts loss-related kwargs. Since we compute our own loss, this check is irrelevant. We set # self.model_accepts_loss_kwargs to False to enable scaling. self.model_accepts_loss_kwargs = False # Add tags to the model self.model.add_model_tags(self._tag_names) if self.ref_model is not None: if self.is_deepspeed_enabled: self.ref_model = prepare_deepspeed(self.ref_model, self.accelerator) else: self.ref_model = self.accelerator.prepare_model(self.ref_model, evaluation_mode=True) if args.sync_ref_model: self.add_callback(SyncRefModelCallback(ref_model=self.ref_model, accelerator=self.accelerator)) for i, reward_func in enumerate(self.reward_funcs): if isinstance(reward_func, PreTrainedModel): self.reward_funcs[i] = self.accelerator.prepare_model(reward_func, evaluation_mode=True) def _set_signature_columns_if_needed(self): # If `self.args.remove_unused_columns` is True, non-signature columns are removed. # By default, this method sets `self._signature_columns` to the model's expected inputs. # In GRPOTrainer, we preprocess data, so using the model's signature columns doesn't work. # Instead, we set them to the columns expected by the `training_step` method, hence the override. 
if self._signature_columns is None: self._signature_columns = ["prompt"] # We need a custom sampler that samples the same prompt multiple times def _get_train_sampler(self) -> Sampler: return RepeatRandomSampler(self.train_dataset, self.num_generations) def _get_eval_sampler(self, eval_dataset) -> Sampler: return RepeatRandomSampler(eval_dataset, self.num_generations) # Get the per-token log probabilities for the completions for the model and the reference model def _get_per_token_logps(self, model, input_ids, attention_mask, logits_to_keep): # We add 1 to `logits_to_keep` because the last logits of the sequence is later excluded logits = model(input_ids=input_ids, attention_mask=attention_mask, logits_to_keep=logits_to_keep + 1).logits logits = logits[:, :-1, :] # (B, L-1, V), exclude the last logit: it corresponds to the next token pred input_ids = input_ids[:, -logits_to_keep:] # For transformers<=4.48, logits_to_keep argument isn't supported, so here we drop logits ourselves. # See https://github.com/huggingface/trl/issues/2770 logits = logits[:, -logits_to_keep:] return selective_log_softmax(logits, input_ids) # compute logprobs for the input tokens def _prepare_inputs(self, inputs: dict[str, Union[torch.Tensor, Any]]) -> dict[str, Union[torch.Tensor, Any]]: device = self.accelerator.device prompts = [x["prompt"] for x in inputs] prompts_text = [maybe_apply_chat_template(example, self.processing_class)["prompt"] for example in inputs] prompt_inputs = self.processing_class( prompts_text, return_tensors="pt", padding=True, padding_side="left", add_special_tokens=False ) prompt_inputs = super()._prepare_inputs(prompt_inputs) prompt_ids, prompt_mask = prompt_inputs["input_ids"], prompt_inputs["attention_mask"] if self.max_prompt_length is not None: prompt_ids = prompt_ids[:, -self.max_prompt_length :] prompt_mask = prompt_mask[:, -self.max_prompt_length :] # Generate completions using either vLLM or regular generation if self.args.use_vllm: # First, have main process load weights if needed if self.state.global_step != self._last_loaded_step: with unwrap_model_for_generation( self.model, self.accelerator, gather_deepspeed3_params=self.args.ds3_gather_for_generation ) as unwrapped_model: if is_compiled_module(unwrapped_model): state_dict = unwrapped_model._orig_mod.state_dict() else: state_dict = unwrapped_model.state_dict() if self.accelerator.is_main_process: llm_model = self.llm.llm_engine.model_executor.driver_worker.model_runner.model llm_model.load_weights(state_dict.items()) self._last_loaded_step = self.state.global_step # Generate completions using vLLM: gather all prompts and use them in a single call in the main process all_prompts_text = gather_object(prompts_text) if self.accelerator.is_main_process: outputs = self.llm.generate(all_prompts_text, sampling_params=self.sampling_params, use_tqdm=False) completion_ids = [out.token_ids for completions in outputs for out in completions.outputs] else: completion_ids = [None] * len(all_prompts_text) # Broadcast the completions from the main process to all processes, ensuring each process receives its # corresponding slice. 
completion_ids = broadcast_object_list(completion_ids, from_process=0) process_slice = slice( self.accelerator.process_index * len(prompts), (self.accelerator.process_index + 1) * len(prompts), ) completion_ids = completion_ids[process_slice] # Pad the completions, and concatenate them with the prompts completion_ids = [torch.tensor(ids, device=device) for ids in completion_ids] completion_ids = pad(completion_ids, padding_value=self.processing_class.pad_token_id) prompt_completion_ids = torch.cat([prompt_ids, completion_ids], dim=1) else: # Regular generation path with unwrap_model_for_generation(self.model, self.accelerator) as unwrapped_model: prompt_completion_ids = unwrapped_model.generate( prompt_ids, attention_mask=prompt_mask, generation_config=self.generation_config ) # Compute prompt length and extract completion ids prompt_length = prompt_ids.size(1) prompt_ids = prompt_completion_ids[:, :prompt_length] completion_ids = prompt_completion_ids[:, prompt_length:] # Mask everything after the first EOS token is_eos = completion_ids == self.processing_class.eos_token_id eos_idx = torch.full((is_eos.size(0),), is_eos.size(1), dtype=torch.long, device=device) eos_idx[is_eos.any(dim=1)] = is_eos.int().argmax(dim=1)[is_eos.any(dim=1)] sequence_indices = torch.arange(is_eos.size(1), device=device).expand(is_eos.size(0), -1) completion_mask = (sequence_indices <= eos_idx.unsqueeze(1)).int() # Concatenate prompt_mask with completion_mask for logit computation attention_mask = torch.cat([prompt_mask, completion_mask], dim=1) # (B*G, P+C) logits_to_keep = completion_ids.size(1) # we only need to compute the logits for the completion tokens with torch.inference_mode(): if self.ref_model is not None: ref_per_token_logps = self._get_per_token_logps( self.ref_model, prompt_completion_ids, attention_mask, logits_to_keep ) else: with self.accelerator.unwrap_model(self.model).disable_adapter(): ref_per_token_logps = self._get_per_token_logps( self.model, prompt_completion_ids, attention_mask, logits_to_keep ) # Decode the generated completions completions_text = self.processing_class.batch_decode(completion_ids, skip_special_tokens=True) if is_conversational(inputs[0]): completions = [[{"role": "assistant", "content": completion}] for completion in completions_text] else: completions = completions_text rewards_per_func = torch.zeros(len(prompts), len(self.reward_funcs), device=device) for i, (reward_func, reward_processing_class) in enumerate( zip(self.reward_funcs, self.reward_processing_classes) ): if isinstance(reward_func, nn.Module): # Module instead of PretrainedModel for compat with compiled models if is_conversational(inputs[0]): messages = [{"messages": p + c} for p, c in zip(prompts, completions)] texts = [apply_chat_template(x, reward_processing_class)["text"] for x in messages] else: texts = [p + c for p, c in zip(prompts, completions)] reward_inputs = reward_processing_class( texts, return_tensors="pt", padding=True, padding_side="right", add_special_tokens=False ) reward_inputs = super()._prepare_inputs(reward_inputs) with torch.inference_mode(): rewards_per_func[:, i] = reward_func(**reward_inputs).logits[:, 0] # Shape (B*G,) else: # Repeat all input columns (but "prompt" and "completion") to match the number of generations keys = [key for key in inputs[0] if key not in ["prompt", "completion"]] reward_kwargs = {key: [example[key] for example in inputs] for key in keys} output_reward_func = reward_func(prompts=prompts, completions=completions, **reward_kwargs) rewards_per_func[:, i] = 
torch.tensor(output_reward_func, dtype=torch.float32, device=device) # Gather the reward per function: this part is crucial, because the rewards are normalized per group and the # completions may be distributed across processes rewards_per_func = gather(rewards_per_func) # Sum the rewards from all reward functions rewards = rewards_per_func.sum(dim=1) # Compute grouped-wise rewards mean_grouped_rewards = rewards.view(-1, self.num_generations).mean(dim=1) std_grouped_rewards = rewards.view(-1, self.num_generations).std(dim=1) # Normalize the rewards to compute the advantages mean_grouped_rewards = mean_grouped_rewards.repeat_interleave(self.num_generations, dim=0) std_grouped_rewards = std_grouped_rewards.repeat_interleave(self.num_generations, dim=0) advantages = (rewards - mean_grouped_rewards) / (std_grouped_rewards + 1e-4) # Slice to keep only the local part of the data process_slice = slice( self.accelerator.process_index * len(prompts), (self.accelerator.process_index + 1) * len(prompts), ) advantages = advantages[process_slice] # Log the metrics reward_per_func = rewards_per_func.mean(0) for i, reward_func in enumerate(self.reward_funcs): if isinstance(reward_func, nn.Module): # Module instead of PretrainedModel for compat with compiled models reward_func_name = reward_func.config._name_or_path.split("/")[-1] else: reward_func_name = reward_func.__name__ self._metrics[f"rewards/{reward_func_name}"].append(reward_per_func[i].item()) self._metrics["reward"].append(rewards.mean().item()) self._metrics["reward_std"].append(std_grouped_rewards.mean().item()) if ( self.log_completions and self.state.global_step % self.args.logging_steps == 0 and "wandb" in self.args.report_to ): import pandas as pd # For logging table = { "step": [str(self.state.global_step)] * len(rewards), "prompt": gather_object(prompts_text), "completion": gather_object(completions_text), "reward": rewards.tolist(), } df = pd.DataFrame(table) if wandb.run is not None and self.accelerator.is_main_process: wandb.log({"completions": wandb.Table(dataframe=df)}) return { "prompt_ids": prompt_ids, "prompt_mask": prompt_mask, "completion_ids": completion_ids, "completion_mask": completion_mask, "ref_per_token_logps": ref_per_token_logps, "advantages": advantages, } def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None): if return_outputs: raise ValueError("The GRPOTrainer does not support returning outputs") # Compute the per-token log probabilities for the model prompt_ids, prompt_mask = inputs["prompt_ids"], inputs["prompt_mask"] completion_ids, completion_mask = inputs["completion_ids"], inputs["completion_mask"] input_ids = torch.cat([prompt_ids, completion_ids], dim=1) attention_mask = torch.cat([prompt_mask, completion_mask], dim=1) logits_to_keep = completion_ids.size(1) # we only need to compute the logits for the completion tokens per_token_logps = self._get_per_token_logps(model, input_ids, attention_mask, logits_to_keep) # Compute the KL divergence between the model and the reference model ref_per_token_logps = inputs["ref_per_token_logps"] per_token_kl = torch.exp(ref_per_token_logps - per_token_logps) - (ref_per_token_logps - per_token_logps) - 1 # x - x.detach() allows for preserving gradients from x advantages = inputs["advantages"] per_token_loss = torch.exp(per_token_logps - per_token_logps.detach()) * advantages.unsqueeze(1) per_token_loss = -(per_token_loss - self.beta * per_token_kl) loss = ((per_token_loss * completion_mask).sum(dim=1) / completion_mask.sum(dim=1)).mean() # 
Log the metrics completion_length = self.accelerator.gather_for_metrics(completion_mask.sum(1)).float().mean().item() self._metrics["completion_length"].append(completion_length) mean_kl = ((per_token_kl * completion_mask).sum(dim=1) / completion_mask.sum(dim=1)).mean() self._metrics["kl"].append(self.accelerator.gather_for_metrics(mean_kl).mean().item()) return loss def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys: Optional[list[str]] = None): inputs = self._prepare_inputs(inputs) with torch.no_grad(): with self.compute_loss_context_manager(): loss = self.compute_loss(model, inputs) loss = loss.mean().detach() return loss, None, None def log(self, logs: dict[str, float], start_time: Optional[float] = None) -> None: metrics = {key: sum(val) / len(val) for key, val in self._metrics.items()} # average the metrics # This method can be called both in training and evaluation. When called in evaluation, the keys in `logs` # start with "eval_". We need to add the prefix "eval_" to the keys in `metrics` to match the format. if next(iter(logs.keys())).startswith("eval_"): metrics = {f"eval_{key}": val for key, val in metrics.items()} logs = {**logs, **metrics} if version.parse(transformers.__version__) >= version.parse("4.47.0.dev0"): super().log(logs, start_time) else: # transformers<=4.46 super().log(logs) self._metrics.clear() def create_model_card( self, model_name: Optional[str] = None, dataset_name: Optional[str] = None, tags: Union[str, list[str], None] = None, ): """ Creates a draft of a model card using the information available to the `Trainer`. Args: model_name (`str` or `None`, *optional*, defaults to `None`): Name of the model. dataset_name (`str` or `None`, *optional*, defaults to `None`): Name of the dataset used for training. tags (`str`, `list[str]` or `None`, *optional*, defaults to `None`): Tags to be associated with the model card. """ if not self.is_world_process_zero(): return if hasattr(self.model.config, "_name_or_path") and not os.path.isdir(self.model.config._name_or_path): base_model = self.model.config._name_or_path else: base_model = None tags = tags or [] if isinstance(tags, str): tags = [tags] if hasattr(self.model.config, "unsloth_version"): tags.append("unsloth") citation = textwrap.dedent( """\ @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } """ ) model_card = generate_model_card( base_model=base_model, model_name=model_name, hub_model_id=self.hub_model_id, dataset_name=dataset_name, tags=tags, wandb_url=wandb.run.get_url() if is_wandb_available() and wandb.run is not None else None, comet_url=get_comet_experiment_url(), trainer_name="GRPO", trainer_citation=citation, paper_title="DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models", paper_id="2402.03300", ) model_card.save(os.path.join(self.args.output_dir, "README.md"))
trl/trl/trainer/grpo_trainer.py/0
{ "file_path": "trl/trl/trainer/grpo_trainer.py", "repo_id": "trl", "token_count": 16534 }
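One path through the trainer above that deserves a concrete illustration is the custom reward function: `_prepare_inputs` calls every non-model reward function as `reward_func(prompts=..., completions=..., **extra_columns)` and expects one scalar per completion. The sketch below follows that contract; the model ID, the dataset, and the brevity heuristic itself are illustrative choices, not recommendations.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

def brevity_reward(prompts, completions, **kwargs):
    # One reward per completion; any extra dataset columns arrive as keyword arguments.
    # Here we simply prefer shorter completions.
    return [-float(len(completion)) for completion in completions]

training_args = GRPOConfig(
    output_dir="Qwen2-0.5B-GRPO",
    num_generations=4,  # must evenly divide the global train batch size, as checked in __init__
    report_to="none",
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=brevity_reward,  # a plain callable is a valid RewardFunc
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```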
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass, field from typing import Optional from transformers import TrainingArguments @dataclass class RewardConfig(TrainingArguments): r""" Configuration class for the [`RewardTrainer`]. Using [`~transformers.HfArgumentParser`] we can turn this class into [argparse](https://docs.python.org/3/library/argparse#module-argparse) arguments that can be specified on the command line. Parameters: max_length (`int` or `None`, *optional*, defaults to `1024`): Maximum length of the sequences (prompt + completion) in the batch, filters out entries that exceed the limit. This argument is required if you want to use the default data collator. disable_dropout (`bool`, *optional*, defaults to `True`): Whether to disable dropout in the model. dataset_num_proc (`int`, *optional*, defaults to `None`): Number of processes to use for processing the dataset. center_rewards_coefficient (`float`, *optional*, defaults to `None`): Coefficient to incentivize the reward model to output mean-zero rewards (proposed by https://huggingface.co/papers/2312.09244, Eq. 2). Recommended value: `0.01`. remove_unused_columns (`bool`, *optional*, defaults to `False`): Whether to remove the columns that are not used by the model's forward pass. Can be `True` only if the dataset is pretokenized. """ max_length: Optional[int] = field( default=1024, metadata={ "help": "Maximum length of the sequences (prompt + completion) in the batch, filters out entries that " "exceed the limit. This argument is required if you want to use the default data collator." }, ) disable_dropout: bool = field( default=True, metadata={"help": "Whether to disable dropout in the model and reference model."}, ) dataset_num_proc: Optional[int] = field( default=None, metadata={"help": "Number of processes to use for processing the dataset."}, ) center_rewards_coefficient: Optional[float] = field( default=None, metadata={ "help": "Coefficient to incentivize the reward model to output mean-zero rewards (proposed by " "https://huggingface.co/papers/2312.09244, Eq. 2). Recommended value: `0.01`." }, ) remove_unused_columns: bool = field( default=False, metadata={ "help": "Whether to remove the columns that are not used by the model's forward pass. Can be `True` only " "if the dataset is pretokenized." }, )
trl/trl/trainer/reward_config.py/0
{ "file_path": "trl/trl/trainer/reward_config.py", "repo_id": "trl", "token_count": 1144 }
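For context on how these options are consumed, here is a rough sketch pairing `RewardConfig` with TRL's `RewardTrainer` on a pairwise preference dataset. Everything outside `RewardConfig` itself is an assumption: the `RewardTrainer` call signature (including its `processing_class` argument), the `trl-lib/ultrafeedback_binarized` dataset, and the backbone are placeholders meant only to show where `max_length` and `center_rewards_coefficient` plug in.

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

model_id = "Qwen/Qwen2-0.5B-Instruct"  # placeholder backbone
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.config.pad_token_id = tokenizer.pad_token_id  # reward models score the last non-padded token

# Assumed preference dataset with "chosen"/"rejected" pairs.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = RewardConfig(
    output_dir="reward-model",
    max_length=1024,                  # filter out pairs longer than this
    center_rewards_coefficient=0.01,  # recommended value per the docstring above
    report_to="none",
)
trainer = RewardTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=dataset,
)
trainer.train()
```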
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <br> <img src="https://raw.githubusercontent.com/huggingface/accelerate/main/docs/source/imgs/accelerate_logo.png" width="400"/> <br> <p> <p align="center"> <!-- Uncomment when CircleCI is set up <a href="https://circleci.com/gh/huggingface/accelerate"><img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"></a> --> <a href="https://github.com/huggingface/accelerate/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/huggingface/accelerate.svg?color=blue"></a> <a href="https://huggingface.co/docs/accelerate/index.html"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/accelerate/index.html.svg?down_color=red&down_message=offline&up_message=online"></a> <a href="https://github.com/huggingface/accelerate/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/accelerate.svg"></a> <a href="https://github.com/huggingface/accelerate/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a> </p> <h3 align="center"> <p>Run your *raw* PyTorch training script on any kind of device </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://raw.githubusercontent.com/huggingface/accelerate/main/docs/source/imgs/course_banner.png"></a> </h3> ## Easy to integrate 🤗 Accelerate was created for PyTorch users who like to write the training loop of PyTorch models but are reluctant to write and maintain the boilerplate code needed to use multi-GPUs/TPU/fp16. 🤗 Accelerate abstracts exactly and only the boilerplate code related to multi-GPUs/TPU/fp16 and leaves the rest of your code unchanged. Here is an example: ```diff import torch import torch.nn.functional as F from datasets import load_dataset + from accelerate import Accelerator + accelerator = Accelerator() - device = 'cpu' + device = accelerator.device model = torch.nn.Transformer().to(device) optimizer = torch.optim.Adam(model.parameters()) dataset = load_dataset('my_dataset') data = torch.utils.data.DataLoader(dataset, shuffle=True) + model, optimizer, data = accelerator.prepare(model, optimizer, data) model.train() for epoch in range(10): for source, targets in data: source = source.to(device) targets = targets.to(device) optimizer.zero_grad() output = model(source) loss = F.cross_entropy(output, targets) - loss.backward() + accelerator.backward(loss) optimizer.step() ``` As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp8, fp16, bf16). In particular, the same code can then be run without modification on your local machine for debugging or your training environment. 
🤗 Accelerate even handles the device placement for you (which requires a few more changes to your code, but is safer in general), so you can even simplify your training loop further: ```diff import torch import torch.nn.functional as F from datasets import load_dataset + from accelerate import Accelerator - device = 'cpu' + accelerator = Accelerator() - model = torch.nn.Transformer().to(device) + model = torch.nn.Transformer() optimizer = torch.optim.Adam(model.parameters()) dataset = load_dataset('my_dataset') data = torch.utils.data.DataLoader(dataset, shuffle=True) + model, optimizer, data = accelerator.prepare(model, optimizer, data) model.train() for epoch in range(10): for source, targets in data: - source = source.to(device) - targets = targets.to(device) optimizer.zero_grad() output = model(source) loss = F.cross_entropy(output, targets) - loss.backward() + accelerator.backward(loss) optimizer.step() ``` Want to learn more? Check out the [documentation](https://huggingface.co/docs/accelerate) or have a look at our [examples](https://github.com/huggingface/accelerate/tree/main/examples). ## Launching script 🤗 Accelerate also provides an optional CLI tool that allows you to quickly configure and test your training environment before launching the scripts. No need to remember how to use `torch.distributed.run` or to write a specific launcher for TPU training! On your machine(s) just run: ```bash accelerate config ``` and answer the questions asked. This will generate a config file that will be used automatically to properly set the default options when doing ```bash accelerate launch my_script.py --args_to_my_script ``` For instance, here is how you would run the GLUE example on the MRPC task (from the root of the repo): ```bash accelerate launch examples/nlp_example.py ``` This CLI tool is **optional**, and you can still use `python my_script.py` or `python -m torchrun my_script.py` at your convenience. You can also directly pass in the arguments you would to `torchrun` as arguments to `accelerate launch` if you wish to not run` accelerate config`. For example, here is how to launch on two GPUs: ```bash accelerate launch --multi_gpu --num_processes 2 examples/nlp_example.py ``` To learn more, check the CLI documentation available [here](https://huggingface.co/docs/accelerate/package_reference/cli). Or view the configuration zoo [here](https://github.com/huggingface/accelerate/blob/main/examples/config_yaml_templates/) ## Launching multi-CPU run using MPI 🤗 Here is another way to launch multi-CPU run using MPI. You can learn how to install Open MPI on [this page](https://www.open-mpi.org/faq/?category=building#easy-build). You can use Intel MPI or MVAPICH as well. Once you have MPI setup on your cluster, just run: ```bash accelerate config ``` Answer the questions that are asked, selecting to run using multi-CPU, and answer "yes" when asked if you want accelerate to launch mpirun. Then, use `accelerate launch` with your script like: ```bash accelerate launch examples/nlp_example.py ``` Alternatively, you can use mpirun directly, without using the CLI like: ```bash mpirun -np 2 python examples/nlp_example.py ``` ## Launching training using DeepSpeed 🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed related args from your Python script, we provide you the `DeepSpeedPlugin`. 
```python from accelerate import Accelerator, DeepSpeedPlugin # deepspeed needs to know your gradient accumulation steps beforehand, so don't forget to pass it # Remember you still need to do gradient accumulation by yourself, just like you would have done without deepspeed deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2) accelerator = Accelerator(mixed_precision='fp16', deepspeed_plugin=deepspeed_plugin) # How to save your 🤗 Transformer? accelerator.wait_for_everyone() unwrapped_model = accelerator.unwrap_model(model) unwrapped_model.save_pretrained(save_dir, save_function=accelerator.save, state_dict=accelerator.get_state_dict(model)) ``` Note: DeepSpeed support is experimental for now. In case you get into some problem, please open an issue. ## Launching your training from a notebook 🤗 Accelerate also provides a `notebook_launcher` function you can use in a notebook to launch a distributed training. This is especially useful for Colab or Kaggle notebooks with a TPU backend. Just define your training loop in a `training_function` then in your last cell, add: ```python from accelerate import notebook_launcher notebook_launcher(training_function) ``` An example can be found in [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb) ## Why should I use 🤗 Accelerate? You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library. In fact, the whole API of 🤗 Accelerate is in one class, the `Accelerator` object. ## Why shouldn't I use 🤗 Accelerate? You shouldn't use 🤗 Accelerate if you don't want to write a training loop yourself. There are plenty of high-level libraries above PyTorch that will offer you that, 🤗 Accelerate is not one of them. ## Frameworks using 🤗 Accelerate If you like the simplicity of 🤗 Accelerate but would prefer a higher-level abstraction around its capabilities, some frameworks and libraries that are built on top of 🤗 Accelerate are listed below: * [Amphion](https://github.com/open-mmlab/Amphion) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development. * [Animus](https://github.com/Scitator/animus) is a minimalistic framework to run machine learning experiments. Animus highlights common "breakpoints" in ML experiments and provides a unified interface for them within [IExperiment](https://github.com/Scitator/animus/blob/main/animus/core.py#L76). * [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model training, and inference logic. 
* [fastai](https://github.com/fastai/fastai#installing) is a PyTorch framework for Deep Learning that simplifies training fast and accurate neural nets using modern best practices. fastai provides a [Learner](https://docs.fast.ai/learner.html#Learner) to handle the training, fine-tuning, and inference of deep learning algorithms. * [Finetuner](https://github.com/jina-ai/finetuner) is a service that enables models to create higher-quality embeddings for semantic search, visual similarity search, cross-modal text<->image search, recommendation systems, clustering, duplication detection, anomaly detection, or other uses. * [InvokeAI](https://github.com/invoke-ai/InvokeAI) is a creative engine for Stable Diffusion models, offering industry-leading WebUI, terminal usage support, and serves as the foundation for many commercial products. * [Kornia](https://kornia.readthedocs.io/en/latest/get-started/introduction.html) is a differentiable library that allows classical computer vision to be integrated into deep learning models. Kornia provides a [Trainer](https://kornia.readthedocs.io/en/latest/x.html#kornia.x.Trainer) with the specific purpose to train and fine-tune the supported deep learning algorithms within the library. * [Open Assistant](https://projects.laion.ai/Open-Assistant/) is a chat-based assistant that understands tasks, can interact with their party systems, and retrieve information dynamically to do so. * [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centered around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves! * [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is an open-source browser-based easy-to-use interface based on the Gradio library for Stable Diffusion. * [torchkeras](https://github.com/lyhue1991/torchkeras) is a simple tool for training pytorch model just in a keras style, a dynamic and beautiful plot is provided in notebook to monitor your loss or metric. * [transformers](https://github.com/huggingface/transformers) as a tool for helping train state-of-the-art machine learning models in PyTorch, Tensorflow, and JAX. (Accelerate is the backend for the PyTorch side). ## Installation This repository is tested on Python 3.8+ and PyTorch 1.10.0+ You should install 🤗 Accelerate in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). First, create a virtual environment with the version of Python you're going to use and activate it. Then, you will need to install PyTorch: refer to the [official installation page](https://pytorch.org/get-started/locally/#start-locally) regarding the specific install command for your platform. 
Then 🤗 Accelerate can be installed using pip as follows: ```bash pip install accelerate ``` ## Supported integrations - CPU only - multi-CPU on one node (machine) - multi-CPU on several nodes (machines) - single GPU - multi-GPU on one node (machine) - multi-GPU on several nodes (machines) - TPU - FP16/BFloat16 mixed precision - FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine) or [MS-AMP](https://github.com/Azure/MS-AMP/) - DeepSpeed support (Experimental) - PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental) - Megatron-LM support (Experimental) ## Citing 🤗 Accelerate If you use 🤗 Accelerate in your publication, please cite it by using the following BibTeX entry. ```bibtex @Misc{accelerate, title = {Accelerate: Training and inference at scale made simple, efficient and adaptable.}, author = {Sylvain Gugger and Lysandre Debut and Thomas Wolf and Philipp Schmid and Zachary Mueller and Sourab Mangrulkar and Marc Sun and Benjamin Bossan}, howpublished = {\url{https://github.com/huggingface/accelerate}}, year = {2022} } ```
accelerate/README.md/0
{ "file_path": "accelerate/README.md", "repo_id": "accelerate", "token_count": 4483 }
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Loading big models into memory When loading a pre-trained model in PyTorch, the usual workflow looks like this: ```py import torch my_model = ModelClass(...) state_dict = torch.load(checkpoint_file) my_model.load_state_dict(state_dict) ``` In plain English, those steps are: 1. Create the model with randomly initialized weights 2. Load the model weights (in a dictionary usually called a state dict) from the disk 3. Load those weights inside the model While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pre-trained weights. If you're loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16). <Tip warning={true}> This API is quite new and still in its experimental stage. While we strive to provide a stable API, it's possible some small parts of the public API will change in the future. </Tip> ## How the Process Works: A Quick Overview <Youtube id="MWCSGj9jEAo" /> ## How the Process Works: Working with Code ### Instantiating an empty model The first tool Accelerate introduces to help with big models is a context manager [`init_empty_weights`] that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works: ```py from accelerate import init_empty_weights with init_empty_weights(): my_model = ModelClass(...) ``` For instance: ```py with init_empty_weights(): model = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)]) ``` initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved to that device. <Tip warning={true}> You can't move a model initialized like this on CPU or another device directly, since it doesn't have any data. It's also very likely that a forward pass with that empty model will fail, as not all operations are supported on the meta device. </Tip> ### Sharded checkpoints It's possible your model is so big that even a single copy won't fit in RAM. That doesn't mean it can't be loaded: if you have one or several GPUs, this is more memory available to store your model. In this case, it's better if your checkpoint is split into several smaller files that we call checkpoint shards. 
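For instance, a minimal sketch of producing such a sharded checkpoint with the [`~Accelerator.save_model`] helper (the folder name and shard size below are arbitrary choices for illustration):

```python
import torch

from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Sequential(torch.nn.Linear(1000, 1000), torch.nn.Linear(1000, 1000))

# Writes the weights as shards of at most ~5MB each, plus an index file that maps
# every parameter name to the shard containing it.
accelerator.save_model(model, "sharded_checkpoint", max_shard_size="5MB")
```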
Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with [`~Accelerator.save_model`]. For instance, we could have a folder containing: ```bash first_state_dict.bin index.json second_state_dict.bin ``` with index.json being the following file: ``` { "linear1.weight": "first_state_dict.bin", "linear1.bias": "first_state_dict.bin", "linear2.weight": "second_state_dict.bin", "linear2.bias": "second_state_dict.bin" } ``` and `first_state_dict.bin` containing the weights for `"linear1.weight"` and `"linear1.bias"`, `second_state_dict.bin` the ones for `"linear2.weight"` and `"linear2.bias"` ### Loading weights The second tool Accelerate introduces is a function [`load_checkpoint_and_dispatch`], that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard. If you want to use big model inference with Transformers models, check out this [documentation](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading). Here is how we can use this to load the [GPT2-1.5B](https://huggingface.co/marcsun13/gpt2-xl-linear-sharded) model. Let's download the sharded version of this model. ```bash pip install huggingface_hub ``` ```py from huggingface_hub import snapshot_download checkpoint = "marcsun13/gpt2-xl-linear-sharded" weights_location = snapshot_download(repo_id=checkpoint) ``` In order to initialize the model, we will use the library minGPT. ```bash git clone https://github.com/karpathy/minGPT.git pip install minGPT/ ``` ```py from accelerate import init_empty_weights from mingpt.model import GPT model_config = GPT.get_default_config() model_config.model_type = 'gpt2-xl' model_config.vocab_size = 50257 model_config.block_size = 1024 with init_empty_weights(): model = GPT(model_config) ``` Then, load the checkpoint we just downloaded with: ```py from accelerate import load_checkpoint_and_dispatch model = load_checkpoint_and_dispatch( model, checkpoint=weights_location, device_map="auto", no_split_module_classes=['Block'] ) ``` By passing `device_map="auto"`, we tell Accelerate to determine automatically where to put each layer of the model depending on the available resources: - first, we use the maximum space available on the GPU(s) - if we still need space, we store the remaining weights on the CPU - if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors #### `no_split_module_classes` This parameter will indicate that some of the modules with the name `"Block"` should not be split across different devices. You should set here all blocks that include a residual connection of some kind. #### The `device_map` You can see the `device_map` that Accelerate picked by accessing the `hf_device_map` attribute of your model: ```py model.hf_device_map ``` ```python out {'transformer.wte': 0, 'transformer.wpe': 0, 'transformer.drop': 0, 'transformer.h.0': 0, ... 
'transformer.h.21': 0,
 'transformer.h.22': 1,
 'transformer.h.23': 1,
 'transformer.h.24': 1,
 ...
 'transformer.h.47': 1,
 'transformer.ln_f': 1,
 'lm_head': 1}
```

It's fully possible to create your own device map for the layers to use as well, specifying the GPU device to use (a number), `"cpu"`, or `"disk"` and pass this in:

```python
device_map = {
    "transformer.wte": "cpu",
    "transformer.wpe": 0,
    "transformer.drop": "cpu",
    "transformer.h.0": "disk"
}

model = load_checkpoint_and_dispatch(
    model, checkpoint=weights_location, device_map=device_map
)
```

### Run the model

Now that we have done this, our model lies across several devices, and maybe the hard drive. But it can still be used as a regular PyTorch model:

```py
from mingpt.bpe import BPETokenizer
tokenizer = BPETokenizer()
inputs = tokenizer("Hello, my name is").to(0)

outputs = model.generate(inputs, max_new_tokens=10, do_sample=False)[0]
tokenizer.decode(outputs.cpu().squeeze())
```

Behind the scenes, Accelerate added hooks to the model, so that:
- at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works)
- for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after
- for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after

This way, your model can run for inference even if it doesn't fit on one of the GPUs or in CPU RAM!

<Tip warning={true}>

This only supports the inference of your model, not training. Most of the computation happens behind `torch.no_grad()` context managers to avoid spending GPU memory on intermediate activations.

</Tip>

### Designing a device map

You can let Accelerate handle the device map computation by setting `device_map` to one of the supported options (`"auto"`, `"balanced"`, `"balanced_low_0"`, `"sequential"`) or create one yourself if you want more control over where each layer should go.

<Tip>

You can derive all sizes of the model (and thus compute a `device_map`) on a model that is on the meta device.

</Tip>

All the options will produce the same result when you don't have enough GPU memory to accommodate the whole model (which is to fit everything that can on the GPU, then offload weights to the CPU or even to the disk if there is not enough RAM).

When you have more GPU memory available than the model size, here is the difference between each option:
- `"auto"` and `"balanced"` evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1.
- `"balanced_low_0"` evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the `generate` function for Transformers models.
- `"sequential"` will fit what it can on GPU 0, then move on to GPU 1 and so forth (so it won't use the last GPUs if it doesn't need to).

<Tip>

The options `"auto"` and `"balanced"` produce the same results for now, but the behavior of `"auto"` might change in the future if we find a strategy that makes more sense, while `"balanced"` will stay stable.

</Tip>

First note that you can limit the memory used on each GPU by using the `max_memory` argument (available in [`infer_auto_device_map`] and in all functions using it). When setting `max_memory`, you should pass along a dictionary containing the GPU identifiers (for instance `0`, `1` etc.)
and the `"cpu"` key for the maximum RAM you want to use for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as `"10GiB"` or `"10GB"`. Here is an example where we don't want to use more than 10GiB on each of the two GPUs and no more than 30GiB of CPU RAM for the model weights: ```python from accelerate import infer_auto_device_map device_map = infer_auto_device_map(my_model, max_memory={0: "10GiB", 1: "10GiB", "cpu": "30GiB"}) ``` <Tip warning={true}> When a first allocation happens in PyTorch, it loads CUDA kernels which take about 1-2GB of memory depending on the GPU. Therefore you always have less usable memory than the actual size of the GPU. To see how much memory is actually used do `torch.ones(1).cuda()` and look at the memory usage. Therefore when you create memory maps with `max_memory` make sure to adjust the available memory accordingly to avoid out-of-memory errors. </Tip> Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the `generate` method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (Accelerate always place the output back to the device of the input). Therefore if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on 8x80 A100 setup, the close-to-ideal map is: ```python max_memory = {0: "30GIB", 1: "46GIB", 2: "46GIB", 3: "46GIB", 4: "46GIB", 5: "46GIB", 6: "46GIB", 7: "46GIB"} ``` as you can see we gave the remaining 7 GPUs ~50% more memory than GPU 0. If you opt to fully design the `device_map` yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs) or `"cpu"` for CPU offload, `"disk"` for disk offload. The keys need to cover the whole model, you can then define your device map as you wish: for instance, if your model has two blocks (let's say `block1` and `block2`) which each contain three linear layers (let's say `linear1`, `linear2` and `linear3`), a valid device map can be: ```python device_map = {"block1": 0, "block2": 1} ``` another one that is valid could be: ```python device_map = {"block1": 0, "block2.linear1": 0, "block2.linear2": 1, "block2.linear3": 1} ``` On the other hand, this one is not valid as it does not cover every parameter of the model: ```python device_map = {"block1": 0, "block2.linear1": 1, "block2.linear2": 1} ``` <Tip> To be the most efficient, make sure your device map puts the parameters on the GPUs in a sequential manner (e.g. don't put one of the first weights on GPU 0, then weights on GPU 1 and the last weight back to GPU 0) to avoid making many transfers of data between the GPUs. </Tip> ## CPU offload only If you want to offload your model on CPU, you can use [`cpu_offload`]. As a result, all parameters of the model will be offloaded and only one copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that state dict and put on the execution device and passed as they are needed, then offloaded again. ```python cpu_offload(model, execution_device) ``` You can also use [`cpu_offload_with_hook`]. This function will offloads a model on the CPU and puts it back to an execution device when executed. 
The difference with [`cpu_offload`] is that the model stays on the execution device after the forward pass and is only offloaded again when the `offload` method of the returned `hook` is called. As a result, [`cpu_offload_with_hook`] is more performant but saves less memory. It is useful for pipelines running a model in a loop:

```python
model_1, hook_1 = cpu_offload_with_hook(model_1, execution_device)
model_2, hook_2 = cpu_offload_with_hook(model_2, execution_device, prev_module_hook=hook_1)
model_3, hook_3 = cpu_offload_with_hook(model_3, execution_device, prev_module_hook=hook_2)

hid_1 = model_1(input)
for i in range(50):
    # model_1 is offloaded to the CPU at the first iteration, model_2 stays on the GPU for this whole loop.
    hid_2 = model_2(hid_1)
    # model_2 is offloaded to the CPU just before this forward.
    hid_3 = model_3(hid_2)
    # For model_3, you need to manually call the hook offload method.
    hook_3.offload()
```

## Disk offload only

To perform disk offload, you can use [`disk_offload`]. As a result, all parameters of the model will be offloaded as memory-mapped arrays in a given folder. During the forward pass, parameters will be accessed from that folder and put on the execution device as they are needed, then offloaded again.

```python
disk_offload(model, offload_dir, execution_device)
```

## Limits and further development

We are aware of the current limitations in the API:

- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) tries to maximize the GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), the same is not entirely true of Python and CPU RAM. Therefore, an automatically computed device map might be too aggressive on CPU RAM. Move a few modules to the disk device if you get crashes due to a lack of RAM.
- [`infer_auto_device_map`] (or `device_map="auto"` in [`load_checkpoint_and_dispatch`]) assigns devices sequentially (to avoid moving things back and forth), so if your first layer is bigger than the GPU you have, everything will end up on the CPU/disk.
- [`load_checkpoint_and_dispatch`] and [`load_checkpoint_in_model`] do not perform any check on the correctness of your state dict compared to your model at the moment (this will be fixed in a future version), so you may get some weird errors if trying to load a checkpoint with mismatched or missing keys.
- The model parallelism used when your model is split on several GPUs is naive and not optimized, meaning that only one GPU works at a given time while the others sit idle.
- When weights are offloaded to the CPU/hard drive, there is no pre-fetching (yet, we will work on this in future versions), which means the weights are put on the GPU when they are needed and not before.
- Hard-drive offloading might be very slow if the hardware you run on does not have fast communication between disk and CPU (for instance, if you are not using an NVMe drive).
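To make the advice above concrete, here is a hedged end-to-end sketch that combines an explicit memory budget with disk offload. It reuses `weights_location` and the minGPT model from the example earlier in this guide; the memory limits and folder name are arbitrary illustrative values:

```python
from accelerate import infer_auto_device_map, init_empty_weights, load_checkpoint_and_dispatch
from mingpt.model import GPT

model_config = GPT.get_default_config()
model_config.model_type = 'gpt2-xl'
model_config.vocab_size = 50257
model_config.block_size = 1024

with init_empty_weights():
    model = GPT(model_config)

# Cap GPU 0 at 8GiB and CPU RAM at 16GiB; whatever doesn't fit spills to disk.
device_map = infer_auto_device_map(
    model,
    max_memory={0: "8GiB", "cpu": "16GiB"},
    no_split_module_classes=['Block'],
)

model = load_checkpoint_and_dispatch(
    model,
    checkpoint=weights_location,  # the folder downloaded earlier with `snapshot_download`
    device_map=device_map,
    offload_folder="offload",  # where disk-offloaded weights are memory-mapped
)
```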
accelerate/docs/source/concept_guides/big_model_inference.md/0
{ "file_path": "accelerate/docs/source/concept_guides/big_model_inference.md", "repo_id": "accelerate", "token_count": 4809 }
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # FP8 Below are functions and classes relative to the underlying FP8 implementation ## FP8RecipeKwargs [[autodoc]] utils.FP8RecipeKwargs ## convert_model [[autodoc]] utils.convert_model ## has_transformer_engine_layers [[autodoc]] utils.has_transformer_engine_layers ## contextual_fp8_autocast [[autodoc]] utils.contextual_fp8_autocast ## apply_fp8_autowrap [[autodoc]] utils.apply_fp8_autowrap
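In practice, these utilities are usually exercised indirectly through the [`Accelerator`]. A minimal, hedged usage sketch (it assumes FP8-capable hardware and an installed backend such as Transformer Engine):

```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Pick the Transformer Engine backend; `backend="msamp"` would select MS-AMP instead.
fp8_kwargs = FP8RecipeKwargs(backend="te")
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[fp8_kwargs])

# Models passed through `accelerator.prepare(...)` will then run their supported
# layers in FP8 where the hardware allows it.
```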
accelerate/docs/source/package_reference/fp8.md/0
{ "file_path": "accelerate/docs/source/package_reference/fp8.md", "repo_id": "accelerate", "token_count": 337 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Using multiple models with DeepSpeed <Tip warning={true}> This guide assumes that you have read and understood the [DeepSpeed usage guide](./deepspeed.md). </Tip> Running multiple models with Accelerate and DeepSpeed is useful for: * Knowledge distillation * Post-training techniques like RLHF (see the [TRL](https://github.com/huggingface/trl) library for more examples) * Training multiple models at once Currently, Accelerate has a **very experimental API** to help you use multiple models. This tutorial will focus on two common use cases: 1. Knowledge distillation, where a smaller student model is trained to mimic a larger, better-performing teacher. If the student model fits on a single GPU, we can use ZeRO-2 for training and ZeRO-3 to shard the teacher for inference. This is significantly faster than using ZeRO-3 for both models. 2. Training multiple *disjoint* models at once. ## Knowledge distillation Knowledge distillation is a good example of using multiple models, but only training one of them. Normally, you would use a single [`utils.DeepSpeedPlugin`] for both models. However, in this case, there are two separate configurations. Accelerate allows you to create and use multiple plugins **if and only if** they are in a `dict` so that you can reference and enable the proper plugin when needed. ```python from accelerate.utils import DeepSpeedPlugin zero2_plugin = DeepSpeedPlugin(hf_ds_config="zero2_config.json") zero3_plugin = DeepSpeedPlugin(hf_ds_config="zero3_config.json") deepspeed_plugins = {"student": zero2_plugin, "teacher": zero3_plugin} ``` The `zero2_config.json` should be configured for full training (so specify `scheduler` and `optimizer` if you are not utilizing your own), while `zero3_config.json` should only be configured for the inference model, as shown in the example below. ```json { "bf16": { "enabled": "auto" }, "zero_optimization": { "stage": 3, "overlap_comm": true, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": "auto", "stage3_max_reuse_distance": "auto", }, "train_micro_batch_size_per_gpu": 1 } ``` An example `zero2_config.json` configuration is shown below. 
```json { "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "weight_decay": "auto", "torch_adam": true, "adam_w_mode": true } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, }, "gradient_accumulation_steps": 1, "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", } ``` <Tip> DeepSpeed will raise an error if `train_micro_batch_size_per_gpu` isn't specified, even if this particular model isn't being trained. </Tip> From here, create a single [`Accelerator`] and pass in both configurations. ```python from accelerate import Accelerator accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins) ``` Now let's see how to use them. ### Student model By default, Accelerate sets the first item in the `dict` as the default or enabled plugin (`"student"` plugin). Verify this by using the [`utils.deepspeed.get_active_deepspeed_plugin`] function to see which plugin is enabled. ```python active_plugin = get_active_deepspeed_plugin(accelerator.state) assert active_plugin is deepspeed_plugins["student"] ``` [`AcceleratorState`] also keeps the active DeepSpeed plugin saved in `state.deepspeed_plugin`. ```python assert active_plugin is accelerator.deepspeed_plugin ``` Since `student` is the currently active plugin, let's go ahead and prepare the model, optimizer, and scheduler. ```python student_model, optimizer, scheduler = ... student_model, optimizer, scheduler, train_dataloader = accelerator.prepare(student_model, optimizer, scheduler, train_dataloader) ``` Now it's time to deal with the teacher model. ### Teacher model First, you need to specify in [`Accelerator`] that the `zero3_config.json` configuration should be used. ```python accelerator.state.select_deepspeed_plugin("teacher") ``` This disables the `"student"` plugin and enables the `"teacher"` plugin instead. The DeepSpeed stateful config inside of Transformers is updated, and it changes which plugin configuration gets called when using `deepspeed.initialize()`. This allows you to use the automatic `deepspeed.zero.Init` context manager integration Transformers provides. ```python teacher_model = AutoModel.from_pretrained(...) teacher_model = accelerator.prepare(teacher_model) ``` Otherwise, you should manually initialize the model with `deepspeed.zero.Init`. ```python with deepspeed.zero.Init(accelerator.deepspeed_plugin.config): model = MyModel(...) ``` ### Training From here, your training loop can be whatever you like, as long as `teacher_model` is never being trained on. ```python teacher_model.eval() student_model.train() for batch in train_dataloader: with torch.no_grad(): output_teacher = teacher_model(**batch) output_student = student_model(**batch) # Combine the losses or modify it in some way loss = output_teacher.loss + output_student.loss accelerator.backward(loss) optimizer.step() scheduler.step() optimizer.zero_grad() ``` ## Train multiple disjoint models Training multiple models is a more complicated scenario. In its current state, we assume each model is **completely disjointed** from the other during training. This scenario still requires two [`utils.DeepSpeedPlugin`]'s to be made. However, you also need a second [`Accelerator`], since different `deepspeed` engines are being called at different times. A single [`Accelerator`] can only carry one instance at a time. 
Since the [`state.AcceleratorState`] is a stateful object though, it is already aware of both available [`utils.DeepSpeedPlugin`]s. You can just instantiate a second [`Accelerator`] with no extra arguments.

```python
first_accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins)
second_accelerator = Accelerator()
```

You can then call `state.select_deepspeed_plugin()` on either accelerator to enable the plugin for a particular model (which disables the other), and then call [`~Accelerator.prepare`].

```python
# This can be called on either accelerator, or through `AcceleratorState().select_deepspeed_plugin(...)`
first_accelerator.state.select_deepspeed_plugin("first_model")
first_model = AutoModel.from_pretrained(...)
# For this example, `get_training_items` is a placeholder function that gets the setup we need for training
first_optimizer, first_scheduler, train_dl, eval_dl = get_training_items(first_model)
first_model, first_optimizer, first_scheduler, train_dl, eval_dl = first_accelerator.prepare(
    first_model, first_optimizer, first_scheduler, train_dl, eval_dl
)

second_accelerator.state.select_deepspeed_plugin("second_model")
second_model = AutoModel.from_pretrained(...)
# For this example, `get_training_items` is a placeholder function that gets the setup we need for training
second_optimizer, second_scheduler, _, _ = get_training_items(second_model)
second_model, second_optimizer, second_scheduler = second_accelerator.prepare(
    second_model, second_optimizer, second_scheduler
)
```

And now you can train:

```python
for batch in train_dl:
    outputs1 = first_model(**batch)
    first_accelerator.backward(outputs1.loss)
    first_optimizer.step()
    first_scheduler.step()
    first_optimizer.zero_grad()

    outputs2 = second_model(**batch)
    second_accelerator.backward(outputs2.loss)
    second_optimizer.step()
    second_scheduler.step()
    second_optimizer.zero_grad()
```

## Resources

To see more examples, please check out the [related tests](https://github.com/huggingface/accelerate/blob/main/src/accelerate/test_utils/scripts/external_deps/test_ds_multiple_model.py) currently in Accelerate.
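As a closing note on the distillation use case above: a common way to "combine the losses" in that training loop is a temperature-scaled KL divergence between the student and teacher logits. A hedged sketch (the temperature and weighting are arbitrary choices, not a prescribed recipe):

```python
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then measure how far the student is from the teacher.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2


# Inside the earlier loop, for example:
# loss = output_student.loss + distillation_loss(output_student.logits, output_teacher.logits)
```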
accelerate/docs/source/usage_guides/deepspeed_multiple_model.md/0
{ "file_path": "accelerate/docs/source/usage_guides/deepspeed_multiple_model.md", "repo_id": "accelerate", "token_count": 2924 }
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # In this folder we showcase various full examples using 🤗 Accelerate ## Simple NLP example The [nlp_example.py](./nlp_example.py) script is a simple example to train a Bert model on a classification task ([GLUE's MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398)). Prior to running it you should install 🤗 Dataset and 🤗 Transformers: ```bash pip install datasets evaluate transformers ``` The same script can be run in any of the following configurations: - single CPU or single GPU - multi CPUs - multi GPUs (using PyTorch distributed mode) - (multi) TPUs - fp16 (mixed-precision) or fp32 (normal precision) To run it in each of these various modes, use the following commands: - single CPU: * from a server without GPU ```bash python ./nlp_example.py ``` * from any server by passing `cpu=True` to the `Accelerator`. ```bash python ./nlp_example.py --cpu ``` * from any server with Accelerate launcher ```bash accelerate launch --cpu ./nlp_example.py ``` - single GPU: ```bash python ./nlp_example.py # from a server with a GPU ``` - with fp16 (mixed-precision) * from any server by passing `mixed_precison=fp16` to the `Accelerator`. ```bash python ./nlp_example.py --mixed_precision fp16 ``` * from any server with Accelerate launcher ```bash accelerate launch --mixed_precision fp16 ./nlp_example.py - multi CPUs (requires Open MPI, Intel MPI, or MVAPICH) * With Accelerate config and launcher, execute the following from node 0: ```bash accelerate config # Select to have accelerate launch mpirun accelerate launch ./nlp_example.py # This will run the script on each server ``` * With Intel MPI: ```bash export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip mpirun -f hostfile -n 16 -ppn 4 python ./nlp_example.py ``` - multi GPUs (using PyTorch distributed mode) * With Accelerate config and launcher ```bash accelerate config # This will create a config file on your server accelerate launch ./nlp_example.py # This will run the script on your server ``` * With traditional PyTorch launcher (`python -m torch.distributed.run` can be used instead of `torchrun`) ```bash torchrun --nproc_per_node 2 ./nlp_example.py ``` - multi GPUs, multi node (several machines, using PyTorch distributed mode) * With Accelerate config and launcher, on each machine: ```bash accelerate config # This will create a config file on each server accelerate launch ./nlp_example.py # This will run the script on each server ``` * With PyTorch launcher only (`python -m torch.distributed.run` can be used instead of `torchrun`). 
Run this command on each node: ```bash torchrun \ # python -m torch.distributed.run --nproc_per_node 2 \ --nnodes 2 \ --rdzv_id 2299 \ # A unique job id --rdzv_backend c10d \ --rdzv_endpoint master_node_ip_address:29500 \ ./nlp_example.py ``` - (multi) TPUs * With Accelerate config and launcher ```bash accelerate config # This will create a config file on your TPU server accelerate launch ./nlp_example.py # This will run the script on each server ``` * In PyTorch: Add an `xmp.spawn` line in your script as you usually do. ## Simple vision example The [cv_example.py](./cv_example.py) script is a simple example to fine-tune a ResNet-50 on a classification task ([Ofxord-IIT Pet Dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/)). The same script can be run in any of the following configurations: - single CPU or single GPU - multi CPUs - multi GPUs (using PyTorch distributed mode) - (multi) TPUs - fp16 (mixed-precision) or fp32 (normal precision) Prior to running it you should install timm and torchvision: ```bash pip install timm torchvision ``` and you should download the data with the following commands: ```bash wget https://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz tar -xzf images.tar.gz ``` To run it in each of these various modes, use the following commands: - single CPU: * from a server without GPU ```bash python ./cv_example.py --data_dir path_to_data ``` * from any server by passing `cpu=True` to the `Accelerator`. ```bash python ./cv_example.py --data_dir path_to_data --cpu ``` * from any server with Accelerate launcher ```bash accelerate launch --cpu ./cv_example.py --data_dir path_to_data ``` - single GPU: ```bash python ./cv_example.py # from a server with a GPU ``` - with fp16 (mixed-precision) * from any server by passing `mixed_precison=fp16` to the `Accelerator`. 
```bash
python ./cv_example.py --data_dir path_to_data --mixed_precision fp16
```
  * from any server with Accelerate launcher
```bash
accelerate launch --mixed_precision fp16 ./cv_example.py --data_dir path_to_data
```
- multi CPUs (requires Open MPI, Intel MPI, or MVAPICH)
  * With Accelerate config and launcher, run the following from node 0:
```bash
accelerate config --config_file config.yaml  # Select to have accelerate launch mpirun
accelerate launch ./cv_example.py --data_dir path_to_data  # This will run the script on each server
```
  * With Intel MPI, execute mpirun from node 0:
```bash
export CCL_WORKER_COUNT=1
export MASTER_ADDR=xxx.xxx.xxx.xxx  # node0 ip
mpirun -f hostfile -n 16 -ppn 4 python ./cv_example.py --data_dir path_to_data
```
- multi GPUs (using PyTorch distributed mode)
  * With Accelerate config and launcher
```bash
accelerate config --config_file config.yaml  # This will create a config file on your server at `config.yaml`
accelerate launch --config_file config.yaml ./cv_example.py --data_dir path_to_data  # This will run the script on your server
```
  * With traditional PyTorch launcher (`python -m torch.distributed.run` can be used instead of `torchrun`)
```bash
torchrun --nproc_per_node 2 ./cv_example.py --data_dir path_to_data
```
- multi GPUs, multi node (several machines, using PyTorch distributed mode)
  * With Accelerate config and launcher, on each machine:
```bash
accelerate config --config_file config.yaml  # This will create a config file on your server at `config.yaml`
accelerate launch --config_file config.yaml ./cv_example.py --data_dir path_to_data  # This will run the script on each server
```
  * With PyTorch launcher only (`python -m torch.distributed.run` can be used instead of `torchrun`). Run this command on each node:
```bash
torchrun \  # python -m torch.distributed.run
    --nproc_per_node 2 \
    --nnodes 2 \
    --rdzv_id 2299 \  # A unique job id
    --rdzv_backend c10d \
    --rdzv_endpoint master_node_ip_address:29500 \
    ./cv_example.py --data_dir path_to_data
```
- (multi) TPUs
  * With Accelerate config and launcher
```bash
accelerate config --config_file config.yaml  # This will create a config file on your server at `config.yaml`
accelerate launch --config_file config.yaml ./cv_example.py --data_dir path_to_data  # This will run the script on each server
```
  * In PyTorch: Add an `xmp.spawn` line in your script as you usually do.

### Simple vision example (GANs)

- [huggan project](https://github.com/huggingface/community-events/tree/main/huggan)

### Using AWS SageMaker integration
- [Examples showcasing AWS SageMaker integration of 🤗 Accelerate.](https://github.com/pacman100/accelerate-aws-sagemaker)

## Configuration zoo
In [/config_yaml_templates](./config_yaml_templates/) we have a variety of *minimal* `config.yaml` templates and examples to help you learn how to create your own configuration files depending on the scenario.

## SLURM Scripts
In [/slurm/submit_multigpu.sh](./slurm/submit_multigpu.sh) and [/slurm/submit_multinode.sh](./slurm/submit_multinode.sh) we present two scripts for running the examples on a machine with the [SLURM](https://slurm.schedmd.com/documentation.html) workload manager.

In [/slurm/submit_multigpu.sh](./slurm/submit_multigpu.sh) the only parameter in the launcher that needs to be modified is `--num_processes`, which determines the number of GPUs we will use. In this case, using the environment variable `$SLURM_GPUS`, we indicate that we want to utilize all the GPUs available on the node we have requested.
In [/slurm/submit_multinode.sh](./slurm/submit_multinode.sh) we must specify the number of nodes that will be part of the training (`--num_machines`), how many GPUs we will use in total (`--num_processes`), the [`backend`](https://pytorch.org/docs/stable/elastic/run.html#note-on-rendezvous-backend), `--main_process_ip` which will be the address the master node and the `--main_process_port`. In [/slurm/submit_multicpu.sh](./slurm/submit_multicpu.sh) we must specify the number of nodes that will be part of the training (`--num_machines`), how many CPU processes we will use in total (`--num_processes`), the [`backend`](https://pytorch.org/docs/stable/elastic/run.html#note-on-rendezvous-backend), `--main_process_ip` which will be the address the master node and the `--main_process_port`. `mpirun_hostfile` specifies to run the job using MPIRun. In both scripts, we run `activateEnviroment.sh` at the beginning. This script should contain the necessary instructions to initialize the environment for execution. Below, we show an example that loads the necessary libraries ([Environment modules](https://github.com/cea-hpc/modules)), activates the Python environment, and sets up various environment variables, most of them to run the scripts in offline mode in case we don't have internet connection from the cluster. ```bash # activateEnvironment.sh module purge module load anaconda3/2020.02 cuda/10.2 cudnn/8.0.5 nccl/2.9.9 arrow/7.0.0 openmpi source activate /home/nct01/nct01328/pytorch_antoni_local export HF_HOME=/gpfs/projects/nct01/nct01328/ export HF_LOCAL_HOME=/gpfs/projects/nct01/nct01328/HF_LOCAL export HF_DATASETS_OFFLINE=1 export TRANSFORMERS_OFFLINE=1 export PYTHONPATH=/home/nct01/nct01328/transformers-in-supercomputers:$PYTHONPATH export GPUS_PER_NODE=4 ``` ## Simple Multi-GPU Hardware Launcher (using an external platform) [multigpu_remote_launcher.py](./multigpu_remote_launcher.py) is a minimal script that demonstrates launching accelerate on multiple remote GPUs, and with automatic hardware environment and dependency setup for reproducibility. You can easily customize the training function used, training arguments, hyperparameters, and type of compute hardware, and then run the script to automatically launch multi GPU training on remote hardware. This script uses [Runhouse](https://github.com/run-house/runhouse) to launch on self-hosted hardware (e.g. in your own cloud account or on-premise cluster) but there are other options for running remotely as well. Runhouse can be installed with `pip install runhouse`, and you can refer to [hardware setup](https://runhouse-docs.readthedocs-hosted.com/en/latest/api/python/cluster.html#hardware-setup) for hardware setup instructions, or this [Colab tutorial](https://colab.research.google.com/drive/1qVwYyLTCPYPSdz9ZX7BZl9Qm0A3j7RJe) for a more in-depth walkthrough. ## Finer Examples While the first two scripts are extremely barebones when it comes to what you can do with accelerate, more advanced features are documented in two other locations. ### `by_feature` examples These scripts are *individual* examples highlighting one particular feature or use-case within Accelerate. They all stem from the [nlp_example.py](./nlp_example.py) script, and any changes or modifications is denoted with a `# New Code #` comment. Read the README.md file located in the `by_feature` folder for more information. ### `complete_*` examples These two scripts contain *every* single feature currently available in Accelerate in one place, as one giant script. 
New arguments that can be passed include:

- `checkpointing_steps`: whether the various states should be saved at the end of every `n` steps, or `"epoch"` for each epoch. States are then saved to folders named `step_{n}` or `epoch_{n}`.
- `resume_from_checkpoint`: should be used if you want to resume training from a previous run of the script that was passed `checkpointing_steps`.
- `with_tracking`: should be used if you want to log the training run using all available experiment trackers in your environment. Currently supported trackers include TensorBoard, Weights and Biases, and CometML.
accelerate/examples/README.md/0
{ "file_path": "accelerate/examples/README.md", "repo_id": "accelerate", "token_count": 4684 }
{ "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupDecayLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto", "total_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "overlap_comm": true, "contiguous_gradients": true, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "sub_group_size": 1e9, "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": "auto" }, "gradient_accumulation_steps": 1, "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false }
accelerate/examples/deepspeed_config_templates/zero_stage3_config.json/0
{ "file_path": "accelerate/examples/deepspeed_config_templates/zero_stage3_config.json", "repo_id": "accelerate", "token_count": 657 }
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from manim import * class Stage5(Scene): def construct(self): # The dataset items colors = ["BLUE_E", "DARK_BROWN", "GOLD_E", "GRAY_A"] fill = Rectangle(height=0.46,width=0.46).set_stroke(width=0) columns = [ VGroup(*[Rectangle(height=0.25,width=0.25,color=colors[j]) for i in range(8)]).arrange(RIGHT,buff=0) for j in range(4) ] dataset_recs = VGroup(*columns).arrange(UP, buff=0) dataset_text = Text("Dataset", font_size=24) dataset = Group(dataset_recs,dataset_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN) dataset.move_to([-2,0,0]) self.add(dataset) code = Code( code="# We enable this by default\naccelerator = Accelerator()\ndataloader = DataLoader(...)\ndataloader = accelerator.prepare(dataloader)\nfor batch in dataloader:\n\t...", tab_width=4, background="window", language="Python", font="Monospace", font_size=14, corner_radius=.2, insert_line_no=False, line_spacing=.75, style=Code.styles_list[1], ) code.move_to([-3.5, 2.5, 0]) self.add(code) # The dataloader itself sampler_1 = Group( Rectangle(color="blue", height=1, width=1), Text("Sampler GPU 1", font_size=12) ).arrange(DOWN, buff=.25, aligned_edge=DOWN) sampler_2 = Group( Rectangle(color="blue", height=1, width=1), Text("Sampler GPU 2", font_size=12) ).arrange(DOWN, buff=.25, aligned_edge=DOWN) sampler_3 = Group( Rectangle(color="blue", height=1, width=1), Text("Sampler GPU 3", font_size=12) ).arrange(DOWN, buff=.25, aligned_edge=DOWN) sampler_4 = Group( Rectangle(color="blue", height=1, width=1), Text("Sampler GPU 4", font_size=12) ).arrange(DOWN, buff=.25, aligned_edge=DOWN) sampler_1.move_to([2,2,0]) sampler_2.move_to([2,.5,0]) sampler_3.move_to([2,-1.,0]) sampler_4.move_to([2,-2.5,0]) self.add(sampler_1, sampler_2, sampler_3, sampler_4) samplers = [sampler_1[0], sampler_2[0], sampler_3[0], sampler_4[0]] gpu_1 = Group( Rectangle(color="white", height=1, width=1), Text("Output GPU 1", font_size=12) ).arrange(DOWN, buff=.25, aligned_edge=DOWN).move_to([4.5, 2, 0]) gpu_2 = Group( Rectangle(color="white", height=1, width=1), Text("Output GPU 2", font_size=12) ).arrange(DOWN, buff=.25, aligned_edge=DOWN).move_to([4.5, .5, 0]) gpu_3 = Group( Rectangle(color="white", height=1, width=1), Text("Output GPU 3", font_size=12) ).arrange(DOWN, buff=.25, aligned_edge=DOWN).move_to([4.5, -1, 0]) gpu_4 = Group( Rectangle(color="white", height=1, width=1), Text("Output GPU 4", font_size=12) ).arrange(DOWN, buff=.25, aligned_edge=DOWN).move_to([4.5, -2.5, 0]) gpus = [gpu_1[0], gpu_2[0], gpu_3[0], gpu_4[0]] self.add(gpu_1, gpu_2, gpu_3, gpu_4) # Animate their existence self.play( Create(gpu_1[0], run_time=1), Create(gpu_2[0], run_time=1), Create(gpu_3[0], run_time=1), Create(gpu_4[0], run_time=1), Create(dataset_recs, run_time=1), Create(sampler_1[0], run_time=1), Create(sampler_2[0], run_time=1), Create(sampler_3[0], run_time=1), Create(sampler_4[0], run_time=1), ) first_animations = [] second_animations = [] colors = ["BLUE_E", "DARK_BROWN", "GOLD_E", 
"GRAY_A"] current_color = colors[0] buff = 0 lr_buff = .25 old_target = None new_datasets = [] for i,row_data in enumerate(dataset_recs): new_row = [] current_color = colors[i] if i == 0: idx = -3 elif i == 1: idx = -2 elif i == 2: idx = -1 elif i == 3: idx = 0 for j,indiv_data in enumerate(row_data): dataset_target = Rectangle(height=0.46/2,width=0.46/2).set_stroke(width=0.).set_fill(current_color, opacity=0.7) dataset_target.move_to(indiv_data) dataset_target.generate_target() aligned_edge = ORIGIN if j % 8 == 0: aligned_edge = LEFT dataset_target.target.next_to( samplers[abs(idx)].get_corner(UP+LEFT), buff=.02, direction=RIGHT+DOWN, ) dataset_target.target.set_x(dataset_target.target.get_x()) elif j % 4 == 0: old_target = dataset_target.target dataset_target.target.next_to( samplers[abs(idx)].get_corner(UP+LEFT), buff=.02, direction=RIGHT+DOWN, ) dataset_target.target.set_x(dataset_target.target.get_x()) dataset_target.target.set_y(dataset_target.target.get_y()-.25) else: dataset_target.target.next_to( old_target, direction=RIGHT, buff=0.02, ) old_target = dataset_target.target new_row.append(dataset_target) first_animations.append(indiv_data.animate(run_time=0.5).set_stroke(current_color)) second_animations.append(MoveToTarget(dataset_target, run_time=1.5)) new_datasets.append(new_row) step_1 = MarkupText( f"Since we splice the dataset between each GPU,\nthe models weights can be averaged during `backward()`\nActing as though we did one giant epoch\nvery quickly.", font_size=18 ) step_1.move_to([-2.5, -2, 0]) self.play( Write(step_1, run_time=3), ) self.play( *first_animations, ) self.play(*second_animations) self.wait(duration=.5) move_animation = [] import random for i,row in enumerate(new_datasets): # row = [row[k] for k in random.sample(range(8), 8)] current_color = colors[i] if i == 0: idx = -3 elif i == 1: idx = -2 elif i == 2: idx = -1 elif i == 3: idx = 0 for j,indiv_data in enumerate(row): indiv_data.generate_target() aligned_edge = ORIGIN if j % 8 == 0: aligned_edge = LEFT indiv_data.target.next_to( gpus[abs(idx)].get_corner(UP+LEFT), buff=.02, direction=RIGHT+DOWN, ) indiv_data.target.set_x(indiv_data.target.get_x()) elif j % 4 == 0: indiv_data.target.next_to( gpus[abs(idx)].get_corner(UP+LEFT), buff=.02, direction=RIGHT+DOWN, ) indiv_data.target.set_x(indiv_data.target.get_x()) indiv_data.target.set_y(indiv_data.target.get_y()-.25) else: indiv_data.target.next_to( old_target, direction=RIGHT, buff=0.02, ) old_target = indiv_data.target move_animation.append(MoveToTarget(indiv_data, run_time=1.5)) self.play(*move_animation) self.wait()
accelerate/manim_animations/dataloaders/stage_5.py/0
{ "file_path": "accelerate/manim_animations/dataloaders/stage_5.py", "repo_id": "accelerate", "token_count": 4515 }
#!/usr/bin/env python # Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from pathlib import Path import torch from ...utils import is_mlu_available, is_musa_available, is_npu_available, is_xpu_available from .config_args import ClusterConfig, default_json_config_file from .config_utils import SubcommandHelpFormatter description = "Create a default config file for Accelerate with only a few flags set." def write_basic_config(mixed_precision="no", save_location: str = default_json_config_file, use_xpu: bool = False): """ Creates and saves a basic cluster config to be used on a local machine with potentially multiple GPUs. Will also set CPU if it is a CPU-only machine. Args: mixed_precision (`str`, *optional*, defaults to "no"): Mixed Precision to use. Should be one of "no", "fp16", or "bf16" save_location (`str`, *optional*, defaults to `default_json_config_file`): Optional custom save location. Should be passed to `--config_file` when using `accelerate launch`. Default location is inside the huggingface cache folder (`~/.cache/huggingface`) but can be overriden by setting the `HF_HOME` environmental variable, followed by `accelerate/default_config.yaml`. use_xpu (`bool`, *optional*, defaults to `False`): Whether to use XPU if available. """ path = Path(save_location) path.parent.mkdir(parents=True, exist_ok=True) if path.exists(): print( f"Configuration already exists at {save_location}, will not override. Run `accelerate config` manually or pass a different `save_location`." ) return False mixed_precision = mixed_precision.lower() if mixed_precision not in ["no", "fp16", "bf16", "fp8"]: raise ValueError( f"`mixed_precision` should be one of 'no', 'fp16', 'bf16', or 'fp8'. 
Received {mixed_precision}" ) config = { "compute_environment": "LOCAL_MACHINE", "mixed_precision": mixed_precision, } if is_mlu_available(): num_mlus = torch.mlu.device_count() config["num_processes"] = num_mlus config["use_cpu"] = False if num_mlus > 1: config["distributed_type"] = "MULTI_MLU" else: config["distributed_type"] = "NO" elif is_musa_available(): num_musas = torch.musa.device_count() config["num_processes"] = num_musas config["use_cpu"] = False if num_musas > 1: config["distributed_type"] = "MULTI_MUSA" else: config["distributed_type"] = "NO" elif torch.cuda.is_available(): num_gpus = torch.cuda.device_count() config["num_processes"] = num_gpus config["use_cpu"] = False if num_gpus > 1: config["distributed_type"] = "MULTI_GPU" else: config["distributed_type"] = "NO" elif is_xpu_available() and use_xpu: num_xpus = torch.xpu.device_count() config["num_processes"] = num_xpus config["use_cpu"] = False if num_xpus > 1: config["distributed_type"] = "MULTI_XPU" else: config["distributed_type"] = "NO" elif is_npu_available(): num_npus = torch.npu.device_count() config["num_processes"] = num_npus config["use_cpu"] = False if num_npus > 1: config["distributed_type"] = "MULTI_NPU" else: config["distributed_type"] = "NO" else: num_xpus = 0 config["use_cpu"] = True config["num_processes"] = 1 config["distributed_type"] = "NO" config["debug"] = False config["enable_cpu_affinity"] = False config = ClusterConfig(**config) config.to_json_file(path) return path def default_command_parser(parser, parents): parser = parser.add_parser("default", parents=parents, help=description, formatter_class=SubcommandHelpFormatter) parser.add_argument( "--config_file", default=default_json_config_file, help=( "The path to use to store the config file. Will default to a file named default_config.yaml in the cache " "location, which is the content of the environment `HF_HOME` suffixed with 'accelerate', or if you don't have " "such an environment variable, your cache directory ('~/.cache' or the content of `XDG_CACHE_HOME`) suffixed " "with 'huggingface'." ), dest="save_location", ) parser.add_argument( "--mixed_precision", choices=["no", "fp16", "bf16"], type=str, help="Whether or not to use mixed precision training. " "Choose between FP16 and BF16 (bfloat16) training. " "BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later.", default="no", ) parser.set_defaults(func=default_config_command) return parser def default_config_command(args): config_file = write_basic_config(args.mixed_precision, args.save_location) if config_file: print(f"accelerate configuration saved at {config_file}")
accelerate/src/accelerate/commands/config/default.py/0
{ "file_path": "accelerate/src/accelerate/commands/config/default.py", "repo_id": "accelerate", "token_count": 2280 }
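Editor's note: as a usage sketch for the file above, `write_basic_config` can be called directly to generate a default single-machine config; the same logic backs the `accelerate config default` CLI subcommand wired up by `default_command_parser` (e.g. `accelerate config default --mixed_precision bf16`). The import path below mirrors the file location shown above, and the chosen `mixed_precision` value is an arbitrary example.

from accelerate.commands.config.default import write_basic_config

# Writes the default config file (under the HF cache, e.g.
# ~/.cache/huggingface/accelerate/default_config.yaml) unless one already
# exists, in which case it prints a message and returns False.
result = write_basic_config(mixed_precision="bf16")
if result:
    print(f"accelerate configuration saved at {result}")
else:
    print("An accelerate config file already exists; nothing was written.")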
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import importlib import math from contextlib import suppress from typing import Callable, List, Optional, Union import torch from packaging import version from torch.utils.data import BatchSampler, DataLoader, IterableDataset, RandomSampler from .logging import get_logger from .state import DistributedType, GradientState, PartialState, is_torch_xla_available from .utils import ( RNGType, broadcast, broadcast_object_list, compare_versions, concatenate, find_batch_size, get_data_structure, initialize_tensors, is_torch_version, is_torchdata_stateful_dataloader_available, send_to_device, slice_tensors, synchronize_rng_states, ) logger = get_logger(__name__) # kwargs of the DataLoader in min version 2.0 _PYTORCH_DATALOADER_KWARGS = { "batch_size": 1, "shuffle": False, "sampler": None, "batch_sampler": None, "num_workers": 0, "collate_fn": None, "pin_memory": False, "drop_last": False, "timeout": 0, "worker_init_fn": None, "multiprocessing_context": None, "generator": None, "prefetch_factor": 2, "persistent_workers": False, "pin_memory_device": "", } # kwargs added after by version _PYTORCH_DATALOADER_ADDITIONAL_KWARGS = {"2.6.0": {"in_order": True}} for v, additional_kwargs in _PYTORCH_DATALOADER_ADDITIONAL_KWARGS.items(): if is_torch_version(">=", v): _PYTORCH_DATALOADER_KWARGS.update(additional_kwargs) class SeedableRandomSampler(RandomSampler): """ Same as a random sampler, except that in `__iter__` a seed can be used. Needed specifically in distributed cases, when the random generator for each GPU needs to start from the same seed and be fully reproducable on multiple iterations. If a custom `generator` is passed, it will rely on its initial seed as well as the current iteration it is on (stored in `self.epoch`). """ def __init__(self, *args, **kwargs): data_seed = kwargs.pop("data_seed", None) super().__init__(*args, **kwargs) self.initial_seed = data_seed if data_seed is not None else torch.random.initial_seed() self.epoch = 0 def __iter__(self): if self.generator is None: self.generator = torch.Generator() self.generator.manual_seed(self.initial_seed) # Allow `self.epoch` to modify the seed of the generator seed = self.epoch + self.initial_seed # print("Setting seed at epoch", self.epoch, seed) self.generator.manual_seed(seed) yield from super().__iter__() self.set_epoch(self.epoch + 1) def set_epoch(self, epoch: int): "Sets the current iteration of the sampler." self.epoch = epoch class BatchSamplerShard(BatchSampler): """ Wraps a PyTorch `BatchSampler` to generate batches for one of the processes only. Instances of this class will always yield a number of batches that is a round multiple of `num_processes` and that all have the same size. Depending on the value of the `drop_last` attribute of the batch sampler passed, it will either stop the iteration at the first batch that would be too small / not present on all processes or loop with indices from the beginning. 
Args: batch_sampler (`torch.utils.data.sampler.BatchSampler`): The batch sampler to split in several shards. num_processes (`int`, *optional*, defaults to 1): The number of processes running concurrently. process_index (`int`, *optional*, defaults to 0): The index of the current process. split_batches (`bool`, *optional*, defaults to `False`): Whether the shards should be created by splitting a batch to give a piece of it on each process, or by yielding different full batches on each process. On two processes with a sampler of `[[0, 1, 2, 3], [4, 5, 6, 7]]`, this will result in: - the sampler on process 0 to yield `[0, 1, 2, 3]` and the sampler on process 1 to yield `[4, 5, 6, 7]` if this argument is set to `False`. - the sampler on process 0 to yield `[0, 1]` then `[4, 5]` and the sampler on process 1 to yield `[2, 3]` then `[6, 7]` if this argument is set to `True`. even_batches (`bool`, *optional*, defaults to `True`): Whether or not to loop back at the beginning of the sampler when the number of samples is not a round multiple of (original batch size / number of processes). <Tip warning={true}> `BatchSampler`s with varying batch sizes are not enabled by default. To enable this behaviour, set `even_batches` equal to `False` </Tip>""" def __init__( self, batch_sampler: BatchSampler, num_processes: int = 1, process_index: int = 0, split_batches: bool = False, even_batches: bool = True, ): if split_batches and batch_sampler.batch_size % num_processes != 0: raise ValueError( f"To use `BatchSamplerShard` in `split_batches` mode, the batch size ({batch_sampler.batch_size}) " f"needs to be a round multiple of the number of processes ({num_processes})." ) self.batch_sampler = batch_sampler self.num_processes = num_processes self.process_index = process_index self.split_batches = split_batches self.even_batches = even_batches self.batch_size = getattr(batch_sampler, "batch_size", None) self.drop_last = getattr(batch_sampler, "drop_last", False) if self.batch_size is None and self.even_batches: raise ValueError( "You need to use `even_batches=False` when the batch sampler has no batch size. If you " "are not calling this method directly, set `accelerator.even_batches=False` instead." ) @property def total_length(self): return len(self.batch_sampler) def __len__(self): if self.split_batches: # Split batches does not change the length of the batch sampler return len(self.batch_sampler) if len(self.batch_sampler) % self.num_processes == 0: # If the length is a round multiple of the number of processes, it's easy. return len(self.batch_sampler) // self.num_processes length = len(self.batch_sampler) // self.num_processes if self.drop_last: # Same if we drop the remainder. return length elif self.even_batches: # When we even batches we always get +1 return length + 1 else: # Otherwise it depends on the process index. return length + 1 if self.process_index < len(self.batch_sampler) % self.num_processes else length def __iter__(self): return self._iter_with_split() if self.split_batches else self._iter_with_no_split() def _iter_with_split(self): initial_data = [] batch_length = self.batch_sampler.batch_size // self.num_processes for idx, batch in enumerate(self.batch_sampler): if idx == 0: initial_data = batch if len(batch) == self.batch_size: # If the batch is full, we yield the part of it this process is responsible of. yield batch[batch_length * self.process_index : batch_length * (self.process_index + 1)] # If drop_last is True of the last batch was full, iteration is over, otherwise... 
if not self.drop_last and len(initial_data) > 0 and len(batch) < self.batch_size: if not self.even_batches: if len(batch) > batch_length * self.process_index: yield batch[batch_length * self.process_index : batch_length * (self.process_index + 1)] else: # For degenerate cases where the dataset has less than num_process * batch_size samples while len(initial_data) < self.batch_size: initial_data += initial_data batch = batch + initial_data yield batch[batch_length * self.process_index : batch_length * (self.process_index + 1)] def _iter_with_no_split(self): initial_data = [] batch_to_yield = [] for idx, batch in enumerate(self.batch_sampler): # We gather the initial indices in case we need to circle back at the end. if not self.drop_last and idx < self.num_processes: initial_data += batch # We identify the batch to yield but wait until we ar sure every process gets a full batch before actually # yielding it. if idx % self.num_processes == self.process_index: batch_to_yield = batch if idx % self.num_processes == self.num_processes - 1 and ( self.batch_size is None or len(batch) == self.batch_size ): yield batch_to_yield batch_to_yield = [] # If drop_last is True, iteration is over, otherwise... if not self.drop_last and len(initial_data) > 0: if not self.even_batches: if len(batch_to_yield) > 0: yield batch_to_yield else: # ... we yield the complete batch we had saved before if it has the proper length if len(batch_to_yield) == self.batch_size: yield batch_to_yield # For degenerate cases where the dataset has less than num_process * batch_size samples while len(initial_data) < self.num_processes * self.batch_size: initial_data += initial_data # If the last batch seen was of the proper size, it has been yielded by its process so we move to the next if len(batch) == self.batch_size: batch = [] idx += 1 # Make sure we yield a multiple of self.num_processes batches cycle_index = 0 while idx % self.num_processes != 0 or len(batch) > 0: end_index = cycle_index + self.batch_size - len(batch) batch += initial_data[cycle_index:end_index] if idx % self.num_processes == self.process_index: yield batch cycle_index = end_index batch = [] idx += 1 class IterableDatasetShard(IterableDataset): """ Wraps a PyTorch `IterableDataset` to generate samples for one of the processes only. Instances of this class will always yield a number of samples that is a round multiple of the actual batch size (depending of the value of `split_batches`, this is either `batch_size` or `batch_size x num_processes`). Depending on the value of the `drop_last` attribute of the batch sampler passed, it will either stop the iteration at the first batch that would be too small or loop with indices from the beginning. Args: dataset (`torch.utils.data.dataset.IterableDataset`): The batch sampler to split in several shards. batch_size (`int`, *optional*, defaults to 1): The size of the batches per shard (if `split_batches=False`) or the size of the batches (if `split_batches=True`). drop_last (`bool`, *optional*, defaults to `False`): Whether or not to drop the last incomplete batch or complete the last batches by using the samples from the beginning. num_processes (`int`, *optional*, defaults to 1): The number of processes running concurrently. process_index (`int`, *optional*, defaults to 0): The index of the current process. split_batches (`bool`, *optional*, defaults to `False`): Whether the shards should be created by splitting a batch to give a piece of it on each process, or by yielding different full batches on each process. 
On two processes with an iterable dataset yielding of `[0, 1, 2, 3, 4, 5, 6, 7]`, this will result in: - the shard on process 0 to yield `[0, 1, 2, 3]` and the shard on process 1 to yield `[4, 5, 6, 7]` if this argument is set to `False`. - the shard on process 0 to yield `[0, 1, 4, 5]` and the sampler on process 1 to yield `[2, 3, 6, 7]` if this argument is set to `True`. """ def __init__( self, dataset: IterableDataset, batch_size: int = 1, drop_last: bool = False, num_processes: int = 1, process_index: int = 0, split_batches: bool = False, ): if split_batches and batch_size > 1 and batch_size % num_processes != 0: raise ValueError( f"To use `IterableDatasetShard` in `split_batches` mode, the batch size ({batch_size}) " f"needs to be a round multiple of the number of processes ({num_processes})." ) self.dataset = dataset self.batch_size = batch_size self.drop_last = drop_last self.num_processes = num_processes self.process_index = process_index self.split_batches = split_batches def set_epoch(self, epoch): self.epoch = epoch if hasattr(self.dataset, "set_epoch"): self.dataset.set_epoch(epoch) def __len__(self): # We will just raise the downstream error if the underlying dataset is not sized if self.drop_last: return (len(self.dataset) // (self.batch_size * self.num_processes)) * self.batch_size else: return math.ceil(len(self.dataset) / (self.batch_size * self.num_processes)) * self.batch_size def __iter__(self): if ( not hasattr(self.dataset, "set_epoch") and hasattr(self.dataset, "generator") and isinstance(self.dataset.generator, torch.Generator) ): self.dataset.generator.manual_seed(self.epoch) real_batch_size = self.batch_size if self.split_batches else (self.batch_size * self.num_processes) process_batch_size = (self.batch_size // self.num_processes) if self.split_batches else self.batch_size process_slice = range(self.process_index * process_batch_size, (self.process_index + 1) * process_batch_size) first_batch = None current_batch = [] for element in self.dataset: current_batch.append(element) # Wait to have a full batch before yielding elements. if len(current_batch) == real_batch_size: for i in process_slice: yield current_batch[i] if first_batch is None: first_batch = current_batch.copy() current_batch = [] # Finished if drop_last is True, otherwise complete the last batch with elements from the beginning. if not self.drop_last and len(current_batch) > 0: if first_batch is None: first_batch = current_batch.copy() while len(current_batch) < real_batch_size: current_batch += first_batch for i in process_slice: yield current_batch[i] class DataLoaderStateMixin: """ Mixin class that adds a state to a `DataLoader` to keep track of the status inside the dataloader such as at the end of the iteration, the number of items in the dataset in the last batch relative to the batch size, and other useful information that might be needed. **Available attributes:** - **end_of_dataloader** (`bool`) -- Whether at the last iteration or batch - **remainder** (`int`) -- The number of items that are remaining in the last batch, relative to the total batch size <Tip warning={true}> Inheriters of this class should ensure that the class creates a `GradientState()` instance, stored in `self.gradient_state`. 
</Tip> """ def __init_subclass__(cls, **kwargs): cls.end_of_dataloader = False cls.remainder = -1 def reset(self): self.end_of_dataloader = False self.remainder = -1 def begin(self): "Prepares the gradient state for the current dataloader" self.reset() with suppress(Exception): if not self._drop_last: length = getattr(self.dataset, "total_dataset_length", len(self.dataset)) self.remainder = length % self.total_batch_size self.gradient_state._add_dataloader(self) def end(self): "Cleans up the gradient state after exiting the dataloader" self.gradient_state._remove_dataloader(self) class DataLoaderAdapter: """ A class which wraps around a PyTorch `DataLoader` (or variants of it) to be used with the `Accelerator`. For compatability reasons, this class inherits from the class it wraps around, so it can be used as a drop-in. """ def __init__(self, dataset, use_stateful_dataloader=False, batch_sampler=None, **kwargs): self.use_stateful_dataloader = use_stateful_dataloader if is_torchdata_stateful_dataloader_available(): from torchdata.stateful_dataloader import StatefulDataLoader if use_stateful_dataloader and not is_torchdata_stateful_dataloader_available(): raise ImportError( "StatefulDataLoader is not available. Please install torchdata version 0.8.0 or higher to use it." ) if use_stateful_dataloader: torchdata_version = version.parse(importlib.metadata.version("torchdata")) if ( "in_order" in kwargs and compare_versions(torchdata_version, "<", "0.11") and is_torch_version(">=", "2.6.0") ): kwargs.pop("in_order") self.base_dataloader = StatefulDataLoader(dataset, batch_sampler=batch_sampler, **kwargs) else: self.base_dataloader = DataLoader(dataset, batch_sampler=batch_sampler, **kwargs) if hasattr(self.base_dataloader, "state_dict"): self.dl_state_dict = self.base_dataloader.state_dict() def __getattr__(self, name): # Avoid infinite recursion if we try to access a nonexistent base_dataloader attribute. if name == "base_dataloader": raise AttributeError() # Delegate attribute access to the internal dataloader return getattr(self.base_dataloader, name) def state_dict(self): return self.dl_state_dict def load_state_dict(self, state_dict): self.base_dataloader.load_state_dict(state_dict) @property def __class__(self): """ In order to maintain backwards compatability with other code, we need to ensure `isinstance(obj, DataLoader)` returs true. This is because some downstream code assumes that the `DataLoader` is the base class of the object. """ return self.base_dataloader.__class__ def __len__(self): return len(self.base_dataloader) def adjust_state_dict_for_prefetch(self): """ Adjusts the state dict for prefetching. Natively, this will adjust all of the iters yielded keys in `self.dl_state_dict` by a factor of `num_processes - 1`, however if a custom correction is needed, this can be overridden. 
This should modify `self.dl_state_dict` directly """ # The state dict will be off by a factor of `n-1` batch too many during DDP, # so we need to adjust it here if PartialState().distributed_type != DistributedType.NO: factor = PartialState().num_processes - 1 if self.dl_state_dict["_sampler_iter_yielded"] > 0: self.dl_state_dict["_sampler_iter_yielded"] -= factor if self.dl_state_dict["_num_yielded"] > 0: self.dl_state_dict["_num_yielded"] -= factor if self.dl_state_dict["_index_sampler_state"] is not None: if ( "samples_yielded" in self.dl_state_dict["_index_sampler_state"] and self.dl_state_dict["_index_sampler_state"]["samples_yielded"] > 0 ): self.dl_state_dict["_index_sampler_state"]["samples_yielded"] -= self.batch_size * factor def _update_state_dict(self): # The state_dict of the underlying base_dataloader may be ahead of what is currently being yielded. # E.g. the implementation of DataLoaderShard involves having an underlying iterator 1 element ahead of # what it wants to yield. # # _update_state_dict is called to snapshot the state_dict that would properly recover the DataLoaderAdapter. if hasattr(self.base_dataloader, "state_dict"): self.dl_state_dict = self.base_dataloader.state_dict() # Potentially modify the state_dict to adjust for prefetching self.adjust_state_dict_for_prefetch() # Then tag if we are at the end of the dataloader self.dl_state_dict["_iterator_finished"] = self.end_of_dataloader class DataLoaderShard(DataLoaderAdapter, DataLoaderStateMixin): """ Subclass of `DataLoaderAdapter` that will deal with device placement and current distributed setup. Args: dataset (`torch.utils.data.dataset.Dataset`): The dataset to use to build this dataloader. device (`torch.device`, *optional*): If passed, the device to put all batches on. rng_types (list of `str` or [`~utils.RNGType`]): The list of random number generators to synchronize at the beginning of each iteration. Should be one or several of: - `"torch"`: the base torch random number generator - `"cuda"`: the CUDA random number generator (GPU only) - `"xla"`: the XLA random number generator (TPU only) - `"generator"`: an optional `torch.Generator` synchronized_generator (`torch.Generator`, *optional*): A random number generator to keep synchronized across processes. skip_batches (`int`, *optional*, defaults to 0): The number of batches to skip at the beginning. use_stateful_dataloader (`bool`, *optional*, defaults to `False`): Whether to have this class adapt `StatefulDataLoader` from `torchdata` instead of the regular `DataLoader`. **kwargs (additional keyword arguments, *optional*): All other keyword arguments to pass to the regular `DataLoader` initialization. **Available attributes:** - **total_batch_size** (`int`) -- Total batch size of the dataloader across all processes. Equal to the original batch size when `split_batches=True`; otherwise the original batch size * the total number of processes - **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes. 
""" def __init__( self, dataset, device=None, rng_types=None, synchronized_generator=None, skip_batches=0, use_stateful_dataloader=False, _drop_last: bool = False, _non_blocking: bool = False, torch_device_mesh=None, **kwargs, ): super().__init__(dataset, use_stateful_dataloader=use_stateful_dataloader, **kwargs) self.device = device self.rng_types = rng_types self.synchronized_generator = synchronized_generator self.skip_batches = skip_batches self.gradient_state = GradientState() self._drop_last = _drop_last self._non_blocking = _non_blocking self.iteration = 0 def __iter__(self): if self.rng_types is not None: synchronize_rng_states(self.rng_types, self.synchronized_generator) self.begin() self.set_epoch(self.iteration) dataloader_iter = self.base_dataloader.__iter__() # We iterate one batch ahead to check when we are at the end try: current_batch = next(dataloader_iter) except StopIteration: yield batch_index = 0 while True: try: # But we still move it to the device so it is done before `StopIteration` is reached if self.device is not None: current_batch = send_to_device(current_batch, self.device, non_blocking=self._non_blocking) self._update_state_dict() next_batch = next(dataloader_iter) if batch_index >= self.skip_batches: yield current_batch batch_index += 1 current_batch = next_batch except StopIteration: self.end_of_dataloader = True self._update_state_dict() if batch_index >= self.skip_batches: yield current_batch break self.iteration += 1 self.end() def __reduce__(self): """ Define the `__reduce__` method to ensure a `DataLoaderShard` can be pickled and unpickled. This needs to be explicitly defined since default pickling behavior is broken by `DataLoaderAdapter` messing with its `__class__` member. """ args = super().__reduce__() return (DataLoaderShard, *args[1:]) def set_epoch(self, epoch: int): # In case it is manually passed in, the user can set it to what they like if self.iteration != epoch: self.iteration = epoch if hasattr(self.batch_sampler, "set_epoch"): self.batch_sampler.set_epoch(epoch) if hasattr(self.batch_sampler, "sampler") and hasattr(self.batch_sampler.sampler, "set_epoch"): self.batch_sampler.sampler.set_epoch(epoch) # We support if a custom `Dataset` implementation has `set_epoch` # or in general HF datasets `Datasets` elif hasattr(self.dataset, "set_epoch"): self.dataset.set_epoch(epoch) @property def total_batch_size(self): batch_sampler = self.sampler if isinstance(self.sampler, BatchSampler) else self.batch_sampler return ( batch_sampler.batch_size if getattr(batch_sampler, "split_batches", False) else (batch_sampler.batch_size * getattr(batch_sampler, "num_processes", 1)) ) @property def total_dataset_length(self): if hasattr(self.dataset, "total_length"): return self.dataset.total_length else: return len(self.dataset) def get_sampler(self): return get_sampler(self) def set_sampler(self, sampler): sampler_is_batch_sampler = isinstance(self.sampler, BatchSampler) if sampler_is_batch_sampler: self.sampler.sampler = sampler else: self.batch_sampler.sampler = sampler if hasattr(self.batch_sampler, "batch_sampler"): self.batch_sampler.batch_sampler.sampler = sampler if is_torch_xla_available(): import torch_xla.distributed.parallel_loader as xpl class MpDeviceLoaderWrapper(xpl.MpDeviceLoader): """ Wrapper for the xpl.MpDeviceLoader class that knows the total batch size. XLA preloading threads will all call DataLoaderShard's __iter__(). 
Remove rng_types from DataLoaderShard to prevent it from using the XLA device in the preloading threads, and synchronize the RNG once from the main thread only. **Available attributes:** - **total_batch_size** (`int`) -- Total batch size of the dataloader across all processes. Equal to the original batch size when `split_batches=True`; otherwise the original batch size * the total number of processes - **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes. """ def __init__(self, dataloader: DataLoaderShard, device: torch.device): super().__init__(dataloader, device) self._rng_types = self._loader.rng_types self._loader.rng_types = None self.device = device def __iter__(self): if self._rng_types is not None: synchronize_rng_states(self._rng_types, self._loader.synchronized_generator) return super().__iter__() def set_epoch(self, epoch: int): if hasattr(self.dataloader, "set_epoch"): self.dataloader.set_epoch(epoch) @property def total_batch_size(self): return self._loader.total_batch_size @property def total_dataset_length(self): return self._loader.total_dataset_length @property def batch_sampler(self): return self._loader.batch_sampler @property def dataloader(self): return self._loader class DataLoaderDispatcher(DataLoaderAdapter, DataLoaderStateMixin): """ Subclass of `DataLoaderAdapter` that will iterate and preprocess on process 0 only, then dispatch on each process their part of the batch. Args: split_batches (`bool`, *optional*, defaults to `False`): Whether the resulting `DataLoader` should split the batches of the original data loader across devices or yield full batches (in which case it will yield batches starting at the `process_index`-th and advancing of `num_processes` batches at each iteration). Another way to see this is that the observed batch size will be the same as the initial `dataloader` if this option is set to `True`, the batch size of the initial `dataloader` multiplied by `num_processes` otherwise. Setting this option to `True` requires that the batch size of the `dataloader` is a round multiple of `batch_size`. skip_batches (`int`, *optional*, defaults to 0): The number of batches to skip at the beginning of an iteration. use_stateful_dataloader (`bool`, *optional*, defaults to `False`): Whether to have this class adapt `StatefulDataLoader` from `torchdata` instead of the regular `DataLoader`. **Available attributes:** - **total_batch_size** (`int`) -- Total batch size of the dataloader across all processes. Equal to the original batch size when `split_batches=True`; otherwise the original batch size * the total number of processes - **total_dataset_length** (`int`) -- Total length of the inner dataset across all processes. 
""" def __init__( self, dataset, split_batches: bool = False, skip_batches=0, use_stateful_dataloader=False, _drop_last: bool = False, _non_blocking: bool = False, slice_fn=None, torch_device_mesh=None, **kwargs, ): shuffle = False from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe # We need to save the shuffling state of the DataPipe if isinstance(dataset, ShufflerIterDataPipe): shuffle = dataset._shuffle_enabled super().__init__(dataset, use_stateful_dataloader=use_stateful_dataloader, **kwargs) self.split_batches = split_batches if shuffle: torch.utils.data.graph_settings.apply_shuffle_settings(dataset, shuffle=shuffle) self.gradient_state = GradientState() self.state = PartialState() self._drop_last = _drop_last self._non_blocking = _non_blocking self.skip_batches = skip_batches self.torch_device_mesh = torch_device_mesh self.slice_fn = slice_tensors if slice_fn is None else slice_fn self.iteration = 0 # if a device mesh is provided extract each dimension (dp, fsdp, tp) # device mesh may hold any number of dimensions, however, # below code is for targetted support for dp, fsdp and tp # device mesh will be used only if there is tp involved # or any multi-dimensional parallelism involving tp # (dp, tp) (fsdp, tp) (dp, fsdp, tp) # otherwise the default behavour not using device mesh should be sufficient # since multi dimensional parallelism devoid of tp would anyway need # different batches for each process irrespective of dp or fsdp self.submesh_tp = None self.submesh_dp = None self.submesh_fsdp = None if self.torch_device_mesh and "tp" in self.torch_device_mesh.mesh_dim_names: self.submesh_tp = self.torch_device_mesh["tp"] if "dp" in self.torch_device_mesh.mesh_dim_names: self.submesh_dp = self.torch_device_mesh["dp"] if "fsdp" in self.torch_device_mesh.mesh_dim_names: self.submesh_fsdp = self.torch_device_mesh["fsdp"] if self.submesh_tp and (self.submesh_dp or self.submesh_fsdp): raise ValueError("TP + (DP/FSDP) is not yet supported in dispatch mode") def _fetch_batches(self, iterator): batches, batch = None, None # On process 0, we gather the batch to dispatch. if self.state.process_index == 0: # Procedure to support TP only is simpler # since we want to dispatch the same batch of samples across all ranks # this removes complexity of handling multiple tp rank groups when TP + DP # combination is involved. try: # for TP case avoid using split_batches # since it would mean that the dataloader should be spilling out # duplicates of batches. if self.split_batches: # One batch of the main iterator is dispatched and split. if self.submesh_tp: logger.warning( "Use of split_batches for TP would need the dataloader to produce duplicate batches," "otherwise, use dispatch_batches=True instead." ) self._update_state_dict() batch = next(iterator) else: # num_processes batches of the main iterator are concatenated then dispatched and split. # We add the batches one by one so we have the remainder available when drop_last=False. batches = [] if self.submesh_tp: # when tp, extract single batch and then replicate self._update_state_dict() batch = next(iterator) batches = [batch] * self.state.num_processes else: for _ in range(self.state.num_processes): self._update_state_dict() batches.append(next(iterator)) try: batch = concatenate(batches, dim=0) except RuntimeError as e: raise RuntimeError( "You can't use batches of different size with `dispatch_batches=True` or when using an `IterableDataset`." 
"either pass `dispatch_batches=False` and have each process fetch its own batch " " or pass `split_batches=True`. By doing so, the main process will fetch a full batch and " "slice it into `num_processes` batches for each process." ) from e # In both cases, we need to get the structure of the batch that we will broadcast on other # processes to initialize the tensors with the right shape. # data_structure, stop_iteration batch_info = [get_data_structure(batch), False] except StopIteration: batch_info = [None, True] else: batch_info = [None, self._stop_iteration] # This is inplace, so after this instruction, every process has the same `batch_info` as process 0. broadcast_object_list(batch_info) self._stop_iteration = batch_info[1] if self._stop_iteration: # If drop_last is False and split_batches is False, we may have a remainder to take care of. if not self.split_batches and not self._drop_last: if self.state.process_index == 0 and len(batches) > 0: batch = concatenate(batches, dim=0) batch_info = [get_data_structure(batch), False] else: batch_info = [None, True] broadcast_object_list(batch_info) return batch, batch_info def __iter__(self): self.begin() self.set_epoch(self.iteration) main_iterator = None if is_torch_version(">=", "2.0.1"): # NOTE PyTorch DataLoader adds forward compatibilities for DataPipes, which broadcasts # shared seed to all dist processes. Thus, we need to create iterator for all dist processes. # But, we only iterate through the DataLoader on process 0. main_iterator = self.base_dataloader.__iter__() elif self.state.process_index == 0: main_iterator = self.base_dataloader.__iter__() stop_iteration = False self._stop_iteration = False first_batch = None next_batch, next_batch_info = self._fetch_batches(main_iterator) batch_index = 0 while not stop_iteration: batch, batch_info = next_batch, next_batch_info if self.state.process_index != 0: # Initialize tensors on other processes than process 0. batch = initialize_tensors(batch_info[0]) batch = send_to_device(batch, self.state.device, non_blocking=self._non_blocking) # Broadcast the batch before splitting it. batch = broadcast(batch, from_process=0) if not self._drop_last and first_batch is None: # We keep at least num processes elements of the first batch to be able to complete the last batch first_batch = self.slice_fn( batch, slice(0, self.state.num_processes), process_index=self.state.process_index, num_processes=self.state.num_processes, ) if batch is None: raise ValueError( f"Batch does not contain any data (`{batch}`). At the end of all iterable data available before expected stop iteration." ) observed_batch_size = find_batch_size(batch) batch_size = observed_batch_size // self.state.num_processes stop_iteration = self._stop_iteration if not stop_iteration: # We may still be at the end of the dataloader without knowing it yet: if there is nothing left in # the dataloader since the number of batches is a round multiple of the number of processes. next_batch, next_batch_info = self._fetch_batches(main_iterator) # next_batch_info[0] is None when there are no more batches, otherwise we still need to process them. if self._stop_iteration and next_batch_info[0] is None: stop_iteration = True if not self._drop_last and stop_iteration and observed_batch_size % self.state.num_processes != 0: # If the last batch is not complete, let's add the first batch to it. batch = concatenate([batch, first_batch], dim=0) # Batch size computation above is wrong, it's off by 1 so we fix it. 
batch_size += 1 data_slice = slice(self.state.process_index * batch_size, (self.state.process_index + 1) * batch_size) batch = self.slice_fn( batch, data_slice, process_index=self.state.process_index, num_processes=self.state.num_processes, ) if stop_iteration: self.end_of_dataloader = True self._update_state_dict() self.remainder = observed_batch_size if batch_index >= self.skip_batches: yield batch batch_index += 1 self.iteration += 1 self.end() def set_epoch(self, epoch: int): # In case it is manually passed in, the user can set it to what they like if self.iteration != epoch: self.iteration = epoch if hasattr(self.batch_sampler, "sampler") and hasattr(self.batch_sampler.sampler, "set_epoch"): self.batch_sampler.sampler.set_epoch(epoch) elif hasattr(self.dataset, "set_epoch"): self.dataset.set_epoch(epoch) def __len__(self): whole_length = len(self.base_dataloader) if self.split_batches: return whole_length elif self._drop_last: return whole_length // self.state.num_processes else: return math.ceil(whole_length / self.state.num_processes) def __reduce__(self): """ Define the `__reduce__` method to ensure a `DataLoaderDispatcher` can be pickled and unpickled. This needs to be explicitly defined since default pickling behavior is broken by `DataLoaderAdapter` messing with its `__class__` member. """ args = super().__reduce__() return (DataLoaderDispatcher, *args[1:]) @property def total_batch_size(self): return ( self.dataset.batch_size if self.split_batches else (self.dataset.batch_size * self.dataset.num_processes) ) @property def total_dataset_length(self): return len(self.dataset) def get_sampler(self): return get_sampler(self) def set_sampler(self, sampler): sampler_is_batch_sampler = isinstance(self.sampler, BatchSampler) if sampler_is_batch_sampler: self.sampler.sampler = sampler else: self.batch_sampler.sampler = sampler if hasattr(self.batch_sampler, "batch_sampler"): self.batch_sampler.batch_sampler.sampler = sampler def get_sampler(dataloader): """ Get the sampler associated to the dataloader Args: dataloader (`torch.utils.data.dataloader.DataLoader`): The data loader to split across several devices. Returns: `torch.utils.data.Sampler`: The sampler associated to the dataloader """ sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler) if sampler_is_batch_sampler: sampler = getattr(dataloader.sampler, "sampler", None) else: sampler = getattr(dataloader.batch_sampler, "sampler", None) return sampler def prepare_data_loader( dataloader: DataLoader, device: Optional[torch.device] = None, num_processes: Optional[int] = None, process_index: Optional[int] = None, split_batches: bool = False, put_on_device: bool = False, rng_types: Optional[List[Union[str, RNGType]]] = None, dispatch_batches: Optional[bool] = None, even_batches: bool = True, slice_fn_for_dispatch: Optional[Callable] = None, use_seedable_sampler: bool = False, data_seed: Optional[int] = None, non_blocking: bool = False, use_stateful_dataloader: bool = False, torch_device_mesh=None, ) -> DataLoader: """ Wraps a PyTorch `DataLoader` to generate batches for one of the processes only. Depending on the value of the `drop_last` attribute of the `dataloader` passed, it will either stop the iteration at the first batch that would be too small / not present on all processes or loop with indices from the beginning. Args: dataloader (`torch.utils.data.dataloader.DataLoader`): The data loader to split across several devices. device (`torch.device`): The target device for the returned `DataLoader`. 
num_processes (`int`, *optional*): The number of processes running concurrently. Will default to the value given by [`~state.PartialState`]. process_index (`int`, *optional*): The index of the current process. Will default to the value given by [`~state.PartialState`]. split_batches (`bool`, *optional*, defaults to `False`): Whether the resulting `DataLoader` should split the batches of the original data loader across devices or yield full batches (in which case it will yield batches starting at the `process_index`-th and advancing of `num_processes` batches at each iteration). Another way to see this is that the observed batch size will be the same as the initial `dataloader` if this option is set to `True`, the batch size of the initial `dataloader` multiplied by `num_processes` otherwise. Setting this option to `True` requires that the batch size of the `dataloader` is a round multiple of `batch_size`. put_on_device (`bool`, *optional*, defaults to `False`): Whether or not to put the batches on `device` (only works if the batches are nested list, tuples or dictionaries of tensors). rng_types (list of `str` or [`~utils.RNGType`]): The list of random number generators to synchronize at the beginning of each iteration. Should be one or several of: - `"torch"`: the base torch random number generator - `"cuda"`: the CUDA random number generator (GPU only) - `"xla"`: the XLA random number generator (TPU only) - `"generator"`: the `torch.Generator` of the sampler (or batch sampler if there is no sampler in your dataloader) or of the iterable dataset (if it exists) if the underlying dataset is of that type. dispatch_batches (`bool`, *optional*): If set to `True`, the dataloader prepared is only iterated through on the main process and then the batches are split and broadcast to each process. Will default to `True` when the underlying dataset is an `IterableDataset`, `False` otherwise. even_batches (`bool`, *optional*, defaults to `True`): If set to `True`, in cases where the total batch size across all processes does not exactly divide the dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among all workers. slice_fn_for_dispatch (`Callable`, *optional*`): If passed, this function will be used to slice tensors across `num_processes`. Will default to [`~utils.slice_tensors`]. This argument is used only when `dispatch_batches` is set to `True` and will be ignored otherwise. use_seedable_sampler (`bool`, *optional*, defaults to `False`): Whether to use the [`~data_loader.SeedableRandomSampler`] instead of a `RandomSampler` for better reproducability. Comes at a cost of potentially different performances due to different shuffling algorithms but ensures results will be the *exact* same. Should be paired with `set_seed()` at every `self.set_epoch` data_seed (`int`, *optional*, defaults to `None`): The seed to use for the underlying generator when using `use_seedable_sampler`. If `None`, the generator will use the current default seed from torch. non_blocking (`bool`, *optional*, defaults to `False`): If set to `True`, dataloader will utilize non-blocking host-to-device transfers. If the dataloader has `pin_memory` set to `True`, this will help to increase overlap between data transfer and computations. use_stateful_dataloader (`bool`, *optional*, defaults to `False`): "If set to true, the dataloader prepared by the Accelerator will be backed by " "[torchdata.StatefulDataLoader](https://github.com/pytorch/data/tree/main/torchdata/stateful_dataloader). 
This requires `torchdata` version 0.8.0 or higher that supports StatefulDataLoader to be installed." torch_device_mesh (`torch.distributed.DeviceMesh`, *optional*, defaults to `None`): PyTorch device mesh. Returns: `torch.utils.data.dataloader.DataLoader`: A new data loader that will yield the portion of the batches <Tip warning={true}> `BatchSampler`s with varying batch sizes are not enabled by default. To enable this behaviour, set `even_batches` equal to `False` </Tip> """ if dispatch_batches is None: if not put_on_device: dispatch_batches = False else: dispatch_batches = isinstance(dataloader.dataset, IterableDataset) if dispatch_batches and not put_on_device: raise ValueError("Using `dispatch_batches=True` requires `put_on_device=True`.") # Grab defaults from PartialState state = PartialState() if num_processes is None: num_processes = state.num_processes if process_index is None: process_index = state.process_index # when device mesh is used, specifically with TP # then there is need to update process_index and num_processes # to bring in the effect of generating same batch across TP ranks # and different batch across FSDP and DP ranks. # Example: # if device mesh is (dp,fsdp,tp) = (2, 2, 3) # ranks would range from 0...11 # from data angle ranks should look like 0 0 0 1 1 1 2 2 2 3 3 3 # processes with same ranks/ids would receive the same batch if torch_device_mesh: submesh_fsdp_size = 1 submesh_dp_size = 1 submesh_tp_size = 1 if "tp" in torch_device_mesh.mesh_dim_names: submesh_tp_size = torch_device_mesh["tp"].size() if "dp" in torch_device_mesh.mesh_dim_names: submesh_dp_size = torch_device_mesh["dp"].size() if "fsdp" in torch_device_mesh.mesh_dim_names: submesh_fsdp_size = torch_device_mesh["fsdp"].size() process_index = process_index // submesh_tp_size num_processes = submesh_fsdp_size * submesh_dp_size # Sanity check if split_batches: if dataloader.batch_size is not None: batch_size_for_check = dataloader.batch_size else: # For custom batch_sampler if hasattr(dataloader.batch_sampler, "batch_size"): batch_size_for_check = dataloader.batch_sampler.batch_size else: raise ValueError( "In order to use `split_batches==True` you must have a `batch_size` attribute either in the passed " "`dataloader` or `dataloader.batch_sampler` objects, and it has to return a natural number. " "Your `dataloader.batch_size` is None and `dataloader.batch_sampler` " f"(`{type(dataloader.batch_sampler)}`) does not have the `batch_size` attribute set." ) if batch_size_for_check > 1 and batch_size_for_check % num_processes != 0: raise ValueError( f"To use a `DataLoader` in `split_batches` mode, the batch size ({dataloader.batch_size}) " f"needs to be a round multiple of the number of processes ({num_processes})." ) new_dataset = dataloader.dataset # Iterable dataset doesn't like batch_sampler, but data_loader creates a default one for it new_batch_sampler = dataloader.batch_sampler if not isinstance(new_dataset, IterableDataset) else None sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler) synchronized_generator = None sampler = get_sampler(dataloader) if isinstance(sampler, RandomSampler) and use_seedable_sampler: # When iterating through the dataloader during distributed processes # we want to ensure that on each process we are iterating through the same # samples in the same order if a seed is set. This requires a tweak # to the `torch.utils.data.RandomSampler` class (if used). 
sampler = SeedableRandomSampler( data_source=sampler.data_source, replacement=sampler.replacement, num_samples=sampler._num_samples, generator=getattr(sampler, "generator", torch.Generator()), data_seed=data_seed, ) if isinstance(dataloader.sampler, RandomSampler) and state.distributed_type == DistributedType.XLA: # isinstance(dataloader.sampler, RandomSampler) indicates the original dataloader has `shuffle` enabled. generator = torch.Generator().manual_seed(42) dataloader.generator = generator dataloader.sampler.generator = generator # No change if no multiprocess if (num_processes != 1 or state.distributed_type == DistributedType.MEGATRON_LM) and not dispatch_batches: if isinstance(new_dataset, IterableDataset): if getattr(dataloader.dataset, "generator", None) is not None: synchronized_generator = dataloader.dataset.generator new_dataset = IterableDatasetShard( new_dataset, batch_size=dataloader.batch_size, drop_last=dataloader.drop_last, num_processes=num_processes, process_index=process_index, split_batches=split_batches, ) else: if not use_seedable_sampler and hasattr(sampler, "generator"): if sampler.generator is None: sampler.generator = torch.Generator() synchronized_generator = sampler.generator batch_sampler = dataloader.sampler if sampler_is_batch_sampler else dataloader.batch_sampler new_batch_sampler = BatchSamplerShard( batch_sampler, num_processes=num_processes, process_index=process_index, split_batches=split_batches, even_batches=even_batches, ) # We ignore all of those since they are all dealt with by our new_batch_sampler ignore_kwargs = [ "batch_size", "shuffle", "sampler", "batch_sampler", "drop_last", ] if rng_types is not None and synchronized_generator is None and "generator" in rng_types: rng_types.remove("generator") kwargs = { k: getattr(dataloader, k, _PYTORCH_DATALOADER_KWARGS[k]) for k in _PYTORCH_DATALOADER_KWARGS if k not in ignore_kwargs } # Need to provide batch_size as batch_sampler is None for Iterable dataset if new_batch_sampler is None: kwargs["drop_last"] = dataloader.drop_last kwargs["batch_size"] = ( dataloader.batch_size // num_processes if split_batches and not dispatch_batches else dataloader.batch_size ) if dispatch_batches: kwargs.pop("generator") dataloader = DataLoaderDispatcher( new_dataset, split_batches=split_batches, batch_sampler=new_batch_sampler, _drop_last=dataloader.drop_last, _non_blocking=non_blocking, slice_fn=slice_fn_for_dispatch, use_stateful_dataloader=use_stateful_dataloader, torch_device_mesh=torch_device_mesh, **kwargs, ) elif sampler_is_batch_sampler: dataloader = DataLoaderShard( new_dataset, device=device if put_on_device and state.distributed_type != DistributedType.XLA else None, sampler=new_batch_sampler, batch_size=dataloader.batch_size, rng_types=rng_types, _drop_last=dataloader.drop_last, _non_blocking=non_blocking, synchronized_generator=synchronized_generator, use_stateful_dataloader=use_stateful_dataloader, **kwargs, ) else: dataloader = DataLoaderShard( new_dataset, device=device if put_on_device and state.distributed_type != DistributedType.XLA else None, batch_sampler=new_batch_sampler, rng_types=rng_types, synchronized_generator=synchronized_generator, _drop_last=dataloader.drop_last, _non_blocking=non_blocking, use_stateful_dataloader=use_stateful_dataloader, **kwargs, ) if isinstance(sampler, SeedableRandomSampler) and use_seedable_sampler: dataloader.set_sampler(sampler) if state.distributed_type == DistributedType.XLA: return MpDeviceLoaderWrapper(dataloader, device) return dataloader class 
SkipBatchSampler(BatchSampler): """ A `torch.utils.data.BatchSampler` that skips the first `n` batches of another `torch.utils.data.BatchSampler`. Should not be used if the original dataloader is a `StatefulDataLoader`. """ def __init__(self, batch_sampler, skip_batches=0): self.batch_sampler = batch_sampler self.skip_batches = skip_batches def __iter__(self): for index, samples in enumerate(self.batch_sampler): if index >= self.skip_batches: yield samples @property def total_length(self): return len(self.batch_sampler) def __len__(self): return len(self.batch_sampler) - self.skip_batches class SkipDataLoader(DataLoaderAdapter, DataLoaderStateMixin): """ Subclass of a PyTorch `DataLoader` that will skip the first batches. Generally it's preferable to use `skip_first_batches`/`torchdata.StatefulDataLoader` instead of this class. Args: dataset (`torch.utils.data.dataset.Dataset`): The dataset to use to build this dataloader. skip_batches (`int`, *optional*, defaults to 0): The number of batches to skip at the beginning. kwargs: All other keyword arguments to pass to the regular `DataLoader` initialization. """ def __init__(self, dataset, skip_batches=0, use_stateful_dataloader=False, **kwargs): super().__init__(dataset, use_stateful_dataloader=use_stateful_dataloader, **kwargs) self.skip_batches = skip_batches self.gradient_state = GradientState() def __iter__(self): self.begin() for index, batch in enumerate(self.base_dataloader.__iter__()): if index >= self.skip_batches: self._update_state_dict() yield batch self.end() def __len__(self): return len(self.base_dataloader) - self.skip_batches def __reduce__(self): """ Define the `__reduce__` method to ensure a `SkipDataLoader` can be pickled and unpickled. This needs to be explicitly defined since default pickling behavior is broken by `DataLoaderAdapter` messing with its `__class__` member. """ args = super().__reduce__() return (SkipDataLoader, *args[1:]) def skip_first_batches(dataloader, num_batches=0): """ Creates a `torch.utils.data.DataLoader` that will efficiently skip the first `num_batches`. Should not be used if the original dataloader is a `StatefulDataLoader`. 
""" state = PartialState() if state.distributed_type == DistributedType.XLA: device = dataloader.device dataloader = dataloader.dataloader dataset = dataloader.dataset sampler_is_batch_sampler = False if isinstance(dataset, IterableDataset): new_batch_sampler = None else: sampler_is_batch_sampler = isinstance(dataloader.sampler, BatchSampler) batch_sampler = dataloader.sampler if sampler_is_batch_sampler else dataloader.batch_sampler new_batch_sampler = SkipBatchSampler(batch_sampler, skip_batches=num_batches) # We ignore all of those since they are all dealt with by our new_batch_sampler ignore_kwargs = [ "batch_size", "shuffle", "sampler", "batch_sampler", "drop_last", ] kwargs = { k: getattr(dataloader, k, _PYTORCH_DATALOADER_KWARGS[k]) for k in _PYTORCH_DATALOADER_KWARGS if k not in ignore_kwargs } # Need to provide batch_size as batch_sampler is None for Iterable dataset if new_batch_sampler is None: kwargs["drop_last"] = dataloader.drop_last kwargs["batch_size"] = dataloader.batch_size if isinstance(dataloader, DataLoaderDispatcher): if new_batch_sampler is None: # Need to manually skip batches in the dataloader kwargs["skip_batches"] = num_batches dataloader = DataLoaderDispatcher( dataset, split_batches=dataloader.split_batches, batch_sampler=new_batch_sampler, _drop_last=dataloader._drop_last, **kwargs, ) elif isinstance(dataloader, DataLoaderShard): if new_batch_sampler is None: # Need to manually skip batches in the dataloader kwargs["skip_batches"] = num_batches elif sampler_is_batch_sampler: kwargs["sampler"] = new_batch_sampler kwargs["batch_size"] = dataloader.batch_size else: kwargs["batch_sampler"] = new_batch_sampler dataloader = DataLoaderShard( dataset, device=dataloader.device, rng_types=dataloader.rng_types, synchronized_generator=dataloader.synchronized_generator, **kwargs, ) else: if new_batch_sampler is None: # Need to manually skip batches in the dataloader dataloader = SkipDataLoader(dataset, skip_batches=num_batches, **kwargs) else: dataloader = DataLoader(dataset, batch_sampler=new_batch_sampler, **kwargs) if state.distributed_type == DistributedType.XLA: dataloader = MpDeviceLoaderWrapper(dataloader, device) return dataloader
accelerate/src/accelerate/data_loader.py/0
{ "file_path": "accelerate/src/accelerate/data_loader.py", "repo_id": "accelerate", "token_count": 26947 }
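Editor's note: to make the sharding semantics of `data_loader.py` concrete, the short sketch below exercises `BatchSamplerShard` on two simulated processes and reproduces the behaviour described in its docstring: different full batches per process when `split_batches=False`, and per-process slices of every batch when `split_batches=True`. It is an illustrative example, not part of the module; expected outputs are noted in comments.

from torch.utils.data import BatchSampler, SequentialSampler

from accelerate.data_loader import BatchSamplerShard

# Two batches of four indices: [[0, 1, 2, 3], [4, 5, 6, 7]].
batch_sampler = BatchSampler(SequentialSampler(range(8)), batch_size=4, drop_last=False)

# split_batches=False: each simulated process yields different full batches.
for rank in (0, 1):
    shard = BatchSamplerShard(batch_sampler, num_processes=2, process_index=rank)
    print(rank, list(shard))
# -> 0 [[0, 1, 2, 3]]
# -> 1 [[4, 5, 6, 7]]

# split_batches=True: every batch is split into one piece per process.
for rank in (0, 1):
    shard = BatchSamplerShard(
        batch_sampler, num_processes=2, process_index=rank, split_batches=True
    )
    print(rank, list(shard))
# -> 0 [[0, 1], [4, 5]]
# -> 1 [[2, 3], [6, 7]]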
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import math import os from copy import deepcopy import datasets import evaluate import torch import transformers from datasets import load_dataset from torch.utils.data import DataLoader, IterableDataset from transformers import AutoModelForSequenceClassification, AutoTokenizer from accelerate import Accelerator, DataLoaderConfiguration, DistributedType from accelerate.data_loader import DataLoaderDispatcher from accelerate.test_utils import RegressionDataset, RegressionModel, torch_device from accelerate.utils import is_torch_xla_available, set_seed os.environ["TRANSFORMERS_NO_ADVISORY_WARNINGS"] = "true" class ListHandler(logging.Handler): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.logs = [] def emit(self, record): self.logs.append(record) def get_basic_setup(accelerator, num_samples=82, batch_size=16): "Returns everything needed to perform basic training" set_seed(42) model = RegressionModel() ddp_model = deepcopy(model) dset = RegressionDataset(length=num_samples) dataloader = DataLoader(dset, batch_size=batch_size) model.to(accelerator.device) ddp_model, dataloader = accelerator.prepare(ddp_model, dataloader) return model, ddp_model, dataloader def get_dataloader(accelerator: Accelerator, use_longest=False): tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/mrpc-bert-base-cased") dataset = load_dataset("glue", "mrpc", split="validation") def tokenize_function(examples): outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None) return outputs with accelerator.main_process_first(): tokenized_datasets = dataset.map( tokenize_function, batched=True, remove_columns=["idx", "sentence1", "sentence2"], ) tokenized_datasets = tokenized_datasets.rename_column("label", "labels") def collate_fn(examples): if use_longest: return tokenizer.pad(examples, padding="longest", return_tensors="pt") return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt") return DataLoader(tokenized_datasets, shuffle=False, collate_fn=collate_fn, batch_size=16) def get_mrpc_setup(dispatch_batches, split_batches): dataloader_config = DataLoaderConfiguration(dispatch_batches=dispatch_batches, split_batches=split_batches) accelerator = Accelerator(dataloader_config=dataloader_config) dataloader = get_dataloader(accelerator, not dispatch_batches) model = AutoModelForSequenceClassification.from_pretrained( "hf-internal-testing/mrpc-bert-base-cased", return_dict=True ) ddp_model, ddp_dataloader = accelerator.prepare(model, dataloader) return { "ddp": [ddp_model, ddp_dataloader, torch_device], "no": [model, dataloader, accelerator.device], }, accelerator def generate_predictions(model, dataloader, accelerator): logits_and_targets = [] for batch in dataloader: input, target = batch.values() with torch.no_grad(): logit = model(input) logit, target = accelerator.gather_for_metrics((logit, target)) 
logits_and_targets.append((logit, target)) logits, targs = [], [] for logit, targ in logits_and_targets: logits.append(logit) targs.append(targ) logits, targs = torch.cat(logits), torch.cat(targs) return logits, targs def test_torch_metrics( accelerator: Accelerator, num_samples=82, dispatch_batches=False, split_batches=False, batch_size=16 ): _, ddp_model, dataloader = get_basic_setup(accelerator, num_samples, batch_size) logits, _ = generate_predictions(ddp_model, dataloader, accelerator) assert ( len(logits) == num_samples ), f"Unexpected number of inputs:\n Expected: {num_samples}\n Actual: {len(logits)}" def test_mrpc(dispatch_batches: bool = False, split_batches: bool = False): metric = evaluate.load("glue", "mrpc") setup, accelerator = get_mrpc_setup(dispatch_batches, split_batches) # First do baseline model, dataloader, device = setup["no"] model.to(device) model.eval() for batch in dataloader: batch.to(device) with torch.inference_mode(): outputs = model(**batch) preds = outputs.logits.argmax(dim=-1) metric.add_batch(predictions=preds, references=batch["labels"]) baseline = metric.compute() # Then do distributed model, dataloader, device = setup["ddp"] model.eval() for batch in dataloader: with torch.inference_mode(): outputs = model(**batch) preds = outputs.logits.argmax(dim=-1) references = batch["labels"] preds, references = accelerator.gather_for_metrics((preds, references)) metric.add_batch(predictions=preds, references=references) distributed = metric.compute() for key in "accuracy f1".split(): assert math.isclose( baseline[key], distributed[key] ), f"Baseline and Distributed are not the same for key {key}:\n\tBaseline: {baseline[key]}\n\tDistributed: {distributed[key]}\n" def test_gather_for_metrics_with_non_tensor_objects_iterable_dataset(): class DummyIterableDataset(IterableDataset): def __init__(self, data): self.data = data def __len__(self): return len(self.data) def __iter__(self): yield from self.data iterable_dataset = DummyIterableDataset([n for n in range(30)]) dataloader = DataLoader(iterable_dataset, batch_size=4) accelerator = Accelerator() prepared_dataloader = accelerator.prepare(dataloader) if accelerator.is_main_process: logger = logging.root.manager.loggerDict["accelerate.accelerator"] list_handler = ListHandler() logger.addHandler(list_handler) batches_for_metrics = [] for batch in prepared_dataloader: batches_for_metrics.append(accelerator.gather_for_metrics(batch)) assert torch.cat(batches_for_metrics).size(0) == 30 if accelerator.is_main_process: assert len(list_handler.logs) == 0 logger.removeHandler(list_handler) def test_gather_for_metrics_with_iterable_dataset(): class DummyIterableDataset(IterableDataset): def __init__(self, data): self.data = data def __len__(self): return len(self.data) def __iter__(self): yield from self.data iterable_dataset = DummyIterableDataset(torch.as_tensor(range(30))) dataloader = DataLoader(iterable_dataset, batch_size=4) accelerator = Accelerator() prepared_dataloader = accelerator.prepare(dataloader) assert isinstance(prepared_dataloader, DataLoaderDispatcher) if accelerator.is_main_process: logger = logging.root.manager.loggerDict["accelerate.accelerator"] list_handler = ListHandler() logger.addHandler(list_handler) batches_for_metrics = [] for batch in prepared_dataloader: batches_for_metrics.append(accelerator.gather_for_metrics(batch)) assert torch.cat(batches_for_metrics).size(0) == 30 if accelerator.is_main_process: assert len(list_handler.logs) == 0 logger.removeHandler(list_handler) def 
test_gather_for_metrics_drop_last(): accelerator = Accelerator() per_device_batch_size = 5 num_items = (10 * accelerator.num_processes) + 1 dataloader = DataLoader(range(num_items), batch_size=per_device_batch_size, drop_last=True) dataloader = accelerator.prepare(dataloader) iterator = iter(dataloader) next(iterator) # Skip first batch tensor([0, 1, 2, 3, 4], device='cuda:0') batch = next(iterator) gathered_items = accelerator.gather_for_metrics(batch) # Should return a full set of complete batches from each GPU num_expected_items = per_device_batch_size * accelerator.num_processes assert gathered_items.size(0) == ( num_expected_items ), f"Expected number of items: {num_expected_items}, Actual: {gathered_items.size(0)}" def main(): dataloader_config = DataLoaderConfiguration(split_batches=False, dispatch_batches=False) accelerator = Accelerator(dataloader_config=dataloader_config) if accelerator.is_local_main_process: datasets.utils.logging.set_verbosity_warning() transformers.utils.logging.set_verbosity_warning() else: datasets.utils.logging.set_verbosity_error() transformers.utils.logging.set_verbosity_error() # TorchXLA does not support batch dispatching. 'put_on_device' is always False for # TorchXLA, which can cause a value error in 'prepare_data_loader' function. dispatch_batches_options = [False] if accelerator.state.distributed_type == DistributedType.XLA else [True, False] # Temporarily close this test for TorchXLA due to the 'Cannot set version_counter for # inference tensor' error in inference mode. Reopen it after TorchXLA fixes this bug. # These are a bit slower so they should only be ran on the GPU or TPU if accelerator.device.type != "cpu" and not is_torch_xla_available(): if accelerator.is_local_main_process: print("**Testing gather_for_metrics**") for split_batches in [True, False]: for dispatch_batches in dispatch_batches_options: if accelerator.is_local_main_process: print(f"With: `split_batches={split_batches}`, `dispatch_batches={dispatch_batches}`") test_mrpc(dispatch_batches, split_batches) accelerator.state._reset_state() print("test_gather_for_metrics_with_iterable_dataset") test_gather_for_metrics_with_iterable_dataset() print("test gather_for_metrics_with_non_tensor_objects_iterable_dataset") test_gather_for_metrics_with_non_tensor_objects_iterable_dataset() # MpDeviceLoader in TorchXLA is an asynchronous loader that preloads several batches into cache. # This can cause the 'end_of_dataloader' of DataLoaderStateMixin to be set earlier than intended. # Skip this test when TorchXLA is enabled. 
if accelerator.state.distributed_type != DistributedType.XLA: if accelerator.is_local_main_process: print("**Test torch metrics**") for split_batches in [True, False]: for dispatch_batches in dispatch_batches_options: dataloader_config = DataLoaderConfiguration( split_batches=split_batches, dispatch_batches=dispatch_batches ) accelerator = Accelerator(dataloader_config=dataloader_config) if accelerator.is_local_main_process: print(f"With: `split_batches={split_batches}`, `dispatch_batches={dispatch_batches}`, length=99") test_torch_metrics(accelerator, 99) accelerator.state._reset_state() if accelerator.is_local_main_process: print("**Test last batch is not dropped when perfectly divisible**") accelerator = Accelerator() test_torch_metrics(accelerator, 512) accelerator.state._reset_state() if accelerator.is_local_main_process: print("**Test that `drop_last` is taken into account**") test_gather_for_metrics_drop_last() accelerator.end_training() accelerator.state._reset_state() def _mp_fn(index): # For xla_spawn (TPUs) main() if __name__ == "__main__": main()
accelerate/src/accelerate/test_utils/scripts/external_deps/test_metrics.py/0
{ "file_path": "accelerate/src/accelerate/test_utils/scripts/external_deps/test_metrics.py", "repo_id": "accelerate", "token_count": 4714 }
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from .constants import ( MITA_PROFILING_AVAILABLE_PYTORCH_VERSION, MODEL_NAME, OPTIMIZER_NAME, PROFILE_PATTERN_NAME, RNG_STATE_NAME, SAFE_MODEL_NAME, SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, SAFE_WEIGHTS_PATTERN_NAME, SAMPLER_NAME, SCALER_NAME, SCHEDULER_NAME, TORCH_DISTRIBUTED_OPERATION_TYPES, TORCH_LAUNCH_PARAMS, WEIGHTS_INDEX_NAME, WEIGHTS_NAME, WEIGHTS_PATTERN_NAME, XPU_PROFILING_AVAILABLE_PYTORCH_VERSION, ) from .dataclasses import ( AutocastKwargs, BnbQuantizationConfig, ComputeEnvironment, CustomDtype, DataLoaderConfiguration, DDPCommunicationHookType, DeepSpeedPlugin, DistributedDataParallelKwargs, DistributedType, DynamoBackend, FP8RecipeKwargs, FullyShardedDataParallelPlugin, GradientAccumulationPlugin, GradScalerKwargs, InitProcessGroupKwargs, KwargsHandler, LoggerType, MegatronLMPlugin, PrecisionType, ProfileKwargs, ProjectConfiguration, RNGType, SageMakerDistributedType, TensorInformation, TorchDynamoPlugin, TorchTensorParallelPlugin, add_model_config_to_megatron_parser, ) from .environment import ( are_libraries_initialized, check_cuda_p2p_ib_support, check_fp8_capability, clear_environment, convert_dict_to_env_variables, get_cpu_distributed_information, get_gpu_info, get_int_from_env, parse_choice_from_env, parse_flag_from_env, patch_environment, purge_accelerate_environment, set_numa_affinity, str_to_bool, ) from .imports import ( deepspeed_required, get_ccl_version, is_4bit_bnb_available, is_8bit_bnb_available, is_aim_available, is_bf16_available, is_bitsandbytes_multi_backend_available, is_bnb_available, is_boto3_available, is_ccl_available, is_clearml_available, is_comet_ml_available, is_cuda_available, is_datasets_available, is_deepspeed_available, is_dvclive_available, is_fp8_available, is_import_timer_available, is_ipex_available, is_lomo_available, is_megatron_lm_available, is_mlflow_available, is_mlu_available, is_mps_available, is_msamp_available, is_musa_available, is_npu_available, is_pandas_available, is_peft_available, is_pippy_available, is_pynvml_available, is_pytest_available, is_rich_available, is_sagemaker_available, is_schedulefree_available, is_tensorboard_available, is_timm_available, is_torch_xla_available, is_torchdata_available, is_torchdata_stateful_dataloader_available, is_torchvision_available, is_transformer_engine_available, is_transformers_available, is_triton_available, is_wandb_available, is_weights_only_available, is_xpu_available, ) from .modeling import ( align_module_device, calculate_maximum_sizes, check_device_map, check_tied_parameters_in_config, check_tied_parameters_on_same_device, compute_module_sizes, convert_file_size_to_int, dtype_byte_size, find_tied_parameters, get_balanced_memory, get_grad_scaler, get_max_layer_size, get_max_memory, get_mixed_precision_context_manager, has_offloaded_params, id_tensor_storage, infer_auto_device_map, is_peft_model, load_checkpoint_in_model, load_offloaded_weights, load_state_dict, named_module_tensors, 
retie_parameters, set_module_tensor_to_device, ) from .offload import ( OffloadedWeightsLoader, PrefixedDataset, extract_submodules_state_dict, load_offloaded_weight, offload_state_dict, offload_weight, save_offload_index, ) from .operations import ( CannotPadNestedTensorWarning, GatheredParameters, broadcast, broadcast_object_list, concatenate, convert_outputs_to_fp32, convert_to_fp32, copy_tensor_to_devices, find_batch_size, find_device, gather, gather_object, get_data_structure, honor_type, ignorant_find_batch_size, initialize_tensors, is_namedtuple, is_tensor_information, is_torch_tensor, listify, pad_across_processes, pad_input_tensors, recursively_apply, reduce, send_to_device, slice_tensors, ) from .versions import compare_versions, is_torch_version if is_deepspeed_available(): from .deepspeed import ( DeepSpeedEngineWrapper, DeepSpeedOptimizerWrapper, DeepSpeedSchedulerWrapper, DummyOptim, DummyScheduler, HfDeepSpeedConfig, get_active_deepspeed_plugin, map_pytorch_optim_to_deepspeed, ) from .bnb import has_4bit_bnb_layers, load_and_quantize_model from .fsdp_utils import ( disable_fsdp_ram_efficient_loading, enable_fsdp_ram_efficient_loading, ensure_weights_retied, load_fsdp_model, load_fsdp_optimizer, merge_fsdp_weights, save_fsdp_model, save_fsdp_optimizer, ) from .launch import ( PrepareForLaunch, _filter_args, prepare_deepspeed_cmd_env, prepare_multi_gpu_env, prepare_sagemager_args_inputs, prepare_simple_launcher_cmd_env, prepare_tpu, ) # For docs from .megatron_lm import ( AbstractTrainStep, BertTrainStep, GPTTrainStep, MegatronLMDummyDataLoader, MegatronLMDummyScheduler, T5TrainStep, avg_losses_across_data_parallel_group, ) if is_megatron_lm_available(): from .megatron_lm import ( MegatronEngine, MegatronLMOptimizerWrapper, MegatronLMSchedulerWrapper, gather_across_data_parallel_groups, ) from .megatron_lm import initialize as megatron_lm_initialize from .megatron_lm import prepare_data_loader as megatron_lm_prepare_data_loader from .megatron_lm import prepare_model_optimizer_scheduler as megatron_lm_prepare_model_optimizer_scheduler from .megatron_lm import prepare_optimizer as megatron_lm_prepare_optimizer from .megatron_lm import prepare_scheduler as megatron_lm_prepare_scheduler from .memory import find_executable_batch_size, release_memory from .other import ( check_os_kernel, clean_state_dict_for_safetensors, convert_bytes, extract_model_from_parallel, get_pretty_name, is_port_in_use, load, merge_dicts, recursive_getattr, save, wait_for_everyone, write_basic_config, ) from .random import set_seed, synchronize_rng_state, synchronize_rng_states from .torch_xla import install_xla from .tqdm import tqdm from .transformer_engine import ( apply_fp8_autowrap, contextual_fp8_autocast, convert_model, has_transformer_engine_layers, )
accelerate/src/accelerate/utils/__init__.py/0
{ "file_path": "accelerate/src/accelerate/utils/__init__.py", "repo_id": "accelerate", "token_count": 3050 }
compute_environment: LOCAL_MACHINE deepspeed_config: {} distributed_type: 'NO' downcast_bf16: 'no' fsdp_config: {} machine_rank: 0 main_process_ip: null main_process_port: null main_training_function: main mixed_precision: 'no' num_machines: 1 num_processes: 1 use_cpu: false
accelerate/tests/test_configs/0_12_0.yaml/0
{ "file_path": "accelerate/tests/test_configs/0_12_0.yaml", "repo_id": "accelerate", "token_count": 105 }
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import numpy as np from packaging import version from accelerate import debug_launcher from accelerate.test_utils import ( DEFAULT_LAUNCH_COMMAND, device_count, execute_subprocess_async, path_in_accelerate_package, require_cpu, require_huggingface_suite, require_multi_device, require_single_device, ) from accelerate.utils import patch_environment @require_huggingface_suite @unittest.skipIf(version.parse(np.__version__) >= version.parse("2.0"), "Test requires numpy version < 2.0") class MetricTester(unittest.TestCase): def setUp(self): self.test_file_path = path_in_accelerate_package("test_utils", "scripts", "external_deps", "test_metrics.py") from accelerate.test_utils.scripts.external_deps import test_metrics # noqa: F401 self.test_metrics = test_metrics @require_cpu def test_metric_cpu_noop(self): debug_launcher(self.test_metrics.main, num_processes=1) @require_cpu def test_metric_cpu_multi(self): debug_launcher(self.test_metrics.main) @require_single_device def test_metric_accelerator(self): self.test_metrics.main() @require_multi_device def test_metric_accelerator_multi(self): print(f"Found {device_count} devices.") cmd = DEFAULT_LAUNCH_COMMAND + [self.test_file_path] with patch_environment(omp_num_threads=1, ACCELERATE_LOG_LEVEL="INFO"): execute_subprocess_async(cmd)
accelerate/tests/test_metrics.py/0
{ "file_path": "accelerate/tests/test_metrics.py", "repo_id": "accelerate", "token_count": 744 }
# candle
[![discord server](https://dcbadge.vercel.app/api/server/hugging-face-879548962464493619)](https://discord.gg/hugging-face-879548962464493619)
[![Latest version](https://img.shields.io/crates/v/candle-core.svg)](https://crates.io/crates/candle-core)
[![Documentation](https://docs.rs/candle-core/badge.svg)](https://docs.rs/candle-core)
[![License](https://img.shields.io/github/license/base-org/node?color=blue)](https://github.com/huggingface/candle/blob/main/LICENSE-MIT)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue?style=flat-square)](https://github.com/huggingface/candle/blob/main/LICENSE-APACHE)

Candle is a minimalist ML framework for Rust with a focus on performance (including GPU support) and ease of use. Try our online demos:
[whisper](https://huggingface.co/spaces/lmz/candle-whisper),
[LLaMA2](https://huggingface.co/spaces/lmz/candle-llama2),
[T5](https://huggingface.co/spaces/radames/Candle-T5-Generation-Wasm),
[yolo](https://huggingface.co/spaces/lmz/candle-yolo),
[Segment Anything](https://huggingface.co/spaces/radames/candle-segment-anything-wasm).

## Get started

Make sure that you have [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) correctly installed as described in [**Installation**](https://huggingface.github.io/candle/guide/installation.html).

Let's see how to run a simple matrix multiplication. Write the following to your `myapp/src/main.rs` file:

```rust
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let device = Device::Cpu;
    let a = Tensor::randn(0f32, 1., (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1., (3, 4), &device)?;

    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```

`cargo run` should display a tensor of shape `Tensor[[2, 4], f32]`.

Having installed `candle` with CUDA support, simply define the `device` to be on GPU:

```diff
- let device = Device::Cpu;
+ let device = Device::new_cuda(0)?;
```

For more advanced examples, please have a look at the following section.

## Check out our examples

These online demos run entirely in your browser:
- [yolo](https://huggingface.co/spaces/lmz/candle-yolo): pose estimation and object recognition.
- [whisper](https://huggingface.co/spaces/lmz/candle-whisper): speech recognition.
- [LLaMA2](https://huggingface.co/spaces/lmz/candle-llama2): text generation.
- [T5](https://huggingface.co/spaces/radames/Candle-T5-Generation-Wasm): text generation.
- [Phi-1.5, and Phi-2](https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm): text generation.
- [Segment Anything Model](https://huggingface.co/spaces/radames/candle-segment-anything-wasm): image segmentation.
- [BLIP](https://huggingface.co/spaces/radames/Candle-BLIP-Image-Captioning): image captioning.

We also provide some command line based examples using state-of-the-art models:

- [LLaMA v1, v2, and v3](./candle-examples/examples/llama/): general LLM, includes the SOLAR-10.7B variant.
- [Falcon](./candle-examples/examples/falcon/): general LLM.
- [Codegeex4](./candle-examples/examples/codegeex4-9b/): code completion, code interpreter, web search, function calling, and repository-level tasks.
- [GLM4](./candle-examples/examples/glm4/): open multilingual multimodal chat LMs by THUDM.
- [Gemma v1 and v2](./candle-examples/examples/gemma/): 2b and 7b+/9b general LLMs from Google Deepmind.
- [RecurrentGemma](./candle-examples/examples/recurrent-gemma/): 2b and 7b Griffin-based models from Google that mix attention with an RNN-like state.
- [Phi-1, Phi-1.5, Phi-2, and Phi-3](./candle-examples/examples/phi/): 1.3b, 2.7b, and 3.8b general LLMs with performance on par with 7b models.
- [StableLM-3B-4E1T](./candle-examples/examples/stable-lm/): a 3b general LLM pre-trained on 1T tokens of English and code datasets. Also supports StableLM-2, a 1.6b LLM trained on 2T tokens, as well as the code variants.
- [Mamba](./candle-examples/examples/mamba/): an inference-only implementation of the Mamba state space model.
- [Mistral7b-v0.1](./candle-examples/examples/mistral/): a 7b general LLM with better performance than all publicly available 13b models as of 2023-09-28.
- [Mixtral8x7b-v0.1](./candle-examples/examples/mixtral/): a sparse mixture of experts 8x7b general LLM with better performance than a Llama 2 70B model with much faster inference.
- [StarCoder](./candle-examples/examples/bigcode/) and [StarCoder2](./candle-examples/examples/starcoder2/): LLMs specialized for code generation.
- [Qwen1.5](./candle-examples/examples/qwen/): bilingual (English/Chinese) LLMs.
- [RWKV v5 and v6](./candle-examples/examples/rwkv/): an RNN with transformer-level LLM performance.
- [Replit-code-v1.5](./candle-examples/examples/replit-code/): a 3.3b LLM specialized for code completion.
- [Yi-6B / Yi-34B](./candle-examples/examples/yi/): two bilingual (English/Chinese) general LLMs with 6b and 34b parameters.
- [Quantized LLaMA](./candle-examples/examples/quantized/): quantized version of the LLaMA model using the same quantization techniques as [llama.cpp](https://github.com/ggerganov/llama.cpp).

<img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/quantized/assets/aoc.gif" width="600">

- [Stable Diffusion](./candle-examples/examples/stable-diffusion/): text-to-image generative model, with support for the 1.5, 2.1, SDXL 1.0 and Turbo versions.

<img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg" width="200">

- [Wuerstchen](./candle-examples/examples/wuerstchen/): another text-to-image generative model.

<img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/wuerstchen/assets/cat.jpg" width="200">

- [yolo-v3](./candle-examples/examples/yolo-v3/) and [yolo-v8](./candle-examples/examples/yolo-v8/): object detection and pose estimation models.

<img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/yolo-v8/assets/bike.od.jpg" width="200"><img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/yolo-v8/assets/bike.pose.jpg" width="200">

- [segment-anything](./candle-examples/examples/segment-anything/): image segmentation model with prompt.

<img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/segment-anything/assets/sam_merged.jpg" width="200">

- [SegFormer](./candle-examples/examples/segformer/): transformer-based semantic segmentation model.
- [Whisper](./candle-examples/examples/whisper/): speech recognition model.
- [EnCodec](./candle-examples/examples/encodec/): high-quality audio compression model using residual vector quantization.
- [MetaVoice](./candle-examples/examples/metavoice/): foundational model for text-to-speech.
- [Parler-TTS](./candle-examples/examples/parler-tts/): large text-to-speech model.
- [T5](./candle-examples/examples/t5), [Bert](./candle-examples/examples/bert/), [JinaBert](./candle-examples/examples/jina-bert/): useful for sentence embeddings.
- [DINOv2](./candle-examples/examples/dinov2/): computer vision model trained using self-supervision (can be used for ImageNet classification, depth evaluation, segmentation).
- [VGG](./candle-examples/examples/vgg/), [RepVGG](./candle-examples/examples/repvgg): computer vision models.
- [BLIP](./candle-examples/examples/blip/): image-to-text model, can be used to generate captions for an image.
- [CLIP](./candle-examples/examples/clip/): multi-modal vision and language model.
- [TrOCR](./candle-examples/examples/trocr/): a transformer OCR model, with dedicated submodels for handwriting and printed recognition.
- [Marian-MT](./candle-examples/examples/marian-mt/): neural machine translation model, generates the translated text from the input text.
- [Moondream](./candle-examples/examples/moondream/): tiny computer-vision model that can answer real-world questions about images.

Run them using commands like:
```
cargo run --example quantized --release
```

In order to use **CUDA** add `--features cuda` to the example command line. If you have cuDNN installed, use `--features cudnn` for even more speedups.

There are also some wasm examples for whisper and [llama2.c](https://github.com/karpathy/llama2.c). You can either build them with `trunk` or try them online: [whisper](https://huggingface.co/spaces/lmz/candle-whisper), [llama2](https://huggingface.co/spaces/lmz/candle-llama2), [T5](https://huggingface.co/spaces/radames/Candle-T5-Generation-Wasm), [Phi-1.5, and Phi-2](https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm), [Segment Anything Model](https://huggingface.co/spaces/radames/candle-segment-anything-wasm).

For LLaMA2, run the following command to retrieve the weight files and start a test server:
```bash
cd candle-wasm-examples/llama2-c
wget https://huggingface.co/spaces/lmz/candle-llama2/resolve/main/model.bin
wget https://huggingface.co/spaces/lmz/candle-llama2/resolve/main/tokenizer.json
trunk serve --release --port 8081
```

And then head over to [http://localhost:8081/](http://localhost:8081/).

<!--- ANCHOR: useful_libraries --->

## Useful External Resources

- [`candle-tutorial`](https://github.com/ToluClassics/candle-tutorial): A very detailed tutorial showing how to convert a PyTorch model to Candle.
- [`candle-lora`](https://github.com/EricLBuehler/candle-lora): Efficient and ergonomic LoRA implementation for Candle. `candle-lora` has out-of-the-box LoRA support for many models from Candle, which can be found [here](https://github.com/EricLBuehler/candle-lora/tree/master/candle-lora-transformers/examples).
- [`optimisers`](https://github.com/KGrewal1/optimisers): A collection of optimisers including SGD with momentum, AdaGrad, AdaDelta, AdaMax, NAdam, RAdam, and RMSprop.
- [`candle-vllm`](https://github.com/EricLBuehler/candle-vllm): Efficient platform for inference and serving local LLMs including an OpenAI compatible API server.
- [`candle-ext`](https://github.com/mokeyish/candle-ext): An extension library to Candle that provides PyTorch functions not currently available in Candle.
- [`candle-coursera-ml`](https://github.com/vishpat/candle-coursera-ml): Implementation of ML algorithms from Coursera's [Machine Learning Specialization](https://www.coursera.org/specializations/machine-learning-introduction) course.
- [`kalosm`](https://github.com/floneum/floneum/tree/master/interfaces/kalosm): A multi-modal meta-framework in Rust for interfacing with local pre-trained models, with support for controlled generation, custom samplers, in-memory vector databases, audio transcription, and more.
- [`candle-sampling`](https://github.com/EricLBuehler/candle-sampling): Sampling techniques for Candle.
- [`gpt-from-scratch-rs`](https://github.com/jeroenvlek/gpt-from-scratch-rs): A port of Andrej Karpathy's _Let's build GPT_ tutorial on YouTube showcasing the Candle API on a toy problem.
- [`candle-einops`](https://github.com/tomsanbear/candle-einops): A pure Rust implementation of the Python [einops](https://github.com/arogozhnikov/einops) library.
- [`atoma-infer`](https://github.com/atoma-network/atoma-infer): A Rust library for fast inference at scale, leveraging FlashAttention2 for efficient attention computation, PagedAttention for efficient KV-cache memory management, and multi-GPU support. It is OpenAI API compatible.
- [`llms-from-scratch-rs`](https://github.com/nerdai/llms-from-scratch-rs): A comprehensive Rust translation of the code from Sebastian Raschka's Build an LLM from Scratch book.

If you have an addition to this list, please submit a pull request.

<!--- ANCHOR_END: useful_libraries --->

<!--- ANCHOR: features --->

## Features

- Simple syntax, looks and feels like PyTorch.
- Model training.
- Embed user-defined ops/kernels, such as [flash-attention v2](https://github.com/huggingface/candle/blob/89ba005962495f2bfbda286e185e9c3c7f5300a3/candle-flash-attn/src/lib.rs#L152).
- Backends.
    - Optimized CPU backend with optional MKL support for x86 and Accelerate for Macs.
    - CUDA backend for efficiently running on GPUs, multiple GPU distribution via NCCL.
    - WASM support, run your models in a browser.
- Included models.
    - Language Models.
        - LLaMA v1, v2, and v3 with variants such as SOLAR-10.7B.
        - Falcon.
        - StarCoder, StarCoder2.
        - Phi 1, 1.5, 2, and 3.
        - Mamba, Minimal Mamba.
        - Gemma v1 2b and 7b+, v2 2b and 9b.
        - Mistral 7b v0.1.
        - Mixtral 8x7b v0.1.
        - StableLM-3B-4E1T, StableLM-2-1.6B, Stable-Code-3B.
        - Replit-code-v1.5-3B.
        - Bert.
        - Yi-6B and Yi-34B.
        - Qwen1.5, Qwen1.5 MoE.
        - RWKV v5 and v6.
    - Quantized LLMs.
        - Llama 7b, 13b, 70b, as well as the chat and code variants.
        - Mistral 7b, and 7b instruct.
        - Mixtral 8x7b.
        - Zephyr 7b a and b (Mistral-7b based).
        - OpenChat 3.5 (Mistral-7b based).
    - Text to text.
        - T5 and its variants: FlanT5, UL2, MADLAD400 (translation), CoEdit (grammar correction).
        - Marian MT (machine translation).
    - Text to image.
        - Stable Diffusion v1.5, v2.1, XL v1.0.
        - Wurstchen v2.
    - Image to text.
        - BLIP.
        - TrOCR.
    - Audio.
        - Whisper, multi-lingual speech-to-text.
        - EnCodec, audio compression model.
        - MetaVoice-1B, text-to-speech model.
        - Parler-TTS, text-to-speech model.
    - Computer Vision Models.
        - DINOv2, ConvMixer, EfficientNet, ResNet, ViT, VGG, RepVGG, ConvNeXT, ConvNeXTv2, MobileOne, EfficientVit (MSRA), MobileNetv4, Hiera, FastViT.
        - yolo-v3, yolo-v8.
        - Segment-Anything Model (SAM).
        - SegFormer.
- File formats: load models from safetensors, npz, ggml, or PyTorch files.
- Serverless (on CPU), small and fast deployments.
- Quantization support using the llama.cpp quantized types.
<!--- ANCHOR_END: features --->

## How to use

<!--- ANCHOR: cheatsheet --->

Cheatsheet:

|            | Using PyTorch                       | Using Candle                                                                  |
|------------|-------------------------------------|-------------------------------------------------------------------------------|
| Creation   | `torch.Tensor([[1, 2], [3, 4]])`    | `Tensor::new(&[[1f32, 2.], [3., 4.]], &Device::Cpu)?`                         |
| Creation   | `torch.zeros((2, 2))`               | `Tensor::zeros((2, 2), DType::F32, &Device::Cpu)?`                            |
| Indexing   | `tensor[:, :4]`                     | `tensor.i((.., ..4))?`                                                        |
| Operations | `tensor.view((2, 2))`               | `tensor.reshape((2, 2))?`                                                     |
| Operations | `a.matmul(b)`                       | `a.matmul(&b)?`                                                               |
| Arithmetic | `a + b`                             | `&a + &b`                                                                     |
| Device     | `tensor.to(device="cuda")`          | `tensor.to_device(&Device::new_cuda(0)?)?`                                    |
| Dtype      | `tensor.to(dtype=torch.float16)`    | `tensor.to_dtype(&DType::F16)?`                                               |
| Saving     | `torch.save({"A": A}, "model.bin")` | `candle::safetensors::save(&HashMap::from([("A", A)]), "model.safetensors")?` |
| Loading    | `weights = torch.load("model.bin")` | `candle::safetensors::load("model.safetensors", &device)`                     |

<!--- ANCHOR_END: cheatsheet --->

## Structure

- [candle-core](./candle-core): Core ops, devices, and `Tensor` struct definition.
- [candle-nn](./candle-nn/): Tools to build real models.
- [candle-examples](./candle-examples/): Examples of using the library in realistic settings.
- [candle-kernels](./candle-kernels/): CUDA custom kernels.
- [candle-datasets](./candle-datasets/): Datasets and data loaders.
- [candle-transformers](./candle-transformers): transformers-related utilities.
- [candle-flash-attn](./candle-flash-attn): Flash attention v2 layer.
- [candle-onnx](./candle-onnx/): ONNX model evaluation.

## FAQ

### Why should I use Candle?

Candle's core goal is to *make serverless inference possible*. Full machine learning frameworks like PyTorch are very large, which makes creating instances on a cluster slow. Candle allows deployment of lightweight binaries.

Secondly, Candle lets you *remove Python* from production workloads. Python overhead can seriously hurt performance, and the [GIL](https://www.backblaze.com/blog/the-python-gil-past-present-and-future/) is a notorious source of headaches.

Finally, Rust is cool! A lot of the HF ecosystem already has Rust crates, like [safetensors](https://github.com/huggingface/safetensors) and [tokenizers](https://github.com/huggingface/tokenizers).

### Other ML frameworks

- [dfdx](https://github.com/coreylowman/dfdx) is a formidable crate, with shapes being included in types. This prevents a lot of headaches by getting the compiler to complain about shape mismatches right off the bat. However, we found that some features still require nightly, and writing code can be a bit daunting for non-Rust experts. We're leveraging and contributing to other core crates for the runtime, so hopefully both crates can benefit from each other.
- [burn](https://github.com/burn-rs/burn) is a general crate that can leverage multiple backends so you can choose the best engine for your workload.
- [tch-rs](https://github.com/LaurentMazare/tch-rs.git): Bindings to the torch library in Rust. Extremely versatile, but it brings the entire torch library into the runtime. The main contributor of `tch-rs` is also involved in the development of `candle`.

### Common Errors

#### Missing symbols when compiling with the mkl feature.

If you get some missing symbols when compiling binaries/tests using the mkl or accelerate features, e.g.
for mkl you get:

```
  = note: /usr/bin/ld: (....o): in function `blas::sgemm':
          .../blas-0.22.0/src/lib.rs:1944: undefined reference to `sgemm_'
          collect2: error: ld returned 1 exit status

  = note: some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified
  = note: use the `-l` flag to specify native libraries to link
  = note: use the `cargo:rustc-link-lib` directive to specify the native libraries to link with Cargo
```

or for accelerate:

```
Undefined symbols for architecture arm64:
    "_dgemm_", referenced from:
        candle_core::accelerate::dgemm::h1b71a038552bcabe in libcandle_core...
    "_sgemm_", referenced from:
        candle_core::accelerate::sgemm::h2cf21c592cba3c47 in libcandle_core...
ld: symbol(s) not found for architecture arm64
```

This is likely due to a missing linker flag that is needed to enable the mkl library. You can try adding the following for mkl at the top of your binary:

```rust
extern crate intel_mkl_src;
```

or for accelerate:

```rust
extern crate accelerate_src;
```

#### Cannot run the LLaMA examples: access to source requires login credentials

```
Error: request error: https://huggingface.co/meta-llama/Llama-2-7b-hf/resolve/main/tokenizer.json: status code 401
```

This is likely because you don't have permission to access the LLaMA-v2 model. To fix this, you have to register on the huggingface-hub, accept the [LLaMA-v2 model conditions](https://huggingface.co/meta-llama/Llama-2-7b-hf), and set up your authentication token. See issue [#350](https://github.com/huggingface/candle/issues/350) for more details.

#### Missing cute/cutlass headers when compiling flash-attn

```
In file included from kernels/flash_fwd_launch_template.h:11:0,
                 from kernels/flash_fwd_hdim224_fp16_sm80.cu:5:
kernels/flash_fwd_kernel.h:8:10: fatal error: cute/algorithm/copy.hpp: No such file or directory
 #include <cute/algorithm/copy.hpp>
          ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Error: nvcc error while compiling:
```

[cutlass](https://github.com/NVIDIA/cutlass) is provided as a git submodule, so you may want to run the following command to check it out properly.

```bash
git submodule update --init
```

#### Compiling with flash-attention fails

```
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
```

This is a bug in gcc-11 triggered by the CUDA compiler. To fix this, install a different, supported gcc version, for example gcc-10, and specify the path to the compiler in the NVCC_CCBIN environment variable.

```
env NVCC_CCBIN=/usr/lib/gcc/x86_64-linux-gnu/10 cargo ...
```

#### Linking error on Windows when running rustdoc or mdbook tests

```
Couldn't compile the test.
---- .\candle-book\src\inference\hub.md - Using_the_hub::Using_in_a_real_model_ (line 50) stdout ----
error: linking with `link.exe` failed: exit code: 1181
//very long chain of linking
 = note: LINK : fatal error LNK1181: cannot open input file 'windows.0.48.5.lib'
```

Make sure you link all native libraries that might be located outside a project target, e.g., to run mdbook tests, you should run:

```
mdbook test candle-book -L .\target\debug\deps\ `
-L native=$env:USERPROFILE\.cargo\registry\src\index.crates.io-6f17d22bba15001f\windows_x86_64_msvc-0.42.2\lib `
-L native=$env:USERPROFILE\.cargo\registry\src\index.crates.io-6f17d22bba15001f\windows_x86_64_msvc-0.48.5\lib
```

#### Extremely slow model load time with WSL

This may be caused by the models being loaded from `/mnt/c`; more details on [stackoverflow](https://stackoverflow.com/questions/68972448/why-is-wsl-extremely-slow-when-compared-with-native-windows-npm-yarn-processing).

#### Tracking down errors

You can set `RUST_BACKTRACE=1` to get backtraces when a candle error is generated; a minimal sketch that deliberately triggers such an error is shown at the end of this page.

#### CudaRC error

If you encounter an error like `called Result::unwrap() on an Err value: LoadLibraryExW { source: Os { code: 126, kind: Uncategorized, message: "The specified module could not be found." } }` on Windows, copy and rename the following three files (and make sure they are on your path). The paths depend on your CUDA version.

`c:\Windows\System32\nvcuda.dll` -> `cuda.dll`
`c:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\bin\cublas64_12.dll` -> `cublas.dll`
`c:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\bin\curand64_10.dll` -> `curand.dll`
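
As a quick illustration of the error tracking described above, here is a minimal sketch (reusing the same `candle_core` setup as the earlier matrix multiplication example) that deliberately triggers a shape-mismatch error; the shapes below are chosen purely for demonstration.

```rust
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let device = Device::Cpu;
    // Shapes (2, 3) and (5, 4) are deliberately incompatible for matmul.
    let a = Tensor::randn(0f32, 1., (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1., (5, 4), &device)?;
    // This call fails with a shape-mismatch error; the `?` propagates it to main,
    // where it is printed when the program exits.
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```

Running it with `RUST_BACKTRACE=1 cargo run` should include a backtrace alongside the shape-mismatch message, which makes it much easier to locate the offending operation in larger programs.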
candle/README.md/0
{ "file_path": "candle/README.md", "repo_id": "candle", "token_count": 8420 }
#![allow(dead_code)] use libc::{c_char, c_double, c_float, c_int, c_long, c_ulong}; mod ffi { use super::*; extern "C" { // It would be nice to be able to switch to the NEWLAPACK version of the function but this // seems to trigger some link error. Available function names can be seen here: // /Library/Developer/CommandLineTools/SDKs/MacOSX13.3.sdk/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate.tbd #[link_name = "sgemm_"] pub fn sgemm_ffi( transa: *const c_char, transb: *const c_char, m: *const c_int, n: *const c_int, k: *const c_int, alpha: *const c_float, a: *const c_float, lda: *const c_int, b: *const c_float, ldb: *const c_int, beta: *const c_float, c: *mut c_float, ldc: *const c_int, ); #[link_name = "dgemm_"] pub fn dgemm_ffi( transa: *const c_char, transb: *const c_char, m: *const c_int, n: *const c_int, k: *const c_int, alpha: *const c_double, a: *const c_double, lda: *const c_int, b: *const c_double, ldb: *const c_int, beta: *const c_double, c: *mut c_double, ldc: *const c_int, ); pub fn vvexpf(dst: *mut c_float, src: *const c_float, len: *const c_int); pub fn vvexp(dst: *mut c_double, src: *const c_double, len: *const c_int); pub fn vvsqrtf(dst: *mut c_float, src: *const c_float, len: *const c_int); pub fn vvsqrt(dst: *mut c_double, src: *const c_double, len: *const c_int); pub fn vvsinf(dst: *mut c_float, src: *const c_float, len: *const c_int); pub fn vvsin(dst: *mut c_double, src: *const c_double, len: *const c_int); pub fn vvcosf(dst: *mut c_float, src: *const c_float, len: *const c_int); pub fn vvcos(dst: *mut c_double, src: *const c_double, len: *const c_int); pub fn vvlogf(dst: *mut c_float, src: *const c_float, len: *const c_int); pub fn vvlog(dst: *mut c_double, src: *const c_double, len: *const c_int); pub fn vvtanhf(dst: *mut c_float, src: *const c_float, len: *const c_int); pub fn vvtanh(dst: *mut c_double, src: *const c_double, len: *const c_int); pub fn vDSP_vaddD( _: *const c_double, _: c_long, _: *const c_double, _: c_long, _: *mut c_double, _: c_long, _: c_ulong, ); pub fn vDSP_vadd( _: *const c_float, _: c_long, _: *const c_float, _: c_long, _: *mut c_float, _: c_long, _: c_ulong, ); pub fn vDSP_vsubD( _: *const c_double, _: c_long, _: *const c_double, _: c_long, _: *mut c_double, _: c_long, _: c_ulong, ); pub fn vDSP_vsub( _: *const c_float, _: c_long, _: *const c_float, _: c_long, _: *mut c_float, _: c_long, _: c_ulong, ); pub fn vDSP_vmulD( _: *const c_double, _: c_long, _: *const c_double, _: c_long, _: *mut c_double, _: c_long, _: c_ulong, ); pub fn vDSP_vmul( _: *const c_float, _: c_long, _: *const c_float, _: c_long, _: *mut c_float, _: c_long, _: c_ulong, ); pub fn vDSP_vdivD( _: *const c_double, _: c_long, _: *const c_double, _: c_long, _: *mut c_double, _: c_long, _: c_ulong, ); pub fn vDSP_vdiv( _: *const c_float, _: c_long, _: *const c_float, _: c_long, _: *mut c_float, _: c_long, _: c_ulong, ); pub fn vDSP_vminD( _: *const c_double, _: c_long, _: *const c_double, _: c_long, _: *mut c_double, _: c_long, _: c_ulong, ); pub fn vDSP_vmin( _: *const c_float, _: c_long, _: *const c_float, _: c_long, _: *mut c_float, _: c_long, _: c_ulong, ); pub fn vDSP_vmaxD( _: *const c_double, _: c_long, _: *const c_double, _: c_long, _: *mut c_double, _: c_long, _: c_ulong, ); pub fn vDSP_vmax( _: *const c_float, _: c_long, _: *const c_float, _: c_long, _: *mut c_float, _: c_long, _: c_ulong, ); } } #[allow(clippy::too_many_arguments)] #[inline] pub unsafe fn sgemm( transa: u8, transb: u8, m: i32, n: i32, k: i32, alpha: f32, a: &[f32], lda: i32, 
b: &[f32], ldb: i32, beta: f32, c: &mut [f32], ldc: i32, ) { ffi::sgemm_ffi( &(transa as c_char), &(transb as c_char), &m, &n, &k, &alpha, a.as_ptr(), &lda, b.as_ptr(), &ldb, &beta, c.as_mut_ptr(), &ldc, ) } #[allow(clippy::too_many_arguments)] #[inline] pub unsafe fn dgemm( transa: u8, transb: u8, m: i32, n: i32, k: i32, alpha: f64, a: &[f64], lda: i32, b: &[f64], ldb: i32, beta: f64, c: &mut [f64], ldc: i32, ) { ffi::dgemm_ffi( &(transa as c_char), &(transb as c_char), &m, &n, &k, &alpha, a.as_ptr(), &lda, b.as_ptr(), &ldb, &beta, c.as_mut_ptr(), &ldc, ) } #[inline] pub fn vs_exp(a: &[f32], y: &mut [f32]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvexpf(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vd_exp(a: &[f64], y: &mut [f64]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvexp(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vs_sqrt(a: &[f32], y: &mut [f32]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvsqrtf(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vd_sqrt(a: &[f64], y: &mut [f64]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvsqrt(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vs_sin(a: &[f32], y: &mut [f32]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvsinf(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vd_sin(a: &[f64], y: &mut [f64]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvsin(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vs_cos(a: &[f32], y: &mut [f32]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvcosf(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vd_cos(a: &[f64], y: &mut [f64]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvcos(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vs_tanh(a: &[f32], y: &mut [f32]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvtanhf(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vd_tanh(a: &[f64], y: &mut [f64]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvtanh(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vs_ln(a: &[f32], y: &mut [f32]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvlogf(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vd_ln(a: &[f64], y: &mut [f64]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } unsafe { ffi::vvlog(y.as_mut_ptr(), a.as_ptr(), &(a_len as i32)) } } #[inline] pub fn vs_sqr(a: &[f32], y: 
&mut [f32]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } y.iter_mut().zip(a.iter()).for_each(|(y, a)| *y = *a * *a) } #[inline] pub fn vd_sqr(a: &[f64], y: &mut [f64]) { let a_len = a.len(); let y_len = y.len(); if a_len != y_len { panic!("a and y have different lengths {a_len} <> {y_len}") } y.iter_mut().zip(a.iter()).for_each(|(y, a)| *y = *a * *a) } #[inline] pub fn vs_tanh_inplace(y: &mut [f32]) { unsafe { ffi::vvtanhf(y.as_mut_ptr(), y.as_ptr(), &(y.len() as i32)) } } #[inline] pub fn vd_tanh_inplace(y: &mut [f64]) { unsafe { ffi::vvtanh(y.as_mut_ptr(), y.as_ptr(), &(y.len() as i32)) } } #[inline] pub fn vs_exp_inplace(y: &mut [f32]) { unsafe { ffi::vvexpf(y.as_mut_ptr(), y.as_ptr(), &(y.len() as i32)) } } #[inline] pub fn vd_exp_inplace(y: &mut [f64]) { unsafe { ffi::vvexp(y.as_mut_ptr(), y.as_ptr(), &(y.len() as i32)) } } #[inline] pub fn vs_gelu(vs: &[f32], ys: &mut [f32]) { for (&v, y) in vs.iter().zip(ys.iter_mut()) { *y = (2.0f32 / std::f32::consts::PI).sqrt() * v * (1.0 + 0.044715 * v * v) } vs_tanh_inplace(ys); for (&v, y) in vs.iter().zip(ys.iter_mut()) { *y = 0.5 * v * (1.0 + *y) } } #[inline] pub fn vd_gelu(vs: &[f64], ys: &mut [f64]) { for (&v, y) in vs.iter().zip(ys.iter_mut()) { *y = (2.0f64 / std::f64::consts::PI).sqrt() * v * (1.0 + 0.044715 * v * v) } vd_tanh_inplace(ys); for (&v, y) in vs.iter().zip(ys.iter_mut()) { *y = 0.5 * v * (1.0 + *y) } } #[inline] pub fn vs_silu(vs: &[f32], ys: &mut [f32]) { for (&v, y) in vs.iter().zip(ys.iter_mut()) { *y = -v } vs_exp_inplace(ys); for (&v, y) in vs.iter().zip(ys.iter_mut()) { *y = v / (1.0 + *y) } } #[inline] pub fn vd_silu(vs: &[f64], ys: &mut [f64]) { for (&v, y) in vs.iter().zip(ys.iter_mut()) { *y = -v } vd_exp_inplace(ys); for (&v, y) in vs.iter().zip(ys.iter_mut()) { *y = v / (1.0 + *y) } } macro_rules! binary_op { ($fn_name:ident, $ty:ty, $accelerate_name:ident) => { #[inline] pub fn $fn_name(a: &[$ty], b: &[$ty], y: &mut [$ty]) { let a_len = a.len(); let b_len = b.len(); let y_len = y.len(); if a_len != y_len || b_len != y_len { panic!( "{} a,b,y len mismatch {a_len} {b_len} {y_len}", stringify!($fn_name) ); } unsafe { // Weird quirk of accelerate, the rhs comes before the lhs. ffi::$accelerate_name( b.as_ptr(), 1, a.as_ptr(), 1, y.as_mut_ptr(), 1, a_len as u64, ) } } }; } binary_op!(vs_add, f32, vDSP_vadd); binary_op!(vd_add, f64, vDSP_vaddD); binary_op!(vs_sub, f32, vDSP_vsub); binary_op!(vd_sub, f64, vDSP_vsubD); binary_op!(vs_mul, f32, vDSP_vmul); binary_op!(vd_mul, f64, vDSP_vmulD); binary_op!(vs_div, f32, vDSP_vdiv); binary_op!(vd_div, f64, vDSP_vdivD); binary_op!(vs_max, f32, vDSP_vmax); binary_op!(vd_max, f64, vDSP_vmaxD); binary_op!(vs_min, f32, vDSP_vmin); binary_op!(vd_min, f64, vDSP_vminD);
candle/candle-core/src/accelerate.rs/0
{ "file_path": "candle/candle-core/src/accelerate.rs", "repo_id": "candle", "token_count": 7639 }
//! Implementation of Backend traits for CUDA device //! use crate::backend::{BackendDevice, BackendStorage}; use crate::op::{BinaryOpT, CmpOp, ReduceOp, UnaryOpT}; use crate::{CpuStorage, DType, Layout, Result, Shape, WithDType}; pub use candle_kernels as kernels; pub use cudarc; use cudarc::cublas::{Gemm, GemmConfig, StridedBatchedConfig}; use cudarc::driver::{ CudaSlice, DevicePtr, DeviceRepr, DeviceSlice, LaunchAsync, LaunchConfig, ValidAsZeroBits, }; use half::{bf16, f16}; #[cfg(feature = "cudnn")] pub mod cudnn; mod device; mod error; mod utils; pub use device::{CudaDevice, DeviceId}; pub use error::{CudaError, WrapErr}; pub use utils::{Map1, Map1Any, Map2, Map2Any, Map2InPlace, Map3, S}; pub enum SlicePtrOrNull<T> { Ptr(CudaSlice<T>), Null, } unsafe impl<T: DeviceRepr> DeviceRepr for &SlicePtrOrNull<T> { fn as_kernel_param(&self) -> *mut std::ffi::c_void { match self { SlicePtrOrNull::Ptr(slice) => slice.as_kernel_param(), SlicePtrOrNull::Null => 0usize.as_kernel_param(), } } } impl SlicePtrOrNull<usize> { pub fn params_from_layout(dev: &CudaDevice, l: &Layout) -> Result<Self> { let ds = if l.is_contiguous() { SlicePtrOrNull::Null } else { SlicePtrOrNull::Ptr(dev.htod_copy([l.dims(), l.stride()].concat()).w()?) }; Ok(ds) } } #[derive(Debug)] pub enum CudaStorageSlice { U8(CudaSlice<u8>), U32(CudaSlice<u32>), I64(CudaSlice<i64>), BF16(CudaSlice<bf16>), F16(CudaSlice<f16>), F32(CudaSlice<f32>), F64(CudaSlice<f64>), } struct Clone; impl Map1 for Clone { fn f<T: DeviceRepr>( &self, s: &CudaSlice<T>, _: &CudaDevice, _: &Layout, ) -> Result<CudaSlice<T>> { s.try_clone().w() } } pub fn kernel_name<T: WithDType>(root: &str) -> String { let dtype = T::DTYPE.as_str(); format!("{root}_{dtype}") } struct Affine(f64, f64); impl Map1 for Affine { fn f<T: DeviceRepr + WithDType>( &self, src: &CudaSlice<T>, dev: &CudaDevice, layout: &Layout, ) -> Result<CudaSlice<T>> { let shape = layout.shape(); let dims = shape.dims(); let el = shape.elem_count(); let cfg = LaunchConfig::for_num_elems(el as u32); let ds = SlicePtrOrNull::params_from_layout(dev, layout)?; let src = &src.slice(layout.start_offset()..); let func = dev.get_or_load_func(&kernel_name::<T>("affine"), kernels::AFFINE)?; // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(el) }.w()?; let params = ( el, dims.len(), &ds, src, &out, T::from_f64(self.0), T::from_f64(self.1), ); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct Elu(f64); impl Map1 for Elu { fn f<T: DeviceRepr + WithDType>( &self, src: &CudaSlice<T>, dev: &CudaDevice, layout: &Layout, ) -> Result<CudaSlice<T>> { let shape = layout.shape(); let dims = shape.dims(); let el = shape.elem_count(); let cfg = LaunchConfig::for_num_elems(el as u32); let ds = SlicePtrOrNull::params_from_layout(dev, layout)?; let src = &src.slice(layout.start_offset()..); let func = dev.get_or_load_func(&kernel_name::<T>("uelu"), kernels::UNARY)?; // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(el) }.w()?; let params = (el, dims.len(), &ds, T::from_f64(self.0), src, &out); // SAFETY: ffi. 
unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct Im2Col1D { l_k: usize, stride: usize, dilation: usize, padding: usize, } impl Im2Col1D { fn l_out(&self, l: usize) -> usize { (l + 2 * self.padding - self.dilation * (self.l_k - 1) - 1) / self.stride + 1 } } impl Map1 for Im2Col1D { fn f<T: DeviceRepr + WithDType>( &self, src: &CudaSlice<T>, dev: &CudaDevice, layout: &Layout, ) -> Result<CudaSlice<T>> { let shape = layout.shape(); let dims = shape.dims(); let l_out = self.l_out(dims[2]); let dst_el = dims[0] * l_out * dims[1] * self.l_k; let cfg = LaunchConfig::for_num_elems(dst_el as u32); let ds = dev.htod_copy([dims, layout.stride()].concat()).w()?; let src = &src.slice(layout.start_offset()..); let func = dev.get_or_load_func(&kernel_name::<T>("im2col1d"), kernels::CONV)?; // SAFETY: Set later by running the kernel. let dst = unsafe { dev.alloc::<T>(dst_el) }.w()?; let params = ( dst_el, l_out, self.l_k, self.stride, self.padding, self.dilation, &ds, src, &dst, ); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(dst) } } #[allow(unused)] struct Im2Col { h_k: usize, w_k: usize, stride: usize, dilation: usize, padding: usize, } impl Im2Col { #[allow(unused)] fn hw_out(&self, h: usize, w: usize) -> (usize, usize) { let h_out = (h + 2 * self.padding - self.dilation * (self.h_k - 1) - 1) / self.stride + 1; let w_out = (w + 2 * self.padding - self.dilation * (self.w_k - 1) - 1) / self.stride + 1; (h_out, w_out) } } impl Map1 for Im2Col { fn f<T: DeviceRepr + WithDType>( &self, src: &CudaSlice<T>, dev: &CudaDevice, layout: &Layout, ) -> Result<CudaSlice<T>> { let shape = layout.shape(); let dims = shape.dims(); let (h_out, w_out) = self.hw_out(dims[2], dims[3]); let dst_el = dims[0] * h_out * w_out * dims[1] * self.h_k * self.w_k; let cfg = LaunchConfig::for_num_elems(dst_el as u32); let ds = dev.htod_copy([dims, layout.stride()].concat()).w()?; let src = &src.slice(layout.start_offset()..); let func = dev.get_or_load_func(&kernel_name::<T>("im2col"), kernels::CONV)?; // SAFETY: Set later by running the kernel. let dst = unsafe { dev.alloc::<T>(dst_el) }.w()?; let params = ( dst_el, h_out, w_out, self.h_k, self.w_k, self.stride, self.padding, self.dilation, &ds, src, &dst, ); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(dst) } } struct Powf(f64); impl Map1 for Powf { fn f<T: DeviceRepr + WithDType>( &self, src: &CudaSlice<T>, dev: &CudaDevice, layout: &Layout, ) -> Result<CudaSlice<T>> { let shape = layout.shape(); let dims = shape.dims(); let el = shape.elem_count(); let cfg = LaunchConfig::for_num_elems(el as u32); let ds = SlicePtrOrNull::params_from_layout(dev, layout)?; let src = &src.slice(layout.start_offset()..); let func = dev.get_or_load_func(&kernel_name::<T>("upowf"), kernels::UNARY)?; // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(el) }.w()?; let params = (el, dims.len(), &ds, T::from_f64(self.0), src, &out); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct FastReduce<'a>(&'a [usize], ReduceOp); impl Map1Any for FastReduce<'_> { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits, W: Fn(CudaSlice<T>) -> S>( &self, src: &CudaSlice<T>, dev: &CudaDevice, layout: &Layout, wrap: W, ) -> Result<S> { let src_stride = layout.stride(); let src_dims = layout.shape().dims(); let src_el: usize = src_dims.iter().product(); // Source dims and strides with the sum dims at the end. 
let mut dims = vec![]; let mut stride = vec![]; let mut dst_el: usize = 1; for (dim_idx, &d) in src_dims.iter().enumerate() { if !self.0.contains(&dim_idx) { dst_el *= d; dims.push(d); stride.push(src_stride[dim_idx]); } } for &dim_idx in self.0.iter() { dims.push(src_dims[dim_idx]); stride.push(src_stride[dim_idx]); } let el_to_sum_per_block = src_el / dst_el; // The reduction loop requires the shared array to be properly initialized and for // this we want the number of threads to be a power of two. let block_dim = usize::min(1024, el_to_sum_per_block).next_power_of_two(); let cfg = LaunchConfig { // TODO: Maybe use grid_y if the output is too large? // TODO: Specialized implementation when reducing on no or all dimensions or when // reducing only aggregate a small number of elements together. grid_dim: (dst_el as u32, 1, 1), block_dim: (block_dim as u32, 1, 1), shared_mem_bytes: 0, }; let ds = dev .htod_copy([dims.as_slice(), stride.as_slice()].concat()) .w()?; let src = &src.slice(layout.start_offset()..); let (name, check_empty, return_index) = match self.1 { ReduceOp::Sum => ("fast_sum", false, false), ReduceOp::Min => ("fast_min", true, false), ReduceOp::Max => ("fast_max", true, false), ReduceOp::ArgMin => ("fast_argmin", true, true), ReduceOp::ArgMax => ("fast_argmax", true, true), }; if check_empty && layout.shape().elem_count() == 0 { Err(crate::Error::EmptyTensor { op: "reduce" }.bt())? } let func = dev.get_or_load_func(&kernel_name::<T>(name), kernels::REDUCE)?; if return_index { // SAFETY: filled in by the follow up kernel. let out = unsafe { dev.alloc::<u32>(dst_el) }.w()?; let params = (src_el, el_to_sum_per_block, src_dims.len(), &ds, src, &out); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(S::U32(out)) } else { // SAFETY: filled in by the follow up kernel. let out = unsafe { dev.alloc::<T>(dst_el) }.w()?; let params = (src_el, el_to_sum_per_block, src_dims.len(), &ds, src, &out); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(wrap(out)) } } } impl<U: UnaryOpT> Map1 for U { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, src: &CudaSlice<T>, dev: &CudaDevice, layout: &Layout, ) -> Result<CudaSlice<T>> { let shape = layout.shape(); let dims = shape.dims(); let el_count = shape.elem_count(); let cfg = LaunchConfig::for_num_elems(el_count as u32); let ds = SlicePtrOrNull::params_from_layout(dev, layout)?; let src = &src.slice(layout.start_offset()..); let func = dev.get_or_load_func(&kernel_name::<T>(U::KERNEL), kernels::UNARY)?; // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(el_count) }.w()?; let params = (el_count, dims.len(), &ds, src, &out); // SAFETY: ffi. 
unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct IndexSelect<'a>(&'a CudaStorage, &'a Layout, usize); impl Map1 for IndexSelect<'_> { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, src: &CudaSlice<T>, dev: &CudaDevice, src_l: &Layout, ) -> Result<CudaSlice<T>> { let ids_l = &self.1; let (name, ids) = match &self.0.slice { CudaStorageSlice::U32(slice) => { ("is_u32", *slice.slice(ids_l.start_offset()..).device_ptr()) } CudaStorageSlice::U8(slice) => { ("is_u8", *slice.slice(ids_l.start_offset()..).device_ptr()) } CudaStorageSlice::I64(slice) => { ("is_i64", *slice.slice(ids_l.start_offset()..).device_ptr()) } _ => Err(CudaError::UnexpectedDType { msg: "index_select ids should be u8 or u32", expected: DType::U32, got: self.0.dtype(), }) .w()?, }; let ids_shape = ids_l.shape(); let ids_dims = ids_shape.dims(); let ds = dev.htod_copy([ids_dims, ids_l.stride()].concat()).w()?; let src = match src_l.contiguous_offsets() { Some((o1, o2)) => src.slice(o1..o2), None => Err(crate::Error::RequiresContiguous { op: "index-select" }.bt())?, }; let left_size: usize = src_l.dims()[..self.2].iter().product(); let right_size: usize = src_l.dims()[self.2 + 1..].iter().product(); let src_dim_size = src_l.dims()[self.2]; let ids_dim_size = ids_shape.elem_count(); let dst_el = ids_shape.elem_count() * left_size * right_size; let cfg = LaunchConfig::for_num_elems(dst_el as u32); let func = dev.get_or_load_func(&kernel_name::<T>(name), kernels::INDEXING)?; // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(dst_el) }.w()?; let params = ( dst_el, ids_dims.len(), &ds, ids, &src, &out, left_size, src_dim_size, ids_dim_size, right_size, ); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct Gather<'a>(&'a CudaStorage, &'a Layout, usize); impl Map1 for Gather<'_> { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, src: &CudaSlice<T>, dev: &CudaDevice, src_l: &Layout, ) -> Result<CudaSlice<T>> { let ids = &self.0; let ids_l = &self.1; let dim = self.2; let (ids_o1, ids_o2) = match ids_l.contiguous_offsets() { Some(o12) => o12, None => Err(crate::Error::RequiresContiguous { op: "gather" }.bt())?, }; let (name, ids) = match &ids.slice { CudaStorageSlice::U32(slice) => { ("gather_u32", *slice.slice(ids_o1..ids_o2).device_ptr()) } CudaStorageSlice::U8(slice) => ("gather_u8", *slice.slice(ids_o1..ids_o2).device_ptr()), CudaStorageSlice::I64(slice) => { ("gather_i64", *slice.slice(ids_o1..ids_o2).device_ptr()) } _ => Err(CudaError::UnexpectedDType { msg: "gather ids should be u8/u32/i64", expected: DType::U32, got: ids.dtype(), })?, }; let el = ids_l.shape().elem_count(); let cfg = LaunchConfig::for_num_elems(el as u32); let src = match src_l.contiguous_offsets() { Some((o1, o2)) => src.slice(o1..o2), None => Err(crate::Error::RequiresContiguous { op: "gather" }.bt())?, }; let left_sz: usize = src_l.dims()[..dim].iter().product(); let right_sz: usize = src_l.dims()[dim + 1..].iter().product(); let src_dim_sz = src_l.dims()[dim]; let ids_dim_sz = ids_l.dims()[dim]; let func = dev.get_or_load_func(&kernel_name::<T>(name), kernels::INDEXING)?; // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(el) }.w()?; let params = ( el, ids, &src, &out, left_sz, src_dim_sz, ids_dim_sz, right_sz, ); // SAFETY: ffi. 
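// One thread per id: `el` is the number of gathered elements, and `left_sz`/`right_sz`
// describe the dimensions collapsed on either side of the gather axis.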
unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct IndexAdd<'a>(&'a CudaStorage, &'a Layout, usize); impl Map2InPlace for IndexAdd<'_> { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, dst: &mut CudaSlice<T>, dst_shape: &Shape, src: &CudaSlice<T>, src_l: &Layout, dev: &CudaDevice, ) -> Result<()> { let ids = &self.0; let ids_l = &self.1; let dim = self.2; let (ids_o1, ids_o2) = match ids_l.contiguous_offsets() { Some(o12) => o12, None => Err(crate::Error::RequiresContiguous { op: "index-add" }.bt())?, }; let (name, ids) = match &ids.slice { CudaStorageSlice::U32(slice) => ("ia_u32", *slice.slice(ids_o1..ids_o2).device_ptr()), CudaStorageSlice::I64(slice) => ("ia_i64", *slice.slice(ids_o1..ids_o2).device_ptr()), CudaStorageSlice::U8(slice) => ("ia_u8", *slice.slice(ids_o1..ids_o2).device_ptr()), _ => Err(CudaError::UnexpectedDType { msg: "index-add ids should be u8/u32/i64", expected: DType::U32, got: ids.dtype(), })?, }; let src = match src_l.contiguous_offsets() { Some((o1, o2)) => src.slice(o1..o2), None => Err(crate::Error::RequiresContiguous { op: "index-add" }.bt())?, }; let left_sz: usize = src_l.dims()[..dim].iter().product(); let right_sz: usize = src_l.dims()[dim + 1..].iter().product(); let src_dim_sz = src_l.dims()[dim]; let dst_dim_sz = dst_shape.dims()[dim]; let ids_dim_sz = ids_l.dims()[0]; let cfg = LaunchConfig::for_num_elems((left_sz * right_sz) as u32); let func = dev.get_or_load_func(&kernel_name::<T>(name), kernels::INDEXING)?; // SAFETY: Set later by running the kernel. let params = ( ids, ids_dim_sz, &src, dst, left_sz, src_dim_sz, dst_dim_sz, right_sz, ); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(()) } } struct ScatterAdd<'a>(&'a CudaStorage, &'a Layout, usize); impl Map2InPlace for ScatterAdd<'_> { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, dst: &mut CudaSlice<T>, dst_shape: &Shape, src: &CudaSlice<T>, src_l: &Layout, dev: &CudaDevice, ) -> Result<()> { let ids = &self.0; let ids_l = &self.1; let dim = self.2; let (ids_o1, ids_o2) = match ids_l.contiguous_offsets() { Some(o12) => o12, None => Err(crate::Error::RequiresContiguous { op: "scatter-add" }.bt())?, }; let (name, ids) = match &ids.slice { CudaStorageSlice::U32(slice) => ("sa_u32", *slice.slice(ids_o1..ids_o2).device_ptr()), CudaStorageSlice::I64(slice) => ("sa_i64", *slice.slice(ids_o1..ids_o2).device_ptr()), CudaStorageSlice::U8(slice) => ("sa_u8", *slice.slice(ids_o1..ids_o2).device_ptr()), _ => Err(CudaError::UnexpectedDType { msg: "scatter-add ids should be u8/u32/i64", expected: DType::U32, got: ids.dtype(), })?, }; let src = match src_l.contiguous_offsets() { Some((o1, o2)) => src.slice(o1..o2), None => Err(crate::Error::RequiresContiguous { op: "scatter-add" }.bt())?, }; let left_sz: usize = src_l.dims()[..dim].iter().product(); let right_sz: usize = src_l.dims()[dim + 1..].iter().product(); let src_dim_sz = src_l.dims()[dim]; let dst_dim_sz = dst_shape.dims()[dim]; let cfg = LaunchConfig::for_num_elems((left_sz * right_sz) as u32); let func = dev.get_or_load_func(&kernel_name::<T>(name), kernels::INDEXING)?; // SAFETY: Set later by running the kernel. let params = (ids, &src, dst, left_sz, src_dim_sz, dst_dim_sz, right_sz); // SAFETY: ffi. 
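// In-place accumulation: the caller pre-fills `dst` with a copy of the destination
// tensor, and the kernel adds `src` values at the rows selected by `ids` along the
// scatter dimension, with one launch slot per (left, right) position.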
unsafe { func.launch(cfg, params) }.w()?; Ok(()) } } struct Conv1D<'a>(&'a crate::conv::ParamsConv1D); impl Map2 for Conv1D<'_> { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, inp: &CudaSlice<T>, inp_l: &Layout, k: &CudaSlice<T>, k_l: &Layout, dev: &CudaDevice, ) -> Result<CudaSlice<T>> { // Kernel shape: (c_out, c_in_k, k_size) // Input shape: (b_size, c_in, l_in) or (c_in, l_in) let p = &self.0; let inp = &inp.slice(inp_l.start_offset()..); let k = &k.slice(k_l.start_offset()..); let shape = inp_l.shape(); let dims = shape.dims(); let el = shape.elem_count(); let l_out = p.l_out(); let dst_el = p.c_out * l_out * p.b_size; let cfg = LaunchConfig::for_num_elems(dst_el as u32); let func = dev.get_or_load_func(&kernel_name::<T>("conv1d"), kernels::CONV)?; // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(dst_el) }.w()?; let ds = if dims.len() == 3 { [dims, inp_l.stride(), k_l.dims(), k_l.stride()].concat() } else if dims.len() == 2 { [&[1], dims, &[1], inp_l.stride(), k_l.dims(), k_l.stride()].concat() } else { crate::bail!("unexpected input shape for conv1d {dims:?}") }; let ds = dev.htod_copy(ds).w()?; let params = ( el, l_out, p.stride, p.padding, p.dilation, &ds, inp, k, &out, ); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct Conv2D<'a>(&'a crate::conv::ParamsConv2D); impl Map2 for Conv2D<'_> { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, inp: &CudaSlice<T>, inp_l: &Layout, k: &CudaSlice<T>, k_l: &Layout, dev: &CudaDevice, ) -> Result<CudaSlice<T>> { // Kernel shape: (c_out, c_in_k, h_k, w_k) // Input shape: (b_size, c_in, h_in, w_in) let p = &self.0; let (out_w, out_h) = (p.out_w(), p.out_h()); let dst_el = p.c_out * out_w * out_h * p.b_size; let inp = &inp.slice(inp_l.start_offset()..); let k = &k.slice(k_l.start_offset()..); let shape = inp_l.shape(); let dims = shape.dims(); let el = shape.elem_count(); // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(dst_el) }.w()?; let cfg = LaunchConfig::for_num_elems(dst_el as u32); let func = dev.get_or_load_func(&kernel_name::<T>("conv2d"), kernels::CONV)?; let ds = if dims.len() == 4 { [dims, inp_l.stride(), k_l.dims(), k_l.stride()].concat() } else { crate::bail!("unexpected input shape for conv2d {dims:?}") }; let ds = dev.htod_copy(ds).w()?; let params = ( el, out_w, out_h, p.stride, p.padding, p.dilation, &ds, inp, k, &out, ); // SAFETY: ffi. 
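// Direct (non-im2col) 2D convolution: one thread per output element, with `ds` packing
// the input dims/strides followed by the kernel dims/strides.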
unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct Col2Im1D { stride: usize, } impl Map1 for Col2Im1D { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, col: &CudaSlice<T>, dev: &CudaDevice, l: &Layout, ) -> Result<CudaSlice<T>> { let (b_size, l_in, c_out, k_size) = l.shape().dims4()?; let stride = self.stride; let l_out = (l_in - 1) * stride + k_size; let dst_el = b_size * c_out * l_out; let mut im = unsafe { dev.alloc::<T>(dst_el) }.w()?; let cfg = LaunchConfig::for_num_elems(dst_el as u32); let params = (dst_el, l_out, l_in, c_out, k_size, stride, col, &mut im); let func = dev.get_or_load_func(&kernel_name::<T>("col2im1d"), kernels::CONV)?; unsafe { func.launch(cfg, params) }.w()?; Ok(im) } } struct ConvTranspose1D<'a>(&'a crate::conv::ParamsConvTranspose1D); impl Map2 for ConvTranspose1D<'_> { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, inp: &CudaSlice<T>, inp_l: &Layout, k: &CudaSlice<T>, k_l: &Layout, dev: &CudaDevice, ) -> Result<CudaSlice<T>> { // Kernel shape: (c_in_k, c_out, l_k) // Input shape: (b_size, c_in, l_in) let p = &self.0; let l_out = p.l_out(); let dst_el = p.c_out * l_out * p.b_size; let inp = &inp.slice(inp_l.start_offset()..); let k = &k.slice(k_l.start_offset()..); let shape = inp_l.shape(); let dims = shape.dims(); let el = shape.elem_count(); // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(dst_el) }.w()?; let cfg = LaunchConfig::for_num_elems(dst_el as u32); let func = dev.get_or_load_func(&kernel_name::<T>("conv_transpose1d"), kernels::CONV)?; let ds = if dims.len() == 3 { [dims, inp_l.stride(), k_l.dims(), k_l.stride()].concat() } else { crate::bail!("unexpected input shape for conv_transpose1d {dims:?}") }; let ds = dev.htod_copy(ds).w()?; let params = ( el, l_out, p.stride, p.padding, p.output_padding, p.dilation, &ds, inp, k, &out, ); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct ConvTranspose2D<'a>(&'a crate::conv::ParamsConvTranspose2D); impl Map2 for ConvTranspose2D<'_> { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, inp: &CudaSlice<T>, inp_l: &Layout, k: &CudaSlice<T>, k_l: &Layout, dev: &CudaDevice, ) -> Result<CudaSlice<T>> { // Kernel shape: (c_in_k, c_out, h_k, w_k) // Input shape: (b_size, c_in, h_in, w_in) let p = &self.0; let (out_w, out_h) = (p.out_w(), p.out_h()); let dst_el = p.c_out * out_w * out_h * p.b_size; let inp = &inp.slice(inp_l.start_offset()..); let k = &k.slice(k_l.start_offset()..); let shape = inp_l.shape(); let dims = shape.dims(); let el = shape.elem_count(); // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(dst_el) }.w()?; let cfg = LaunchConfig::for_num_elems(dst_el as u32); let func = dev.get_or_load_func(&kernel_name::<T>("conv_transpose2d"), kernels::CONV)?; let ds = if dims.len() == 4 { [dims, inp_l.stride(), k_l.dims(), k_l.stride()].concat() } else { crate::bail!("unexpected input shape for conv_transpose2d {dims:?}") }; let ds = dev.htod_copy(ds).w()?; let params = ( el, out_w, out_h, p.stride, p.padding, p.output_padding, p.dilation, &ds, inp, k, &out, ); // SAFETY: ffi. 
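// Transposed 2D convolution: same parameter layout as the direct conv2d kernel above,
// plus `output_padding`.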
unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } enum PoolOp { Max, Avg, } struct Pool2D { w_k: usize, h_k: usize, w_stride: usize, h_stride: usize, op: PoolOp, } impl Map1 for Pool2D { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, inp: &CudaSlice<T>, dev: &CudaDevice, inp_l: &Layout, ) -> Result<CudaSlice<T>> { // Input shape: (b_size, c, h, w) let inp = &inp.slice(inp_l.start_offset()..); let shape = inp_l.shape(); let dims = shape.dims(); let ds = if dims.len() == 4 { [dims, inp_l.stride()].concat() } else { crate::bail!("unexpected input shape for pool {dims:?}") }; let el = shape.elem_count(); let out_w = (dims[2] - self.w_k) / self.w_stride + 1; let out_h = (dims[3] - self.h_k) / self.h_stride + 1; let dst_el = out_w * out_h * dims[0] * dims[1]; let cfg = LaunchConfig::for_num_elems(dst_el as u32); let kname = match self.op { PoolOp::Max => "max_pool2d", PoolOp::Avg => "avg_pool2d", }; let func = dev.get_or_load_func(&kernel_name::<T>(kname), kernels::CONV)?; // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(dst_el) }.w()?; let ds = dev.htod_copy(ds).w()?; let params = ( el, self.w_k, self.h_k, self.w_stride, self.h_stride, &ds, inp, &out, ); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct UpsampleNearest2D(usize, usize); impl Map1 for UpsampleNearest2D { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, inp: &CudaSlice<T>, dev: &CudaDevice, inp_l: &Layout, ) -> Result<CudaSlice<T>> { // Input shape: (b_size, c, h, w) let inp = &inp.slice(inp_l.start_offset()..); let shape = inp_l.shape(); let dims = shape.dims(); let ds = if dims.len() == 4 { [dims, inp_l.stride()].concat() } else { crate::bail!("unexpected input shape for upsample {dims:?}") }; let (out_w, out_h) = (self.0, self.1); let dst_el = out_w * out_h * dims[0] * dims[1]; let cfg = LaunchConfig::for_num_elems(dst_el as u32); let func = dev.get_or_load_func(&kernel_name::<T>("upsample_nearest2d"), kernels::CONV)?; // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(dst_el) }.w()?; let ds = dev.htod_copy(ds).w()?; let scale_w = dims[2] as f64 / out_w as f64; let scale_h = dims[3] as f64 / out_h as f64; let params = (out_w, out_h, scale_w, scale_h, &ds, inp, &out); // SAFETY: ffi. 
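// Nearest-neighbour upsampling: each output pixel maps back to a source pixel via the
// precomputed `scale_w`/`scale_h` ratios.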
unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct WhereCond<'a>(&'a CudaStorage, &'a Layout); impl Map2 for WhereCond<'_> { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, t: &CudaSlice<T>, layout_t: &Layout, f: &CudaSlice<T>, layout_f: &Layout, dev: &CudaDevice, ) -> Result<CudaSlice<T>> { let ids_l = &self.1; let (ids, name) = match &self.0.slice { CudaStorageSlice::U8(slice) => { let ptr = *slice.slice(ids_l.start_offset()..).device_ptr(); (ptr, "where_u8") } CudaStorageSlice::U32(slice) => { let ptr = *slice.slice(ids_l.start_offset()..).device_ptr(); (ptr, "where_u32") } CudaStorageSlice::I64(slice) => { let ptr = *slice.slice(ids_l.start_offset()..).device_ptr(); (ptr, "where_i64") } _ => Err(CudaError::UnexpectedDType { msg: "where conditions should be u8/u32/i64", expected: DType::U32, got: self.0.dtype(), }) .w()?, }; let shape = ids_l.shape(); let dims = shape.dims(); let el = shape.elem_count(); let cfg = LaunchConfig::for_num_elems(el as u32); let ds = dev .htod_copy([dims, ids_l.stride(), layout_t.stride(), layout_f.stride()].concat()) .w()?; let t = &t.slice(layout_t.start_offset()..); let f = &f.slice(layout_f.start_offset()..); let func = dev.get_or_load_func(&kernel_name::<T>(name), kernels::TERNARY)?; // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(el) }.w()?; let params = (el, dims.len(), &ds, ids, t, f, &out); // SAFETY: ffi unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } impl<U: crate::op::BinaryOpT> Map2 for U { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, lhs: &CudaSlice<T>, lhs_l: &Layout, rhs: &CudaSlice<T>, rhs_l: &Layout, dev: &CudaDevice, ) -> Result<CudaSlice<T>> { let shape = lhs_l.shape(); let dims = shape.dims(); let elem_count = shape.elem_count(); let cfg = LaunchConfig::for_num_elems(elem_count as u32); let dims_and_strides = if lhs_l.is_contiguous() && rhs_l.is_contiguous() { SlicePtrOrNull::Null } else { SlicePtrOrNull::Ptr( dev.htod_copy([dims, lhs_l.stride(), rhs_l.stride()].concat()) .w()?, ) }; let lhs = &lhs.slice(lhs_l.start_offset()..); let rhs = &rhs.slice(rhs_l.start_offset()..); let func = dev.get_or_load_func(&kernel_name::<T>(U::KERNEL), kernels::BINARY)?; // SAFETY: Set later by running the kernel. let out = unsafe { dev.alloc::<T>(elem_count) }.w()?; let params = (elem_count, dims.len(), &dims_and_strides, lhs, rhs, &out); // SAFETY: ffi unsafe { func.launch(cfg, params) }.w()?; Ok(out) } } struct Cmp(CmpOp); impl Map2Any for Cmp { fn f<T: DeviceRepr + WithDType + ValidAsZeroBits>( &self, lhs: &CudaSlice<T>, lhs_l: &Layout, rhs: &CudaSlice<T>, rhs_l: &Layout, dev: &CudaDevice, ) -> Result<S> { let shape = lhs_l.shape(); let dims = shape.dims(); let elem_count = shape.elem_count(); let cfg = LaunchConfig::for_num_elems(elem_count as u32); let dims_and_strides = if lhs_l.is_contiguous() && rhs_l.is_contiguous() { SlicePtrOrNull::Null } else { SlicePtrOrNull::Ptr( dev.htod_copy([dims, lhs_l.stride(), rhs_l.stride()].concat()) .w()?, ) }; let lhs = &lhs.slice(lhs_l.start_offset()..); let rhs = &rhs.slice(rhs_l.start_offset()..); let name = match self.0 { CmpOp::Eq => "eq", CmpOp::Ne => "ne", CmpOp::Lt => "lt", CmpOp::Le => "le", CmpOp::Gt => "gt", CmpOp::Ge => "ge", }; let func = dev.get_or_load_func(&kernel_name::<T>(name), kernels::BINARY)?; // SAFETY: Set later by running the kernel. 
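// Comparisons always produce a u8 (0/1) mask regardless of the operand dtype, hence the
// u8 allocation below wrapped as `S::U8`.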
let out = unsafe { dev.alloc::<u8>(elem_count) }.w()?; let params = (elem_count, dims.len(), &dims_and_strides, lhs, rhs, &out); // SAFETY: ffi unsafe { func.launch(cfg, params) }.w()?; Ok(S::U8(out)) } } fn slice_src_and_dst<'a, T>( src: &'a CudaSlice<T>, src_l: &Layout, dst: &'a mut CudaSlice<T>, dst_offset: usize, ) -> ( cudarc::driver::CudaView<'a, T>, cudarc::driver::CudaViewMut<'a, T>, ) { let src_offset = src_l.start_offset(); let to_copy = dst .len() .saturating_sub(dst_offset) .min(src.len().saturating_sub(src_offset)); let src = src.slice(src_offset..src_offset + to_copy); let dst = dst.slice_mut(dst_offset..dst_offset + to_copy); (src, dst) } #[derive(Debug)] pub struct CudaStorage { pub slice: CudaStorageSlice, pub device: CudaDevice, } pub trait CudaDType: Sized { fn as_cuda_slice(s: &CudaStorage) -> Result<&CudaSlice<Self>>; fn wrap_cuda_slice(s: CudaSlice<Self>, dev: CudaDevice) -> CudaStorage; } macro_rules! cuda_dtype { ($ty:ty, $dtype:ident) => { impl CudaDType for $ty { fn as_cuda_slice(s: &CudaStorage) -> Result<&CudaSlice<Self>> { match &s.slice { CudaStorageSlice::$dtype(data) => Ok(&data), _ => Err(crate::Error::UnexpectedDType { expected: DType::$dtype, got: s.dtype(), msg: "unexpected dtype", } .bt()), } } fn wrap_cuda_slice(slice: CudaSlice<Self>, device: CudaDevice) -> CudaStorage { let slice = CudaStorageSlice::$dtype(slice); CudaStorage { slice, device } } } }; } cuda_dtype!(u8, U8); cuda_dtype!(u32, U32); cuda_dtype!(i64, I64); cuda_dtype!(f16, F16); cuda_dtype!(bf16, BF16); cuda_dtype!(f32, F32); cuda_dtype!(f64, F64); impl CudaStorage { pub fn wrap_cuda_slice<T: CudaDType>(slice: CudaSlice<T>, device: CudaDevice) -> CudaStorage { T::wrap_cuda_slice(slice, device) } pub fn as_cuda_slice<T: CudaDType>(&self) -> Result<&CudaSlice<T>> { T::as_cuda_slice(self) } } fn gemm_config<T>( alpha: T, beta: T, (b, m, n, k): (usize, usize, usize, usize), lhs_l: &Layout, rhs_l: &Layout, ) -> Result<StridedBatchedConfig<T>> { // https://docs.nvidia.com/cuda/cublas/index.html#cublas-t-gemm use cudarc::cublas::sys::cublasOperation_t; let lhs_stride = lhs_l.stride(); let rhs_stride = rhs_l.stride(); let rhs_m1 = rhs_stride[rhs_stride.len() - 1]; let rhs_m2 = rhs_stride[rhs_stride.len() - 2]; let lhs_m1 = lhs_stride[lhs_stride.len() - 1]; let lhs_m2 = lhs_stride[lhs_stride.len() - 2]; // The a tensor has dims batching, k, n (rhs) // We also allow for the case where the stride on the minor dimension is not as expected but // there is a single element. let (lda, transa) = if (rhs_m1 == 1 || n == 1) && (rhs_m2 == n || k == 1) { (n as i32, cublasOperation_t::CUBLAS_OP_N) } else if (rhs_m1 == k || n == 1) && (rhs_m2 == 1 || k == 1) { (k as i32, cublasOperation_t::CUBLAS_OP_T) } else { Err(CudaError::MatMulNonContiguous { lhs_stride: lhs_l.clone(), rhs_stride: rhs_l.clone(), mnk: (m, n, k), })? }; // The b tensor has dims batching, m, k (lhs) // We also allow for the case where the stride on the minor dimension is not as expected but // there is a single element. let (ldb, transb) = if (lhs_m1 == 1 || k == 1) && (lhs_m2 == k || m == 1) { (k as i32, cublasOperation_t::CUBLAS_OP_N) } else if (lhs_m1 == m || k == 1) && (lhs_m2 == 1 || m == 1) { (m as i32, cublasOperation_t::CUBLAS_OP_T) } else { Err(CudaError::MatMulNonContiguous { lhs_stride: lhs_l.clone(), rhs_stride: rhs_l.clone(), mnk: (m, n, k), })? 
}; // The setup below was copied from: // https://github.com/lebedov/scikit-cuda/blob/7e7300474286019c917a6c8a4bca59405c64fbce/tests/test_cublas.py#L531 let gemm = GemmConfig { alpha, beta, m: n as i32, n: m as i32, k: k as i32, lda, ldb, ldc: n as i32, transa, transb, }; let stride_b: usize = match lhs_stride[..lhs_stride.len() - 2] { [s1, stride] if s1 == stride * lhs_l.dims()[1] => stride, [_, stride] if lhs_l.dims()[0] == 1 => stride, [stride, _] if lhs_l.dims()[1] == 1 => stride, [stride] => stride, [] => m * k, _ => Err(CudaError::MatMulNonContiguous { lhs_stride: lhs_l.clone(), rhs_stride: rhs_l.clone(), mnk: (m, n, k), })?, }; let stride_a: usize = match rhs_stride[..rhs_stride.len() - 2] { [s1, stride] if s1 == stride * rhs_l.dims()[1] => stride, [_, stride] if rhs_l.dims()[0] == 1 => stride, [stride, _] if rhs_l.dims()[1] == 1 => stride, [stride] => stride, [] => n * k, _ => Err(CudaError::MatMulNonContiguous { lhs_stride: lhs_l.clone(), rhs_stride: rhs_l.clone(), mnk: (m, n, k), })?, }; Ok(StridedBatchedConfig { batch_size: b as i32, gemm, stride_a: stride_a as i64, stride_b: stride_b as i64, stride_c: (m * n) as i64, }) } impl BackendStorage for CudaStorage { type Device = CudaDevice; fn try_clone(&self, layout: &Layout) -> Result<Self> { let slice = Clone.map(&self.slice, self.device(), layout)?; let device = self.device.clone(); Ok(Self { slice, device }) } fn dtype(&self) -> DType { match self.slice { CudaStorageSlice::U8(_) => DType::U8, CudaStorageSlice::U32(_) => DType::U32, CudaStorageSlice::I64(_) => DType::I64, CudaStorageSlice::BF16(_) => DType::BF16, CudaStorageSlice::F16(_) => DType::F16, CudaStorageSlice::F32(_) => DType::F32, CudaStorageSlice::F64(_) => DType::F64, } } fn device(&self) -> &CudaDevice { &self.device } fn to_dtype(&self, layout: &Layout, dtype: DType) -> Result<Self> { let shape = layout.shape(); let dims = shape.dims(); let el = shape.elem_count(); let cfg = LaunchConfig::for_num_elems(el as u32); let dev = self.device(); let ds = SlicePtrOrNull::params_from_layout(dev, layout)?; let start_o = layout.start_offset(); // This returns an i64 rather than a &i64, this is useful to get around some temporary // lifetime issue and is safe as long as self.slice does not go out of scope before inp // is used. 
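// The raw device pointer is handed to a `cast_{src}_{dst}` kernel; each arm of the dtype
// match below allocates the output buffer in the destination dtype before launching it.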
let inp = match &self.slice { CudaStorageSlice::U8(inp) => *inp.slice(start_o..).device_ptr(), CudaStorageSlice::U32(inp) => *inp.slice(start_o..).device_ptr(), CudaStorageSlice::I64(inp) => *inp.slice(start_o..).device_ptr(), CudaStorageSlice::BF16(inp) => *inp.slice(start_o..).device_ptr(), CudaStorageSlice::F16(inp) => *inp.slice(start_o..).device_ptr(), CudaStorageSlice::F32(inp) => *inp.slice(start_o..).device_ptr(), CudaStorageSlice::F64(inp) => *inp.slice(start_o..).device_ptr(), }; let inp = &inp; let kernel_name = format!("cast_{}_{}", self.dtype().as_str(), dtype.as_str()); let func = dev.get_or_load_func(&kernel_name, kernels::CAST)?; let slice = match dtype { DType::U8 => { let out = unsafe { dev.alloc::<u8>(el) }.w()?; let params = (el, dims.len(), &ds, *inp, &out); unsafe { func.launch(cfg, params) }.w()?; CudaStorageSlice::U8(out) } DType::U32 => { let out = unsafe { dev.alloc::<u32>(el) }.w()?; let params = (el, dims.len(), &ds, *inp, &out); unsafe { func.launch(cfg, params) }.w()?; CudaStorageSlice::U32(out) } DType::I64 => { let out = unsafe { dev.alloc::<i64>(el) }.w()?; let params = (el, dims.len(), &ds, *inp, &out); unsafe { func.launch(cfg, params) }.w()?; CudaStorageSlice::I64(out) } DType::BF16 => { let out = unsafe { dev.alloc::<bf16>(el) }.w()?; let params = (el, dims.len(), &ds, *inp, &out); unsafe { func.launch(cfg, params) }.w()?; CudaStorageSlice::BF16(out) } DType::F16 => { let out = unsafe { dev.alloc::<f16>(el) }.w()?; let params = (el, dims.len(), &ds, *inp, &out); unsafe { func.launch(cfg, params) }.w()?; CudaStorageSlice::F16(out) } DType::F32 => { let out = unsafe { dev.alloc::<f32>(el) }.w()?; let params = (el, dims.len(), &ds, *inp, &out); unsafe { func.launch(cfg, params) }.w()?; CudaStorageSlice::F32(out) } DType::F64 => { let out = unsafe { dev.alloc::<f64>(el) }.w()?; let params = (el, dims.len(), &ds, *inp, &out); unsafe { func.launch(cfg, params) }.w()?; CudaStorageSlice::F64(out) } }; Ok(Self { slice, device: dev.clone(), }) } fn affine(&self, layout: &Layout, mul: f64, add: f64) -> Result<Self> { let device = self.device().clone(); let slice = Affine(mul, add).map(&self.slice, &device, layout)?; Ok(Self { slice, device }) } fn powf(&self, layout: &Layout, e: f64) -> Result<Self> { let device = self.device().clone(); let slice = Powf(e).map(&self.slice, &device, layout)?; Ok(Self { slice, device }) } fn elu(&self, layout: &Layout, alpha: f64) -> Result<Self> { let device = self.device().clone(); let slice = Elu(alpha).map(&self.slice, &device, layout)?; Ok(Self { slice, device }) } fn reduce_op(&self, op: ReduceOp, layout: &Layout, sum_dims: &[usize]) -> Result<Self> { let device = self.device().clone(); let slice = FastReduce(sum_dims, op).map(&self.slice, &device, layout)?; Ok(Self { slice, device }) } fn cmp(&self, op: CmpOp, rhs: &Self, lhs_l: &Layout, rhs_l: &Layout) -> Result<Self> { let device = self.device().clone(); let slice = Cmp(op).map(&self.slice, lhs_l, &rhs.slice, rhs_l, &device)?; Ok(Self { slice, device }) } fn unary_impl<U: UnaryOpT>(&self, layout: &Layout) -> Result<Self> { let device = self.device().clone(); let slice = U::V.map(&self.slice, &device, layout)?; Ok(Self { slice, device }) } fn binary_impl<B: BinaryOpT>( &self, rhs: &Self, lhs_l: &Layout, rhs_l: &Layout, ) -> Result<Self> { let device = self.device().clone(); let slice = B::V.map(&self.slice, lhs_l, &rhs.slice, rhs_l, &device)?; Ok(Self { slice, device }) } fn to_cpu_storage(&self) -> Result<CpuStorage> { match &self.slice { CudaStorageSlice::U8(slice) => { 
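// `dtoh_sync_copy` is a blocking device-to-host transfer; the same pattern is repeated
// for every dtype below.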
let dev = slice.device(); let cpu_storage = dev.dtoh_sync_copy(slice).w()?; Ok(CpuStorage::U8(cpu_storage)) } CudaStorageSlice::U32(slice) => { let dev = slice.device(); let cpu_storage = dev.dtoh_sync_copy(slice).w()?; Ok(CpuStorage::U32(cpu_storage)) } CudaStorageSlice::I64(slice) => { let dev = slice.device(); let cpu_storage = dev.dtoh_sync_copy(slice).w()?; Ok(CpuStorage::I64(cpu_storage)) } CudaStorageSlice::BF16(slice) => { let dev = slice.device(); let cpu_storage = dev.dtoh_sync_copy(slice).w()?; Ok(CpuStorage::BF16(cpu_storage)) } CudaStorageSlice::F16(slice) => { let dev = slice.device(); let cpu_storage = dev.dtoh_sync_copy(slice).w()?; Ok(CpuStorage::F16(cpu_storage)) } CudaStorageSlice::F32(slice) => { let dev = slice.device(); let cpu_storage = dev.dtoh_sync_copy(slice).w()?; Ok(CpuStorage::F32(cpu_storage)) } CudaStorageSlice::F64(slice) => { let dev = slice.device(); let cpu_storage = dev.dtoh_sync_copy(slice).w()?; Ok(CpuStorage::F64(cpu_storage)) } } } fn where_cond( &self, layout: &Layout, t: &Self, t_l: &Layout, f: &Self, f_l: &Layout, ) -> Result<Self> { let device = self.device().clone(); let slice = WhereCond(self, layout).map(&t.slice, t_l, &f.slice, f_l, &device)?; Ok(Self { slice, device }) } fn conv1d( &self, l: &Layout, kernel: &Self, kernel_l: &Layout, params: &crate::conv::ParamsConv1D, ) -> Result<Self> { const USE_IM2COL_CONV1D: bool = true; let device = self.device().clone(); if !USE_IM2COL_CONV1D { let slice = Conv1D(params).map(&self.slice, l, &kernel.slice, kernel_l, &device)?; return Ok(Self { slice, device }); } let col = Im2Col1D { l_k: params.k_size, stride: params.stride, dilation: params.dilation, padding: params.padding, } .map(&self.slice, &device, l)?; let col = Self { slice: col, device }; let l_out = params.l_out(); let b = params.b_size; let n = params.c_out; let k = params.k_size * params.c_in; let m = l_out; let col_l = Layout::contiguous((b, m, k)); let res = if kernel_l.is_contiguous() { let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset()) .transpose(1, 2)? .broadcast_as((b, k, n))?; col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)? } else { // Make the kernel contiguous if not already the case. let mut kernel_c = unsafe { self.device() .alloc_uninit(kernel_l.shape(), kernel.dtype())? }; kernel.copy_strided_src(&mut kernel_c, 0, kernel_l)?; let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset()) .transpose(1, 2)? .broadcast_as((b, k, n))?; col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)? }; let res_l = Layout::contiguous((b, l_out, n)).transpose(1, 2)?; let mut res_t = unsafe { self.device().alloc_uninit(res_l.shape(), res.dtype())? 
}; res.copy_strided_src(&mut res_t, 0, &res_l)?; Ok(res_t) } fn conv_transpose1d( &self, l: &Layout, kernel: &Self, kernel_l: &Layout, params: &crate::conv::ParamsConvTranspose1D, ) -> Result<Self> { const USE_COL2IM_CONV1D_TR: bool = true; let device = self.device().clone(); let can_use_col2im = kernel_l.is_contiguous() && params.dilation == 1 && params.padding == 0 && params.output_padding == 0; let slice = if USE_COL2IM_CONV1D_TR && can_use_col2im { let (b_size, c_in, l_in) = l.shape().dims3()?; let (c_in2, c_out, k_size) = kernel_l.shape().dims3()?; if !kernel_l.is_contiguous() { crate::bail!( "convtr1d: the second argument (kernel) has to be contiguous {kernel_l:?}" ) } if c_in != c_in2 { crate::bail!( "convtr1d: shape mismatch on c_in {:?} {:?}", l.shape(), kernel_l.shape() ) } let col = { // This merges the last two dimensions of the kernel together. let kernel_l_mm = Layout::new( (b_size, c_in, k_size * c_out).into(), vec![0, k_size * c_out, 1], kernel_l.start_offset(), ); self.matmul( kernel, ( b_size, /* m */ l_in, /* n */ c_out * k_size, /* k */ c_in, ), &l.transpose(1, 2)?, &kernel_l_mm, )? }; let col_l = Layout::contiguous((b_size, l_in, c_out, k_size)); Col2Im1D { stride: params.stride, } .map(&col.slice, &device, &col_l)? } else { ConvTranspose1D(params).map(&self.slice, l, &kernel.slice, kernel_l, &device)? }; Ok(Self { slice, device }) } #[cfg(not(feature = "cudnn"))] fn conv2d( &self, l: &Layout, kernel: &Self, kernel_l: &Layout, params: &crate::conv::ParamsConv2D, ) -> Result<Self> { const USE_IM2COL_CONV2D: bool = true; let device = self.device().clone(); if !USE_IM2COL_CONV2D { let slice = Conv2D(params).map(&self.slice, l, &kernel.slice, kernel_l, &device)?; return Ok(Self { slice, device }); } let col = Im2Col { h_k: params.k_h, w_k: params.k_w, stride: params.stride, dilation: params.dilation, padding: params.padding, } .map(&self.slice, &device, l)?; let col = Self { slice: col, device }; let h_out = params.out_h(); let w_out = params.out_w(); let b = params.b_size; let n = params.c_out; let k = params.k_h * params.k_w * params.c_in; let m = h_out * w_out; let col_l = Layout::contiguous((b, m, k)); let res = if kernel_l.is_contiguous() { let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset()) .transpose(1, 2)? .broadcast_as((b, k, n))?; col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)? } else { // Make the kernel contiguous if not already the case. let mut kernel_c = unsafe { self.device() .alloc_uninit(kernel_l.shape(), kernel.dtype())? }; kernel.copy_strided_src(&mut kernel_c, 0, kernel_l)?; let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset()) .transpose(1, 2)? .broadcast_as((b, k, n))?; col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)? }; let res_l = Layout::contiguous((b, h_out, w_out, n)) .transpose(1, 2)? .transpose(1, 3)?; let mut res_t = unsafe { self.device().alloc_uninit(res_l.shape(), res.dtype())? 
}; res.copy_strided_src(&mut res_t, 0, &res_l)?; Ok(res_t) } #[cfg(feature = "cudnn")] fn conv2d( &self, inp_l: &Layout, kernel: &Self, kernel_l: &Layout, params: &crate::conv::ParamsConv2D, ) -> Result<Self> { let device = self.device().clone(); if !kernel_l.is_contiguous() { let slice = Conv2D(params).map(&self.slice, inp_l, &kernel.slice, kernel_l, &device)?; return Ok(Self { slice, device }); } let (out_w, out_h) = (params.out_w(), params.out_h()); let dst_el = params.c_out * out_w * out_h * params.b_size; let slice = match (&self.slice, &kernel.slice) { (S::U8(inp), S::U8(k)) => { let inp = &inp.slice(inp_l.start_offset()..); let k = &k.slice(kernel_l.start_offset()..); let mut out = unsafe { device.alloc::<u8>(dst_el) }.w()?; crate::cudnn::launch_conv2d::<u8, u8>(inp, inp_l, k, &mut out, params, &device) .map_err(crate::Error::wrap)?; S::U8(out) } (S::BF16(inp), S::BF16(k)) => { let inp = &inp.slice(inp_l.start_offset()..); let k = &k.slice(kernel_l.start_offset()..); let mut out = unsafe { device.alloc::<bf16>(dst_el) }.w()?; // Only PSEUDO_BFLOAT16_CONFIG is supported in cudnn, there is no "true bfloat16" // version. // https://docs.nvidia.com/deeplearning/cudnn/latest/api/cudnn-cnn-library.html#id88 crate::cudnn::launch_conv2d::<bf16, f32>(inp, inp_l, k, &mut out, params, &device) .map_err(crate::Error::wrap)?; S::BF16(out) } (S::F16(inp), S::F16(k)) => { let inp = &inp.slice(inp_l.start_offset()..); let k = &k.slice(kernel_l.start_offset()..); let mut out = unsafe { device.alloc::<f16>(dst_el) }.w()?; crate::cudnn::launch_conv2d::<f16, f16>(inp, inp_l, k, &mut out, params, &device) .map_err(crate::Error::wrap)?; S::F16(out) } (S::F32(inp), S::F32(k)) => { let inp = &inp.slice(inp_l.start_offset()..); let k = &k.slice(kernel_l.start_offset()..); let mut out = unsafe { device.alloc::<f32>(dst_el) }.w()?; crate::cudnn::launch_conv2d::<f32, f32>(inp, inp_l, k, &mut out, params, &device) .map_err(crate::Error::wrap)?; S::F32(out) } (S::F64(inp), S::F64(k)) => { let inp = &inp.slice(inp_l.start_offset()..); let k = &k.slice(kernel_l.start_offset()..); let mut out = unsafe { device.alloc::<f64>(dst_el) }.w()?; crate::cudnn::launch_conv2d::<f64, f64>(inp, inp_l, k, &mut out, params, &device) .map_err(crate::Error::wrap)?; S::F64(out) } (S::U32(_), S::U32(_)) => Err(CudaError::InternalError("conv2d does not support u32"))?, (S::I64(_), S::I64(_)) => Err(CudaError::InternalError("conv2d does not support i64"))?, _ => Err(CudaError::InternalError("dtype mismatch in conv2d"))?, }; Ok(Self { slice, device }) } fn conv_transpose2d( &self, l: &Layout, kernel: &Self, kernel_l: &Layout, params: &crate::conv::ParamsConvTranspose2D, ) -> Result<Self> { let device = self.device().clone(); let slice = ConvTranspose2D(params).map(&self.slice, l, &kernel.slice, kernel_l, &device)?; Ok(Self { slice, device }) } fn avg_pool2d(&self, l: &Layout, k: (usize, usize), stride: (usize, usize)) -> Result<Self> { let device = self.device().clone(); let slice = Pool2D { w_k: k.0, h_k: k.1, w_stride: stride.0, h_stride: stride.1, op: PoolOp::Avg, } .map(&self.slice, &device, l)?; Ok(Self { slice, device }) } fn max_pool2d(&self, l: &Layout, k: (usize, usize), stride: (usize, usize)) -> Result<Self> { let device = self.device().clone(); let slice = Pool2D { w_k: k.0, h_k: k.1, w_stride: stride.0, h_stride: stride.1, op: PoolOp::Max, } .map(&self.slice, &device, l)?; Ok(Self { slice, device }) } fn upsample_nearest1d(&self, _: &Layout, _out_sz: usize) -> Result<Self> { crate::bail!("upsample-nearest1d is not 
supported on cuda") } fn upsample_nearest2d(&self, l: &Layout, out_w: usize, out_h: usize) -> Result<Self> { let device = self.device().clone(); let slice = UpsampleNearest2D(out_w, out_h).map(&self.slice, &device, l)?; Ok(Self { slice, device }) } fn index_select(&self, ids: &Self, l: &Layout, ids_l: &Layout, dim: usize) -> Result<Self> { let device = self.device().clone(); let slice = IndexSelect(ids, ids_l, dim).map(&self.slice, &device, l)?; Ok(Self { slice, device }) } fn gather(&self, l: &Layout, ids: &Self, ids_l: &Layout, dim: usize) -> Result<Self> { let device = self.device().clone(); let slice = Gather(ids, ids_l, dim).map(&self.slice, &device, l)?; Ok(Self { slice, device }) } fn scatter_add( &self, l: &Layout, ids: &Self, ids_l: &Layout, src: &Self, src_l: &Layout, dim: usize, ) -> Result<Self> { let device = self.device().clone(); let mut acc = unsafe { device.alloc_uninit(l.shape(), self.dtype())? }; self.copy_strided_src(&mut acc, 0, l)?; ScatterAdd(ids, ids_l, dim).map(&mut acc.slice, l.shape(), &src.slice, src_l, &device)?; Ok(acc) } fn index_add( &self, l: &Layout, ids: &Self, ids_l: &Layout, src: &Self, src_l: &Layout, dim: usize, ) -> Result<Self> { let device = self.device().clone(); let mut acc = unsafe { device.alloc_uninit(l.shape(), self.dtype())? }; self.copy_strided_src(&mut acc, 0, l)?; IndexAdd(ids, ids_l, dim).map(&mut acc.slice, l.shape(), &src.slice, src_l, &device)?; Ok(acc) } fn matmul( &self, rhs: &Self, (b, m, n, k): (usize, usize, usize, usize), lhs_l: &Layout, rhs_l: &Layout, ) -> Result<Self> { let elem_count = b * m * n; let dev = &self.device; let slice = match (&self.slice, &rhs.slice) { (CudaStorageSlice::BF16(lhs), CudaStorageSlice::BF16(rhs)) => { let lhs = &lhs.slice(lhs_l.start_offset()..); let rhs = &rhs.slice(rhs_l.start_offset()..); let cfg = gemm_config(bf16::ONE, bf16::ZERO, (b, m, n, k), lhs_l, rhs_l)?; let mut out = unsafe { dev.alloc::<bf16>(elem_count) }.w()?; unsafe { gemm_strided_batched_bf16(&self.device.blas, cfg, rhs, lhs, &mut out) } .w()?; CudaStorageSlice::BF16(out) } (CudaStorageSlice::F16(lhs), CudaStorageSlice::F16(rhs)) => { let lhs = &lhs.slice(lhs_l.start_offset()..); let rhs = &rhs.slice(rhs_l.start_offset()..); let cfg = gemm_config(f16::ONE, f16::ZERO, (b, m, n, k), lhs_l, rhs_l)?; let mut out = unsafe { dev.alloc::<f16>(elem_count) }.w()?; unsafe { gemm_strided_batched_f16(&self.device.blas, cfg, rhs, lhs, &mut out) } .w()?; CudaStorageSlice::F16(out) } (CudaStorageSlice::F32(lhs), CudaStorageSlice::F32(rhs)) => { let lhs = &lhs.slice(lhs_l.start_offset()..); let rhs = &rhs.slice(rhs_l.start_offset()..); let cfg = gemm_config(1., 0., (b, m, n, k), lhs_l, rhs_l)?; let mut out = unsafe { dev.alloc::<f32>(elem_count) }.w()?; unsafe { gemm_strided_batched_f32(&self.device.blas, cfg, rhs, lhs, &mut out) } .w()?; CudaStorageSlice::F32(out) } (CudaStorageSlice::F64(lhs), CudaStorageSlice::F64(rhs)) => { let lhs = &lhs.slice(lhs_l.start_offset()..); let rhs = &rhs.slice(rhs_l.start_offset()..); let cfg = gemm_config(1., 0., (b, m, n, k), lhs_l, rhs_l)?; let mut out = unsafe { dev.alloc::<f64>(elem_count) }.w()?; unsafe { self.device .blas .gemm_strided_batched(cfg, rhs, lhs, &mut out) } .w()?; CudaStorageSlice::F64(out) } _ => Err(CudaError::InternalError("dtype mismatch in matmul op"))?, }; let device = dev.clone(); Ok(Self { slice, device }) } fn copy2d( &self, dst: &mut Self, d1: usize, d2: usize, src_s: usize, dst_s: usize, src_o: usize, dst_o: usize, ) -> Result<()> { let dev = &self.device; let d1 = d1 as u32; let d2 
= d2 as u32; // Nothing to copy so we exit early to avoid launching a kernel and some potential invalid // argument with a null pointer. if d1 == 0 || d2 == 0 { return Ok(()); } let dst_s = dst_s as u32; let src_s = src_s as u32; let (src, dst, kname) = match (&self.slice, &mut dst.slice) { (S::U8(s), S::U8(d)) => ( *s.slice(src_o..).device_ptr(), *d.slice(dst_o..).device_ptr(), "copy2d_u8", ), (S::U32(s), S::U32(d)) => ( *s.slice(src_o..).device_ptr(), *d.slice(dst_o..).device_ptr(), "copy2d_u32", ), (S::I64(s), S::I64(d)) => ( *s.slice(src_o..).device_ptr(), *d.slice(dst_o..).device_ptr(), "copy2d_i64", ), (S::BF16(s), S::BF16(d)) => ( *s.slice(src_o..).device_ptr(), *d.slice(dst_o..).device_ptr(), "copy2d_bf16", ), (S::F16(s), S::F16(d)) => ( *s.slice(src_o..).device_ptr(), *d.slice(dst_o..).device_ptr(), "copy2d_f16", ), (S::F32(s), S::F32(d)) => ( *s.slice(src_o..).device_ptr(), *d.slice(dst_o..).device_ptr(), "copy2d_f32", ), (S::F64(s), S::F64(d)) => ( *s.slice(src_o..).device_ptr(), *d.slice(dst_o..).device_ptr(), "copy2d_f64", ), _ => Err(CudaError::InternalError("dtype mismatch in copy2d"))?, }; let func = dev.get_or_load_func(kname, kernels::FILL)?; let cfg = LaunchConfig::for_num_elems(d1 * d2); let params = (src, dst, d1, d2, src_s, dst_s); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; Ok(()) } fn copy_strided_src(&self, dst: &mut Self, dst_offset: usize, src_l: &Layout) -> Result<()> { let src_shape = src_l.shape(); let dims = src_shape.dims(); let el_count = src_shape.elem_count(); if el_count == 0 { return Ok(()); } let cfg = LaunchConfig::for_num_elems(el_count as u32); let dev = &self.device; let ds = SlicePtrOrNull::params_from_layout(dev, src_l)?; match (&self.slice, &mut dst.slice) { (CudaStorageSlice::BF16(src), CudaStorageSlice::BF16(dst)) => { let (src, mut dst) = slice_src_and_dst(src, src_l, dst, dst_offset); if src_l.is_contiguous() { dev.dtod_copy(&src, &mut dst).w()? } else { let func = dev.get_or_load_func("ucopy_bf16", kernels::UNARY)?; // SAFETY: Set later by running the kernel. let params = (el_count, dims.len(), &ds, &src, &mut dst); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()? } } (CudaStorageSlice::F16(src), CudaStorageSlice::F16(dst)) => { let (src, mut dst) = slice_src_and_dst(src, src_l, dst, dst_offset); if src_l.is_contiguous() { dev.dtod_copy(&src, &mut dst).w()? } else { let func = dev.get_or_load_func("ucopy_f16", kernels::UNARY)?; // SAFETY: Set later by running the kernel. let params = (el_count, dims.len(), &ds, &src, &mut dst); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()? } } (CudaStorageSlice::F32(src), CudaStorageSlice::F32(dst)) => { let (src, mut dst) = slice_src_and_dst(src, src_l, dst, dst_offset); if src_l.is_contiguous() { dev.dtod_copy(&src, &mut dst).w()? } else { let func = dev.get_or_load_func("ucopy_f32", kernels::UNARY)?; // SAFETY: Set later by running the kernel. let params = (el_count, dims.len(), &ds, &src, &mut dst); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()? } } (CudaStorageSlice::U8(src), CudaStorageSlice::U8(dst)) => { let (src, mut dst) = slice_src_and_dst(src, src_l, dst, dst_offset); if src_l.is_contiguous() { dev.dtod_copy(&src, &mut dst).w()? } else { let func = dev.get_or_load_func("ucopy_u8", kernels::UNARY)?; // SAFETY: Set later by running the kernel. let params = (el_count, dims.len(), &ds, &src, &mut dst); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()? 
} } (CudaStorageSlice::U32(src), CudaStorageSlice::U32(dst)) => { let (src, mut dst) = slice_src_and_dst(src, src_l, dst, dst_offset); if src_l.is_contiguous() { dev.dtod_copy(&src, &mut dst).w()? } else { let func = dev.get_or_load_func("ucopy_u32", kernels::UNARY)?; // SAFETY: Set later by running the kernel. let params = (el_count, dims.len(), &ds, &src, &mut dst); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()? } } (CudaStorageSlice::I64(src), CudaStorageSlice::I64(dst)) => { let (src, mut dst) = slice_src_and_dst(src, src_l, dst, dst_offset); if src_l.is_contiguous() { dev.dtod_copy(&src, &mut dst).w()? } else { let func = dev.get_or_load_func("ucopy_i64", kernels::UNARY)?; // SAFETY: Set later by running the kernel. let params = (el_count, dims.len(), &ds, &src, &mut dst); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()? } } (CudaStorageSlice::F64(src), CudaStorageSlice::F64(dst)) => { let (src, mut dst) = slice_src_and_dst(src, src_l, dst, dst_offset); if src_l.is_contiguous() { dev.dtod_copy(&src, &mut dst).w()? } else { let func = dev.get_or_load_func("ucopy_f64", kernels::UNARY)?; // SAFETY: Set later by running the kernel. let params = (el_count, dims.len(), &ds, &src, &mut dst); // SAFETY: ffi. unsafe { func.launch(cfg, params) }.w()?; } } _ => Err(CudaError::InternalError( "dtype mismatch in copy_strided op", ))?, } Ok(()) } } // Default for the reduced precision setting is false, similar to pytorch. // https://github.com/pytorch/pytorch/issues/123157 static MM_F16_REDUCED_PRECISION: std::sync::atomic::AtomicBool = std::sync::atomic::AtomicBool::new(false); static MM_BF16_REDUCED_PRECISION: std::sync::atomic::AtomicBool = std::sync::atomic::AtomicBool::new(false); static MM_F32_REDUCED_PRECISION: std::sync::atomic::AtomicBool = std::sync::atomic::AtomicBool::new(false); /// This bool controls whether reduced precision reductions (e.g., with tf32 accumulation type) are /// allowed with f32 GEMMs. pub fn gemm_reduced_precision_f32() -> bool { MM_F32_REDUCED_PRECISION.load(std::sync::atomic::Ordering::Relaxed) } /// This bool controls whether reduced precision reductions (e.g., with tf32 accumulation type) are /// allowed with f32 GEMMs. pub fn set_gemm_reduced_precision_f32(b: bool) { MM_F32_REDUCED_PRECISION.store(b, std::sync::atomic::Ordering::Relaxed) } /// This bool controls whether reduced precision reductions (e.g., with fp16 accumulation type) are /// allowed with f16 GEMMs. pub fn gemm_reduced_precision_f16() -> bool { MM_F16_REDUCED_PRECISION.load(std::sync::atomic::Ordering::Relaxed) } /// This bool controls whether reduced precision reductions (e.g., with fp16 accumulation type) are /// allowed with f16 GEMMs. pub fn set_gemm_reduced_precision_f16(b: bool) { MM_F16_REDUCED_PRECISION.store(b, std::sync::atomic::Ordering::Relaxed) } /// This bool controls whether reduced precision reductions (e.g., with fp16 accumulation type) are /// allowed with bf16 GEMMs. pub fn gemm_reduced_precision_bf16() -> bool { MM_BF16_REDUCED_PRECISION.load(std::sync::atomic::Ordering::Relaxed) } /// This bool controls whether reduced precision reductions (e.g., with fp16 accumulation type) are /// allowed with bf16 GEMMs. 
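///
/// Minimal usage sketch (the getter/setter pair is defined in this module):
///
/// ```ignore
/// set_gemm_reduced_precision_bf16(true);
/// assert!(gemm_reduced_precision_bf16());
/// ```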
pub fn set_gemm_reduced_precision_bf16(b: bool) { MM_BF16_REDUCED_PRECISION.store(b, std::sync::atomic::Ordering::Relaxed) } unsafe fn gemm_strided_batched_f32( cublas: &cudarc::cublas::CudaBlas, cfg: StridedBatchedConfig<f32>, a: &cudarc::driver::CudaView<f32>, b: &cudarc::driver::CudaView<f32>, c: &mut CudaSlice<f32>, ) -> std::result::Result<(), cudarc::cublas::result::CublasError> { use cudarc::cublas::sys; use cudarc::driver::DevicePtrMut; let compute_type = if gemm_reduced_precision_f32() { sys::cublasComputeType_t::CUBLAS_COMPUTE_32F_FAST_TF32 } else { sys::cublasComputeType_t::CUBLAS_COMPUTE_32F }; let alpha = &cfg.gemm.alpha as *const f32 as *const _; let beta = &cfg.gemm.beta as *const f32 as *const _; cudarc::cublas::result::gemm_strided_batched_ex( *cublas.handle(), cfg.gemm.transa, cfg.gemm.transb, cfg.gemm.m, cfg.gemm.n, cfg.gemm.k, alpha, *a.device_ptr() as *const _, sys::cudaDataType_t::CUDA_R_32F, cfg.gemm.lda, cfg.stride_a, *b.device_ptr() as *const _, sys::cudaDataType_t::CUDA_R_32F, cfg.gemm.ldb, cfg.stride_b, beta, *c.device_ptr_mut() as *mut _, sys::cudaDataType_t::CUDA_R_32F, cfg.gemm.ldc, cfg.stride_c, cfg.batch_size, compute_type, sys::cublasGemmAlgo_t::CUBLAS_GEMM_DEFAULT_TENSOR_OP, ) } unsafe fn gemm_strided_batched_f16( cublas: &cudarc::cublas::CudaBlas, cfg: StridedBatchedConfig<f16>, a: &cudarc::driver::CudaView<f16>, b: &cudarc::driver::CudaView<f16>, c: &mut CudaSlice<f16>, ) -> std::result::Result<(), cudarc::cublas::result::CublasError> { use cudarc::cublas::sys; use cudarc::driver::DevicePtrMut; let alpha = cfg.gemm.alpha; let beta = cfg.gemm.beta; let alpha_f32: f32 = cfg.gemm.alpha.to_f32(); let beta_f32: f32 = cfg.gemm.beta.to_f32(); let (compute_type, alpha, beta) = if gemm_reduced_precision_f16() { ( sys::cublasComputeType_t::CUBLAS_COMPUTE_16F, (&alpha) as *const f16 as *const _, (&beta) as *const f16 as *const _, ) } else { ( sys::cublasComputeType_t::CUBLAS_COMPUTE_32F, (&alpha_f32) as *const f32 as *const _, (&beta_f32) as *const f32 as *const _, ) }; cudarc::cublas::result::gemm_strided_batched_ex( *cublas.handle(), cfg.gemm.transa, cfg.gemm.transb, cfg.gemm.m, cfg.gemm.n, cfg.gemm.k, alpha, *a.device_ptr() as *const _, sys::cudaDataType_t::CUDA_R_16F, cfg.gemm.lda, cfg.stride_a, *b.device_ptr() as *const _, sys::cudaDataType_t::CUDA_R_16F, cfg.gemm.ldb, cfg.stride_b, beta, *c.device_ptr_mut() as *mut _, sys::cudaDataType_t::CUDA_R_16F, cfg.gemm.ldc, cfg.stride_c, cfg.batch_size, compute_type, sys::cublasGemmAlgo_t::CUBLAS_GEMM_DEFAULT_TENSOR_OP, ) } unsafe fn gemm_strided_batched_bf16( cublas: &cudarc::cublas::CudaBlas, cfg: StridedBatchedConfig<bf16>, a: &cudarc::driver::CudaView<bf16>, b: &cudarc::driver::CudaView<bf16>, c: &mut CudaSlice<bf16>, ) -> std::result::Result<(), cudarc::cublas::result::CublasError> { use cudarc::cublas::sys; use cudarc::driver::DevicePtrMut; let alpha_f32: f32 = cfg.gemm.alpha.to_f32(); let beta_f32: f32 = cfg.gemm.beta.to_f32(); // The type for alpha and beta depends on the computeType. 
// https://docs.nvidia.com/cuda/cublas/index.html#cublasgemmstridedbatchedex let (compute_type, alpha, beta) = if gemm_reduced_precision_bf16() { ( sys::cublasComputeType_t::CUBLAS_COMPUTE_32F_FAST_16BF, (&alpha_f32) as *const f32 as *const _, (&beta_f32) as *const f32 as *const _, ) } else { ( sys::cublasComputeType_t::CUBLAS_COMPUTE_32F, (&alpha_f32) as *const f32 as *const _, (&beta_f32) as *const f32 as *const _, ) }; cudarc::cublas::result::gemm_strided_batched_ex( *cublas.handle(), cfg.gemm.transa, cfg.gemm.transb, cfg.gemm.m, cfg.gemm.n, cfg.gemm.k, alpha, *a.device_ptr() as *const _, sys::cudaDataType_t::CUDA_R_16BF, cfg.gemm.lda, cfg.stride_a, *b.device_ptr() as *const _, sys::cudaDataType_t::CUDA_R_16BF, cfg.gemm.ldb, cfg.stride_b, beta, *c.device_ptr_mut() as *mut _, sys::cudaDataType_t::CUDA_R_16BF, cfg.gemm.ldc, cfg.stride_c, cfg.batch_size, compute_type, sys::cublasGemmAlgo_t::CUBLAS_GEMM_DEFAULT_TENSOR_OP, ) }
candle/candle-core/src/cuda_backend/mod.rs/0
{ "file_path": "candle/candle-core/src/cuda_backend/mod.rs", "repo_id": "candle", "token_count": 41866 }
//! Tensor Opertion Enums and Traits //! #![allow(clippy::redundant_closure_call)] use crate::Tensor; use half::{bf16, f16}; use num_traits::float::Float; #[derive(Clone, Copy, PartialEq, Eq)] pub enum CmpOp { Eq, Ne, Le, Ge, Lt, Gt, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum ReduceOp { Sum, Min, Max, ArgMin, ArgMax, } impl ReduceOp { pub(crate) fn name(&self) -> &'static str { match self { Self::ArgMax => "argmax", Self::ArgMin => "argmin", Self::Min => "min", Self::Max => "max", Self::Sum => "sum", } } } // These ops return the same type as their input type. #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum BinaryOp { Add, Mul, Sub, Div, Maximum, Minimum, } // Unary ops with no argument #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum UnaryOp { Exp, Log, Sin, Cos, Abs, Neg, Recip, Sqr, Sqrt, Gelu, GeluErf, Erf, Relu, Silu, Tanh, Floor, Ceil, Round, Sign, } #[derive(Clone)] pub enum Op { Binary(Tensor, Tensor, BinaryOp), Unary(Tensor, UnaryOp), Cmp(Tensor, CmpOp), // The third argument is the reduced shape with `keepdim=true`. Reduce(Tensor, ReduceOp, Vec<usize>), Matmul(Tensor, Tensor), Gather(Tensor, Tensor, usize), ScatterAdd(Tensor, Tensor, Tensor, usize), IndexSelect(Tensor, Tensor, usize), IndexAdd(Tensor, Tensor, Tensor, usize), WhereCond(Tensor, Tensor, Tensor), #[allow(dead_code)] Conv1D { arg: Tensor, kernel: Tensor, padding: usize, stride: usize, dilation: usize, }, #[allow(dead_code)] ConvTranspose1D { arg: Tensor, kernel: Tensor, padding: usize, output_padding: usize, stride: usize, dilation: usize, }, #[allow(dead_code)] Conv2D { arg: Tensor, kernel: Tensor, padding: usize, stride: usize, dilation: usize, }, #[allow(dead_code)] ConvTranspose2D { arg: Tensor, kernel: Tensor, padding: usize, output_padding: usize, stride: usize, dilation: usize, }, AvgPool2D { arg: Tensor, kernel_size: (usize, usize), stride: (usize, usize), }, MaxPool2D { arg: Tensor, kernel_size: (usize, usize), stride: (usize, usize), }, UpsampleNearest1D { arg: Tensor, target_size: usize, }, UpsampleNearest2D { arg: Tensor, target_h: usize, target_w: usize, }, Cat(Vec<Tensor>, usize), #[allow(dead_code)] // add is currently unused. Affine { arg: Tensor, mul: f64, add: f64, }, ToDType(Tensor), Copy(Tensor), Broadcast(Tensor), Narrow(Tensor, usize, usize, usize), SliceScatter0(Tensor, Tensor, usize), Reshape(Tensor), ToDevice(Tensor), Transpose(Tensor, usize, usize), Permute(Tensor, Vec<usize>), Elu(Tensor, f64), Powf(Tensor, f64), CustomOp1( Tensor, std::sync::Arc<Box<dyn crate::CustomOp1 + Send + Sync>>, ), CustomOp2( Tensor, Tensor, std::sync::Arc<Box<dyn crate::CustomOp2 + Send + Sync>>, ), CustomOp3( Tensor, Tensor, Tensor, std::sync::Arc<Box<dyn crate::CustomOp3 + Send + Sync>>, ), } pub trait UnaryOpT { const NAME: &'static str; const KERNEL: &'static str; const V: Self; fn bf16(v1: bf16) -> bf16; fn f16(v1: f16) -> f16; fn f32(v1: f32) -> f32; fn f64(v1: f64) -> f64; fn u8(v1: u8) -> u8; fn u32(v1: u32) -> u32; fn i64(v1: i64) -> i64; // There is no very good way to represent optional function in traits so we go for an explicit // boolean flag to mark the function as existing. 
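// Callers are expected to check the corresponding `*_VEC` flag before relying on a
// `*_vec` default below; implementations that provide a vectorized kernel (e.g. the
// mkl/accelerate paths further down) override both the flag and the function.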
const BF16_VEC: bool = false; fn bf16_vec(_xs: &[bf16], _ys: &mut [bf16]) {} const F16_VEC: bool = false; fn f16_vec(_xs: &[f16], _ys: &mut [f16]) {} const F32_VEC: bool = false; fn f32_vec(_xs: &[f32], _ys: &mut [f32]) {} const F64_VEC: bool = false; fn f64_vec(_xs: &[f64], _ys: &mut [f64]) {} } pub trait BinaryOpT { const NAME: &'static str; const KERNEL: &'static str; const V: Self; fn bf16(v1: bf16, v2: bf16) -> bf16; fn f16(v1: f16, v2: f16) -> f16; fn f32(v1: f32, v2: f32) -> f32; fn f64(v1: f64, v2: f64) -> f64; fn u8(v1: u8, v2: u8) -> u8; fn u32(v1: u32, v2: u32) -> u32; fn i64(v1: i64, v2: i64) -> i64; const BF16_VEC: bool = false; fn bf16_vec(_xs1: &[bf16], _xs2: &[bf16], _ys: &mut [bf16]) {} const F16_VEC: bool = false; fn f16_vec(_xs1: &[f16], _xs2: &[f16], _ys: &mut [f16]) {} const F32_VEC: bool = false; fn f32_vec(_xs1: &[f32], _xs2: &[f32], _ys: &mut [f32]) {} const F64_VEC: bool = false; fn f64_vec(_xs1: &[f64], _xs2: &[f64], _ys: &mut [f64]) {} const U8_VEC: bool = false; fn u8_vec(_xs1: &[u8], _xs2: &[u8], _ys: &mut [u8]) {} const U32_VEC: bool = false; fn u32_vec(_xs1: &[u32], _xs2: &[u32], _ys: &mut [u32]) {} const I64_VEC: bool = false; fn i64_vec(_xs1: &[i64], _xs2: &[i64], _ys: &mut [i64]) {} } pub(crate) struct Add; pub(crate) struct Div; pub(crate) struct Mul; pub(crate) struct Sub; pub(crate) struct Maximum; pub(crate) struct Minimum; pub(crate) struct Exp; pub(crate) struct Log; pub(crate) struct Sin; pub(crate) struct Cos; pub(crate) struct Abs; pub(crate) struct Neg; pub(crate) struct Recip; pub(crate) struct Sqr; pub(crate) struct Sqrt; pub(crate) struct Gelu; pub(crate) struct GeluErf; pub(crate) struct Erf; pub(crate) struct Relu; pub(crate) struct Silu; pub(crate) struct Tanh; pub(crate) struct Floor; pub(crate) struct Ceil; pub(crate) struct Round; pub(crate) struct Sign; macro_rules! 
bin_op { ($op:ident, $name: literal, $e: expr, $f32_vec: ident, $f64_vec: ident) => { impl BinaryOpT for $op { const NAME: &'static str = $name; const KERNEL: &'static str = concat!("b", $name); const V: Self = $op; #[inline(always)] fn bf16(v1: bf16, v2: bf16) -> bf16 { $e(v1, v2) } #[inline(always)] fn f16(v1: f16, v2: f16) -> f16 { $e(v1, v2) } #[inline(always)] fn f32(v1: f32, v2: f32) -> f32 { $e(v1, v2) } #[inline(always)] fn f64(v1: f64, v2: f64) -> f64 { $e(v1, v2) } #[inline(always)] fn u8(v1: u8, v2: u8) -> u8 { $e(v1, v2) } #[inline(always)] fn u32(v1: u32, v2: u32) -> u32 { $e(v1, v2) } #[inline(always)] fn i64(v1: i64, v2: i64) -> i64 { $e(v1, v2) } #[cfg(feature = "mkl")] const F32_VEC: bool = true; #[cfg(feature = "mkl")] const F64_VEC: bool = true; #[cfg(feature = "mkl")] #[inline(always)] fn f32_vec(xs1: &[f32], xs2: &[f32], ys: &mut [f32]) { crate::mkl::$f32_vec(xs1, xs2, ys) } #[cfg(feature = "mkl")] #[inline(always)] fn f64_vec(xs1: &[f64], xs2: &[f64], ys: &mut [f64]) { crate::mkl::$f64_vec(xs1, xs2, ys) } #[cfg(feature = "accelerate")] const F32_VEC: bool = true; #[cfg(feature = "accelerate")] const F64_VEC: bool = true; #[cfg(feature = "accelerate")] #[inline(always)] fn f32_vec(xs1: &[f32], xs2: &[f32], ys: &mut [f32]) { crate::accelerate::$f32_vec(xs1, xs2, ys) } #[cfg(feature = "accelerate")] #[inline(always)] fn f64_vec(xs1: &[f64], xs2: &[f64], ys: &mut [f64]) { crate::accelerate::$f64_vec(xs1, xs2, ys) } } }; } bin_op!(Add, "add", |v1, v2| v1 + v2, vs_add, vd_add); bin_op!(Sub, "sub", |v1, v2| v1 - v2, vs_sub, vd_sub); bin_op!(Mul, "mul", |v1, v2| v1 * v2, vs_mul, vd_mul); bin_op!(Div, "div", |v1, v2| v1 / v2, vs_div, vd_div); bin_op!( Minimum, "minimum", |v1, v2| if v1 > v2 { v2 } else { v1 }, vs_min, vd_min ); bin_op!( Maximum, "maximum", |v1, v2| if v1 < v2 { v2 } else { v1 }, vs_max, vd_max ); #[allow(clippy::redundant_closure_call)] macro_rules! 
unary_op { ($op: ident, $name: literal, $a: ident, $e: expr) => { impl UnaryOpT for $op { const NAME: &'static str = $name; const KERNEL: &'static str = concat!("u", $name); const V: Self = $op; #[inline(always)] fn bf16($a: bf16) -> bf16 { $e } #[inline(always)] fn f16($a: f16) -> f16 { $e } #[inline(always)] fn f32($a: f32) -> f32 { $e } #[inline(always)] fn f64($a: f64) -> f64 { $e } #[inline(always)] fn u8(_: u8) -> u8 { todo!("no unary function for u8") } #[inline(always)] fn u32(_: u32) -> u32 { todo!("no unary function for u32") } #[inline(always)] fn i64(_: i64) -> i64 { todo!("no unary function for i64") } } }; ($op: ident, $name: literal, $a: ident, $e: expr, $f32_vec:ident, $f64_vec:ident) => { impl UnaryOpT for $op { const NAME: &'static str = $name; const KERNEL: &'static str = concat!("u", $name); const V: Self = $op; #[inline(always)] fn bf16($a: bf16) -> bf16 { $e } #[inline(always)] fn f16($a: f16) -> f16 { $e } #[inline(always)] fn f32($a: f32) -> f32 { $e } #[inline(always)] fn f64($a: f64) -> f64 { $e } #[inline(always)] fn u8(_: u8) -> u8 { todo!("no unary function for u8") } #[inline(always)] fn u32(_: u32) -> u32 { todo!("no unary function for u32") } #[inline(always)] fn i64(_: i64) -> i64 { todo!("no unary function for i64") } #[cfg(feature = "mkl")] const F32_VEC: bool = true; #[cfg(feature = "mkl")] const F64_VEC: bool = true; #[cfg(feature = "mkl")] #[inline(always)] fn f32_vec(xs: &[f32], ys: &mut [f32]) { crate::mkl::$f32_vec(xs, ys) } #[cfg(feature = "mkl")] #[inline(always)] fn f64_vec(xs: &[f64], ys: &mut [f64]) { crate::mkl::$f64_vec(xs, ys) } #[cfg(feature = "accelerate")] const F32_VEC: bool = true; #[cfg(feature = "accelerate")] const F64_VEC: bool = true; #[cfg(feature = "accelerate")] #[inline(always)] fn f32_vec(xs: &[f32], ys: &mut [f32]) { crate::accelerate::$f32_vec(xs, ys) } #[cfg(feature = "accelerate")] #[inline(always)] fn f64_vec(xs: &[f64], ys: &mut [f64]) { crate::accelerate::$f64_vec(xs, ys) } } }; } unary_op!(Exp, "exp", v, v.exp(), vs_exp, vd_exp); unary_op!(Log, "log", v, v.ln(), vs_ln, vd_ln); unary_op!(Sin, "sin", v, v.sin(), vs_sin, vd_sin); unary_op!(Cos, "cos", v, v.cos(), vs_cos, vd_cos); unary_op!(Tanh, "tanh", v, v.tanh(), vs_tanh, vd_tanh); unary_op!(Neg, "neg", v, -v); unary_op!(Recip, "recip", v, v.recip()); unary_op!(Sqr, "sqr", v, v * v, vs_sqr, vd_sqr); unary_op!(Sqrt, "sqrt", v, v.sqrt(), vs_sqrt, vd_sqrt); // Hardcode the value for sqrt(2/pi) // https://github.com/huggingface/candle/issues/1982 #[allow(clippy::excessive_precision)] const SQRT_TWO_OVER_PI_F32: f32 = 0.79788456080286535587989211986876373; #[allow(clippy::excessive_precision)] const SQRT_TWO_OVER_PI_F64: f64 = 0.79788456080286535587989211986876373; /// Tanh based approximation of the `gelu` operation /// GeluErf is the more precise one. 
/// <https://en.wikipedia.org/wiki/Activation_function#Comparison_of_activation_functions> impl UnaryOpT for Gelu { const NAME: &'static str = "gelu"; const V: Self = Gelu; #[inline(always)] fn bf16(v: bf16) -> bf16 { bf16::from_f32_const(0.5) * v * (bf16::ONE + bf16::tanh( bf16::from_f32_const(SQRT_TWO_OVER_PI_F32) * v * (bf16::ONE + bf16::from_f32_const(0.044715) * v * v), )) } #[inline(always)] fn f16(v: f16) -> f16 { f16::from_f32_const(0.5) * v * (f16::ONE + f16::tanh( f16::from_f32_const(SQRT_TWO_OVER_PI_F32) * v * (f16::ONE + f16::from_f32_const(0.044715) * v * v), )) } #[inline(always)] fn f32(v: f32) -> f32 { 0.5 * v * (1.0 + f32::tanh(SQRT_TWO_OVER_PI_F32 * v * (1.0 + 0.044715 * v * v))) } #[inline(always)] fn f64(v: f64) -> f64 { 0.5 * v * (1.0 + f64::tanh(SQRT_TWO_OVER_PI_F64 * v * (1.0 + 0.044715 * v * v))) } #[inline(always)] fn u8(_: u8) -> u8 { 0 } #[inline(always)] fn u32(_: u32) -> u32 { 0 } #[inline(always)] fn i64(_: i64) -> i64 { 0 } const KERNEL: &'static str = "ugelu"; #[cfg(feature = "mkl")] const F32_VEC: bool = true; #[cfg(feature = "mkl")] #[inline(always)] fn f32_vec(xs: &[f32], ys: &mut [f32]) { crate::mkl::vs_gelu(xs, ys) } #[cfg(feature = "mkl")] const F64_VEC: bool = true; #[cfg(feature = "mkl")] #[inline(always)] fn f64_vec(xs: &[f64], ys: &mut [f64]) { crate::mkl::vd_gelu(xs, ys) } #[cfg(feature = "accelerate")] const F32_VEC: bool = true; #[cfg(feature = "accelerate")] #[inline(always)] fn f32_vec(xs: &[f32], ys: &mut [f32]) { crate::accelerate::vs_gelu(xs, ys) } #[cfg(feature = "accelerate")] const F64_VEC: bool = true; #[cfg(feature = "accelerate")] #[inline(always)] fn f64_vec(xs: &[f64], ys: &mut [f64]) { crate::accelerate::vd_gelu(xs, ys) } } /// `erf` operation /// <https://en.wikipedia.org/wiki/Error_function> impl UnaryOpT for Erf { const NAME: &'static str = "erf"; const KERNEL: &'static str = "uerf"; const V: Self = Erf; #[inline(always)] fn bf16(v: bf16) -> bf16 { bf16::from_f64(Self::f64(v.to_f64())) } #[inline(always)] fn f16(v: f16) -> f16 { f16::from_f64(Self::f64(v.to_f64())) } #[inline(always)] fn f32(v: f32) -> f32 { Self::f64(v as f64) as f32 } #[inline(always)] fn f64(v: f64) -> f64 { crate::cpu::erf::erf(v) } #[inline(always)] fn u8(_: u8) -> u8 { 0 } #[inline(always)] fn u32(_: u32) -> u32 { 0 } #[inline(always)] fn i64(_: i64) -> i64 { 0 } } /// Silu operation impl UnaryOpT for Silu { const NAME: &'static str = "silu"; const V: Self = Silu; #[inline(always)] fn bf16(v: bf16) -> bf16 { v / (bf16::ONE + (-v).exp()) } #[inline(always)] fn f16(v: f16) -> f16 { v / (f16::ONE + (-v).exp()) } #[inline(always)] fn f32(v: f32) -> f32 { v / (1.0 + (-v).exp()) } #[inline(always)] fn f64(v: f64) -> f64 { v / (1.0 + (-v).exp()) } #[inline(always)] fn u8(_: u8) -> u8 { 0 } #[inline(always)] fn u32(_: u32) -> u32 { 0 } #[inline(always)] fn i64(_: i64) -> i64 { 0 } const KERNEL: &'static str = "usilu"; #[cfg(feature = "mkl")] const F32_VEC: bool = true; #[cfg(feature = "mkl")] #[inline(always)] fn f32_vec(xs: &[f32], ys: &mut [f32]) { crate::mkl::vs_silu(xs, ys) } #[cfg(feature = "mkl")] const F64_VEC: bool = true; #[cfg(feature = "mkl")] #[inline(always)] fn f64_vec(xs: &[f64], ys: &mut [f64]) { crate::mkl::vd_silu(xs, ys) } #[cfg(feature = "accelerate")] const F32_VEC: bool = true; #[cfg(feature = "accelerate")] #[inline(always)] fn f32_vec(xs: &[f32], ys: &mut [f32]) { crate::accelerate::vs_silu(xs, ys) } #[cfg(feature = "accelerate")] const F64_VEC: bool = true; #[cfg(feature = "accelerate")] #[inline(always)] fn f64_vec(xs: &[f64], ys: &mut 
[f64]) { crate::accelerate::vd_silu(xs, ys) } } impl UnaryOpT for Abs { const NAME: &'static str = "abs"; const KERNEL: &'static str = "uabs"; const V: Self = Abs; #[inline(always)] fn bf16(v: bf16) -> bf16 { v.abs() } #[inline(always)] fn f16(v: f16) -> f16 { v.abs() } #[inline(always)] fn f32(v: f32) -> f32 { v.abs() } #[inline(always)] fn f64(v: f64) -> f64 { v.abs() } #[inline(always)] fn u8(v: u8) -> u8 { v } #[inline(always)] fn u32(v: u32) -> u32 { v } #[inline(always)] fn i64(v: i64) -> i64 { v.abs() } } impl UnaryOpT for Ceil { const NAME: &'static str = "ceil"; const KERNEL: &'static str = "uceil"; const V: Self = Ceil; #[inline(always)] fn bf16(v: bf16) -> bf16 { v.ceil() } #[inline(always)] fn f16(v: f16) -> f16 { v.ceil() } #[inline(always)] fn f32(v: f32) -> f32 { v.ceil() } #[inline(always)] fn f64(v: f64) -> f64 { v.ceil() } #[inline(always)] fn u8(v: u8) -> u8 { v } #[inline(always)] fn u32(v: u32) -> u32 { v } #[inline(always)] fn i64(v: i64) -> i64 { v } } impl UnaryOpT for Floor { const NAME: &'static str = "floor"; const KERNEL: &'static str = "ufloor"; const V: Self = Floor; #[inline(always)] fn bf16(v: bf16) -> bf16 { v.floor() } #[inline(always)] fn f16(v: f16) -> f16 { v.floor() } #[inline(always)] fn f32(v: f32) -> f32 { v.floor() } #[inline(always)] fn f64(v: f64) -> f64 { v.floor() } #[inline(always)] fn u8(v: u8) -> u8 { v } #[inline(always)] fn u32(v: u32) -> u32 { v } #[inline(always)] fn i64(v: i64) -> i64 { v } } impl UnaryOpT for Round { const NAME: &'static str = "round"; const KERNEL: &'static str = "uround"; const V: Self = Round; #[inline(always)] fn bf16(v: bf16) -> bf16 { v.round() } #[inline(always)] fn f16(v: f16) -> f16 { v.round() } #[inline(always)] fn f32(v: f32) -> f32 { v.round() } #[inline(always)] fn f64(v: f64) -> f64 { v.round() } #[inline(always)] fn u8(v: u8) -> u8 { v } #[inline(always)] fn u32(v: u32) -> u32 { v } #[inline(always)] fn i64(v: i64) -> i64 { v } } impl UnaryOpT for GeluErf { const NAME: &'static str = "gelu_erf"; const KERNEL: &'static str = "ugelu_erf"; const V: Self = GeluErf; #[inline(always)] fn bf16(v: bf16) -> bf16 { bf16::from_f64(Self::f64(v.to_f64())) } #[inline(always)] fn f16(v: f16) -> f16 { f16::from_f64(Self::f64(v.to_f64())) } #[inline(always)] fn f32(v: f32) -> f32 { Self::f64(v as f64) as f32 } #[inline(always)] fn f64(v: f64) -> f64 { (crate::cpu::erf::erf(v / 2f64.sqrt()) + 1.) * 0.5 * v } #[inline(always)] fn u8(_: u8) -> u8 { 0 } #[inline(always)] fn u32(_: u32) -> u32 { 0 } #[inline(always)] fn i64(_: i64) -> i64 { 0 } } impl UnaryOpT for Relu { const NAME: &'static str = "relu"; const KERNEL: &'static str = "urelu"; const V: Self = Relu; #[inline(always)] fn bf16(v: bf16) -> bf16 { v.max(bf16::ZERO) } #[inline(always)] fn f16(v: f16) -> f16 { v.max(f16::ZERO) } #[inline(always)] fn f32(v: f32) -> f32 { v.max(0f32) } #[inline(always)] fn f64(v: f64) -> f64 { v.max(0f64) } #[inline(always)] fn u8(v: u8) -> u8 { v } #[inline(always)] fn u32(v: u32) -> u32 { v } #[inline(always)] fn i64(v: i64) -> i64 { v } } /// `BackpropOp` is a wrapper around `Option<Op>`. 
The main goal is to ensure that dependencies are /// properly checked when creating a new value #[derive(Clone)] pub struct BackpropOp(Option<Op>); impl BackpropOp { pub(crate) fn none() -> Self { BackpropOp(None) } pub(crate) fn new1(arg: &Tensor, f: impl Fn(Tensor) -> Op) -> Self { let op = if arg.track_op() { Some(f(arg.clone())) } else { None }; Self(op) } pub(crate) fn new2(arg1: &Tensor, arg2: &Tensor, f: impl Fn(Tensor, Tensor) -> Op) -> Self { let op = if arg1.track_op() || arg2.track_op() { Some(f(arg1.clone(), arg2.clone())) } else { None }; Self(op) } pub(crate) fn new3( arg1: &Tensor, arg2: &Tensor, arg3: &Tensor, f: impl Fn(Tensor, Tensor, Tensor) -> Op, ) -> Self { let op = if arg1.track_op() || arg2.track_op() || arg3.track_op() { Some(f(arg1.clone(), arg2.clone(), arg3.clone())) } else { None }; Self(op) } pub(crate) fn new<A: AsRef<Tensor>>(args: &[A], f: impl Fn(Vec<Tensor>) -> Op) -> Self { let op = if args.iter().any(|arg| arg.as_ref().track_op()) { let args: Vec<Tensor> = args.iter().map(|arg| arg.as_ref().clone()).collect(); Some(f(args)) } else { None }; Self(op) } pub(crate) fn is_none(&self) -> bool { self.0.is_none() } } impl std::ops::Deref for BackpropOp { type Target = Option<Op>; fn deref(&self) -> &Self::Target { &self.0 } } impl UnaryOpT for Sign { const NAME: &'static str = "sign"; const KERNEL: &'static str = "usign"; const V: Self = Sign; #[inline(always)] fn bf16(v: bf16) -> bf16 { bf16::from((v > bf16::ZERO) as i8) - bf16::from((v < bf16::ZERO) as i8) } #[inline(always)] fn f16(v: f16) -> f16 { f16::from((v > f16::ZERO) as i8) - f16::from((v < f16::ZERO) as i8) } #[inline(always)] fn f32(v: f32) -> f32 { f32::from(v > 0.) - f32::from(v < 0.) } #[inline(always)] fn f64(v: f64) -> f64 { f64::from(v > 0.) - f64::from(v < 0.) } #[inline(always)] fn u8(v: u8) -> u8 { u8::min(1, v) } #[inline(always)] fn u32(v: u32) -> u32 { u32::min(1, v) } #[inline(always)] fn i64(v: i64) -> i64 { (v > 0) as i64 - (v < 0) as i64 } }
candle/candle-core/src/op.rs/0
{ "file_path": "candle/candle-core/src/op.rs", "repo_id": "candle", "token_count": 13513 }
//! The shape of a tensor is a tuple with the size of each of its dimensions. #![allow(clippy::redundant_closure_call)] use crate::{Error, Result}; #[derive(Clone, PartialEq, Eq)] pub struct Shape(Vec<usize>); pub const SCALAR: Shape = Shape(vec![]); impl std::fmt::Debug for Shape { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { write!(f, "{:?}", &self.dims()) } } impl<const C: usize> From<&[usize; C]> for Shape { fn from(dims: &[usize; C]) -> Self { Self(dims.to_vec()) } } impl From<&[usize]> for Shape { fn from(dims: &[usize]) -> Self { Self(dims.to_vec()) } } impl From<&Shape> for Shape { fn from(shape: &Shape) -> Self { Self(shape.0.to_vec()) } } impl From<()> for Shape { fn from(_: ()) -> Self { Self(vec![]) } } impl From<usize> for Shape { fn from(d1: usize) -> Self { Self(vec![d1]) } } impl From<(usize,)> for Shape { fn from(d1: (usize,)) -> Self { Self(vec![d1.0]) } } impl From<(usize, usize)> for Shape { fn from(d12: (usize, usize)) -> Self { Self(vec![d12.0, d12.1]) } } impl From<(usize, usize, usize)> for Shape { fn from(d123: (usize, usize, usize)) -> Self { Self(vec![d123.0, d123.1, d123.2]) } } impl From<(usize, usize, usize, usize)> for Shape { fn from(d1234: (usize, usize, usize, usize)) -> Self { Self(vec![d1234.0, d1234.1, d1234.2, d1234.3]) } } impl From<(usize, usize, usize, usize, usize)> for Shape { fn from(d12345: (usize, usize, usize, usize, usize)) -> Self { Self(vec![d12345.0, d12345.1, d12345.2, d12345.3, d12345.4]) } } impl From<(usize, usize, usize, usize, usize, usize)> for Shape { fn from(d123456: (usize, usize, usize, usize, usize, usize)) -> Self { Self(vec![ d123456.0, d123456.1, d123456.2, d123456.3, d123456.4, d123456.5, ]) } } impl From<Vec<usize>> for Shape { fn from(dims: Vec<usize>) -> Self { Self(dims) } } macro_rules! extract_dims { ($fn_name:ident, $cnt:tt, $dims:expr, $out_type:ty) => { pub fn $fn_name(dims: &[usize]) -> Result<$out_type> { if dims.len() != $cnt { Err(Error::UnexpectedNumberOfDims { expected: $cnt, got: dims.len(), shape: Shape::from(dims), } .bt()) } else { Ok($dims(dims)) } } impl Shape { pub fn $fn_name(&self) -> Result<$out_type> { $fn_name(self.0.as_slice()) } } impl crate::Tensor { pub fn $fn_name(&self) -> Result<$out_type> { self.shape().$fn_name() } } impl std::convert::TryInto<$out_type> for Shape { type Error = crate::Error; fn try_into(self) -> std::result::Result<$out_type, Self::Error> { self.$fn_name() } } }; } impl Shape { pub fn from_dims(dims: &[usize]) -> Self { Self(dims.to_vec()) } /// The rank is the number of dimensions, 0 for a scalar value, 1 for a vector, etc. pub fn rank(&self) -> usize { self.0.len() } pub fn into_dims(self) -> Vec<usize> { self.0 } /// The dimensions as a slice of `usize`. pub fn dims(&self) -> &[usize] { &self.0 } /// The dimension size for a specified dimension index. pub fn dim<D: Dim>(&self, dim: D) -> Result<usize> { let dim = dim.to_index(self, "dim")?; Ok(self.dims()[dim]) } /// The total number of elements, this is the product of all dimension sizes. pub fn elem_count(&self) -> usize { self.0.iter().product() } /// The strides given in number of elements for a contiguous n-dimensional /// arrays using this shape. pub(crate) fn stride_contiguous(&self) -> Vec<usize> { let mut stride: Vec<_> = self .0 .iter() .rev() .scan(1, |prod, u| { let prod_pre_mult = *prod; *prod *= u; Some(prod_pre_mult) }) .collect(); stride.reverse(); stride } /// Returns true if the strides are C contiguous (aka row major). 
pub fn is_contiguous(&self, stride: &[usize]) -> bool { if self.0.len() != stride.len() { return false; } let mut acc = 1; for (&stride, &dim) in stride.iter().zip(self.0.iter()).rev() { if dim > 1 && stride != acc { return false; } acc *= dim; } true } /// Returns true if the strides are Fortran contiguous (aka column major). pub fn is_fortran_contiguous(&self, stride: &[usize]) -> bool { if self.0.len() != stride.len() { return false; } let mut acc = 1; for (&stride, &dim) in stride.iter().zip(self.0.iter()) { if dim > 1 && stride != acc { return false; } acc *= dim; } true } /// Modifies the shape by adding a list of additional dimensions at the end of the existing /// dimensions. pub fn extend(mut self, additional_dims: &[usize]) -> Self { self.0.extend(additional_dims); self } /// Check whether the two shapes are compatible for broadcast, and if it is the case return the /// broadcasted shape. This is to be used for binary pointwise ops. pub fn broadcast_shape_binary_op(&self, rhs: &Self, op: &'static str) -> Result<Shape> { let lhs = self; let lhs_dims = lhs.dims(); let rhs_dims = rhs.dims(); let lhs_ndims = lhs_dims.len(); let rhs_ndims = rhs_dims.len(); let bcast_ndims = usize::max(lhs_ndims, rhs_ndims); let mut bcast_dims = vec![0; bcast_ndims]; for (idx, bcast_value) in bcast_dims.iter_mut().enumerate() { let rev_idx = bcast_ndims - idx; let l_value = if lhs_ndims < rev_idx { 1 } else { lhs_dims[lhs_ndims - rev_idx] }; let r_value = if rhs_ndims < rev_idx { 1 } else { rhs_dims[rhs_ndims - rev_idx] }; *bcast_value = if l_value == r_value { l_value } else if l_value == 1 { r_value } else if r_value == 1 { l_value } else { Err(Error::ShapeMismatchBinaryOp { lhs: lhs.clone(), rhs: rhs.clone(), op, } .bt())? } } Ok(Shape::from(bcast_dims)) } pub(crate) fn broadcast_shape_matmul(&self, rhs: &Self) -> Result<(Shape, Shape)> { let lhs = self; let lhs_dims = lhs.dims(); let rhs_dims = rhs.dims(); if lhs_dims.len() < 2 || rhs_dims.len() < 2 { crate::bail!("only 2d matrixes are supported {lhs:?} {rhs:?}") } let (m, lhs_k) = (lhs_dims[lhs_dims.len() - 2], lhs_dims[lhs_dims.len() - 1]); let (rhs_k, n) = (rhs_dims[rhs_dims.len() - 2], rhs_dims[rhs_dims.len() - 1]); if lhs_k != rhs_k { crate::bail!("different inner dimensions in broadcast matmul {lhs:?} {rhs:?}") } let lhs_b = Self::from(&lhs_dims[..lhs_dims.len() - 2]); let rhs_b = Self::from(&rhs_dims[..rhs_dims.len() - 2]); let bcast = lhs_b.broadcast_shape_binary_op(&rhs_b, "broadcast_matmul")?; let bcast_dims = bcast.dims(); let bcast_lhs = [bcast_dims, &[m, lhs_k]].concat(); let bcast_rhs = [bcast_dims, &[rhs_k, n]].concat(); Ok((Shape::from(bcast_lhs), Shape::from(bcast_rhs))) } } pub trait Dim { fn to_index(&self, shape: &Shape, op: &'static str) -> Result<usize>; fn to_index_plus_one(&self, shape: &Shape, op: &'static str) -> Result<usize>; } impl Dim for usize { fn to_index(&self, shape: &Shape, op: &'static str) -> Result<usize> { let dim = *self; if dim >= shape.dims().len() { Err(Error::DimOutOfRange { shape: shape.clone(), dim: dim as i32, op, } .bt())? } else { Ok(dim) } } fn to_index_plus_one(&self, shape: &Shape, op: &'static str) -> Result<usize> { let dim = *self; if dim > shape.dims().len() { Err(Error::DimOutOfRange { shape: shape.clone(), dim: dim as i32, op, } .bt())? 
} else { Ok(dim) } } } #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)] pub enum D { Minus1, Minus2, Minus(usize), } impl D { fn out_of_range(&self, shape: &Shape, op: &'static str) -> Error { let dim = match self { Self::Minus1 => -1, Self::Minus2 => -2, Self::Minus(u) => -(*u as i32), }; Error::DimOutOfRange { shape: shape.clone(), dim, op, } .bt() } } impl Dim for D { fn to_index(&self, shape: &Shape, op: &'static str) -> Result<usize> { let rank = shape.rank(); match self { Self::Minus1 if rank >= 1 => Ok(rank - 1), Self::Minus2 if rank >= 2 => Ok(rank - 2), Self::Minus(u) if *u > 0 && rank >= *u => Ok(rank - *u), _ => Err(self.out_of_range(shape, op)), } } fn to_index_plus_one(&self, shape: &Shape, op: &'static str) -> Result<usize> { let rank = shape.rank(); match self { Self::Minus1 => Ok(rank), Self::Minus2 if rank >= 1 => Ok(rank - 1), Self::Minus(u) if *u > 0 && rank + 1 >= *u => Ok(rank + 1 - *u), _ => Err(self.out_of_range(shape, op)), } } } pub trait Dims: Sized { fn to_indexes_internal(self, shape: &Shape, op: &'static str) -> Result<Vec<usize>>; fn to_indexes(self, shape: &Shape, op: &'static str) -> Result<Vec<usize>> { let dims = self.to_indexes_internal(shape, op)?; for (i, &dim) in dims.iter().enumerate() { if dims[..i].contains(&dim) { Err(Error::DuplicateDimIndex { shape: shape.clone(), dims: dims.clone(), op, } .bt())? } if dim >= shape.rank() { Err(Error::DimOutOfRange { shape: shape.clone(), dim: dim as i32, op, } .bt())? } } Ok(dims) } } impl Dims for Vec<usize> { fn to_indexes_internal(self, _: &Shape, _: &'static str) -> Result<Vec<usize>> { Ok(self) } } impl<const N: usize> Dims for [usize; N] { fn to_indexes_internal(self, _: &Shape, _: &'static str) -> Result<Vec<usize>> { Ok(self.to_vec()) } } impl Dims for &[usize] { fn to_indexes_internal(self, _: &Shape, _: &'static str) -> Result<Vec<usize>> { Ok(self.to_vec()) } } impl Dims for () { fn to_indexes_internal(self, _: &Shape, _: &'static str) -> Result<Vec<usize>> { Ok(vec![]) } } impl<D: Dim + Sized> Dims for D { fn to_indexes_internal(self, shape: &Shape, op: &'static str) -> Result<Vec<usize>> { let dim = self.to_index(shape, op)?; Ok(vec![dim]) } } impl<D: Dim> Dims for (D,) { fn to_indexes_internal(self, shape: &Shape, op: &'static str) -> Result<Vec<usize>> { let dim = self.0.to_index(shape, op)?; Ok(vec![dim]) } } impl<D1: Dim, D2: Dim> Dims for (D1, D2) { fn to_indexes_internal(self, shape: &Shape, op: &'static str) -> Result<Vec<usize>> { let d0 = self.0.to_index(shape, op)?; let d1 = self.1.to_index(shape, op)?; Ok(vec![d0, d1]) } } impl<D1: Dim, D2: Dim, D3: Dim> Dims for (D1, D2, D3) { fn to_indexes_internal(self, shape: &Shape, op: &'static str) -> Result<Vec<usize>> { let d0 = self.0.to_index(shape, op)?; let d1 = self.1.to_index(shape, op)?; let d2 = self.2.to_index(shape, op)?; Ok(vec![d0, d1, d2]) } } impl<D1: Dim, D2: Dim, D3: Dim, D4: Dim> Dims for (D1, D2, D3, D4) { fn to_indexes_internal(self, shape: &Shape, op: &'static str) -> Result<Vec<usize>> { let d0 = self.0.to_index(shape, op)?; let d1 = self.1.to_index(shape, op)?; let d2 = self.2.to_index(shape, op)?; let d3 = self.3.to_index(shape, op)?; Ok(vec![d0, d1, d2, d3]) } } impl<D1: Dim, D2: Dim, D3: Dim, D4: Dim, D5: Dim> Dims for (D1, D2, D3, D4, D5) { fn to_indexes_internal(self, shape: &Shape, op: &'static str) -> Result<Vec<usize>> { let d0 = self.0.to_index(shape, op)?; let d1 = self.1.to_index(shape, op)?; let d2 = self.2.to_index(shape, op)?; let d3 = self.3.to_index(shape, op)?; let d4 = self.4.to_index(shape, op)?; 
Ok(vec![d0, d1, d2, d3, d4]) } } impl<D1: Dim, D2: Dim, D3: Dim, D4: Dim, D5: Dim, D6: Dim> Dims for (D1, D2, D3, D4, D5, D6) { fn to_indexes_internal(self, shape: &Shape, op: &'static str) -> Result<Vec<usize>> { let d0 = self.0.to_index(shape, op)?; let d1 = self.1.to_index(shape, op)?; let d2 = self.2.to_index(shape, op)?; let d3 = self.3.to_index(shape, op)?; let d4 = self.4.to_index(shape, op)?; let d5 = self.5.to_index(shape, op)?; Ok(vec![d0, d1, d2, d3, d4, d5]) } } extract_dims!(dims0, 0, |_: &[usize]| (), ()); extract_dims!(dims1, 1, |d: &[usize]| d[0], usize); extract_dims!(dims2, 2, |d: &[usize]| (d[0], d[1]), (usize, usize)); extract_dims!( dims3, 3, |d: &[usize]| (d[0], d[1], d[2]), (usize, usize, usize) ); extract_dims!( dims4, 4, |d: &[usize]| (d[0], d[1], d[2], d[3]), (usize, usize, usize, usize) ); extract_dims!( dims5, 5, |d: &[usize]| (d[0], d[1], d[2], d[3], d[4]), (usize, usize, usize, usize, usize) ); pub trait ShapeWithOneHole { fn into_shape(self, el_count: usize) -> Result<Shape>; } impl<S: Into<Shape>> ShapeWithOneHole for S { fn into_shape(self, _el_count: usize) -> Result<Shape> { Ok(self.into()) } } impl ShapeWithOneHole for ((),) { fn into_shape(self, el_count: usize) -> Result<Shape> { Ok(el_count.into()) } } fn hole_size(el_count: usize, prod_d: usize, s: &dyn std::fmt::Debug) -> Result<usize> { if prod_d == 0 { crate::bail!("cannot reshape tensor of {el_count} elements to {s:?}") } if el_count % prod_d != 0 { crate::bail!("cannot reshape tensor with {el_count} elements to {s:?}") } Ok(el_count / prod_d) } impl ShapeWithOneHole for ((), usize) { fn into_shape(self, el_count: usize) -> Result<Shape> { let ((), d1) = self; Ok((hole_size(el_count, d1, &self)?, d1).into()) } } impl ShapeWithOneHole for (usize, ()) { fn into_shape(self, el_count: usize) -> Result<Shape> { let (d1, ()) = self; Ok((d1, hole_size(el_count, d1, &self)?).into()) } } impl ShapeWithOneHole for ((), usize, usize) { fn into_shape(self, el_count: usize) -> Result<Shape> { let ((), d1, d2) = self; Ok((hole_size(el_count, d1 * d2, &self)?, d1, d2).into()) } } impl ShapeWithOneHole for (usize, (), usize) { fn into_shape(self, el_count: usize) -> Result<Shape> { let (d1, (), d2) = self; Ok((d1, hole_size(el_count, d1 * d2, &self)?, d2).into()) } } impl ShapeWithOneHole for (usize, usize, ()) { fn into_shape(self, el_count: usize) -> Result<Shape> { let (d1, d2, ()) = self; Ok((d1, d2, hole_size(el_count, d1 * d2, &self)?).into()) } } impl ShapeWithOneHole for ((), usize, usize, usize) { fn into_shape(self, el_count: usize) -> Result<Shape> { let ((), d1, d2, d3) = self; let d = hole_size(el_count, d1 * d2 * d3, &self)?; Ok((d, d1, d2, d3).into()) } } impl ShapeWithOneHole for (usize, (), usize, usize) { fn into_shape(self, el_count: usize) -> Result<Shape> { let (d1, (), d2, d3) = self; let d = hole_size(el_count, d1 * d2 * d3, &self)?; Ok((d1, d, d2, d3).into()) } } impl ShapeWithOneHole for (usize, usize, (), usize) { fn into_shape(self, el_count: usize) -> Result<Shape> { let (d1, d2, (), d3) = self; let d = hole_size(el_count, d1 * d2 * d3, &self)?; Ok((d1, d2, d, d3).into()) } } impl ShapeWithOneHole for (usize, usize, usize, ()) { fn into_shape(self, el_count: usize) -> Result<Shape> { let (d1, d2, d3, ()) = self; let d = hole_size(el_count, d1 * d2 * d3, &self)?; Ok((d1, d2, d3, d).into()) } } impl ShapeWithOneHole for ((), usize, usize, usize, usize) { fn into_shape(self, el_count: usize) -> Result<Shape> { let ((), d1, d2, d3, d4) = self; let d = hole_size(el_count, d1 * d2 * d3 * 
d4, &self)?; Ok((d, d1, d2, d3, d4).into()) } } impl ShapeWithOneHole for (usize, (), usize, usize, usize) { fn into_shape(self, el_count: usize) -> Result<Shape> { let (d1, (), d2, d3, d4) = self; let d = hole_size(el_count, d1 * d2 * d3 * d4, &self)?; Ok((d1, d, d2, d3, d4).into()) } } impl ShapeWithOneHole for (usize, usize, (), usize, usize) { fn into_shape(self, el_count: usize) -> Result<Shape> { let (d1, d2, (), d3, d4) = self; let d = hole_size(el_count, d1 * d2 * d3 * d4, &self)?; Ok((d1, d2, d, d3, d4).into()) } } impl ShapeWithOneHole for (usize, usize, usize, (), usize) { fn into_shape(self, el_count: usize) -> Result<Shape> { let (d1, d2, d3, (), d4) = self; let d = hole_size(el_count, d1 * d2 * d3 * d4, &self)?; Ok((d1, d2, d3, d, d4).into()) } } impl ShapeWithOneHole for (usize, usize, usize, usize, ()) { fn into_shape(self, el_count: usize) -> Result<Shape> { let (d1, d2, d3, d4, ()) = self; let d = hole_size(el_count, d1 * d2 * d3 * d4, &self)?; Ok((d1, d2, d3, d4, d).into()) } } #[cfg(test)] mod tests { use super::*; #[test] fn stride() { let shape = Shape::from(()); assert_eq!(shape.stride_contiguous(), Vec::<usize>::new()); let shape = Shape::from(42); assert_eq!(shape.stride_contiguous(), [1]); let shape = Shape::from((42, 1337)); assert_eq!(shape.stride_contiguous(), [1337, 1]); let shape = Shape::from((299, 792, 458)); assert_eq!(shape.stride_contiguous(), [458 * 792, 458, 1]); } }
candle/candle-core/src/shape.rs/0
{ "file_path": "candle/candle-core/src/shape.rs", "repo_id": "candle", "token_count": 10016 }
use candle::{test_device, Device, IndexOp, Result, Tensor}; use candle_core as candle; fn contiguous(device: &Device) -> Result<()> { let tensor = Tensor::arange(0u32, 24u32, device)?.reshape((2, 3, 4))?; assert_eq!( tensor.to_vec3::<u32>()?, &[ [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]] ] ); assert_eq!( tensor.t()?.contiguous()?.to_vec3::<u32>()?, &[ [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]], [[12, 16, 20], [13, 17, 21], [14, 18, 22], [15, 19, 23]] ] ); assert_eq!( tensor.transpose(0, 1)?.contiguous()?.to_vec3::<u32>()?, &[ [[0, 1, 2, 3], [12, 13, 14, 15]], [[4, 5, 6, 7], [16, 17, 18, 19]], [[8, 9, 10, 11], [20, 21, 22, 23]] ] ); assert_eq!( tensor.transpose(0, 1)?.flatten_all()?.to_vec1::<u32>()?, &[0, 1, 2, 3, 12, 13, 14, 15, 4, 5, 6, 7, 16, 17, 18, 19, 8, 9, 10, 11, 20, 21, 22, 23] ); assert_eq!( tensor .i(1..)? .transpose(0, 1)? .contiguous()? .to_vec3::<u32>()?, &[[[12, 13, 14, 15]], [[16, 17, 18, 19]], [[20, 21, 22, 23]]] ); assert_eq!( tensor.transpose(0, 2)?.contiguous()?.to_vec3::<u32>()?, &[ [[0, 12], [4, 16], [8, 20]], [[1, 13], [5, 17], [9, 21]], [[2, 14], [6, 18], [10, 22]], [[3, 15], [7, 19], [11, 23]] ] ); Ok(()) } test_device!(contiguous, contiguous_cpu, contiguous_gpu, contiguous_metal); #[test] fn strided_blocks() -> Result<()> { use candle::Device::Cpu; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; match tensor.strided_blocks() { candle::StridedBlocks::SingleBlock { start_offset, len } => { assert_eq!(start_offset, 0); assert_eq!(len, 24); } candle::StridedBlocks::MultipleBlocks { .. } => { panic!("unexpected block structure") } }; let tensor = Tensor::arange(0u32, 26u32, &Cpu)? .i(2..)? .reshape((2, 3, 4))?; match tensor.strided_blocks() { candle::StridedBlocks::SingleBlock { start_offset, len } => { assert_eq!(start_offset, 2); assert_eq!(len, 24); } candle::StridedBlocks::MultipleBlocks { .. } => { panic!("unexpected block structure") } }; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; let tensor = tensor.i(1)?; match tensor.strided_blocks() { candle::StridedBlocks::SingleBlock { start_offset, len } => { assert_eq!(start_offset, 12); assert_eq!(len, 12); } candle::StridedBlocks::MultipleBlocks { .. } => { panic!("unexpected block structure") } }; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; let tensor = tensor.i((.., 1))?.contiguous()?; match tensor.strided_blocks() { candle::StridedBlocks::SingleBlock { start_offset, len } => { assert_eq!(start_offset, 0); assert_eq!(len, 8); assert_eq!(tensor.to_vec2::<u32>()?, &[[4, 5, 6, 7], [16, 17, 18, 19]]); } candle::StridedBlocks::MultipleBlocks { .. } => { panic!("unexpected block structure") } }; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; let tensor = tensor.i((.., 1))?; match tensor.strided_blocks() { candle::StridedBlocks::SingleBlock { .. } => { panic!("unexpected block structure") } candle::StridedBlocks::MultipleBlocks { block_len, block_start_index, } => { assert_eq!(block_len, 4); assert_eq!(block_start_index.collect::<Vec<_>>(), &[4, 16]) } }; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; match tensor.t()?.strided_blocks() { candle::StridedBlocks::SingleBlock { .. 
} => { panic!("unexpected block structure") } candle::StridedBlocks::MultipleBlocks { block_start_index, block_len, } => { assert_eq!(block_len, 1); assert_eq!( block_start_index.collect::<Vec<_>>(), &[ 0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11, 12, 16, 20, 13, 17, 21, 14, 18, 22, 15, 19, 23 ] ) } }; let tensor = Tensor::arange(0u32, 24u32, &Cpu)?.reshape((2, 3, 4))?; match tensor.transpose(0, 1)?.strided_blocks() { candle::StridedBlocks::SingleBlock { .. } => { panic!("unexpected block structure") } candle::StridedBlocks::MultipleBlocks { block_start_index, block_len, } => { assert_eq!(block_len, 4); assert_eq!( block_start_index.collect::<Vec<_>>(), &[0, 12, 4, 16, 8, 20] ) } }; Ok(()) }
candle/candle-core/tests/layout_tests.rs/0
{ "file_path": "candle/candle-core/tests/layout_tests.rs", "repo_id": "candle", "token_count": 2819 }
use hf_hub::{ api::sync::{Api, ApiRepo}, Repo, RepoType, }; use parquet::file::reader::SerializedFileReader; use std::fs::File; #[derive(thiserror::Error, Debug)] pub enum Error { #[error("ApiError : {0}")] ApiError(#[from] hf_hub::api::sync::ApiError), #[error("IoError : {0}")] IoError(#[from] std::io::Error), #[error("ParquetError : {0}")] ParquetError(#[from] parquet::errors::ParquetError), } fn sibling_to_parquet( rfilename: &str, repo: &ApiRepo, ) -> Result<SerializedFileReader<File>, Error> { let local = repo.get(rfilename)?; let file = File::open(local)?; let reader = SerializedFileReader::new(file)?; Ok(reader) } pub fn from_hub(api: &Api, dataset_id: String) -> Result<Vec<SerializedFileReader<File>>, Error> { let repo = Repo::with_revision( dataset_id, RepoType::Dataset, "refs/convert/parquet".to_string(), ); let repo = api.repo(repo); let info = repo.info()?; let files: Result<Vec<_>, _> = info .siblings .into_iter() .filter_map(|s| -> Option<Result<_, _>> { let filename = s.rfilename; if filename.ends_with(".parquet") { let reader_result = sibling_to_parquet(&filename, &repo); Some(reader_result) } else { None } }) .collect(); let files = files?; Ok(files) } #[cfg(test)] mod tests { use super::*; use parquet::file::reader::FileReader; #[test] fn test_dataset() { let api = Api::new().unwrap(); let files = from_hub( &api, "hf-internal-testing/dummy_image_text_data".to_string(), ) .unwrap(); assert_eq!(files.len(), 1); assert_eq!(files[0].metadata().file_metadata().num_rows(), 20); } }
candle/candle-datasets/src/hub.rs/0
{ "file_path": "candle/candle-datasets/src/hub.rs", "repo_id": "candle", "token_count": 900 }
# candle-starcoder: code generation model

[StarCoder/BigCode](https://huggingface.co/bigcode/starcoderbase-1b) is an LLM specialized in code generation. The initial model was trained on 80 programming languages.

## Running an example

```bash
cargo run --example bigcode --release -- --prompt "fn fact(n: u64) -> u64 "

> fn fact(n: u64) -> u64 {
>     if n == 0 {
>         1
>     } else {
>         n * fact(n - 1)
>     }
> }
```
candle/candle-examples/examples/bigcode/README.md/0
{ "file_path": "candle/candle-examples/examples/bigcode/README.md", "repo_id": "candle", "token_count": 180 }
#include <stdint.h>
#include "reduction_utils.cuh"

template <typename scalar_t>
__device__ void
rms_norm_kernel(scalar_t *__restrict__ out,         // [num_tokens, hidden_size]
                const scalar_t *__restrict__ input, // [num_tokens, hidden_size]
                const float epsilon, const uint32_t num_tokens,
                const uint32_t hidden_size) {
  __shared__ float s_variance;
  float variance = 0.0f;

  for (int idx = threadIdx.x; idx < hidden_size; idx += blockDim.x) {
    const float x = (float)input[blockIdx.x * hidden_size + idx];
    variance += x * x;
  }
  variance = blockReduceSum<float>(variance);
  if (threadIdx.x == 0) {
    s_variance = rsqrtf(variance / hidden_size + epsilon);
  }
  __syncthreads();

  for (int idx = threadIdx.x; idx < hidden_size; idx += blockDim.x) {
    float x = (float)input[blockIdx.x * hidden_size + idx];
    out[blockIdx.x * hidden_size + idx] = ((scalar_t)(x * s_variance));
  }
}

extern "C" __global__ void rms_f32(
    float *__restrict__ out,         // [num_tokens, hidden_size]
    const float *__restrict__ input, // [num_tokens, hidden_size]
    const float epsilon, const uint32_t num_tokens,
    const uint32_t hidden_size) {
  rms_norm_kernel(out, input, epsilon, num_tokens, hidden_size);
}
candle/candle-examples/examples/custom-ops/kernels/layernorm_kernels.cu/0
{ "file_path": "candle/candle-examples/examples/custom-ops/kernels/layernorm_kernels.cu", "repo_id": "candle", "token_count": 561 }
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use clap::{Parser, ValueEnum}; use candle::{DType, IndexOp, D}; use candle_nn::{Module, VarBuilder}; use candle_transformers::models::efficientvit; #[derive(Clone, Copy, Debug, ValueEnum)] enum Which { M0, M1, M2, M3, M4, M5, } impl Which { fn model_filename(&self) -> String { let name = match self { Self::M0 => "m0", Self::M1 => "m1", Self::M2 => "m2", Self::M3 => "m3", Self::M4 => "m4", Self::M5 => "m5", }; format!("timm/efficientvit_{}.r224_in1k", name) } fn config(&self) -> efficientvit::Config { match self { Self::M0 => efficientvit::Config::m0(), Self::M1 => efficientvit::Config::m1(), Self::M2 => efficientvit::Config::m2(), Self::M3 => efficientvit::Config::m3(), Self::M4 => efficientvit::Config::m4(), Self::M5 => efficientvit::Config::m5(), } } } #[derive(Parser)] struct Args { #[arg(long)] model: Option<String>, #[arg(long)] image: String, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, #[arg(value_enum, long, default_value_t=Which::M0)] which: Which, } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let image = candle_examples::imagenet::load_image224(args.image)?.to_device(&device)?; println!("loaded image {image:?}"); let model_file = match args.model { None => { let model_name = args.which.model_filename(); let api = hf_hub::api::sync::Api::new()?; let api = api.model(model_name); api.get("model.safetensors")? } Some(model) => model.into(), }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model_file], DType::F32, &device)? }; let model = efficientvit::efficientvit(&args.which.config(), 1000, vb)?; println!("model built"); let logits = model.forward(&image.unsqueeze(0)?)?; let prs = candle_nn::ops::softmax(&logits, D::Minus1)? .i(0)? .to_vec1::<f32>()?; let mut prs = prs.iter().enumerate().collect::<Vec<_>>(); prs.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1)); for &(category_idx, pr) in prs.iter().take(5) { println!( "{:24}: {:.2}%", candle_examples::imagenet::CLASSES[category_idx], 100. * pr ); } Ok(()) }
candle/candle-examples/examples/efficientvit/main.rs/0
{ "file_path": "candle/candle-examples/examples/efficientvit/main.rs", "repo_id": "candle", "token_count": 1278 }
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::{Error as E, Result}; use clap::Parser; use candle_transformers::models::gemma::{Config as Config1, Model as Model1}; use candle_transformers::models::gemma2::{Config as Config2, Model as Model2}; use candle::{DType, Device, Tensor}; use candle_examples::token_output_stream::TokenOutputStream; use candle_nn::VarBuilder; use candle_transformers::generation::LogitsProcessor; use hf_hub::{api::sync::Api, Repo, RepoType}; use tokenizers::Tokenizer; #[derive(Clone, Debug, Copy, PartialEq, Eq, clap::ValueEnum)] enum Which { #[value(name = "2b")] Base2B, #[value(name = "7b")] Base7B, #[value(name = "2b-it")] Instruct2B, #[value(name = "7b-it")] Instruct7B, #[value(name = "1.1-2b-it")] InstructV1_1_2B, #[value(name = "1.1-7b-it")] InstructV1_1_7B, #[value(name = "code-2b")] CodeBase2B, #[value(name = "code-7b")] CodeBase7B, #[value(name = "code-2b-it")] CodeInstruct2B, #[value(name = "code-7b-it")] CodeInstruct7B, #[value(name = "2-2b")] BaseV2_2B, #[value(name = "2-2b-it")] InstructV2_2B, #[value(name = "2-9b")] BaseV2_9B, #[value(name = "2-9b-it")] InstructV2_9B, } impl Which { fn is_v1(&self) -> bool { match self { Self::Base2B | Self::Base7B | Self::Instruct2B | Self::Instruct7B | Self::InstructV1_1_2B | Self::InstructV1_1_7B | Self::CodeBase2B | Self::CodeBase7B | Self::CodeInstruct2B | Self::CodeInstruct7B => true, Self::BaseV2_2B | Self::InstructV2_2B | Self::BaseV2_9B | Self::InstructV2_9B => false, } } } enum Model { V1(Model1), V2(Model2), } impl Model { fn forward(&mut self, input_ids: &Tensor, pos: usize) -> candle::Result<Tensor> { match self { Self::V1(m) => m.forward(input_ids, pos), Self::V2(m) => m.forward(input_ids, pos), } } } struct TextGeneration { model: Model, device: Device, tokenizer: TokenOutputStream, logits_processor: LogitsProcessor, repeat_penalty: f32, repeat_last_n: usize, } impl TextGeneration { #[allow(clippy::too_many_arguments)] fn new( model: Model, tokenizer: Tokenizer, seed: u64, temp: Option<f64>, top_p: Option<f64>, repeat_penalty: f32, repeat_last_n: usize, device: &Device, ) -> Self { let logits_processor = LogitsProcessor::new(seed, temp, top_p); Self { model, tokenizer: TokenOutputStream::new(tokenizer), logits_processor, repeat_penalty, repeat_last_n, device: device.clone(), } } fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> { use std::io::Write; self.tokenizer.clear(); let mut tokens = self .tokenizer .tokenizer() .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); for &t in tokens.iter() { if let Some(t) = self.tokenizer.next_token(t)? { print!("{t}") } } std::io::stdout().flush()?; let mut generated_tokens = 0usize; let eos_token = match self.tokenizer.get_token("<eos>") { Some(token) => token, None => anyhow::bail!("cannot find the <eos> token"), }; let start_gen = std::time::Instant::now(); for index in 0..sample_len { let context_size = if index > 0 { 1 } else { tokens.len() }; let start_pos = tokens.len().saturating_sub(context_size); let ctxt = &tokens[start_pos..]; let input = Tensor::new(ctxt, &self.device)?.unsqueeze(0)?; let logits = self.model.forward(&input, start_pos)?; let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?; let logits = if self.repeat_penalty == 1. { logits } else { let start_at = tokens.len().saturating_sub(self.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, self.repeat_penalty, &tokens[start_at..], )? 
}; let next_token = self.logits_processor.sample(&logits)?; tokens.push(next_token); generated_tokens += 1; if next_token == eos_token { break; } if let Some(t) = self.tokenizer.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } } let dt = start_gen.elapsed(); if let Some(rest) = self.tokenizer.decode_rest().map_err(E::msg)? { print!("{rest}"); } std::io::stdout().flush()?; println!( "\n{generated_tokens} tokens generated ({:.2} token/s)", generated_tokens as f64 / dt.as_secs_f64(), ); Ok(()) } } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long)] prompt: String, /// The temperature used to generate samples. #[arg(long)] temperature: Option<f64>, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// The length of the sample to generate (in tokens). #[arg(long, short = 'n', default_value_t = 10000)] sample_len: usize, #[arg(long)] model_id: Option<String>, #[arg(long, default_value = "main")] revision: String, #[arg(long)] tokenizer_file: Option<String>, #[arg(long)] config_file: Option<String>, #[arg(long)] weight_files: Option<String>, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.1)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, /// The model to use. #[arg(long, default_value = "2-2b")] which: Which, #[arg(long)] use_flash_attn: bool, } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature.unwrap_or(0.), args.repeat_penalty, args.repeat_last_n ); let start = std::time::Instant::now(); let api = Api::new()?; let model_id = match &args.model_id { Some(model_id) => model_id.to_string(), None => match args.which { Which::InstructV1_1_2B => "google/gemma-1.1-2b-it".to_string(), Which::InstructV1_1_7B => "google/gemma-1.1-7b-it".to_string(), Which::Base2B => "google/gemma-2b".to_string(), Which::Base7B => "google/gemma-7b".to_string(), Which::Instruct2B => "google/gemma-2b-it".to_string(), Which::Instruct7B => "google/gemma-7b-it".to_string(), Which::CodeBase2B => "google/codegemma-2b".to_string(), Which::CodeBase7B => "google/codegemma-7b".to_string(), Which::CodeInstruct2B => "google/codegemma-2b-it".to_string(), Which::CodeInstruct7B => "google/codegemma-7b-it".to_string(), Which::BaseV2_2B => "google/gemma-2-2b".to_string(), Which::InstructV2_2B => "google/gemma-2-2b-it".to_string(), Which::BaseV2_9B => "google/gemma-2-9b".to_string(), Which::InstructV2_9B => "google/gemma-2-9b-it".to_string(), }, }; let repo = api.repo(Repo::with_revision( model_id, RepoType::Model, args.revision, )); let tokenizer_filename = match args.tokenizer_file { Some(file) => std::path::PathBuf::from(file), None => 
repo.get("tokenizer.json")?, }; let config_filename = match args.config_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("config.json")?, }; let filenames = match args.weight_files { Some(files) => files .split(',') .map(std::path::PathBuf::from) .collect::<Vec<_>>(), None => candle_examples::hub_load_safetensors(&repo, "model.safetensors.index.json")?, }; println!("retrieved the files in {:?}", start.elapsed()); let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let start = std::time::Instant::now(); let device = candle_examples::device(args.cpu)?; let dtype = if device.is_cuda() { DType::BF16 } else { DType::F32 }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&filenames, dtype, &device)? }; let model = if args.which.is_v1() { let config: Config1 = serde_json::from_reader(std::fs::File::open(config_filename)?)?; let model = Model1::new(args.use_flash_attn, &config, vb)?; Model::V1(model) } else { let config: Config2 = serde_json::from_reader(std::fs::File::open(config_filename)?)?; let model = Model2::new(args.use_flash_attn, &config, vb)?; Model::V2(model) }; println!("loaded the model in {:?}", start.elapsed()); let mut pipeline = TextGeneration::new( model, tokenizer, args.seed, args.temperature, args.top_p, args.repeat_penalty, args.repeat_last_n, &device, ); pipeline.run(&args.prompt, args.sample_len)?; Ok(()) }
candle/candle-examples/examples/gemma/main.rs/0
{ "file_path": "candle/candle-examples/examples/gemma/main.rs", "repo_id": "candle", "token_count": 5150 }
// An implementation of LLaMA https://github.com/facebookresearch/llama // // This is based on nanoGPT in a similar way to: // https://github.com/Lightning-AI/lit-llama/blob/main/lit_llama/model.py // // The tokenizer config can be retrieved from: // https://huggingface.co/hf-internal-testing/llama-tokenizer/raw/main/tokenizer.json #[cfg(feature = "mkl")] extern crate intel_mkl_src; use anyhow::{bail, Error as E, Result}; use clap::{Parser, ValueEnum}; use candle::{DType, Device, Tensor}; use candle_transformers::generation::LogitsProcessor; use candle_transformers::models::llama::LlamaEosToks; use cudarc::driver::safe::CudaDevice; use cudarc::nccl::safe::{Comm, Id}; use hf_hub::{api::sync::Api, Repo, RepoType}; use std::io::Write; use std::rc::Rc; mod model; use model::{Config, Llama}; const MAX_SEQ_LEN: usize = 4096; const DEFAULT_PROMPT: &str = "My favorite theorem is "; #[derive(Clone, Debug, Copy, PartialEq, Eq, ValueEnum)] enum Which { V2_7b, V2_70b, V3_8b, V3_70b, } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { #[arg(long)] num_shards: usize, #[arg(long)] rank: Option<usize>, /// The temperature used to generate samples. #[arg(long, default_value_t = 0.8)] temperature: f64, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// The length of the sample to generate (in tokens). #[arg(long, default_value_t = 100)] sample_len: usize, /// Disable the key-value cache. #[arg(long)] no_kv_cache: bool, /// The initial prompt. #[arg(long)] prompt: Option<String>, #[arg(long)] model_id: Option<String>, #[arg(long)] revision: Option<String>, #[arg(long)] dtype: Option<String>, #[arg(long, default_value = "v3-8b")] which: Which, #[arg(long, default_value = "nccl_id.txt")] comm_file: String, } fn main() -> Result<()> { use tokenizers::Tokenizer; let args = Args::parse(); let dtype = match args.dtype.as_deref() { Some("f16") => DType::F16, Some("bf16") => DType::BF16, Some("f32") => DType::F32, Some(dtype) => bail!("Unsupported dtype {dtype}"), None => match args.which { Which::V2_7b | Which::V2_70b => DType::F16, Which::V3_8b | Which::V3_70b => DType::BF16, }, }; let comm_file = std::path::PathBuf::from(&args.comm_file); if comm_file.exists() { bail!("comm file {comm_file:?} already exists, please remove it first") } let api = Api::new()?; let model_id = match args.model_id { Some(model) => model, None => match args.which { Which::V2_7b => "meta-llama/Llama-2-7b-hf".to_string(), Which::V2_70b => "meta-llama/Llama-2-70b-hf".to_string(), Which::V3_8b => "meta-llama/Meta-Llama-3-8B".to_string(), Which::V3_70b => "meta-llama/Meta-Llama-3-70B".to_string(), }, }; println!("loading the model weights from {model_id}"); let revision = args.revision.unwrap_or("main".to_string()); let api = api.repo(Repo::with_revision(model_id, RepoType::Model, revision)); let config_filename = api.get("config.json")?; let config: Config = serde_json::from_slice(&std::fs::read(config_filename)?)?; let tokenizer_filename = api.get("tokenizer.json")?; let filenames = candle_examples::hub_load_safetensors(&api, "model.safetensors.index.json")?; let rank = match args.rank { None => { println!("creating {} child processes", args.num_shards); let children: Vec<_> = (0..args.num_shards) .map(|rank| { let mut args: std::collections::VecDeque<_> = std::env::args().collect(); args.push_back("--rank".to_string()); args.push_back(format!("{rank}")); 
let name = args.pop_front().unwrap(); std::process::Command::new(name).args(args).spawn().unwrap() }) .collect(); for mut child in children { child.wait()?; } return Ok(()); } Some(rank) => rank, }; let num_shards = args.num_shards; // Primitive IPC let id = if rank == 0 { let id = Id::new().unwrap(); let tmp_file = comm_file.with_extension(".comm.tgz"); std::fs::File::create(&tmp_file)? .write_all(&id.internal().iter().map(|&i| i as u8).collect::<Vec<_>>())?; std::fs::rename(&tmp_file, &comm_file)?; id } else { while !comm_file.exists() { std::thread::sleep(std::time::Duration::from_secs(1)); } let data = std::fs::read(&comm_file)?; let internal: [i8; 128] = data .into_iter() .map(|i| i as i8) .collect::<Vec<_>>() .try_into() .unwrap(); let id: Id = Id::uninit(internal); id }; let device = CudaDevice::new(rank)?; let comm = match Comm::from_rank(device, rank, num_shards, id) { Ok(comm) => Rc::new(comm), Err(err) => anyhow::bail!("nccl error {:?}", err.0), }; if rank == 0 { std::fs::remove_file(comm_file)?; } println!("Rank {rank:?} spawned"); let device = Device::new_cuda(rank)?; let cache = model::Cache::new(dtype, &config, &device)?; println!("building the model"); let vb = unsafe { candle_nn::var_builder::ShardedSafeTensors::var_builder(&filenames, dtype, &device)? }; let llama = Llama::load(vb, &cache, &config, comm)?; let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let prompt = args.prompt.as_ref().map_or(DEFAULT_PROMPT, |p| p.as_str()); let mut tokens = tokenizer .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); let mut tokenizer = candle_examples::token_output_stream::TokenOutputStream::new(tokenizer); println!("starting the inference loop"); let temperature = if args.temperature <= 0. { None } else { Some(args.temperature) }; let mut logits_processor = LogitsProcessor::new(args.seed, temperature, args.top_p); let mut new_tokens = vec![]; let mut start_gen = std::time::Instant::now(); let mut index_pos = 0; for index in 0..args.sample_len { // Only start timing at the second token as processing the first token waits for all the // weights to be loaded in an async way. if index == 1 { start_gen = std::time::Instant::now() }; let context_size = if index > 0 { 1 } else { tokens.len() }; let ctxt = &tokens[tokens.len().saturating_sub(context_size)..]; let input = Tensor::new(ctxt, &device)?.unsqueeze(0)?; let logits = llama.forward(&input, index_pos)?; let logits = logits.squeeze(0)?; index_pos += ctxt.len(); let next_token = logits_processor.sample(&logits)?; tokens.push(next_token); new_tokens.push(next_token); match config.eos_token_id { Some(LlamaEosToks::Single(eos_tok_id)) if next_token == eos_tok_id => { break; } Some(LlamaEosToks::Multiple(ref eos_ids)) if eos_ids.contains(&next_token) => { break; } _ => (), } if rank == 0 { if let Some(t) = tokenizer.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } } } println!(); if rank == 0 { let dt = start_gen.elapsed(); println!( "\n\n{} tokens generated ({} token/s)\n", args.sample_len, (args.sample_len - 1) as f64 / dt.as_secs_f64(), ); } Ok(()) }
candle/candle-examples/examples/llama_multiprocess/main.rs/0
{ "file_path": "candle/candle-examples/examples/llama_multiprocess/main.rs", "repo_id": "candle", "token_count": 3774 }
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::Result; use clap::Parser; use std::io::Write; use candle_transformers::generation::LogitsProcessor; use candle_transformers::models::encodec; use candle_transformers::models::metavoice::{adapters, gpt, tokenizers, transformer}; use candle_transformers::models::quantized_metavoice::transformer as qtransformer; use candle::{DType, IndexOp, Tensor}; use candle_nn::VarBuilder; use hf_hub::api::sync::Api; use rand::{distributions::Distribution, SeedableRng}; pub const ENCODEC_NTOKENS: u32 = 1024; #[derive(Clone, Debug, Copy, PartialEq, Eq, clap::ValueEnum)] enum ArgDType { F32, F16, Bf16, } enum Transformer { Normal(transformer::Model), Quantized(qtransformer::Model), } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long)] prompt: String, /// Use the quantized version of the model. #[arg(long)] quantized: bool, /// The guidance scale. #[arg(long, default_value_t = 3.0)] guidance_scale: f64, /// The temperature used to generate samples. #[arg(long, default_value_t = 1.0)] temperature: f64, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// The maximum number of tokens to generate for the first stage. #[arg(long, default_value_t = 2000)] max_tokens: u64, /// The output file using the wav format. #[arg(long, default_value = "out.wav")] out_file: String, #[arg(long)] first_stage_meta: Option<String>, #[arg(long)] first_stage_weights: Option<String>, #[arg(long)] second_stage_weights: Option<String>, #[arg(long)] encodec_weights: Option<String>, #[arg(long)] spk_emb: Option<String>, #[arg(long, default_value = "f32")] dtype: ArgDType, } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); let device = candle_examples::device(args.cpu)?; let api = Api::new()?; let repo = api.model("lmz/candle-metavoice".to_string()); let first_stage_meta = match &args.first_stage_meta { Some(w) => std::path::PathBuf::from(w), None => repo.get("first_stage.meta.json")?, }; let first_stage_meta: serde_json::Value = serde_json::from_reader(&std::fs::File::open(first_stage_meta)?)?; let first_stage_tokenizer = match first_stage_meta.as_object() { None => anyhow::bail!("not a json object"), Some(j) => match j.get("tokenizer") { None => anyhow::bail!("no tokenizer key"), Some(j) => j, }, }; let fs_tokenizer = tokenizers::BPE::from_json(first_stage_tokenizer, 512)?; let second_stage_weights = match &args.second_stage_weights { Some(w) => std::path::PathBuf::from(w), None => repo.get("second_stage.safetensors")?, }; let encodec_weights = match args.encodec_weights { Some(w) => std::path::PathBuf::from(w), None => Api::new()? 
.model("facebook/encodec_24khz".to_string()) .get("model.safetensors")?, }; let dtype = match args.dtype { ArgDType::F32 => DType::F32, ArgDType::F16 => DType::F16, ArgDType::Bf16 => DType::BF16, }; let first_stage_config = transformer::Config::cfg1b_v0_1(); let mut first_stage_model = if args.quantized { let filename = match &args.first_stage_weights { Some(w) => std::path::PathBuf::from(w), None => repo.get("first_stage_q4k.gguf")?, }; let vb = candle_transformers::quantized_var_builder::VarBuilder::from_gguf(filename, &device)?; let first_stage_model = qtransformer::Model::new(&first_stage_config, vb)?; Transformer::Quantized(first_stage_model) } else { let first_stage_weights = match &args.first_stage_weights { Some(w) => std::path::PathBuf::from(w), None => repo.get("first_stage.safetensors")?, }; let first_stage_vb = unsafe { VarBuilder::from_mmaped_safetensors(&[first_stage_weights], dtype, &device)? }; let first_stage_model = transformer::Model::new(&first_stage_config, first_stage_vb)?; Transformer::Normal(first_stage_model) }; let second_stage_vb = unsafe { VarBuilder::from_mmaped_safetensors(&[second_stage_weights], dtype, &device)? }; let second_stage_config = gpt::Config::cfg1b_v0_1(); let second_stage_model = gpt::Model::new(second_stage_config.clone(), second_stage_vb)?; let encodec_device = if device.is_metal() { &candle::Device::Cpu } else { &device }; let encodec_vb = unsafe { VarBuilder::from_mmaped_safetensors(&[encodec_weights], dtype, encodec_device)? }; let encodec_config = encodec::Config::default(); let encodec_model = encodec::Model::new(&encodec_config, encodec_vb)?; println!("prompt: '{}'", args.prompt); let prompt_tokens = fs_tokenizer.encode(&args.prompt)?; let mut tokens = prompt_tokens.clone(); println!("{tokens:?}"); let spk_emb_file = match &args.spk_emb { Some(w) => std::path::PathBuf::from(w), None => repo.get("spk_emb.safetensors")?, }; let spk_emb = candle::safetensors::load(&spk_emb_file, &candle::Device::Cpu)?; let spk_emb = match spk_emb.get("spk_emb") { None => anyhow::bail!("missing spk_emb tensor in {spk_emb_file:?}"), Some(spk_emb) => spk_emb.to_dtype(dtype)?, }; let spk_emb = spk_emb.to_device(&device)?; let mut logits_processor = LogitsProcessor::new(args.seed, Some(args.temperature), Some(0.95)); // First stage generation. for index in 0..args.max_tokens { let context_size = if index > 0 { 1 } else { tokens.len() }; let start_pos = tokens.len().saturating_sub(context_size); let ctxt = &tokens[start_pos..]; let input = Tensor::new(ctxt, &device)?; let input = Tensor::stack(&[&input, &input], 0)?; let logits = match &mut first_stage_model { Transformer::Normal(m) => m.forward(&input, &spk_emb, tokens.len() - context_size)?, Transformer::Quantized(m) => { m.forward(&input, &spk_emb, tokens.len() - context_size)? } }; let logits0 = logits.i((0, 0))?; let logits1 = logits.i((1, 0))?; let logits = ((logits0 * args.guidance_scale)? + logits1 * (1. - args.guidance_scale))?; let logits = logits.to_dtype(DType::F32)?; let next_token = logits_processor.sample(&logits)?; tokens.push(next_token); print!("."); std::io::stdout().flush()?; if next_token == 2048 { break; } } println!(); let fie2c = adapters::FlattenedInterleavedEncodec2Codebook::new(ENCODEC_NTOKENS); let (text_ids, ids1, ids2) = fie2c.decode(&tokens); println!("text ids len: {}", text_ids.len()); let mut rng = rand::rngs::StdRng::seed_from_u64(args.seed + 1337); // TODO: Use the config rather than hardcoding the offset here. 
let encoded_text: Vec<_> = prompt_tokens.iter().map(|v| v - 1024).collect(); let mut hierarchies_in1 = [encoded_text.as_slice(), ids1.as_slice(), &[ENCODEC_NTOKENS]].concat(); let mut hierarchies_in2 = [ vec![ENCODEC_NTOKENS; encoded_text.len()].as_slice(), ids2.as_slice(), &[ENCODEC_NTOKENS], ] .concat(); hierarchies_in1.resize(second_stage_config.block_size, ENCODEC_NTOKENS); hierarchies_in2.resize(second_stage_config.block_size, ENCODEC_NTOKENS); let in_x1 = Tensor::new(hierarchies_in1, &device)?; let in_x2 = Tensor::new(hierarchies_in2, &device)?; let in_x = Tensor::stack(&[in_x1, in_x2], 0)?.unsqueeze(0)?; let logits = second_stage_model.forward(&in_x)?; println!("sampling from logits..."); let mut codes = vec![]; for logits in logits.iter() { let logits = logits.squeeze(0)?; let (seq_len, _) = logits.dims2()?; let mut codes_ = Vec::with_capacity(seq_len); for step in 0..seq_len { let logits = logits.i(step)?.to_dtype(DType::F32)?; let logits = &(&logits / 1.0)?; let prs = candle_nn::ops::softmax_last_dim(logits)?.to_vec1::<f32>()?; let distr = rand::distributions::WeightedIndex::new(prs.as_slice())?; let sample = distr.sample(&mut rng) as u32; codes_.push(sample) } codes.push(codes_) } let codes = Tensor::new(codes, &device)?.unsqueeze(0)?; let codes = Tensor::cat(&[in_x, codes], 1)?; println!("codes: {codes}"); let tilted_encodec = adapters::TiltedEncodec::new(ENCODEC_NTOKENS); let codes = codes.i(0)?.to_vec2::<u32>()?; let (text_ids, audio_ids) = tilted_encodec.decode(&codes); println!("text_ids len: {:?}", text_ids.len()); let audio_ids = Tensor::new(audio_ids, encodec_device)?.unsqueeze(0)?; println!("audio_ids shape: {:?}", audio_ids.shape()); let pcm = encodec_model.decode(&audio_ids)?; println!("output pcm shape: {:?}", pcm.shape()); let pcm = pcm.i(0)?.i(0)?.to_dtype(DType::F32)?; let pcm = candle_examples::audio::normalize_loudness(&pcm, 24_000, true)?; let pcm = pcm.to_vec1::<f32>()?; let mut output = std::fs::File::create(&args.out_file)?; candle_examples::wav::write_pcm_as_wav(&mut output, &pcm, 24_000)?; Ok(()) }
candle/candle-examples/examples/metavoice/main.rs/0
{ "file_path": "candle/candle-examples/examples/metavoice/main.rs", "repo_id": "candle", "token_count": 4560 }
use std::path::PathBuf; use anyhow::{Error as E, Result}; use candle::{Device, Tensor}; use candle_nn::VarBuilder; use candle_transformers::models::modernbert; use clap::{Parser, ValueEnum}; use hf_hub::{api::sync::Api, Repo, RepoType}; use tokenizers::{PaddingParams, Tokenizer}; #[derive(Debug, Clone, ValueEnum)] enum Model { ModernBertBase, ModernBertLarge, } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long)] model_id: Option<String>, #[arg(long, default_value = "main")] revision: String, #[arg(long, default_value = "modern-bert-base")] model: Model, // Path to the tokenizer file. #[arg(long)] tokenizer_file: Option<String>, // Path to the weight files. #[arg(long)] weight_files: Option<String>, // Path to the config file. #[arg(long)] config_file: Option<String>, /// When set, compute embeddings for this prompt. #[arg(long)] prompt: Option<String>, } fn main() -> Result<()> { let args = Args::parse(); let api = Api::new()?; let model_id = match &args.model_id { Some(model_id) => model_id.to_string(), None => match args.model { Model::ModernBertBase => "answerdotai/ModernBERT-base".to_string(), Model::ModernBertLarge => "answerdotai/ModernBERT-large".to_string(), }, }; let repo = api.repo(Repo::with_revision( model_id, RepoType::Model, args.revision, )); let tokenizer_filename = match args.tokenizer_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("tokenizer.json")?, }; let config_filename = match args.config_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("config.json")?, }; let weights_filename = match args.weight_files { Some(files) => PathBuf::from(files), None => match repo.get("model.safetensors") { Ok(safetensors) => safetensors, Err(_) => match repo.get("pytorch_model.bin") { Ok(pytorch_model) => pytorch_model, Err(e) => { anyhow::bail!("Model weights not found. The weights should either be a `model.safetensors` or `pytorch_model.bin` file. Error: {e}") } }, }, }; let config = std::fs::read_to_string(config_filename)?; let config: modernbert::Config = serde_json::from_str(&config)?; let mut tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let device = candle_examples::device(args.cpu)?; let vb = if weights_filename.ends_with("model.safetensors") { unsafe { VarBuilder::from_mmaped_safetensors(&[weights_filename], candle::DType::F32, &device) .unwrap() } } else { println!("Loading weights from pytorch_model.bin"); VarBuilder::from_pth(&weights_filename, candle::DType::F32, &device).unwrap() }; tokenizer .with_padding(Some(PaddingParams { strategy: tokenizers::PaddingStrategy::BatchLongest, pad_id: config.pad_token_id, ..Default::default() })) .with_truncation(None) .map_err(E::msg)?; let prompt = match &args.prompt { Some(p) => vec![p.as_str()], None => vec![ "Hello I'm a [MASK] model.", "I'm a [MASK] boy.", "I'm [MASK] in berlin.", "The capital of France is [MASK].", ], }; let model = modernbert::ModernBertForMaskedLM::load(vb, &config)?; let input_ids = tokenize_batch(&tokenizer, prompt.clone(), &device)?; let attention_mask = get_attention_mask(&tokenizer, prompt.clone(), &device)?; let output = model .forward(&input_ids, &attention_mask)? 
.to_dtype(candle::DType::F32)?; let max_outs = output.argmax(2)?; let max_out = max_outs.to_vec2::<u32>()?; let max_out_refs: Vec<&[u32]> = max_out.iter().map(|v| v.as_slice()).collect(); let decoded = tokenizer.decode_batch(&max_out_refs, true).unwrap(); for (i, sentence) in decoded.iter().enumerate() { println!("Sentence: {} : {}", i + 1, sentence); } Ok(()) } pub fn tokenize_batch( tokenizer: &Tokenizer, input: Vec<&str>, device: &Device, ) -> anyhow::Result<Tensor> { let tokens = tokenizer.encode_batch(input, true).map_err(E::msg)?; let token_ids = tokens .iter() .map(|tokens| { let tokens = tokens.get_ids().to_vec(); Tensor::new(tokens.as_slice(), device) }) .collect::<candle::Result<Vec<_>>>()?; Ok(Tensor::stack(&token_ids, 0)?) } pub fn get_attention_mask( tokenizer: &Tokenizer, input: Vec<&str>, device: &Device, ) -> anyhow::Result<Tensor> { let tokens = tokenizer.encode_batch(input, true).map_err(E::msg)?; let attention_mask = tokens .iter() .map(|tokens| { let tokens = tokens.get_attention_mask().to_vec(); Tensor::new(tokens.as_slice(), device) }) .collect::<candle::Result<Vec<_>>>()?; Ok(Tensor::stack(&attention_mask, 0)?) }
candle/candle-examples/examples/modernbert/main.rs/0
{ "file_path": "candle/candle-examples/examples/modernbert/main.rs", "repo_id": "candle", "token_count": 2425 }
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::Error as E; use clap::Parser; use candle::{DType, IndexOp, Tensor}; use candle_nn::VarBuilder; use candle_transformers::models::parler_tts::{Config, Model}; use tokenizers::Tokenizer; #[derive(Parser)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, /// Display the token for the specified prompt. #[arg(long)] verbose_prompt: bool, #[arg(long, default_value = "Hey, how are you doing today?")] prompt: String, #[arg( long, default_value = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up." )] description: String, /// The temperature used to generate samples. #[arg(long, default_value_t = 0.0)] temperature: f64, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 0)] seed: u64, #[arg(long, default_value_t = 5000)] sample_len: usize, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.0)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, #[arg(long)] model_id: Option<String>, #[arg(long)] revision: Option<String>, #[arg(long)] quantized: bool, /// Use f16 precision for all the computations rather than f32. #[arg(long)] f16: bool, #[arg(long)] model_file: Option<String>, #[arg(long)] tokenizer_file: Option<String>, #[arg(long)] config_file: Option<String>, #[arg(long, default_value_t = 512)] max_steps: usize, /// The output wav file. #[arg(long, default_value = "out.wav")] out_file: String, #[arg(long, default_value = "large-v1")] which: Which, } #[derive(Clone, Debug, Copy, PartialEq, Eq, clap::ValueEnum)] enum Which { #[value(name = "large-v1")] LargeV1, #[value(name = "mini-v1")] MiniV1, } fn main() -> anyhow::Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature, args.repeat_penalty, args.repeat_last_n ); let start = std::time::Instant::now(); let api = hf_hub::api::sync::Api::new()?; let model_id = match args.model_id { Some(model_id) => model_id.to_string(), None => match args.which { Which::LargeV1 => "parler-tts/parler-tts-large-v1".to_string(), Which::MiniV1 => "parler-tts/parler-tts-mini-v1".to_string(), }, }; let revision = match args.revision { Some(r) => r, None => "main".to_string(), }; let repo = api.repo(hf_hub::Repo::with_revision( model_id, hf_hub::RepoType::Model, revision, )); let model_files = match args.model_file { Some(m) => vec![m.into()], None => match args.which { Which::MiniV1 => vec![repo.get("model.safetensors")?], Which::LargeV1 => { candle_examples::hub_load_safetensors(&repo, "model.safetensors.index.json")? 
} }, }; let config = match args.config_file { Some(m) => m.into(), None => repo.get("config.json")?, }; let tokenizer = match args.tokenizer_file { Some(m) => m.into(), None => repo.get("tokenizer.json")?, }; println!("retrieved the files in {:?}", start.elapsed()); let tokenizer = Tokenizer::from_file(tokenizer).map_err(E::msg)?; let start = std::time::Instant::now(); let device = candle_examples::device(args.cpu)?; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&model_files, DType::F32, &device)? }; let config: Config = serde_json::from_reader(std::fs::File::open(config)?)?; let mut model = Model::new(&config, vb)?; println!("loaded the model in {:?}", start.elapsed()); let description_tokens = tokenizer .encode(args.description, true) .map_err(E::msg)? .get_ids() .to_vec(); let description_tokens = Tensor::new(description_tokens, &device)?.unsqueeze(0)?; let prompt_tokens = tokenizer .encode(args.prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); let prompt_tokens = Tensor::new(prompt_tokens, &device)?.unsqueeze(0)?; let lp = candle_transformers::generation::LogitsProcessor::new( args.seed, Some(args.temperature), args.top_p, ); println!("starting generation..."); let codes = model.generate(&prompt_tokens, &description_tokens, lp, args.max_steps)?; println!("generated codes\n{codes}"); let codes = codes.to_dtype(DType::I64)?; codes.save_safetensors("codes", "out.safetensors")?; let codes = codes.unsqueeze(0)?; let pcm = model .audio_encoder .decode_codes(&codes.to_device(&device)?)?; println!("{pcm}"); let pcm = pcm.i((0, 0))?; let pcm = candle_examples::audio::normalize_loudness(&pcm, 24_000, true)?; let pcm = pcm.to_vec1::<f32>()?; let mut output = std::fs::File::create(&args.out_file)?; candle_examples::wav::write_pcm_as_wav(&mut output, &pcm, config.audio_encoder.sampling_rate)?; Ok(()) }
candle/candle-examples/examples/parler-tts/main.rs/0
{ "file_path": "candle/candle-examples/examples/parler-tts/main.rs", "repo_id": "candle", "token_count": 2678 }
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::{Error as E, Result}; use clap::Parser; use candle_transformers::models::quantized_recurrent_gemma::Model as QModel; use candle_transformers::models::recurrent_gemma::{Config, Model as BModel}; use candle::{DType, Device, Tensor}; use candle_examples::token_output_stream::TokenOutputStream; use candle_nn::VarBuilder; use candle_transformers::generation::LogitsProcessor; use hf_hub::{api::sync::Api, Repo, RepoType}; use tokenizers::Tokenizer; enum Model { B(BModel), Q(QModel), } impl Model { fn forward(&mut self, xs: &Tensor, pos: usize) -> candle::Result<Tensor> { match self { Self::B(m) => m.forward(xs, pos), Self::Q(m) => m.forward(xs, pos), } } } #[derive(Clone, Debug, Copy, PartialEq, Eq, clap::ValueEnum)] enum Which { #[value(name = "2b")] Base2B, #[value(name = "2b-it")] Instruct2B, } struct TextGeneration { model: Model, device: Device, tokenizer: TokenOutputStream, logits_processor: LogitsProcessor, repeat_penalty: f32, repeat_last_n: usize, } impl TextGeneration { #[allow(clippy::too_many_arguments)] fn new( model: Model, tokenizer: Tokenizer, seed: u64, temp: Option<f64>, top_p: Option<f64>, top_k: usize, repeat_penalty: f32, repeat_last_n: usize, device: &Device, ) -> Self { let sampling = match temp { None => candle_transformers::generation::Sampling::ArgMax, Some(temperature) => match top_p { None => candle_transformers::generation::Sampling::TopK { temperature, k: top_k, }, Some(top_p) => candle_transformers::generation::Sampling::TopKThenTopP { temperature, k: top_k, p: top_p, }, }, }; let logits_processor = LogitsProcessor::from_sampling(seed, sampling); Self { model, tokenizer: TokenOutputStream::new(tokenizer), logits_processor, repeat_penalty, repeat_last_n, device: device.clone(), } } fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> { use std::io::Write; self.tokenizer.clear(); let mut tokens = self .tokenizer .tokenizer() .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); for &t in tokens.iter() { if let Some(t) = self.tokenizer.next_token(t)? { print!("{t}") } } std::io::stdout().flush()?; let mut generated_tokens = 0usize; let eos_token = match self.tokenizer.get_token("<eos>") { Some(token) => token, None => anyhow::bail!("cannot find the <eos> token"), }; let start_gen = std::time::Instant::now(); for index in 0..sample_len { let context_size = if index > 0 { 1 } else { tokens.len() }; let start_pos = tokens.len().saturating_sub(context_size); let ctxt = &tokens[start_pos..]; let input = Tensor::new(ctxt, &self.device)?.unsqueeze(0)?; let logits = self.model.forward(&input, start_pos)?; let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?; let logits = if self.repeat_penalty == 1. { logits } else { let start_at = tokens.len().saturating_sub(self.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, self.repeat_penalty, &tokens[start_at..], )? }; let next_token = self.logits_processor.sample(&logits)?; tokens.push(next_token); generated_tokens += 1; if next_token == eos_token { break; } if let Some(t) = self.tokenizer.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } } let dt = start_gen.elapsed(); if let Some(rest) = self.tokenizer.decode_rest().map_err(E::msg)? 
{ print!("{rest}"); } std::io::stdout().flush()?; println!( "\n{generated_tokens} tokens generated ({:.2} token/s)", generated_tokens as f64 / dt.as_secs_f64(), ); Ok(()) } } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long)] prompt: String, /// The temperature used to generate samples. #[arg(long)] temperature: Option<f64>, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, #[arg(long, default_value_t = 250)] top_k: usize, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// The length of the sample to generate (in tokens). #[arg(long, short = 'n', default_value_t = 8000)] sample_len: usize, #[arg(long)] model_id: Option<String>, #[arg(long, default_value = "main")] revision: String, #[arg(long)] tokenizer_file: Option<String>, #[arg(long)] config_file: Option<String>, #[arg(long)] weight_files: Option<String>, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.1)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, /// The model to use. #[arg(long, default_value = "2b")] which: Which, #[arg(long)] quantized: bool, } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature.unwrap_or(0.), args.repeat_penalty, args.repeat_last_n ); let start = std::time::Instant::now(); let api = Api::new()?; let model_id = match &args.model_id { Some(model_id) => model_id.to_string(), None => match args.which { Which::Base2B => "google/recurrentgemma-2b".to_string(), Which::Instruct2B => "google/recurrentgemma-2b-it".to_string(), }, }; let repo = api.repo(Repo::with_revision( model_id, RepoType::Model, args.revision, )); let tokenizer_filename = match args.tokenizer_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("tokenizer.json")?, }; let config_filename = match args.config_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("config.json")?, }; let filenames = match args.weight_files { Some(files) => files .split(',') .map(std::path::PathBuf::from) .collect::<Vec<_>>(), None => { if args.quantized { let filename = match args.which { Which::Base2B => "recurrent-gemma-2b-q4k.gguf", Which::Instruct2B => "recurrent-gemma-7b-q4k.gguf", }; let filename = api.model("lmz/candle-gemma".to_string()).get(filename)?; vec![filename] } else { candle_examples::hub_load_safetensors(&repo, "model.safetensors.index.json")? 
} } }; println!("retrieved the files in {:?}", start.elapsed()); let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let config: Config = serde_json::from_reader(std::fs::File::open(config_filename)?)?; let start = std::time::Instant::now(); let device = candle_examples::device(args.cpu)?; let dtype = if device.is_cuda() { DType::BF16 } else { DType::F32 }; let model = if args.quantized { let vb = candle_transformers::quantized_var_builder::VarBuilder::from_gguf( &filenames[0], &device, )?; Model::Q(QModel::new(&config, vb.pp("model"))?) } else { let vb = unsafe { VarBuilder::from_mmaped_safetensors(&filenames, dtype, &device)? }; Model::B(BModel::new(&config, vb.pp("model"))?) }; println!("loaded the model in {:?}", start.elapsed()); let mut pipeline = TextGeneration::new( model, tokenizer, args.seed, args.temperature, args.top_p, args.top_k, args.repeat_penalty, args.repeat_last_n, &device, ); pipeline.run(&args.prompt, args.sample_len)?; Ok(()) }
candle/candle-examples/examples/recurrent-gemma/main.rs/0
{ "file_path": "candle/candle-examples/examples/recurrent-gemma/main.rs", "repo_id": "candle", "token_count": 4698 }
## candle-rwkv The [RWKV model](https://wiki.rwkv.com/) is a recurrent neural network model with performance on par with transformer architectures. Several variants are available, candle implements the v5 and v6 versions and can be used with Eagle 7B([blog post](https://blog.rwkv.com/p/eagle-7b-soaring-past-transformers)). ```bash $ cargo run --example rwkv --release -- --prompt "The smallest prime is " avx: true, neon: false, simd128: false, f16c: true temp: 0.00 repeat-penalty: 1.10 repeat-last-n: 64 The smallest prime is ϕ(2) = 2. The smallest composite is ϕ(3) = 3. The smallest perfect number is ϕ(5) = 5. The smallest perfect square is ϕ(4) = 4. The smallest perfect cube is ϕ(6) = 6. ```
candle/candle-examples/examples/rwkv/README.md/0
{ "file_path": "candle/candle-examples/examples/rwkv/README.md", "repo_id": "candle", "token_count": 235 }
# candle-stable-diffusion-3: Candle Implementation of Stable Diffusion 3/3.5 ![](assets/stable-diffusion-3.jpg) *A cute rusty robot holding a candle torch in its hand, with glowing neon text \"LETS GO RUSTY\" displayed on its chest, bright background, high quality, 4k*, generated by Stable Diffusion 3 Medium Stable Diffusion 3 Medium is a text-to-image model based on Multimodal Diffusion Transformer (MMDiT) architecture. - [huggingface repo](https://huggingface.co/stabilityai/stable-diffusion-3-medium) - [research paper](https://arxiv.org/pdf/2403.03206) - [announcement blog post](https://stability.ai/news/stable-diffusion-3-medium) Stable Diffusion 3.5 is a family of text-to-image models with latest improvements: - [announcement blog post](https://stability.ai/news/introducing-stable-diffusion-3-5) It has three variants: - [Stable Diffusion 3.5 Large](https://huggingface.co/stabilityai/stable-diffusion-3.5-large) @ 8.1b params, with scaled and slightly modified MMDiT architecture. - [Stable Diffusion 3.5 Large Turbo](https://huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo) distilled version that enables 4-step inference. - [Stable Diffusion 3.5 Medium](https://huggingface.co/stabilityai/stable-diffusion-3.5-medium) @ 2.5b params, with improved MMDiT-X architecture. ## Getting access to the weights The weights of Stable Diffusion 3/3.5 is released by Stability AI under the Stability Community License. You will need to accept the conditions and acquire a license by visiting the repos on HuggingFace Hub to gain access to the weights for your HuggingFace account. To allow your computer to gain access to the public-gated repos on HuggingFace, you might need to create a [HuggingFace User Access Tokens](https://huggingface.co/docs/hub/en/security-tokens) (recommended) and log in on your computer if you haven't done that before. A convenient way to do the login is to use [huggingface-cli](https://huggingface.co/docs/huggingface_hub/en/guides/cli): ```shell huggingface-cli login ``` and you will be prompted to enter your token. On the first run, the weights will be automatically downloaded from the Huggingface Hub. After the download, the weights will be [cached](https://huggingface.co/docs/datasets/en/cache) and remain accessible locally. ## Running the model ```shell cargo run --example stable-diffusion-3 --release --features=cuda -- \ --which 3-medium --height 1024 --width 1024 \ --prompt 'A cute rusty robot holding a candle torch in its hand, with glowing neon text \"LETS GO RUSTY\" displayed on its chest, bright background, high quality, 4k' ``` To use different models, changed the value of `--which` option. (Possible values: `3-medium`, `3.5-large`, `3.5-large-turbo` and `3.5-medium`). To display other options available, ```shell cargo run --example stable-diffusion-3 --release --features=cuda -- --help ``` If GPU supports, Flash-Attention is a strongly recommended feature as it can greatly improve the speed of inference, as MMDiT is a transformer model heavily depends on attentions. To utilize [candle-flash-attn](https://github.com/huggingface/candle/tree/main/candle-flash-attn) in the demo, you will need both `--features flash-attn` and `--use-flash-attn`. ```shell cargo run --example stable-diffusion-3 --release --features=cuda,flash-attn -- --use-flash-attn ... 
``` ## Performance Benchmark Below benchmark is done with Stable Diffusion 3 Medium by generating 1024-by-1024 image from 28 steps of Euler sampling and measure the average speed (iteration per seconds). [candle](https://github.com/huggingface/candle) and [candle-flash-attn](https://github.com/huggingface/candle/tree/main/candle-flash-attn) is based on the commit of [0d96ec3](https://github.com/huggingface/candle/commit/0d96ec31e8be03f844ed0aed636d6217dee9c7bc). System specs (Desktop PCIE 5 x8/x8 dual-GPU setup): - Operating System: Ubuntu 23.10 - CPU: i9 12900K w/o overclocking. - RAM: 64G dual-channel DDR5 @ 4800 MT/s | Speed (iter/s) | w/o flash-attn | w/ flash-attn | | -------------- | -------------- | ------------- | | RTX 3090 Ti | 0.83 | 2.15 | | RTX 4090 | 1.72 | 4.06 |
candle/candle-examples/examples/stable-diffusion-3/README.md/0
{ "file_path": "candle/candle-examples/examples/stable-diffusion-3/README.md", "repo_id": "candle", "token_count": 1343 }
use candle::{IndexOp, Result, Tensor, D}; use tokenizers::Tokenizer; const LANGUAGES: [(&str, &str); 99] = [ ("en", "english"), ("zh", "chinese"), ("de", "german"), ("es", "spanish"), ("ru", "russian"), ("ko", "korean"), ("fr", "french"), ("ja", "japanese"), ("pt", "portuguese"), ("tr", "turkish"), ("pl", "polish"), ("ca", "catalan"), ("nl", "dutch"), ("ar", "arabic"), ("sv", "swedish"), ("it", "italian"), ("id", "indonesian"), ("hi", "hindi"), ("fi", "finnish"), ("vi", "vietnamese"), ("he", "hebrew"), ("uk", "ukrainian"), ("el", "greek"), ("ms", "malay"), ("cs", "czech"), ("ro", "romanian"), ("da", "danish"), ("hu", "hungarian"), ("ta", "tamil"), ("no", "norwegian"), ("th", "thai"), ("ur", "urdu"), ("hr", "croatian"), ("bg", "bulgarian"), ("lt", "lithuanian"), ("la", "latin"), ("mi", "maori"), ("ml", "malayalam"), ("cy", "welsh"), ("sk", "slovak"), ("te", "telugu"), ("fa", "persian"), ("lv", "latvian"), ("bn", "bengali"), ("sr", "serbian"), ("az", "azerbaijani"), ("sl", "slovenian"), ("kn", "kannada"), ("et", "estonian"), ("mk", "macedonian"), ("br", "breton"), ("eu", "basque"), ("is", "icelandic"), ("hy", "armenian"), ("ne", "nepali"), ("mn", "mongolian"), ("bs", "bosnian"), ("kk", "kazakh"), ("sq", "albanian"), ("sw", "swahili"), ("gl", "galician"), ("mr", "marathi"), ("pa", "punjabi"), ("si", "sinhala"), ("km", "khmer"), ("sn", "shona"), ("yo", "yoruba"), ("so", "somali"), ("af", "afrikaans"), ("oc", "occitan"), ("ka", "georgian"), ("be", "belarusian"), ("tg", "tajik"), ("sd", "sindhi"), ("gu", "gujarati"), ("am", "amharic"), ("yi", "yiddish"), ("lo", "lao"), ("uz", "uzbek"), ("fo", "faroese"), ("ht", "haitian creole"), ("ps", "pashto"), ("tk", "turkmen"), ("nn", "nynorsk"), ("mt", "maltese"), ("sa", "sanskrit"), ("lb", "luxembourgish"), ("my", "myanmar"), ("bo", "tibetan"), ("tl", "tagalog"), ("mg", "malagasy"), ("as", "assamese"), ("tt", "tatar"), ("haw", "hawaiian"), ("ln", "lingala"), ("ha", "hausa"), ("ba", "bashkir"), ("jw", "javanese"), ("su", "sundanese"), ]; /// Returns the token id for the selected language. pub fn detect_language( model: &mut super::Model, tokenizer: &Tokenizer, mel: &Tensor, ) -> Result<u32> { let (_bsize, _, seq_len) = mel.dims3()?; let mel = mel.narrow( 2, 0, usize::min(seq_len, model.config().max_source_positions), )?; let device = mel.device(); let language_token_ids = LANGUAGES .iter() .map(|(t, _)| crate::token_id(tokenizer, &format!("<|{t}|>"))) .collect::<Result<Vec<_>>>()?; let sot_token = crate::token_id(tokenizer, crate::m::SOT_TOKEN)?; let audio_features = model.encoder_forward(&mel, true)?; let tokens = Tensor::new(&[[sot_token]], device)?; let language_token_ids = Tensor::new(language_token_ids.as_slice(), device)?; let ys = model.decoder_forward(&tokens, &audio_features, true)?; let logits = model.decoder_final_linear(&ys.i(..1)?)?.i(0)?.i(0)?; let logits = logits.index_select(&language_token_ids, 0)?; let probs = candle_nn::ops::softmax(&logits, D::Minus1)?; let probs = probs.to_vec1::<f32>()?; let mut probs = LANGUAGES.iter().zip(probs.iter()).collect::<Vec<_>>(); probs.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1)); for ((_, language), p) in probs.iter().take(5) { println!("{language}: {p}") } let language = crate::token_id(tokenizer, &format!("<|{}|>", probs[0].0 .0))?; Ok(language) }
candle/candle-examples/examples/whisper/multilingual.rs/0
{ "file_path": "candle/candle-examples/examples/whisper/multilingual.rs", "repo_id": "candle", "token_count": 1846 }
/****************************************************************************** * Copyright (c) 2024, Tri Dao. ******************************************************************************/ #pragma once #include "philox.cuh" #include "utils.h" namespace flash { struct Dropout { const unsigned long long seed, offset; const uint8_t p_dropout_in_uint8_t; __forceinline__ __device__ Dropout(const unsigned long long seed, const unsigned long long offset, const uint8_t p_dropout_in_uint8_t, const int bid, const int hid, const int tid, const int nheads) : seed(seed) , offset(offset + (bid * nheads + hid) * 32 + tid % 32) , p_dropout_in_uint8_t(p_dropout_in_uint8_t) { } template <bool encode_dropout_in_sign_bit=false, typename Engine, typename Layout> __forceinline__ __device__ void apply_dropout(Tensor<Engine, Layout> &tensor_, int block_row_start, int block_col_start, int block_row_stride) { // convert shape from (4, MMA_M, MMA_N) to (8, MMA_M, MMA_N / 2) Tensor tensor = make_tensor(tensor_.data(), flash::convert_layout_acc_dropout(tensor_.layout())); using T = typename Engine::value_type; auto encode_dropout = [](bool keep, T val) { return keep ? val : (encode_dropout_in_sign_bit ? -val : T(0)); }; static_assert(decltype(size<2>(tensor))::value % 2 == 0); const uint16_t p_dropout_8bit_in_uint16_t = uint16_t(p_dropout_in_uint8_t); const uint32_t p_dropout_8bit_in_uint32_t = (uint32_t(p_dropout_8bit_in_uint16_t) << 16) | uint32_t(p_dropout_8bit_in_uint16_t); // if (cute::thread0()) { printf("threshold2 = 0x%x\n", p_dropout_8bit_in_uint32_t); } #pragma unroll for (int m = 0; m < size<1>(tensor); ++m, block_row_start += block_row_stride) { uint2 rowcol = make_uint2(block_row_start, block_col_start); #pragma unroll for (int n = 0; n < size<2>(tensor) / 2; ++n, ++rowcol.y) { // if (cute::thread(32, 0)) { printf("m = %d, n = %d, row = %d, col = %d\n", m, n, int(rowcol.x), int(rowcol.y));} uint4 random_uint4 = flash::philox(seed, reinterpret_cast<unsigned long long&>(rowcol), offset); // if (cute::thread0()) { printf("philox = %u, %d, %d, %d\n", random_uint4.x, random_uint4.y, random_uint4.z, random_uint4.w);} uint8_t (&rnd_8)[16] = reinterpret_cast<uint8_t (&)[16]>(random_uint4); // Special implementation for 16-bit types: we duplicate the threshold to the // low and high 16 bits of a 32-bit value, then use the f16x2 comparison instruction // to get a mask. The low 16 bits of the mask will be either 0xffff or 0x0000, // and the high 16 bits will be either 0xffff or 0x0000, depending on whether // the random value is less than the threshold. // We then do a bit-wise AND between the mask and the original value (in 32-bit). // We're exploiting the fact that floating point comparison is equivalent to integer // comparison, since we're comparing unsigned integers whose top 8-bits are zero. 
if (!encode_dropout_in_sign_bit && (std::is_same<T, cutlass::half_t>::value || std::is_same<T, cutlass::bfloat16_t>::value)) { uint16_t rnd_16[16]; #pragma unroll for (int i = 0; i < 16; i++) { rnd_16[i] = uint16_t(rnd_8[i]); } uint32_t (&rnd_32)[8] = reinterpret_cast<uint32_t (&)[8]>(rnd_16); #pragma unroll for (int j = 0; j < 2; j++) { Tensor tensor_uint32 = recast<uint32_t>(tensor(_, m, n * 2 + j)); // if (cute::thread0()) { printf("random = 0x%x, 0x%x, 0x%x, 0x%x\n", rnd_32[j * 4 + 0], rnd_32[j * 4 + 1], rnd_32[j * 4 + 2], rnd_32[j * 4 + 3]); } // if (cute::thread0()) { printf("tensor_uint32 = 0x%x, 0x%x, 0x%x, 0x%x\n", tensor_uint32(0), tensor_uint32(1), tensor_uint32(2), tensor_uint32(3)); } #pragma unroll for (int i = 0; i < 4; i++) { uint32_t mask; asm volatile("set.le.u32.f16x2 %0, %1, %2;\n" : "=r"(mask) : "r"(rnd_32[j * 4 + i]), "r"(p_dropout_8bit_in_uint32_t)); tensor_uint32(i) &= mask; } // if (cute::thread0()) { printf("tensor_uint32 = 0x%x, 0x%x, 0x%x, 0x%x\n", tensor_uint32(0), tensor_uint32(1), tensor_uint32(2), tensor_uint32(3)); } } } else { #pragma unroll for (int j = 0; j < 2; j++) { #pragma unroll for (int i = 0; i < 8; i++) { tensor(i, m, n * 2 + j) = encode_dropout(rnd_8[j * 8 + i] <= p_dropout_in_uint8_t, tensor(i, m, n * 2 + j)); } Tensor tensor_uint32 = recast<uint32_t>(tensor(_, m, n * 2 + j)); // if (cute::thread0()) { printf("tensor_uint32 = 0x%x, 0x%x, 0x%x, 0x%x\n", tensor_uint32(0), tensor_uint32(1), tensor_uint32(2), tensor_uint32(3)); } } } // // if ((threadIdx.x == 0) && (blockIdx.x == 0) && (blockIdx.y == 0)) { // // printf("n = %d, ph Philox: %u, %u, %u, %u\n", n, rnd_8.x, rnd_8.y, rnd_8.z, rnd_8.w); // // } } } } }; } // namespace flash
candle/candle-flash-attn/kernels/dropout.h/0
{ "file_path": "candle/candle-flash-attn/kernels/dropout.h", "repo_id": "candle", "token_count": 3021 }
/****************************************************************************** * Copyright (c) 2023, Tri Dao. ******************************************************************************/ #pragma once #include <assert.h> #include <stdint.h> #include <stdlib.h> #include <cuda_fp16.h> #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800 #include <cuda_bf16.h> #endif #include <cute/tensor.hpp> #include <cutlass/array.h> #include <cutlass/cutlass.h> #include <cutlass/numeric_conversion.h> #include <cutlass/numeric_types.h> //////////////////////////////////////////////////////////////////////////////////////////////////// namespace flash { //////////////////////////////////////////////////////////////////////////////////////////////////// template<typename T> __forceinline__ __device__ uint32_t relu2(const uint32_t x); template<> __forceinline__ __device__ uint32_t relu2<cutlass::half_t>(const uint32_t x) { uint32_t res; const uint32_t zero = 0u; #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800 asm volatile("max.f16x2 %0, %1, %2;\n" : "=r"(res) : "r"(x), "r"(zero)); #else asm volatile( \ "{\n" \ "\t .reg .f16x2 sela;\n" \ "\t set.gtu.u32.f16x2 sela, %1, %2;\n" \ "\t and.b32 %0, sela, %1;\n" "}\n" : "=r"(res) : "r"(x), "r"(zero)); #endif return res; } #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800 template<> __forceinline__ __device__ uint32_t relu2<cutlass::bfloat16_t>(const uint32_t x) { uint32_t res; const uint32_t zero = 0u; asm volatile("max.bf16x2 %0, %1, %2;\n" : "=r"(res) : "r"(x), "r"(zero)); return res; } #endif //////////////////////////////////////////////////////////////////////////////////////////////////// #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800 template<typename T> __forceinline__ __device__ uint32_t convert_relu2(const float2 x); template<> __forceinline__ __device__ uint32_t convert_relu2<cutlass::half_t>(const float2 x) { uint32_t res; const uint32_t a = reinterpret_cast<const uint32_t&>(x.x); const uint32_t b = reinterpret_cast<const uint32_t&>(x.y); asm volatile("cvt.rn.relu.f16x2.f32 %0, %1, %2;\n" : "=r"(res) : "r"(b), "r"(a)); return res; } template<> __forceinline__ __device__ uint32_t convert_relu2<cutlass::bfloat16_t>(const float2 x) { uint32_t res; const uint32_t a = reinterpret_cast<const uint32_t&>(x.x); const uint32_t b = reinterpret_cast<const uint32_t&>(x.y); asm volatile("cvt.rn.relu.bf16x2.f32 %0, %1, %2;\n" : "=r"(res) : "r"(b), "r"(a)); return res; } #endif //////////////////////////////////////////////////////////////////////////////////////////////////// template<typename T> struct MaxOp { __device__ __forceinline__ T operator()(T const & x, T const & y) { return x > y ? 
x : y; } }; template <> struct MaxOp<float> { // This is slightly faster __device__ __forceinline__ float operator()(float const &x, float const &y) { return max(x, y); } }; //////////////////////////////////////////////////////////////////////////////////////////////////// template<typename T> struct SumOp { __device__ __forceinline__ T operator()(T const & x, T const & y) { return x + y; } }; //////////////////////////////////////////////////////////////////////////////////////////////////// template<int THREADS> struct Allreduce { static_assert(THREADS == 32 || THREADS == 16 || THREADS == 8 || THREADS == 4); template<typename T, typename Operator> static __device__ __forceinline__ T run(T x, Operator &op) { constexpr int OFFSET = THREADS / 2; x = op(x, __shfl_xor_sync(uint32_t(-1), x, OFFSET)); return Allreduce<OFFSET>::run(x, op); } }; //////////////////////////////////////////////////////////////////////////////////////////////////// template<> struct Allreduce<2> { template<typename T, typename Operator> static __device__ __forceinline__ T run(T x, Operator &op) { x = op(x, __shfl_xor_sync(uint32_t(-1), x, 1)); return x; } }; //////////////////////////////////////////////////////////////////////////////////////////////////// template<bool A_in_regs=false, bool B_in_regs=false, typename Tensor0, typename Tensor1, typename Tensor2, typename Tensor3, typename Tensor4, typename TiledMma, typename TiledCopyA, typename TiledCopyB, typename ThrCopyA, typename ThrCopyB> __forceinline__ __device__ void gemm(Tensor0 &acc, Tensor1 &tCrA, Tensor2 &tCrB, Tensor3 const& tCsA, Tensor4 const& tCsB, TiledMma tiled_mma, TiledCopyA smem_tiled_copy_A, TiledCopyB smem_tiled_copy_B, ThrCopyA smem_thr_copy_A, ThrCopyB smem_thr_copy_B) { CUTE_STATIC_ASSERT_V(size<1>(tCrA) == size<1>(acc)); // MMA_M CUTE_STATIC_ASSERT_V(size<1>(tCrB) == size<2>(acc)); // MMA_N CUTE_STATIC_ASSERT_V(size<2>(tCrA) == size<2>(tCrB)); // MMA_K Tensor tCrA_copy_view = smem_thr_copy_A.retile_D(tCrA); CUTE_STATIC_ASSERT_V(size<1>(tCsA) == size<1>(tCrA_copy_view)); // M Tensor tCrB_copy_view = smem_thr_copy_B.retile_D(tCrB); CUTE_STATIC_ASSERT_V(size<1>(tCsB) == size<1>(tCrB_copy_view)); // N if (!A_in_regs) { cute::copy(smem_tiled_copy_A, tCsA(_, _, _0{}), tCrA_copy_view(_, _, _0{})); } if (!B_in_regs) { cute::copy(smem_tiled_copy_B, tCsB(_, _, _0{}), tCrB_copy_view(_, _, _0{})); } #pragma unroll for (int i = 0; i < size<2>(tCrA); ++i) { if (i < size<2>(tCrA) - 1) { if (!A_in_regs) { cute::copy(smem_tiled_copy_A, tCsA(_, _, i + 1), tCrA_copy_view(_, _, i + 1)); } if (!B_in_regs) { cute::copy(smem_tiled_copy_B, tCsB(_, _, i + 1), tCrB_copy_view(_, _, i + 1)); } } cute::gemm(tiled_mma, tCrA(_, _, i), tCrB(_, _, i), acc); } } //////////////////////////////////////////////////////////////////////////////////////////////////// template<typename Tensor0, typename Tensor1, typename Tensor2, typename Tensor3, typename TiledMma, typename TiledCopy, typename ThrCopy> __forceinline__ __device__ void gemm_rs(Tensor0 &acc, Tensor1 &tCrA, Tensor2 &tCrB, Tensor3 const& tCsB, TiledMma tiled_mma, TiledCopy smem_tiled_copy_B, ThrCopy smem_thr_copy_B) { CUTE_STATIC_ASSERT_V(size<1>(tCrA) == size<1>(acc)); // MMA_M CUTE_STATIC_ASSERT_V(size<1>(tCrB) == size<2>(acc)); // MMA_N CUTE_STATIC_ASSERT_V(size<2>(tCrA) == size<2>(tCrB)); // MMA_K Tensor tCrB_copy_view = smem_thr_copy_B.retile_D(tCrB); CUTE_STATIC_ASSERT_V(size<1>(tCsB) == size<1>(tCrB_copy_view)); // N cute::copy(smem_tiled_copy_B, tCsB(_, _, _0{}), tCrB_copy_view(_, _, _0{})); #pragma unroll 
for (int i = 0; i < size<2>(tCrA); ++i) { if (i < size<2>(tCrA) - 1) { cute::copy(smem_tiled_copy_B, tCsB(_, _, i + 1), tCrB_copy_view(_, _, i + 1)); } cute::gemm(tiled_mma, tCrA(_, _, i), tCrB(_, _, i), acc); } } //////////////////////////////////////////////////////////////////////////////////////////////////// // Convert acc_layout from (MMA=4, MMA_M, MMA_N) to (nrow=(2, MMA_M), ncol=(2, MMA_N)) template<typename Layout> __forceinline__ __device__ auto convert_layout_acc_rowcol(Layout acc_layout) { static_assert(decltype(size<0>(acc_layout))::value == 4); static_assert(decltype(rank(acc_layout))::value == 3); auto l = logical_divide(acc_layout, Shape<_2>{}); // ((2, 2), MMA_M, MMA_N) return make_layout(make_layout(get<0, 1>(l), get<1>(l)), make_layout(get<0, 0>(l), get<2>(l))); }; //////////////////////////////////////////////////////////////////////////////////////////////////// // Convert acc_layout from (MMA=4, MMA_M, MMA_N) to ((4, 2), MMA_M, MMA_N / 2) // if using m16n8k16, or to (4, MMA_M, MMA_N) if using m16n8k8. template<typename MMA_traits, typename Layout> __forceinline__ __device__ auto convert_layout_acc_Aregs(Layout acc_layout) { using X = Underscore; static_assert(decltype(size<0>(acc_layout))::value == 4); static_assert(decltype(rank(acc_layout))::value == 3); constexpr int mma_shape_K = get<2>(typename MMA_traits::Shape_MNK{}); static_assert(mma_shape_K == 8 || mma_shape_K == 16); if constexpr (mma_shape_K == 8) { return acc_layout; } else { auto l = logical_divide(acc_layout, Shape<X, X, _2>{}); // (4, MMA_M, (2, MMA_N / 2))) return make_layout(make_layout(get<0>(l), get<2, 0>(l)), get<1>(l), get<2, 1>(l)); } }; //////////////////////////////////////////////////////////////////////////////////////////////////// // Convert acc_layout from (MMA=4, MMA_M, MMA_N) to ((4, 2), MMA_M, MMA_N / 2) template<typename Layout> __forceinline__ __device__ auto convert_layout_acc_dropout(Layout acc_layout) { using X = Underscore; static_assert(decltype(size<0>(acc_layout))::value == 4); static_assert(decltype(rank(acc_layout))::value == 3); auto l = logical_divide(acc_layout, Shape<X, X, _2>{}); // (4, MMA_M, (2, MMA_N / 2))) return make_layout(make_layout(get<0>(l), get<2, 0>(l)), get<1>(l), get<2, 1>(l)); }; //////////////////////////////////////////////////////////////////////////////////////////////////// template <typename To_type, typename Engine, typename Layout> __forceinline__ __device__ auto convert_type(Tensor<Engine, Layout> const &tensor) { using From_type = typename Engine::value_type; constexpr int numel = decltype(size(tensor))::value; cutlass::NumericArrayConverter<To_type, From_type, numel> convert_op; // HACK: this requires tensor to be "contiguous" auto frag = convert_op(*reinterpret_cast<const cutlass::Array<From_type, numel> *>(tensor.data())); return make_tensor(make_rmem_ptr<To_type>(&frag), tensor.layout()); } //////////////////////////////////////////////////////////////////////////////////////////////////// template <typename Engine, typename Layout> __forceinline__ __device__ void relu_(Tensor<Engine, Layout> &tensor) { constexpr int numel = decltype(size(tensor))::value; static_assert(numel % 2 == 0); using value_t = typename Engine::value_type; // HACK: this requires tensor to be "contiguous" Tensor tensor_uint32 = recast<uint32_t>(tensor); #pragma unroll for (int i = 0; i < size(tensor_uint32); ++i) { tensor_uint32(i) = relu2<value_t>(tensor_uint32(i)); } } 
//////////////////////////////////////////////////////////////////////////////////////////////////// // On SM80 and above, we can fuse fp32 -> fp16/bf16 conversion and relu into 1 instruction template <typename To_type, typename Engine, typename Layout> __forceinline__ __device__ auto convert_type_relu(Tensor<Engine, Layout> const &tensor) { using From_type = typename Engine::value_type; static_assert(std::is_same_v<To_type, cutlass::half_t> || std::is_same_v<To_type, cutlass::bfloat16_t>); static_assert(std::is_same_v<float, From_type>); constexpr int numel = decltype(size(tensor))::value; static_assert(numel % 2 == 0); #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800 // HACK: this requires tensor to be "contiguous" Tensor tensor_float2 = recast<float2>(tensor); Tensor out_uint32 = make_tensor<uint32_t>(tensor_float2.layout()); #pragma unroll for (int i = 0; i < size(out_uint32); ++i) { out_uint32(i) = convert_relu2<To_type>(tensor_float2(i)); } Tensor out = make_tensor(make_rmem_ptr<To_type>(out_uint32.data()), tensor.layout()); #else Tensor out = flash::convert_type<To_type>(tensor); flash::relu_(out); #endif return out; } //////////////////////////////////////////////////////////////////////////////////////////////////// // Blocks until all but N previous cp.async.commit_group operations have committed. // This differs from cute::cp_async_wait in that when N = 0 we don't call cp.async.wait_all // (which is equivalent to commit_group then wait_group 0). // Instead we just call cp.async.wait_group 0, which is slightly faster. // https://github.com/NVIDIA/cutlass/blob/master/include/cute/arch/copy_sm80.hpp#L113 template <int N> CUTE_HOST_DEVICE void cp_async_wait() { #if defined(CUTE_ARCH_CP_ASYNC_SM80_ENABLED) asm volatile("cp.async.wait_group %0;\n" :: "n"(N)); #endif } //////////////////////////////////////////////////////////////////////////////////////////////////// template <bool Is_even_MN=true, bool Is_even_K=true, bool Clear_OOB_MN=false, bool Clear_OOB_K=true, typename TiledCopy, typename Engine0, typename Layout0, typename Engine1, typename Layout1, typename Engine2, typename Layout2, typename Engine3, typename Layout3> __forceinline__ __device__ void copy(TiledCopy tiled_copy, Tensor<Engine0, Layout0> const &S, Tensor<Engine1, Layout1> &D, Tensor<Engine2, Layout2> const &identity_MN, Tensor<Engine3, Layout3> const &predicate_K, const int max_MN=0) { CUTE_STATIC_ASSERT_V(rank(S) == Int<3>{}); CUTE_STATIC_ASSERT_V(rank(D) == Int<3>{}); CUTE_STATIC_ASSERT_V(size<0>(S) == size<0>(D)); // MMA CUTE_STATIC_ASSERT_V(size<1>(S) == size<1>(D)); // MMA_M CUTE_STATIC_ASSERT_V(size<2>(S) == size<2>(D)); // MMA_K // There's no case where !Clear_OOB_K && Clear_OOB_MN static_assert(!(Clear_OOB_MN && !Clear_OOB_K)); #pragma unroll for (int m = 0; m < size<1>(S); ++m) { if (Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN) { #pragma unroll for (int k = 0; k < size<2>(S); ++k) { if (Is_even_K || predicate_K(k)) { cute::copy(tiled_copy, S(_, m, k), D(_, m, k)); } else if (Clear_OOB_K) { cute::clear(D(_, m, k)); } } } else if (Clear_OOB_MN) { cute::clear(D(_, m, _)); } } // TD [2023-04-13]: Strange that the code below can cause race condition. // I think it's because the copies are under an if statement. 
// if (Is_even_K) { // #pragma unroll // for (int m = 0; m < size<1>(S); ++m) { // if (Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN) { // copy(tiled_copy, S(_, m, _), D(_, m, _)); // } else if (Clear_OOB_MN) { // clear(D(_, m, _)); // } // } // } else { // It's slightly faster in this case if iterate over K first // #pragma unroll // for (int k = 0; k < size<2>(S); ++k) { // if (predicate_K(k)) { // #pragma unroll // for (int m = 0; m < size<1>(S); ++m) { // if (Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN) { // copy(tiled_copy, S(_, m, k), D(_, m, k)); // } else if (Clear_OOB_MN) { // clear(D(_, m, k)); // } // } // } else if (Clear_OOB_K) { // There's no case where !Clear_OOB_K && Clear_OOB_MN // if (Clear_OOB_MN || Is_even_MN) { // clear(D(_, _, k)); // } else { // #pragma unroll // for (int m = 0; m < size<1>(S); ++m) { // if (!(Is_even_MN || get<0>(identity_MN(0, m, 0)) < max_MN)) { // clear(D(_, m, k)); // } // } // } // } // } // } } //////////////////////////////////////////////////////////////////////////////////////////////////// template <bool Is_even_K=true, typename Engine0, typename Layout0, typename Engine1, typename Layout1, typename Engine2, typename Layout2, typename Engine3, typename Layout3> __forceinline__ __device__ void copy_w_min_idx(Tensor<Engine0, Layout0> const &S, Tensor<Engine1, Layout1> &D, Tensor<Engine2, Layout2> const &identity_MN, Tensor<Engine3, Layout3> const &predicate_K, const int max_MN=0, const int min_MN=0) { CUTE_STATIC_ASSERT_V(rank(S) == Int<3>{}); CUTE_STATIC_ASSERT_V(rank(D) == Int<3>{}); CUTE_STATIC_ASSERT_V(size<0>(S) == size<0>(D)); // MMA CUTE_STATIC_ASSERT_V(size<1>(S) == size<1>(D)); // MMA_M CUTE_STATIC_ASSERT_V(size<2>(S) == size<2>(D)); // MMA_K // if (threadIdx.x == 0 && blockIdx.z == 0) { printf("blockIdx.y = %d, max_MN = %d, min_MN = %d\n", blockIdx.y, max_MN, min_MN); } #pragma unroll for (int m = 0; m < size<1>(S); ++m) { // if (threadIdx.x == 0 && blockIdx.z == 0) { printf("blockIdx.y = %d, m = %d\n", blockIdx.y, get<0>(identity_MN(0, m, 0))); } if (get<0>(identity_MN(0, m, 0)) >= min_MN && get<0>(identity_MN(0, m, 0)) < max_MN) { // if (threadIdx.x == 0 && blockIdx.z == 0) { printf("Inner loop, blockIdx.y = %d, m = %d\n", blockIdx.y, get<0>(identity_MN(0, m, 0))); } #pragma unroll for (int k = 0; k < size<2>(S); ++k) { if (Is_even_K || predicate_K(k)) { cute::copy(S(_, m, k), D(_, m, k)); } } } } } //////////////////////////////////////////////////////////////////////////////////////////////////// template <typename Engine, typename Layout> __forceinline__ __device__ void apply_softcap(Tensor<Engine, Layout> &tensor, const float softcap){ #pragma unroll for (int i = 0; i < size(tensor); ++i) { tensor(i) = cutlass::fast_tanh(tensor(i) * softcap); } } template <typename Engine0, typename Layout0, typename Engine1, typename Layout1> __forceinline__ __device__ void calculate_dtanh(Tensor<Engine0, Layout0> &src_tensor, Tensor<Engine1, Layout1> &dst_tensor, const float softcap){ #pragma unroll for (int i = 0; i < size(src_tensor); ++i) { dst_tensor(i) = (1.f - (src_tensor(i) * src_tensor(i))) * softcap; } } //////////////////////////////////////////////////////////////////////////////////////////////////// } // namespace flash
candle/candle-flash-attn/kernels/utils.h/0
{ "file_path": "candle/candle-flash-attn/kernels/utils.h", "repo_id": "candle", "token_count": 8100 }