ELECTRA¶
The ELECTRA model was proposed in the paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ELECTRA is a new pre-training approach which trains two transformer models: the generator and the discriminator. The generator’s role is to replace tokens in a sequence, and it is therefore trained as a masked language model. The discriminator, which is the model we’re interested in, tries to identify which tokens in the sequence were replaced by the generator.
The abstract from the paper is the following:
Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.
Tips:
ELECTRA is the pre-training approach, therefore there are nearly no changes made to the underlying model: BERT. The only change is the separation of the embedding size and the hidden size: the embedding size is generally smaller, while the hidden size is larger. An additional (linear) projection layer is used to project the embeddings from the embedding size to the hidden size. When the embedding size is the same as the hidden size, no projection layer is used, as sketched in the example below.
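A minimal sketch of this behaviour, assuming the default small configuration values; the attribute name embeddings_project is an implementation detail of the current PyTorch code and may change:

from transformers import ElectraConfig, ElectraModel

# Default small configuration: embedding_size (128) != hidden_size (256),
# so a linear projection is created between the embeddings and the encoder.
config = ElectraConfig()
model = ElectraModel(config)
print(hasattr(model, "embeddings_project"))  # True (attribute name is an implementation detail)

# When both sizes match, no projection layer is created.
config_no_proj = ElectraConfig(embedding_size=256, hidden_size=256)
model_no_proj = ElectraModel(config_no_proj)
print(hasattr(model_no_proj, "embeddings_project"))  # False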
The ELECTRA checkpoints saved using Google Research’s implementation contain both the generator and the discriminator. The conversion script requires the user to specify which model to export into the correct architecture. Once converted to the HuggingFace format, these checkpoints may nevertheless be loaded into any of the available ELECTRA models: the discriminator may be loaded into the ElectraForMaskedLM model, and the generator may be loaded into the ElectraForPreTraining model (the classification head will be randomly initialized, as it doesn’t exist in the generator).
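For illustration, a hedged sketch of the intended pairings between the converted checkpoints and the architectures they were trained for; cross-loading also works but leaves some head weights randomly initialized:

from transformers import ElectraForMaskedLM, ElectraForPreTraining

# Generator checkpoint with the masked language modeling head it was trained with.
generator = ElectraForMaskedLM.from_pretrained('google/electra-small-generator')

# Discriminator checkpoint with the replaced token detection head it was trained with.
discriminator = ElectraForPreTraining.from_pretrained('google/electra-small-discriminator')

# Cross-loading is possible, but the missing head weights are randomly initialized, e.g.:
# ElectraForPreTraining.from_pretrained('google/electra-small-generator')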
The original code can be found here.
ElectraConfig¶
-
class
transformers.ElectraConfig(vocab_size=30522, embedding_size=128, hidden_size=256, num_hidden_layers=12, num_attention_heads=4, intermediate_size=1024, hidden_act='gelu', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, layer_norm_eps=1e-12, pad_token_id=0, **kwargs)[source]¶ This is the configuration class to store the configuration of a
ElectraModel. It is used to instantiate an ELECTRA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ELECTRA google/electra-small-discriminator architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
- Parameters
vocab_size (int, optional, defaults to 30522) – Vocabulary size of the ELECTRA model. Defines the different tokens that can be represented by the inputs_ids passed to the forward method of ElectraModel.
embedding_size (int, optional, defaults to 128) – Dimensionality of the embeddings; when it differs from hidden_size, a linear projection maps the embeddings to the hidden size (see the tips above).
hidden_size (int, optional, defaults to 256) – Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 4) – Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 1024) – Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to “gelu”) – The non-linear activation function (function or string) in the encoder and pooler. If string, “gelu”, “relu”, “swish” and “gelu_new” are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) – The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) – The vocabulary size of the token_type_ids passed into ElectraModel.
initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) – The epsilon used by the layer normalization layers.
Example:
from transformers import ElectraModel, ElectraConfig

# Initializing an ELECTRA electra-base-uncased style configuration
configuration = ElectraConfig()

# Initializing a model from the electra-base-uncased style configuration
model = ElectraModel(configuration)

# Accessing the model configuration
configuration = model.config
ElectraTokenizer¶
-
class
transformers.ElectraTokenizer(vocab_file, do_lower_case=True, do_basic_tokenize=True, never_split=None, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', tokenize_chinese_chars=True, **kwargs)[source]¶ Constructs an Electra tokenizer.
ElectraTokenizer is identical to BertTokenizer and runs end-to-end tokenization: punctuation splitting + wordpiece.
Refer to the superclass BertTokenizer for usage examples and documentation concerning parameters.
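A minimal usage sketch, assuming the google/electra-small-discriminator vocabulary; the calls mirror BertTokenizer:

from transformers import ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
encoded = tokenizer.encode_plus("Hello, my dog is cute", add_special_tokens=True)
print(encoded["input_ids"])       # token ids, including [CLS] and [SEP]
print(encoded["token_type_ids"])  # segment ids (all 0 for a single sentence)
print(encoded["attention_mask"])  # 1 for every real token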
ElectraTokenizerFast¶
-
class
transformers.ElectraTokenizerFast(vocab_file, do_lower_case=True, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', clean_text=True, tokenize_chinese_chars=True, strip_accents=True, wordpieces_prefix='##', **kwargs)[source]¶ Constructs a “Fast” Electra tokenizer (backed by HuggingFace’s tokenizers library).
ElectraTokenizerFast is identical to BertTokenizerFast and runs end-to-end tokenization: punctuation splitting + wordpiece.
Refer to the superclass BertTokenizerFast for usage examples and documentation concerning parameters.
ElectraModel¶
-
class
transformers.ElectraModel(config)[source]¶ The bare Electra Model transformer outputting raw hidden-states without any specific head on top. It is identical to the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the hidden size and embedding size are different. Both the generator and discriminator checkpoints may be loaded into this model. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
- Parameters
config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
config_class¶ alias of
transformers.configuration_electra.ElectraConfig
-
forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None)[source]¶ The
ElectraModelforward method, overrides the__call__()special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Moduleinstance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_ids (
torch.LongTensorof shape(batch_size, sequence_length)) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
transformers.ElectraTokenizer. Seetransformers.PreTrainedTokenizer.encode()andtransformers.PreTrainedTokenizer.encode_plus()for details.attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.token_type_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]:0corresponds to a sentence A token,1corresponds to a sentence B tokenposition_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1].head_mask (
torch.FloatTensorof shape(num_heads,)or(num_layers, num_heads), optional, defaults toNone) – Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1indicates the head is not masked,0indicates the head is masked.inputs_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional, defaults toNone) – Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.encoder_hidden_states (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional, defaults toNone) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.encoder_attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional, defaults toNone) – Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.
- Returns
- last_hidden_state (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)): Sequence of hidden-states at the output of the last layer of the model.
- hidden_states (
tuple(torch.FloatTensor), optional, returned whenconfig.output_hidden_states=True): Tuple of
torch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (
tuple(torch.FloatTensor), optional, returned whenconfig.output_attentions=True): Tuple of
torch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- Return type
tuple(torch.FloatTensor)comprising various elements depending on the configuration (ElectraConfig) and inputs
Examples:
from transformers import ElectraModel, ElectraTokenizer
import torch

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
model = ElectraModel.from_pretrained('google/electra-small-discriminator')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids)

last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
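A hedged variant of the example above, assuming the standard PretrainedConfig flags output_hidden_states and output_attentions, showing how to request the full tuples described under Returns:

from transformers import ElectraModel, ElectraTokenizer
import torch

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
model = ElectraModel.from_pretrained('google/electra-small-discriminator',
                                     output_hidden_states=True, output_attentions=True)

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
outputs = model(input_ids)

last_hidden_state = outputs[0]
hidden_states = outputs[1]  # tuple: embedding output + one tensor per layer
attentions = outputs[2]     # tuple: one attention tensor per layer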
ElectraForPreTraining¶
-
class
transformers.ElectraForPreTraining(config)[source]¶ Electra model with a binary classification head on top as used during pre-training for identifying generated tokens.
It is recommended to load the discriminator checkpoint into this model. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
- Parameters
config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None)[source]¶ The
ElectraForPreTrainingforward method, overrides the__call__()special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Moduleinstance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_ids (
torch.LongTensorof shape(batch_size, sequence_length)) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
transformers.ElectraTokenizer. Seetransformers.PreTrainedTokenizer.encode()andtransformers.PreTrainedTokenizer.encode_plus()for details.attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.token_type_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]:0corresponds to a sentence A token,1corresponds to a sentence B tokenposition_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1].head_mask (
torch.FloatTensorof shape(num_heads,)or(num_layers, num_heads), optional, defaults toNone) – Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1indicates the head is not masked,0indicates the head is masked.inputs_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional, defaults toNone) – Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.encoder_hidden_states (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional, defaults toNone) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.encoder_attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional, defaults toNone) – Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.labels (
torch.LongTensorof shape(batch_size, sequence_length), optional, defaults toNone) – Labels for computing the ELECTRA loss. Input should be a sequence of tokens (seeinput_idsdocstring) Indices should be in[0, 1].0indicates the token is an original token,1indicates the token was replaced.
- Returns
- loss (optional, returned when
labelsis provided)torch.FloatTensorof shape(1,): Total loss of the ELECTRA objective.
- scores (
torch.FloatTensorof shape(batch_size, sequence_length)) Prediction scores of the head (scores for each token before SoftMax).
- hidden_states (
tuple(torch.FloatTensor), optional, returned whenconfig.output_hidden_states=True): Tuple of
torch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (
tuple(torch.FloatTensor), optional, returned whenconfig.output_attentions=True): Tuple of
torch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- Return type
tuple(torch.FloatTensor)comprising various elements depending on the configuration (ElectraConfig) and inputs
Examples:
from transformers import ElectraTokenizer, ElectraForPreTraining
import torch

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
model = ElectraForPreTraining.from_pretrained('google/electra-small-discriminator')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids)

scores = outputs[0]  # per-token replaced/original logits; the loss is only returned when labels are provided
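A hedged follow-up to the example above: the scores are per-token logits, so a sigmoid followed by rounding gives a 0/1 replaced-token decision (0 = original, 1 = predicted as replaced).

predictions = torch.round(torch.sigmoid(scores))  # shape (batch_size, sequence_length)
tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
for token, label in zip(tokens, predictions[0].tolist()):
    print(token, int(label))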
ElectraForMaskedLM¶
-
class
transformers.ElectraForMaskedLM(config)[source]¶ Electra model with a language modeling head on top.
Even though both the discriminator and generator may be loaded into this model, the generator is the only model of the two to have been trained for the masked language modeling task. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
- Parameters
config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, masked_lm_labels=None)[source]¶ The
ElectraForMaskedLMforward method, overrides the__call__()special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Moduleinstance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_ids (
torch.LongTensorof shape(batch_size, sequence_length)) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
transformers.ElectraTokenizer. Seetransformers.PreTrainedTokenizer.encode()andtransformers.PreTrainedTokenizer.encode_plus()for details.attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.token_type_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]:0corresponds to a sentence A token,1corresponds to a sentence B tokenposition_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1].head_mask (
torch.FloatTensorof shape(num_heads,)or(num_layers, num_heads), optional, defaults toNone) – Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1indicates the head is not masked,0indicates the head is masked.inputs_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional, defaults toNone) – Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.encoder_hidden_states (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional, defaults toNone) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.encoder_attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional, defaults toNone) – Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.masked_lm_labels (
torch.LongTensorof shape(batch_size, sequence_length), optional, defaults toNone) – Labels for computing the masked language modeling loss. Indices should be in[-100, 0, ..., config.vocab_size](seeinput_idsdocstring) Tokens with indices set to-100are ignored (masked), the loss is only computed for the tokens with labels in[0, ..., config.vocab_size]
- Returns
- masked_lm_loss (optional, returned when
masked_lm_labelsis provided)torch.FloatTensorof shape(1,): Masked language modeling loss.
- prediction_scores (
torch.FloatTensorof shape(batch_size, sequence_length, config.vocab_size)) Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- hidden_states (
tuple(torch.FloatTensor), optional, returned whenconfig.output_hidden_states=True): Tuple of
torch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (
tuple(torch.FloatTensor), optional, returned whenconfig.output_attentions=True): Tuple of
torch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Examples:
from transformers import ElectraTokenizer, ElectraForMaskedLM
import torch

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-generator')
model = ElectraForMaskedLM.from_pretrained('google/electra-small-generator')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, masked_lm_labels=input_ids)

loss, prediction_scores = outputs[:2]
- Return type
tuple(torch.FloatTensor)comprising various elements depending on the configuration (ElectraConfig) and inputs
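A hedged sketch showing how the generator can fill in an actually masked token; the masked position chosen here is purely illustrative:

from transformers import ElectraTokenizer, ElectraForMaskedLM
import torch

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-generator')
model = ElectraForMaskedLM.from_pretrained('google/electra-small-generator')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
masked_index = 5                                     # mask one token (position chosen for illustration)
input_ids[0, masked_index] = tokenizer.mask_token_id

prediction_scores = model(input_ids)[0]
predicted_id = prediction_scores[0, masked_index].argmax(-1).item()
print(tokenizer.convert_ids_to_tokens([predicted_id]))  # top prediction for the masked position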
ElectraForTokenClassification¶
-
class
transformers.ElectraForTokenClassification(config)[source]¶ Electra model with a token classification head on top.
Both the discriminator and generator may be loaded into this model. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
- Parameters
config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None)[source]¶ The
ElectraForTokenClassificationforward method, overrides the__call__()special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Moduleinstance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_ids (
torch.LongTensorof shape(batch_size, sequence_length)) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
transformers.ElectraTokenizer. Seetransformers.PreTrainedTokenizer.encode()andtransformers.PreTrainedTokenizer.encode_plus()for details.attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.token_type_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]:0corresponds to a sentence A token,1corresponds to a sentence B tokenposition_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional, defaults toNone) –Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1].head_mask (
torch.FloatTensorof shape(num_heads,)or(num_layers, num_heads), optional, defaults toNone) – Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1indicates the head is not masked,0indicates the head is masked.inputs_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional, defaults toNone) – Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.encoder_hidden_states (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional, defaults toNone) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.encoder_attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional, defaults toNone) – Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.labels (
torch.LongTensorof shape(batch_size, sequence_length), optional, defaults toNone) – Labels for computing the token classification loss. Indices should be in[0, ..., config.num_labels - 1].
- Returns
- loss (
torch.FloatTensorof shape(1,), optional, returned whenlabelsis provided) : Classification loss.
- scores (
torch.FloatTensorof shape(batch_size, sequence_length, config.num_labels)) Classification scores (before SoftMax).
- hidden_states (
tuple(torch.FloatTensor), optional, returned whenconfig.output_hidden_states=True): Tuple of
torch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (
tuple(torch.FloatTensor), optional, returned whenconfig.output_attentions=True): Tuple of
torch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- Return type
tuple(torch.FloatTensor)comprising various elements depending on the configuration (ElectraConfig) and inputs
Examples:
from transformers import ElectraTokenizer, ElectraForTokenClassification
import torch

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
model = ElectraForTokenClassification.from_pretrained('google/electra-small-discriminator')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=labels)

loss, scores = outputs[:2]
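A hedged follow-up to the example above: per-token label predictions are the argmax over the score dimension. Note that the token classification head of a freshly loaded pre-trained checkpoint is randomly initialized, so the labels are only meaningful after fine-tuning.

predicted_label_ids = torch.argmax(scores, dim=-1)  # shape (batch_size, sequence_length)
print(predicted_label_ids)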
TFElectraModel¶
-
class
transformers.TFElectraModel(*args, **kwargs)[source]¶ The bare Electra Model transformer outputting raw hidden-states without any specific head on top. It is identical to the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the hidden size and embedding size are different. Both the generator and discriminator checkpoints may be loaded into this model. This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
Note
TF 2.0 models accept two formats as inputs:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional arguments.
This second option is useful when using
the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs). If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})
- Parameters
config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
call(inputs, **kwargs)[source]¶ The
TFElectraModelforward method, overrides the__call__()special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Moduleinstance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_ids (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length)) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
transformers.ElectraTokenizer. Seetransformers.PreTrainedTokenizer.encode()andtransformers.PreTrainedTokenizer.encode_plus()for details.attention_mask (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length), optional, defaults toNone) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.head_mask (
Numpy arrayortf.Tensorof shape(num_heads,)or(num_layers, num_heads), optional, defaults toNone) – Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1indicates the head is not masked,0indicates the head is masked.inputs_embeds (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length, embedding_dim), optional, defaults toNone) – Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.training (
boolean, optional, defaults toFalse) – Whether to activate dropout modules (if set toTrue) during training or to de-activate them (if set toFalse) for evaluation.
- Returns
- last_hidden_state (
tf.Tensorof shape(batch_size, sequence_length, hidden_size)): Sequence of hidden-states at the output of the last layer of the model.
- hidden_states (
tuple(tf.Tensor), optional, returned whenconfig.output_hidden_states=True): tuple of
tf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (
tuple(tf.Tensor), optional, returned whenconfig.output_attentions=True): tuple of
tf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length):Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- Return type
tuple(tf.Tensor)comprising various elements depending on the configuration (ElectraConfig) and inputs
Examples:
import tensorflow as tf
from transformers import ElectraTokenizer, TFElectraModel

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
model = TFElectraModel.from_pretrained('google/electra-small-discriminator')

input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)

last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
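A hedged illustration of the three input formats described in the note above; the tensor names follow the docstring, and whether every combination is accepted may depend on the installed version:

import tensorflow as tf
from transformers import ElectraTokenizer, TFElectraModel

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
model = TFElectraModel.from_pretrained('google/electra-small-discriminator')

encoded = tokenizer.encode_plus("Hello, my dog is cute", add_special_tokens=True)
input_ids = tf.constant([encoded["input_ids"]])
attention_mask = tf.constant([encoded["attention_mask"]])
token_type_ids = tf.constant([encoded["token_type_ids"]])

# 1. keyword arguments (like PyTorch models)
outputs = model(input_ids, attention_mask=attention_mask)
# 2. a list in the order given in the docstring
outputs = model([input_ids, attention_mask, token_type_ids])
# 3. a dictionary keyed by the input names
outputs = model({'input_ids': input_ids, 'attention_mask': attention_mask})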
TFElectraForPreTraining¶
-
class
transformers.TFElectraForPreTraining(*args, **kwargs)[source]¶ Electra model with a binary classification head on top as used during pre-training for identifying generated tokens.
Even though both the discriminator and generator may be loaded into this model, the discriminator is the only model of the two to have the correct classification head to be used for this model.
This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
Note
TF 2.0 models accept two formats as inputs:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional arguments.
This second option is useful when using
the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs). If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})
- Parameters
config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, training=False)[source]¶ The
TFElectraForPreTrainingforward method, overrides the__call__()special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Moduleinstance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_ids (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length)) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
transformers.ElectraTokenizer. Seetransformers.PreTrainedTokenizer.encode()andtransformers.PreTrainedTokenizer.encode_plus()for details.attention_mask (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length), optional, defaults toNone) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.head_mask (
Numpy arrayortf.Tensorof shape(num_heads,)or(num_layers, num_heads), optional, defaults toNone) – Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1indicates the head is not masked,0indicates the head is masked.inputs_embeds (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length, embedding_dim), optional, defaults toNone) – Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.training (
boolean, optional, defaults toFalse) – Whether to activate dropout modules (if set toTrue) during training or to de-activate them (if set toFalse) for evaluation.
- Returns
- scores (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length, config.num_labels)): Prediction scores of the head (scores for each token before SoftMax).
- hidden_states (
tuple(tf.Tensor), optional, returned whenconfig.output_hidden_states=True): tuple of
tf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (
tuple(tf.Tensor), optional, returned whenconfig.output_attentions=True): tuple of
tf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length):Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- Return type
tuple(tf.Tensor)comprising various elements depending on the configuration (ElectraConfig) and inputs
Examples:
import tensorflow as tf
from transformers import ElectraTokenizer, TFElectraForPreTraining

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
model = TFElectraForPreTraining.from_pretrained('google/electra-small-discriminator')

input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)

scores = outputs[0]
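A hedged follow-up to the example above, mirroring the PyTorch discriminator sketch: a sigmoid plus rounding turns the per-token logits into 0/1 replaced-token predictions.

predictions = tf.round(tf.sigmoid(scores))  # 0 = original token, 1 = predicted as replaced
print(predictions)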
TFElectraForMaskedLM¶
-
class
transformers.TFElectraForMaskedLM(*args, **kwargs)[source]¶ Electra model with a language modeling head on top.
Even though both the discriminator and generator may be loaded into this model, the generator is the only model of the two to have been trained for the masked language modeling task.
This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
Note
TF 2.0 models accept two formats as inputs:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional arguments.
This second option is useful when using
the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs). If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})
- Parameters
config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, training=False)[source]¶ The
TFElectraForMaskedLMforward method, overrides the__call__()special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Moduleinstance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_ids (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length)) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
transformers.ElectraTokenizer. Seetransformers.PreTrainedTokenizer.encode()andtransformers.PreTrainedTokenizer.encode_plus()for details.attention_mask (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length), optional, defaults toNone) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.head_mask (
Numpy arrayortf.Tensorof shape(num_heads,)or(num_layers, num_heads), optional, defaults toNone) – Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1indicates the head is not masked,0indicates the head is masked.inputs_embeds (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length, embedding_dim), optional, defaults toNone) – Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.training (
boolean, optional, defaults toFalse) – Whether to activate dropout modules (if set toTrue) during training or to de-activate them (if set toFalse) for evaluation.
- Returns
- prediction_scores (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length, config.vocab_size)): Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- hidden_states (
tuple(tf.Tensor), optional, returned whenconfig.output_hidden_states=True): tuple of
tf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (
tuple(tf.Tensor), optional, returned whenconfig.output_attentions=True): tuple of
tf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length):Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- Return type
tuple(tf.Tensor)comprising various elements depending on the configuration (ElectraConfig) and inputs
Examples:
import tensorflow as tf
from transformers import ElectraTokenizer, TFElectraForMaskedLM

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-generator')
model = TFElectraForMaskedLM.from_pretrained('google/electra-small-generator')

input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)

prediction_scores = outputs[0]
TFElectraForTokenClassification¶
-
class
transformers.TFElectraForTokenClassification(*args, **kwargs)[source]¶ Electra model with a token classification head on top.
Both the discriminator and generator may be loaded into this model.
This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
Note
TF 2.0 models accept two formats as inputs:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional arguments.
This second option is useful when using
the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs). If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})
- Parameters
config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
-
call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, training=False)[source]¶ The
TFElectraForTokenClassificationforward method, overrides the__call__()special method.Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Moduleinstance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.- Parameters
input_ids (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length)) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
transformers.ElectraTokenizer. Seetransformers.PreTrainedTokenizer.encode()andtransformers.PreTrainedTokenizer.encode_plus()for details.attention_mask (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length), optional, defaults toNone) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]:1for tokens that are NOT MASKED,0for MASKED tokens.head_mask (
Numpy arrayortf.Tensorof shape(num_heads,)or(num_layers, num_heads), optional, defaults toNone) – Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]:1indicates the head is not masked,0indicates the head is masked.inputs_embeds (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length, embedding_dim), optional, defaults toNone) – Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.training (
boolean, optional, defaults toFalse) – Whether to activate dropout modules (if set toTrue) during training or to de-activate them (if set toFalse) for evaluation.
- Returns
- scores (
Numpy arrayortf.Tensorof shape(batch_size, sequence_length, config.num_labels)): Classification scores (before SoftMax).
- hidden_states (
tuple(tf.Tensor), optional, returned whenconfig.output_hidden_states=True): tuple of
tf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (
tuple(tf.Tensor), optional, returned whenconfig.output_attentions=True): tuple of
tf.Tensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length):Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- Return type
tuple(tf.Tensor)comprising various elements depending on the configuration (ElectraConfig) and inputs
Examples:
import tensorflow as tf
from transformers import ElectraTokenizer, TFElectraForTokenClassification

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
model = TFElectraForTokenClassification.from_pretrained('google/electra-small-discriminator')

input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)

scores = outputs[0]