repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 687 | closed | Updating tests and doc | - Fix GPT-2 test
- Update the documentation | 06-14-2019 15:18:23 | 06-14-2019 15:18:23 | # [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=h1) Report
> :exclamation: No coverage uploaded for pull request base (`master@cad88e1`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #687 +/- ##
=========================================
Coverage ? 67.14%
=========================================
Files ? 18
Lines ? 3847
Branches ? 0
=========================================
Hits ? 2583
Misses ? 1264
Partials ? 0
```
| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [pytorch\_pretrained\_bert/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `79.39% <100%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=footer). Last update [cad88e1...44e9ddd](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 686 | closed | How to use GPT2 to predict and fit a word into an existing sentence? | Hi, I'd like to know if I can use GPT2 to decorate a simple sentence such as "Peter was sad because his sister had eaten all his candy." to get something like "Tuesday morning the ten years old Peter was sitting in his room and was sad because his mean sister Clara had eaten all his tasty candy with her friends."
Using BERT I can use BertForMaskedLM to insert [MASK] tokens into my simple sentence but results are not very good (a lot of repetitions and words do not really fit in).
Since I heard GPT2 was better for text generation, I'd now like to experiment with your fantastic library, but I cannot really find a starting point for how to insert text into an existing text instead of (what all tutorials do) adding it to the end of the sentence. | 06-14-2019 10:59:11 | 06-14-2019 10:59:11 | You would need an insertion-based transformer model like Google's recent KERMIT (http://arxiv.org/abs/1906.01604). But unfortunately, we currently don't have this model in the library.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 685 | closed | Add method to directly load TF Checkpoints for Bert models | ## Summary
In this PR, I changed some documentation, and added `from_tf_ckpt()` method to `BertPreTrainedModel`.
This method allows users to directly load TensorFlow checkpoints (e.g. `model.ckpt-XXXX` files) for a task specific Bert model like `BertForTokenClassification` or `BertForSequenceClassification`.
**For example:**
```python
model = BertForSequenceClassification.from_tf_ckpt("/path/to/bert/bert_config.json",
                                                   "/path/to/bert/model.ckpt-12000",
                                                   num_labels=num_labels)
```
## Why this is needed:
This functionality has been requested by a number of people, like #676, https://github.com/huggingface/pytorch-pretrained-BERT/issues/676#issuecomment-501778493, #580, https://github.com/huggingface/pytorch-pretrained-BERT/issues/580#issuecomment-497286535, https://github.com/huggingface/pytorch-pretrained-BERT/issues/438#issuecomment-479405364 | 06-14-2019 08:10:40 | 06-14-2019 08:10:40 | Hi, I'm not convinced we need this additional option, see my [comment](https://github.com/huggingface/pytorch-pretrained-BERT/issues/676#issuecomment-502252962) in the associated issue thread.<|||||>As mentioned in, https://github.com/huggingface/pytorch-pretrained-BERT/issues/676#issuecomment-506134506, I recognise that directly loading TF checkpoints is a rather niche use case and will close this PR.<|||||>I don't think it's a niche but the `from_tf` option was there for importing from tf files so I would rather fix it to work in all cases rather than have several ways to import from a tf checkpoint.<|||||>I'll have a look. |
transformers | 684 | closed | Implementation of 15% words masking in pretraining | In the BERT paper, they randomly mask 15% words for pretraining, and that's exactly what they do in the TF version.
https://github.com/google-research/bert/blob/0fce551b55caabcfba52c61e18f34b541aef186a/create_pretraining_data.py#L342
However, the implementation here is a little bit different: instead of randomly selecting 15% of the tokens, it assigns a masking probability of 15% to each token, i.e. each token is masked independently with probability 15%. That means each time we might end up with fewer or more than 15% of the tokens masked.
So, is it correct to mask tokens with an expected rate of 0.15 rather than a fixed 15%?
https://github.com/huggingface/pytorch-pretrained-BERT/blob/f9cde97b313c3218e1b29ea73a42414dfefadb40/examples/lm_finetuning/simple_lm_finetuning.py#L276-L301 | 06-14-2019 01:24:19 | 06-14-2019 01:24:19 | It should be fine. A bit of randomness in the pre-processing of the inputs is never bad when training a deep learning model.<|||||>> It should be fine. A bit of randomness in the pre-processing of the inputs is never bad when training a deep learning model.
I found the same problem: the implementation is different from the TensorFlow one. But the key point is not the fixed 15% probability over all tokens. The PyTorch implementation produces two extreme cases, especially for short sentences like article titles (usually 10-20 characters):
case 1. sentences with too many '[MASK]' tokens
case 2. sentences with no '[MASK]' at all
Both cases cause a drop in performance: case 1 makes the prediction task too difficult for the model, and case 2 produces no loss at all.
Given a corpus with an average sentence length of 10, the TensorFlow implementation would generate 1 '[MASK]' per sentence, but the PyTorch implementation would have:
0.85^10 = 0.19 probability to generate 0 '[MASK]'
0.15 * 0.85^9 * 10 = 0.34 probability to generate 1 '[MASK]'
0.15^2 * 0.85^8 * 45 = 0.27 probability to generate 2 '[MASK]'
0.15^3 * 0.85^7 * 120 = 0.13 probability to generate 3 '[MASK]'
...
If we roughly consider a sentence with about 15% '[MASK]' tokens (here, 1 or 2 masks) to be appropriate, then only 0.34 + 0.27 = 0.61 of the training cases are useful.
And we found this is a very serious problem for short text.
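For readers comparing the two strategies, here is a small self-contained sketch (illustrative only, not code from either repository) of fixed-count masking in the TF style versus the per-token Bernoulli masking used in `simple_lm_finetuning.py`:
```python
import random

MASK_PROB = 0.15

def mask_fixed_count(tokens, rng):
    # TF-style: always mask round(15%) of the positions (at least one)
    num_to_mask = max(1, int(round(len(tokens) * MASK_PROB)))
    return sorted(rng.sample(range(len(tokens)), num_to_mask))

def mask_bernoulli(tokens, rng):
    # simple_lm_finetuning-style: each token is masked independently with prob 0.15
    return [i for i in range(len(tokens)) if rng.random() < MASK_PROB]

rng = random.Random(0)
sentence = "a short ten token sentence just for the masking demo".split()  # 10 tokens
print(len(mask_fixed_count(sentence, rng)))          # always 2 for a 10-token sentence
counts = [len(mask_bernoulli(sentence, rng)) for _ in range(10000)]
print(sum(c == 0 for c in counts) / len(counts))     # ~0.20, matching 0.85**10
```
The fixed-count variant never leaves a short sentence without a mask and never floods it with masks, which is exactly the concern raised above.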
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 683 | closed | Fp16 | 06-13-2019 21:12:21 | 06-13-2019 21:12:21 | ||
transformers | 682 | closed | Can't find gpt2 vocab file. | When I run this
```
tokenizer = GPT2Tokenizer.from_pretrained(pretrained_model_name_or_path='gpt2',cache_dir=None)
```
I am getting this
```
Model name 'gpt2' was not found in model name list (gpt2). We assumed 'gpt2' was a path or url but couldn't find files https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json and https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt at this path or url.
```
Can anyone tell me where I am going wrong? | 06-13-2019 17:19:41 | 06-13-2019 17:19:41 | I got the solution.
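For anyone hitting the same error behind a proxy or firewall: a possible workaround is to fetch the two vocabulary files manually and point the tokenizer at a local folder. A rough sketch follows — the folder name is made up, and the assumption that the tokenizer looks for `vocab.json` and `merges.txt` inside a directory should be double-checked against your installed version:
```python
import os
from urllib.request import urlretrieve
from pytorch_pretrained_bert import GPT2Tokenizer

local_dir = "./gpt2-local"                      # hypothetical folder
os.makedirs(local_dir, exist_ok=True)
urlretrieve("https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json",
            os.path.join(local_dir, "vocab.json"))
urlretrieve("https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt",
            os.path.join(local_dir, "merges.txt"))

tokenizer = GPT2Tokenizer.from_pretrained(local_dir)
```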
|
transformers | 681 | closed | Can BertForMaskedLM be used to predict out-of-vocabulary words? | Hi,
I have this text:
```
[CLS] This is a picture of a boa.
```
And would like to have the predictions of the `BertForMaskedLM` model for the word `boa`, without masking this word.
However, when I tokenize the text to give it to the network, I get:
```
['[CLS]', 'This', 'is', 'a', 'picture', 'of', 'a', 'b', '##oa', '.']
```
And the network gives me predictions for `b` and `##oa`. But nothing relevant. Could I get predictions for `boa`?
| 06-13-2019 15:43:47 | 06-13-2019 15:43:47 | What do you get when you multiply the probabilities for the words in these 2 places? Probability for "b" times probability for "##oa":
['[CLS]', 'This', 'is', 'a', 'picture', 'of', 'a', '[MASK]', '[MASK]', '.']
['[CLS]', 'This', 'is', 'a', 'picture', 'of', 'a', '[MASK]', '##oa', '.']
But there is also a whole-word-masking model released by the Google team; I hope it will be added here soon. It masks all the pieces of a word at the same time, so it gives better accuracy (about 1% on some tasks).
https://github.com/google-research/bert
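A rough sketch of the suggestion above — masking both word pieces and multiplying their probabilities (the model name and token positions follow the example in this thread and are assumptions):
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = BertForMaskedLM.from_pretrained('bert-base-cased')
model.eval()

tokens = ['[CLS]', 'This', 'is', 'a', 'picture', 'of', 'a', '[MASK]', '[MASK]', '.', '[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    logits = model(input_ids)                 # shape (1, seq_len, vocab_size)
probs = torch.softmax(logits[0], dim=-1)

# joint score of the two pieces of "boa" at the two masked positions (7 and 8)
b_id, oa_id = tokenizer.convert_tokens_to_ids(['b', '##oa'])
print(probs[7, b_id].item() * probs[8, oa_id].item())
```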
<|||||>Nothing interesting.
Moreover, I would not like to mask words. I just want to get predictions with the word in clear.<|||||>Well, another solution is to recreate bert with a large vocabulary. This would take days on TPU.
<|||||>Okay thanks for the answer<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 680 | closed | Limit on the input text length? | Hi,
I often get this error:
```
File "/miniconda3/envs/brightwater/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 268, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/miniconda3/envs/brightwater/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/miniconda3/envs/brightwater/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 117, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/miniconda3/envs/brightwater/lib/python3.6/site-packages/torch/nn/functional.py", line 1506, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range at ../aten/src/TH/generic/THTensorEvenMoreMath.cpp:193
```
It only happens for long texts. It doesn't fail on chunks of a long text that is failing.
Is there a limitation on the length of the input text? | 06-13-2019 09:17:14 | 06-13-2019 09:17:14 | Yes, 512 tokens for Bert.<|||||>Thank you :) <|||||>Is there a way to bypass this limit? To increase the number of words? |
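Following up on the 512-token limit above, a minimal sketch of chunking long texts before encoding (the chunk size leaves room for `[CLS]` and `[SEP]`; this is an illustration, not library code):
```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

def chunk_for_bert(text, max_len=512):
    tokens = tokenizer.tokenize(text)
    body = max_len - 2                              # room for [CLS] and [SEP]
    pieces = [tokens[i:i + body] for i in range(0, len(tokens), body)]
    return [['[CLS]'] + piece + ['[SEP]'] for piece in pieces]

for chunk in chunk_for_bert("some very long document " * 200):
    input_ids = tokenizer.convert_tokens_to_ids(chunk)
    assert len(input_ids) <= 512                    # safe for the position embeddings
```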
transformers | 679 | closed | Why are the outputs of the model random? | I tried to get word representations from the pretrained BERT model several times, but the outputs of the model are different for the same word each time. Did I neglect something? I don't know the reason and would sincerely appreciate any help.
The code is:
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM

model = BertModel.from_pretrained('bert-base-uncased')
x = torch.LongTensor([[6541]])
y0 = model(x)[0]
y1 = model(x)[0]
```
In theory, y0 should be equal to y1. However, they are different.
Both y0 and y1 have length 12, in accordance with the 12 layers of the 'bert-base-uncased' model. However, each of the 12 elements of y0 and y1 differs between the two calls.
| 06-13-2019 08:53:25 | 06-13-2019 08:53:25 | They won't be able to help you if you don't provide code for reproducing your issue, as this is not an expected behaviour.<|||||>Thanks a lot for the reminder. The issue has been updated with the code.<|||||>That's true! I can reproduce it also on my computer... Really weird!<|||||>You should use `model.eval()` to deactivate dropout like in the usage examples of the readme.<|||||>It solves the problem, thanks!
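For later readers, a minimal sketch of the deterministic setup that resolves this (`eval()` disables dropout, and `no_grad()` skips graph building; the token id is the one from the example above):
```python
import torch
from pytorch_pretrained_bert import BertModel

model = BertModel.from_pretrained('bert-base-uncased')
model.eval()                                   # disable dropout so repeated calls match

x = torch.LongTensor([[6541]])
with torch.no_grad():
    y0, _ = model(x)                           # (encoded_layers, pooled_output)
    y1, _ = model(x)

assert all(torch.equal(a, b) for a, b in zip(y0, y1))
```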
|
transformers | 678 | closed | Transformer XL ProjectedAdaptiveLogSoftmax bug (maybe?) | In <a href="https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_transfo_xl_utilities.py#L120">this line</a>, shouldn't the output be assigned to `out` when `n_clusters` is 0? Otherwise we run into `UnboundLocalError: local variable 'out' referenced before assignment` | 06-12-2019 20:59:35 | 06-12-2019 20:59:35 | Yes! We don't see that when we use the pre-trained model because the number of clusters is greater than zero anyway. Will fix.<|||||>Thank you. I created a PR since it was a small bug. #690 |
transformers | 677 | closed | Download the model without executing a Python script | Hi,
Is there a command to download a model (e.g. BertForMaskedLM) without having to execute a Python script?
For example, in Spacy, we can do `python -m spacy download en`. | 06-12-2019 15:57:14 | 06-12-2019 15:57:14 | Is this what you want?
```python
PRETRAINED_MODEL_ARCHIVE_MAP = {
'bert-base-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz",
'bert-large-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased.tar.gz",
'bert-base-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased.tar.gz",
'bert-large-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased.tar.gz",
'bert-base-multilingual-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased.tar.gz",
'bert-base-multilingual-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased.tar.gz",
'bert-base-chinese': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese.tar.gz",
}
```
from https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py
I feel like I'm not exactly understanding your question. They have hosted the pytorch dumps of the pretrained BERT models that were released by Google and hosted them on AWS. After downloading the model, what do you wish to do with it?<|||||>Thank you for the answer.
So with your code, I now have the url. But, where should I put the files in my filesystem?
Spacy provides a command to download the weights, put them in the right location in the filesystem, and use them. Is there an equivalent in your repository?
For example, [DeepPavlov](https://github.com/deepmipt/DeepPavlov) provides this command:
```
python -m deeppavlov install -d squad_bert
```
to install and download a model.<|||||>Does my question make sense?<|||||>Not really. What is the reason you are trying to do that?
This library will automatically download pre-trained weights, you don't need to do that yourself (even though you can).<|||||>> Not really.
Do you understand what Spacy & Deeppavlov enable to do? If yes, I am asking if there is something similar here. But, if you don't understand, it surely means it is not possible.
> What is the reason you are trying to do that?
Because, when I put my code in production, I don't want to make the first query very long because it has to download the model.<|||||>I happen to know quite well SpaCy (if you look at the Huggingface github, you will see we have developed a coreference resolution extension for SpaCy, [NeuralCoref](https://github.com/huggingface/neuralcoref), which interfaces directly with the cython internals of SpaCy) so I know the download process they use which is there mainly because they need the models to install as python packages (which we don't need to do here).
You actually shouldn't have to do anything special to avoid a long first query (and we don't do anything special at HuggingFace with the model in production) for the following reason:
The model weights are downloaded and cached when you instantiate the model for the first time, and this should be done before the first query is even received. If you create and load the model at each query, you will experience a very heavy overhead; you should avoid that.
If you want to download the weights yourself (you can also do that), you will need to download the weights, configuration and vocabulary files manually from the URLs that @chrisgzf pointed to and put these in a folder. You can then load the model and tokenizer from that folder as indicated in the readme.<|||||>Okay, thank you for your detailed and interesting answer. So the feature I was asking for does not exist.
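To make the manual route concrete, a sketch of what it could look like (the folder name is hypothetical, and the vocabulary URL follows the `-vocab.txt` naming convention of the other files on that S3 bucket — verify it before relying on it):
```python
import os
from urllib.request import urlretrieve
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

model_dir = "./bert-base-uncased-local"        # hypothetical folder
os.makedirs(model_dir, exist_ok=True)
urlretrieve("https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz",
            os.path.join(model_dir, "bert-base-uncased.tar.gz"))
urlretrieve("https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt",
            os.path.join(model_dir, "vocab.txt"))

# if I recall correctly, from_pretrained also accepts the path to the .tar.gz archive directly
model = BertForMaskedLM.from_pretrained(os.path.join(model_dir, "bert-base-uncased.tar.gz"))
tokenizer = BertTokenizer.from_pretrained(os.path.join(model_dir, "vocab.txt"))
```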
And I miswrote: I didn't want to say "I don't want to make the first query very long", but rather "I don't want to make the first server start very long" for some reasons. |
transformers | 676 | closed | Importing TF checkpoint as BertForTokenClassification | Hello Everyone,
I've been stuck with trying to load TensorFlow checkpoints to be used by `pytorch-pretrained-bert` as `BertForTokenClassification`.
**pytorch-pretrained-BERT Version:** Installed from latest master branch.
**What works:**
```python
config = BertConfig.from_json_file(CONFIG_FILE)
model = BertForPreTraining(config)
model = load_tf_weights_in_bert(model, "/home/bert/pt_baseuncased/model.ckpt-98000")
```
**What I want to do:**
```python
config = BertConfig.from_json_file(CONFIG_FILE)
model = BertForTokenClassification(config, num_labels=num_labels)
# the difference is BertForTokenClassification instead of BertForPreTraining
model = load_tf_weights_in_bert(model, "/home/bert/pt_baseuncased/model.ckpt-98000")
```
**When I try to do this it gives me:**
```python
AttributeError: 'BertForTokenClassification' object has no attribute 'bias'
```
**Full Traceback:**
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-a8225a5966f7> in <module>
1 config = BertConfig.from_json_file(CONFIG_FILE)
2 model = BertForTokenClassification(config, num_labels=10)
----> 3 model = load_tf_weights_in_bert(model, "/home/gzhenfun/bert/pt_baseuncased/model.ckpt-98000")
~/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)
88 pointer = getattr(pointer, 'weight')
89 elif l[0] == 'output_bias' or l[0] == 'beta':
---> 90 pointer = getattr(pointer, 'bias')
91 elif l[0] == 'output_weights':
92 pointer = getattr(pointer, 'weight')
~/anaconda3/envs/bert/lib/python3.6/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
533 return modules[name]
534 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 535 type(self).__name__, name))
536
537 def __setattr__(self, name, value):
AttributeError: 'BertForTokenClassification' object has no attribute 'bias'
```
**To try to resolve this:**
I followed https://github.com/huggingface/pytorch-pretrained-BERT/issues/580#issuecomment-489519231 from #580, and changed my `modeling.py` to this:
```python
pointer = model
for m_name in name:
if re.fullmatch(r'[A-Za-z]+_\d+', m_name):
l = re.split(r'_(\d+)', m_name)
else:
l = [m_name]
if l[0] == 'kernel' or l[0] == 'gamma':
pointer = getattr(pointer, 'weight')
elif l[0] == 'output_bias' or l[0] == 'beta':
pointer = getattr(pointer, 'cls')
# added the line above
pointer = getattr(pointer, 'bias')
elif l[0] == 'output_weights':
pointer = getattr(pointer, 'cls')
# added the line above
pointer = getattr(pointer, 'weight')
elif l[0] == 'squad':
pointer = getattr(pointer, 'classifier')
else:
try:
pointer = getattr(pointer, l[0])
except AttributeError:
print("Skipping {}".format("/".join(name)))
continue
if len(l) >= 2:
num = int(l[1])
pointer = pointer[num]
```
However, that gives me:
```python
AttributeError: 'FusedLayerNorm' object has no attribute 'cls'
```
Anybody here knows how I can fix this and properly import a TF checkpoint as `BertForTokenClassification`?
Will appreciate any help. Thank you! | 06-12-2019 10:30:10 | 06-12-2019 10:30:10 | Hello everyone,
I have temporarily come up with a workaround for this. Not sure if it's the best solution but it works. What I did was I essentially merged what `load_tf_weights_in_bert()` and what part of `BertPreTrainedModel.from_pretrained()` was doing. `BertPreTrainedModel` is the parent class of `BertForTokenClassification`, so if you are trying to do something similar for `BertFor{TaskName}`, it should work too.
```python
def load_BFTC_from_TF_ckpt(bert_config, ckpt_path, num_labels):
    config = BertConfig.from_json_file(bert_config)
    model = BertForPreTraining(config)
    load_tf_weights_in_bert(model, ckpt_path)
    state_dict = model.state_dict()
    model = BertForTokenClassification(config, num_labels=num_labels)

    # Load from a PyTorch state_dict
    old_keys = []
    new_keys = []
    for key in state_dict.keys():
        new_key = None
        if 'gamma' in key:
            new_key = key.replace('gamma', 'weight')
        if 'beta' in key:
            new_key = key.replace('beta', 'bias')
        if new_key:
            old_keys.append(key)
            new_keys.append(new_key)
    for old_key, new_key in zip(old_keys, new_keys):
        state_dict[new_key] = state_dict.pop(old_key)

    missing_keys = []
    unexpected_keys = []
    error_msgs = []
    # copy state_dict so _load_from_state_dict can modify it
    metadata = getattr(state_dict, '_metadata', None)
    state_dict = state_dict.copy()
    if metadata is not None:
        state_dict._metadata = metadata

    def load(module, prefix=''):
        local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
        module._load_from_state_dict(
            state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
        for name, child in module._modules.items():
            if child is not None:
                load(child, prefix + name + '.')

    start_prefix = ''
    if not hasattr(model, 'bert') and any(s.startswith('bert.') for s in state_dict.keys()):
        start_prefix = 'bert.'
    load(model, prefix=start_prefix)
    if len(missing_keys) > 0:
        logger.info("Weights of {} not initialized from pretrained model: {}".format(
            model.__class__.__name__, missing_keys))
    if len(unexpected_keys) > 0:
        logger.info("Weights from pretrained model not used in {}: {}".format(
            model.__class__.__name__, unexpected_keys))
    if len(error_msgs) > 0:
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
            model.__class__.__name__, "\n\t".join(error_msgs)))
    return model

model = load_BFTC_from_TF_ckpt(CONFIG_FILE, "/home/bert/pt_baseuncased/model.ckpt-98000", num_labels)
```<|||||>Hi chrisgzf, thank you for the quick fix. I saw it's been integrated in the latest version.
I am trying to do quite the same using **BertForSequenceClassification** but the "**AttributeError**: 'BertForTokenClassification' object has no attribute 'bias'" still shows up.
Any ideas how I could use your fix for BertForSequenceClassification too ?
Thank you<|||||>Hi @stormskidd,
in my code snippet here (https://github.com/huggingface/pytorch-pretrained-BERT/issues/676#issuecomment-501526327), just change
`model = BertForTokenClassification(config, num_labels=num_labels)`
to
`model = BertForSequenceClassification(config, num_labels=num_labels)`
and it _should_ work.<|||||>Thank you for the quick response.
So I tried this as you said :
```
def load_BFTC_from_TF_ckpt(bert_config, ckpt_path, num_labels):
config = BertConfig.from_json_file(bert_config)
model = BertForSequenceClassification(config, num_labels=num_labels)
load_tf_weights_in_bert(model, ckpt_path)
state_dict=model.state_dict()
model = BertForSequenceClassification(config, num_labels=num_labels)
# Load from a PyTorch state_dict
old_keys = []
new_keys = []
for key in state_dict.keys():
new_key = None
if 'gamma' in key:
new_key = key.replace('gamma', 'weight')
if 'beta' in key:
new_key = key.replace('beta', 'bias')
if new_key:
old_keys.append(key)
new_keys.append(new_key)
for old_key, new_key in zip(old_keys, new_keys):
state_dict[new_key] = state_dict.pop(old_key)
missing_keys = []
unexpected_keys = []
error_msgs = []
# copy state_dict so _load_from_state_dict can modify it
metadata = getattr(state_dict, '_metadata', None)
state_dict = state_dict.copy()
if metadata is not None:
state_dict._metadata = metadata
def load(module, prefix=''):
local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
module._load_from_state_dict(
state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
for name, child in module._modules.items():
if child is not None:
load(child, prefix + name + '.')
start_prefix = ''
if not hasattr(model, 'bert') and any(s.startswith('bert.') for s in state_dict.keys()):
start_prefix = 'bert.'
load(model, prefix=start_prefix)
if len(missing_keys) > 0:
logger.info("Weights of {} not initialized from pretrained model: {}".format(
model.__class__.__name__, missing_keys))
if len(unexpected_keys) > 0:
logger.info("Weights from pretrained model not used in {}: {}".format(
model.__class__.__name__, unexpected_keys))
if len(error_msgs) > 0:
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
model.__class__.__name__, "\n\t".join(error_msgs)))
return model
```
Then:
```
CONFIG_FILE = "Bert/multi_cased_L-12_H-768_A-12/bert_config.json"
model = load_BFTC_from_TF_ckpt(CONFIG_FILE, "model.ckpt-6032", num_labels = 2)
```
But still got the error:
`
<ipython-input-2-8b2ad52bd838> in load_BFTC_from_TF_ckpt(bert_config, ckpt_path, num_labels)
2 config = BertConfig.from_json_file(bert_config)
3 model = BertForSequenceClassification(config, num_labels=num_labels)
----> 4 load_tf_weights_in_bert(model, ckpt_path)
5 state_dict=model.state_dict()
6 model = BertForSequenceClassification(config, num_labels=num_labels)
C:\ProgramData\Anaconda3\Lib\site-packages\pytorch_pretrained_bert\modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)
89
90 elif l[0] == 'output_bias' or l[0] == 'beta':
---> 91 pointer = getattr(pointer, 'bias')
92
93 elif l[0] == 'output_weights':
C:\ProgramData\Anaconda3\Lib\site-packages\torch\nn\modules\module.py in __getattr__(self, name)
537 return modules[name]
538 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 539 type(self).__name__, name))
540
541 def __setattr__(self, name, value):
AttributeError: 'BertForSequenceClassification' object has no attribute 'bias''`
Also, I wandered in the source code where the error occurs :
```
90 elif l[0] == 'output_bias' or l[0] == 'beta':
---> 91 pointer = getattr(pointer, 'bias')
```
I tried to change getattr(pointer, 'bias') to getattr(pointer, 'beta') but then I got a slightly different error:
```
C:\ProgramData\Anaconda3\Lib\site-packages\pytorch_pretrained_bert\modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)
89
90 elif l[0] == 'output_bias' or l[0] == 'beta':
---> 91 pointer = getattr(pointer, 'beta')
92
93 elif l[0] == 'output_weights':
C:\ProgramData\Anaconda3\Lib\site-packages\torch\nn\modules\module.py in __getattr__(self, name)
537 return modules[name]
538 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 539 type(self).__name__, name))
540
541 def __setattr__(self, name, value):
AttributeError: 'BertLayerNorm' object has no attribute 'beta'
```
Hope it helps. Please let me know you think of any workaround for this !
By the way, I'm on windows using Anaconda with Python 3.7.1
Greetings,
Maxime<|||||>Hi Maxime (@stormskidd),
Do read my comment carefully. Change
`model = BertForTokenClassification(config, num_labels=num_labels)`
to
`model = BertForSequenceClassification(config, num_labels=num_labels)`
Please leave `model = BertForPreTraining(config)` (line 3) as is, and do not change it.
Edit: you might want to check out #685 as well. I submitted a PR to make it easier to do something we are trying to do. Maybe the code examples will make it clearer to you.
In line 3 of the function body, the model has to be an instance of `BertForPreTraining` because that's what `load_tf_weights_in_bert()` is designed to work with. I'm not sure if what I'm doing is the proper way or just a jank way, but I'm just trying to copy out the state_dict after the weights are imported as a `BertForPreTraining` instances, then creating a brand new `BertFor[TaskName]` instance, and loading the state_dict into it.
Hope this clears things up.
Cheers.<|||||>Oh, you're right I posted the wrong version of the many things I tried :(
Anyway, I'm afraid the error still shows up with the following:
```
def load_BFTC_from_TF_ckpt(bert_config, ckpt_path, num_labels):
config = BertConfig.from_json_file(bert_config)
model = BertForPreTraining(config)
load_tf_weights_in_bert(model, ckpt_path)
state_dict=model.state_dict()
model = BertForSequenceClassification(config, num_labels=num_labels)
# Load from a PyTorch state_dict
old_keys = []
new_keys = []
for key in state_dict.keys():
new_key = None
if 'gamma' in key:
new_key = key.replace('gamma', 'weight')
if 'beta' in key:
new_key = key.replace('beta', 'bias')
if new_key:
old_keys.append(key)
new_keys.append(new_key)
for old_key, new_key in zip(old_keys, new_keys):
state_dict[new_key] = state_dict.pop(old_key)
missing_keys = []
unexpected_keys = []
error_msgs = []
# copy state_dict so _load_from_state_dict can modify it
metadata = getattr(state_dict, '_metadata', None)
state_dict = state_dict.copy()
if metadata is not None:
state_dict._metadata = metadata
def load(module, prefix=''):
local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
module._load_from_state_dict(
state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
for name, child in module._modules.items():
if child is not None:
load(child, prefix + name + '.')
start_prefix = ''
if not hasattr(model, 'bert') and any(s.startswith('bert.') for s in state_dict.keys()):
start_prefix = 'bert.'
load(model, prefix=start_prefix)
if len(missing_keys) > 0:
logger.info("Weights of {} not initialized from pretrained model: {}".format(
model.__class__.__name__, missing_keys))
if len(unexpected_keys) > 0:
logger.info("Weights from pretrained model not used in {}: {}".format(
model.__class__.__name__, unexpected_keys))
if len(error_msgs) > 0:
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
model.__class__.__name__, "\n\t".join(error_msgs)))
return model
```
Then:
```
CONFIG_FILE = "/Bert/multi_cased_L-12_H-768_A-12/bert_config.json"
model = load_BFTC_from_TF_ckpt(CONFIG_FILE, "model.ckpt-6032", num_labels = 2)
```
And :
```
C:\ProgramData\Anaconda3\Lib\site-packages\pytorch_pretrained_bert\modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)
89
90 elif l[0] == 'output_bias' or l[0] == 'beta':
---> 91 pointer = getattr(pointer, 'bias')
92
93 elif l[0] == 'output_weights':
C:\ProgramData\Anaconda3\Lib\site-packages\torch\nn\modules\module.py in __getattr__(self, name)
537 return modules[name]
538 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 539 type(self).__name__, name))
540
541 def __setattr__(self, name, value):
AttributeError: 'BertForPreTraining' object has no attribute 'bias'
```
Greetings,
Max<|||||>@stormskidd,
this is odd.... I actually specifically tested my code above on `BertForSequenceClassification` as well and I was able to successfully import TF weights. The code snippet looks like it would work...
Just want to check:
- are you running the latest pytorch-pretrained-BERT from upstream master?
- did you make any other changes to the source?
- have you tried https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py and successfully converted your TF ckpt to a pytorch dump? if my code snippet above doesn't work for you, this should at least work for you.
- not sure if this matters, but are you using CUDA? and do you have apex installed?
Do let me know if you face any errors. I'm curious about this issue. Btw, check out my edit in the previous comment.
Chris<|||||>Hi, not sure I fully understand the issue here. What is the kind of tensorflow checkpoint you are trying to convert? Is it a pretrained model like the original Bert checkpoints or is it a fine-tuned model with additional elements (like a classification layer on top)?<|||||>Hi @thomwolf,
I am trying to convert a pretrained model like the original Bert checkpoints, except that I did additional pretraining on top of the released models with `run_pretraining.py` from the BERT repo. I then wish to fine-tune these pretrained models in pytorch, which is why I had to do this conversion. I am not importing any fine-tuned TF models.<|||||>Ok so you can just convert your Tensorflow model using the command line script (see [here in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#Command-line-interface)) store the converted pytorch model in a folder with the configuration file and then load it in a `BertForTokenClassification` model as follow:
`BertForTokenClassification.from_pretrained('PATH_TO_YOUR_CONVERTED_MODEL_FOLDER')`
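For concreteness, a sketch of those two steps in Python (paths are placeholders; the import path of the conversion helper is from memory, so treat it as an assumption and prefer the documented command-line interface if it differs):
```python
import os
import shutil
from pytorch_pretrained_bert import BertForTokenClassification
from pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch import convert_tf_checkpoint_to_pytorch

tf_ckpt = "/path/to/tf/model.ckpt-98000"           # placeholder paths
bert_config = "/path/to/tf/bert_config.json"
out_dir = "/path/to/converted"
os.makedirs(out_dir, exist_ok=True)

convert_tf_checkpoint_to_pytorch(tf_ckpt, bert_config, os.path.join(out_dir, "pytorch_model.bin"))
shutil.copy(bert_config, os.path.join(out_dir, "bert_config.json"))

model = BertForTokenClassification.from_pretrained(out_dir, num_labels=10)
```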
For the tokenizer, you can use the one associated to the original TensorFlow model from which you did the fine-tuning since you probably didn't change the vocabulary itself.<|||||>Hi guys,
I saw your discussion and it gave me the idea to try the following:
It seems like I can use both the function 'load_BFTC_from_TF_ckpt' and the script 'convert_tf_checkpoint_to_pytorch.py' to load a pretrained Google model : multi_cased_L-12_H-768_A-12\bert_model.ckpt.
However, the error occurs when I try to do the same with a fine-tuned model (from the BERT script run_classifier.py run on GluON/MRPC-like data).
Is there any way I can load a fine-tuned TF checkpoint directly in PyTorch? Or do I have to re-finetune it with the pytorch_pretrained_bert library?
Thank you !
Max<|||||>> Oh, you're right I posted the wrong version of the many things I tried :(
>
> Anyway, I'm afraid the error still shows up with the following:
>
> ```
> def load_BFTC_from_TF_ckpt(bert_config, ckpt_path, num_labels):
> config = BertConfig.from_json_file(bert_config)
> model = BertForPreTraining(config)
> load_tf_weights_in_bert(model, ckpt_path)
> state_dict=model.state_dict()
> model = BertForSequenceClassification(config, num_labels=num_labels)
>
> # Load from a PyTorch state_dict
> old_keys = []
> new_keys = []
> for key in state_dict.keys():
> new_key = None
> if 'gamma' in key:
> new_key = key.replace('gamma', 'weight')
> if 'beta' in key:
> new_key = key.replace('beta', 'bias')
> if new_key:
> old_keys.append(key)
> new_keys.append(new_key)
> for old_key, new_key in zip(old_keys, new_keys):
> state_dict[new_key] = state_dict.pop(old_key)
>
> missing_keys = []
> unexpected_keys = []
> error_msgs = []
> # copy state_dict so _load_from_state_dict can modify it
> metadata = getattr(state_dict, '_metadata', None)
> state_dict = state_dict.copy()
> if metadata is not None:
> state_dict._metadata = metadata
>
> def load(module, prefix=''):
> local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
> module._load_from_state_dict(
> state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
> for name, child in module._modules.items():
> if child is not None:
> load(child, prefix + name + '.')
> start_prefix = ''
> if not hasattr(model, 'bert') and any(s.startswith('bert.') for s in state_dict.keys()):
> start_prefix = 'bert.'
> load(model, prefix=start_prefix)
> if len(missing_keys) > 0:
> logger.info("Weights of {} not initialized from pretrained model: {}".format(
> model.__class__.__name__, missing_keys))
> if len(unexpected_keys) > 0:
> logger.info("Weights from pretrained model not used in {}: {}".format(
> model.__class__.__name__, unexpected_keys))
> if len(error_msgs) > 0:
> raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
> model.__class__.__name__, "\n\t".join(error_msgs)))
> return model
> ```
>
> Then:
>
> ```
> CONFIG_FILE = "/Bert/multi_cased_L-12_H-768_A-12/bert_config.json"
> model = load_BFTC_from_TF_ckpt(CONFIG_FILE, "model.ckpt-6032", num_labels = 2)
> ```
>
> And :
>
> ```
> C:\ProgramData\Anaconda3\Lib\site-packages\pytorch_pretrained_bert\modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)
> 89
> 90 elif l[0] == 'output_bias' or l[0] == 'beta':
> ---> 91 pointer = getattr(pointer, 'bias')
> 92
> 93 elif l[0] == 'output_weights':
>
> C:\ProgramData\Anaconda3\Lib\site-packages\torch\nn\modules\module.py in __getattr__(self, name)
> 537 return modules[name]
> 538 raise AttributeError("'{}' object has no attribute '{}'".format(
> --> 539 type(self).__name__, name))
> 540
> 541 def __setattr__(self, name, value):
>
> AttributeError: 'BertForPreTraining' object has no attribute 'bias'
> ```
>
> Greetings,
> Max
you can try:
model = BertForPreTraining.from_pretrained(BERT_DIR, from_tf=True)<|||||>Hello guys,
here is something I found that seems to do the trick for now:
in `modeling.py:`
In class `class BertForSequenceClassification(BertPreTrainedModel):`
Add :
```
self.weight = Variable(torch.ones(2, config.hidden_size), requires_grad=True)
self.bias = Variable(torch.ones(2), requires_grad=True)
```
to the attributes.
Obviously, just change to the right class depending on your needs. I'll let you know if it causes another error, but for now I can load into memory the trained model I couldn't load before.
Greetings,
Max
<|||||>If you use a BertFor* class, the classifier layer will not be initialized.
When I load a TF model for predict/evaluate, I modify **load_tf_weights_in_bert** as follows:
```
if re.fullmatch(r'[A-Za-z]+_\d+', m_name):
    l = re.split(r'_(\d+)', m_name)
else:
    l = [m_name]
if l[0] == 'kernel' or l[0] == 'gamma':
    pointer = getattr(pointer, 'weight')
elif l[0] == 'output_bias' or l[0] == 'beta':
    if pointer == model:
        pointer = getattr(pointer, 'classifier')
    pointer = getattr(pointer, 'bias')
elif l[0] == 'output_weights':
    if pointer == model:
        pointer = getattr(pointer, 'classifier')
    pointer = getattr(pointer, 'weight')
elif l[0] == 'squad':
    pointer = getattr(pointer, 'classifier')
```
<|||||>>
>
> Ok so you can just convert your Tensorflow model using the command line script (see [here in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#Command-line-interface)) store the converted pytorch model in a folder with the configuration file and then load it in a `BertForTokenClassification` model as follow:
> `BertForTokenClassification.from_pretrained('PATH_TO_YOUR_CONVERTED_MODEL_FOLDER')`
>
> For the tokenizer, you can use the one associated to the original TensorFlow model from which you did the fine-tuning since you probably didn't change the vocabulary itself.
Hello @thomwolf,
Yes, I am aware of the conversion script and `from_pretrained()` being able to load full models (PyTorch dumps) from converted TF checkpoints. However, in my use case, I did pre-training with BERT using the script `run_pretraining.py` from the BERT repo, and I wanted to do fine-tuning on the many checkpoint steps that I have saved, so it would make more sense for me to load the checkpoints directly.
However, I am aware that my use case is a very niche one, and the others here are talking about a different use case (loading TF finetuned models). Since my issue has been resolved, I will close this issue. |
transformers | 675 | closed | [hotfix] Fix frozen pooler parameters in SWAG example. | Hotfix for #461 | 06-11-2019 22:14:32 | 06-11-2019 22:14:32 | Thanks @meetshah1995 |
transformers | 674 | closed | Gradual unfreezing and discriminative fine-tuning for BERT | Three of the tips for fine-tuning proposed in ULMFIT are slanted triangular learning rates, gradual unfreezing, and discriminative fine-tuning.
I understand that BERT's default learning rate scheduler does something similar to STLR, but I was wondering if gradual unfreezing and discriminative fine-tuning are considered in BERT's fine-tuning implementations. Has anyone had experience implementing these two features in BERT fine-tuning? I'd like to hear your thoughts on it. Thanks! | 06-11-2019 19:43:10 | 06-11-2019 19:43:10 | I've tried a bit to play with these training schemes on a deep transformer for our [tutorial on Transfer Learning in Natural Language Processing](https://naacl2019.org/program/tutorials/#t4-transfer-learning-in-natural-language-processing) held at NAACL last week but I couldn't get gradual unfreezing and discriminative fine-tuning to out-perform a standard fine-tuning procedure (multi-tasking did help, however).
You can have a look at the results by reading the "Hands-on" parts of the tutorial here: https://tinyurl.com/NAACLTransfer.
You can give it a try yourself with the associated Colab notebook which is here: https://tinyurl.com/NAACLTransferColab (and a full stand-alone codebase is here: https://tinyurl.com/NAACLTransferCode).
It's possible that I just didn't spend enough time scanning the hyper-parameters or maybe these two training variants are better suited to LSTM than Transformers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
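For anyone who still wants to experiment with it, here is a minimal sketch of discriminative fine-tuning (layer-wise learning-rate decay) on a pytorch-pretrained-bert model — the base rate and decay factor are arbitrary choices, not recommendations:
```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)

base_lr, decay = 2e-5, 0.95
groups = []
for name, param in model.named_parameters():
    depth = 12                                     # pooler / classifier get the full LR
    if name.startswith('bert.embeddings'):
        depth = 0
    elif name.startswith('bert.encoder.layer.'):
        depth = int(name.split('.')[3]) + 1        # encoder layers 1..12
    groups.append({'params': [param], 'lr': base_lr * (decay ** (12 - depth))})

optimizer = torch.optim.Adam(groups)               # lower layers get geometrically smaller LRs
```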
<|||||>> I've tried a bit to play with these training schemes on a deep transformer for our [tutorial on Transfer Learning in Natural Language Processing](https://naacl2019.org/program/tutorials/#t4-transfer-learning-in-natural-language-processing) held at NAACL last week but I couldn't get gradual unfreezing and discriminative fine-tuning to out-perform a standard fine-tuning procedure (multi-tasking did help, however).
>
> You can have a look at the results by reading the "Hands-on" parts of the tutorial here: https://tinyurl.com/NAACLTransfer.
>
> You can give it a try your-self with the associated Colab notebook which is here: https://tinyurl.com/NAACLTransferColab (and a full stand-alone codebase is here: https://tinyurl.com/NAACLTransferCode).
>
> It's possible that I just didn't spend enough time scanning the hyper-parameters or maybe these two training variants are better suited to LSTM than Transformers.
I find the standard fine-tuning procedure has a stability issue: a different shuffle order affects the results a lot. I wonder if unfreezing would help BERT fine-tuning reach a relatively stable result?
This is also mentioned in bert paper, they just use different random seeds to get a best result. |
transformers | 673 | closed | LM fine-tuning without NSP | Hello,
I'm thinking about fine-tuning a BERT model using only the Masked LM pre-training objective, and I'd appreciate a bit of guidance. The most straightforward way is probably to modify the simple_lm_finetuning.py script to only do LM fine-tuning. Besides importing BertForMaskedLM instead of BertForPretraining (which has both objectives), what changes should I make, and what potential problems should I consider?
Also, would it make sense to do MLM fine-tuning on a relatively small domain-specific corpus consisting of just the training examples from the datasets? In other words, does it make sense to do LM pretraining on a corpus that includes training examples taken from the datasets for which you want to make downstream predictions? I'm assuming including the dev/test examples in the corpus is out of the question since it'll likely overfit, but what about just the training examples? I'd like hear to people's thoughts about it.
Lastly, was the run_lm_finetuning.py script replaced by the contents in the examples/lm_finetuning directory? If so, there are still 2 references to that script in the README that should be edited.
@Rocketknight1 | 06-11-2019 19:16:23 | 06-11-2019 19:16:23 | To your first question, the inputs will be almost identical, but the token_type_ids argument will be unused, as this is the vector that indicates the split between the two 'sentences' for the NextSentence objective. I'm not familiar with that part of the code - you might be able to just pass `None` for that argument, or maybe you'll need to pass a vector of zeros that has the right shape (i.e. the same length as the sequence). Also, I believe pre-training with just the masked LM objective works okay - it's by far the most important of the two objectives. The BERT paper didn't show any ablation studies, so I don't know what the effect of removing it will be, but I wouldn't worry too much for a fine-tuning task.
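To illustrate the MLM-only setup, a minimal sketch of a single training step with `BertForMaskedLM` (the random masking itself is omitted; the `-1` entries mark positions excluded from the loss, which is the convention the library's loss uses, as far as I remember):
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.train()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)

tokens = ['[CLS]', 'the', '[MASK]', 'sat', 'on', 'the', 'mat', '[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
labels = torch.full_like(input_ids, -1)            # ignore every position by default
labels[0, 2] = tokenizer.convert_tokens_to_ids(['cat'])[0]   # predict "cat" at the mask

loss = model(input_ids, masked_lm_labels=labels)   # token_type_ids default to zeros
loss.backward()
optimizer.step()
optimizer.zero_grad()
```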
Secondly, I wouldn't expect a large benefit from just fine-tuning the LM on a small labelled training corpus. The main benefit arises when you have a large corpus to fine-tune the LM on, but only a small fraction of it is labelled.
Also, I would guess that fine-tuning the LM on the dev/test examples will probably result in test performance that is optimistic compared to the performance on truly unseen data, but I'm not aware of any research in that area. However, this suggests a potential research direction - if you try including the dev/test data in the LM pre-training task and find that it significantly improves dev/test accuracy, compared to text that was not included in the LM pre-training task, that would be an interesting approach to improving the performance of language models!
It would be tricky to take advantage of this effect in practice, but you can imagine ways it might be done - for example, at inference time it might be possible to add new inputs to the LM fine-tuning corpus, and then fine-tune your language model on them followed by retraining the classifier (with your pre-existing labelled data) and only then labelling the new inputs! This would probably be too computationally expensive to be used in many production systems, especially when low latency is required, but for some purposes the improvement in data efficiency and accuracy might be worthwhile.<|||||>Great, thank you!
So it's understood that language modeling (or masked LM) is an effective pre-training objective for learning a general representation of the language. But do you think it makes sense to use a MLM head during the fine-tuning phase? For example, if your main task of interest is sentence classification, perhaps you could fine-tune the pre-trained model on a sentence classification head as well as a LM head in a multi-task learning setting. Intuitively it would have a regularizing effect and potentially lead to better generalization. I'd love to hear your thoughts on that idea!<|||||>Take a look at issue #692 , there's a recent Chinese paper where they tried fine-tuning with domain text as well as doing something very similar to what you're proposing (multi-task fine-tuning) and reported good results.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>>
>
> Hello,
>
> I'm thinking about fine-tuning a BERT model using only the Masked LM pre-training objective, and I'd appreciate a bit of guidance. The most straightforward way is probably to modify the simple_lm_finetuning.py script to only do LM fine-tuning. Besides importing BertForMaskedLM instead of BertForPretraining (which has both objectives), what changes should I make, and what potential problems should I consider?
>
> Also, would it make sense to do MLM fine-tuning on a relatively small domain-specific corpus consisting of just the training examples from the datasets? In other words, does it make sense to do LM pretraining on a corpus that includes training examples taken from the datasets for which you want to make downstream predictions? I'm assuming including the dev/test examples in the corpus is out of the question since it'll likely overfit, but what about just the training examples? I'd like hear to people's thoughts about it.
>
> Lastly, was the run_lm_finetuning.py script replaced by the contents in the examples/lm_finetuning directory? If so, there are still 2 references to that script in the README that should be edited.
>
> @Rocketknight1
Hello @dchang56
Did you try fine-tuning a BERT model using only the Masked LM pre-training objective ?
Was that relatively as good as doing that along with next sentence prediction ?
How much was your block size (maximum sequence length) for that ?<|||||>>
>
> To your first question, the inputs will be almost identical, but the token_type_ids argument will be unused, as this is the vector that indicates the split between the two 'sentences' for the NextSentence objective. I'm not familiar with that part of the code - you might be able to just pass `None` for that argument, or maybe you'll need to pass a vector of zeros that has the right shape (i.e. the same length as the sequence). Also, I believe pre-training with just the masked LM objective works okay - it's by far the most important of the two objectives. The BERT paper didn't show any ablation studies, so I don't know what the effect of removing it will be, but I wouldn't worry too much for a fine-tuning task.
>
> Secondly, I wouldn't expect a large benefit from just fine-tuning the LM on a small labelled training corpus. The main benefit arises when you have a large corpus to fine-tune the LM on, but only a small fraction of it is labelled.
>
> Also, I would guess that fine-tuning the LM on the dev/test examples will probably result in test performance that is optimistic compared to the performance on truly unseen data, but I'm not aware of any research in that area. However, this suggests a potential research direction - if you try including the dev/test data in the LM pre-training task and find that it significantly improves dev/test accuracy, compared to text that was not included in the LM pre-training task, that would be an interesting approach to improving the performance of language models!
>
> It would be tricky to take advantage of this effect in practice, but you can imagine ways it might be done - for example, at inference time it might be possible to add new inputs to the LM fine-tuning corpus, and then fine-tune your language model on them followed by retraining the classifier (with your pre-existing labelled data) and only then labelling the new inputs! This would probably be too computationally expensive to be used in many production systems, especially when low latency is required, but for some purposes the improvement in data efficiency and accuracy might be worthwhile.
Hello @Rocketknight1
Why do you think pre-training with just the masked LM objective works okay?
Is there any article or study about that? Have you got a link to it?
Or have you done any training of BERT without NSP (next sentence prediction) yourself?<|||||>> LM fine-tuning
Could be useful to you: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling.ipynb |
transformers | 672 | closed | Add vocabulary and model config to the finetune output | If you want to use your fine-tuned model to train a classifier you will need the configuration file and the vocabulary file. This PR adds them to both pre-training scripts. | 06-11-2019 11:52:25 | 06-11-2019 11:52:25 | Nice indeed, thanks @oliverguhr! |
transformers | 671 | closed | BERT: what is the difference between step and t_total? | :param t_total: how many training steps (updates) are planned
:param step: which of t_total steps we're on
```python
def get_lr(self, step, nowarn=False):
    """
    :param step: which of t_total steps we're on
    :param nowarn: set to True to suppress warning regarding training beyond specified 't_total' steps
    :return: learning rate multiplier for current update
    """
    if self.t_total < 0:
        return 1.
    progress = float(step) / self.t_total
    ret = self.get_lr_(progress)
    # warning for exceeding t_total (only active with warmup_linear
    if not nowarn and self.warn_t_total and progress > 1. and progress > self.warned_for_t_total_at_progress:
        logger.warning(
            "Training beyond specified 't_total'. Learning rate multiplier set to {}. Please set 't_total' of {} correctly."
            .format(ret, self.__class__.__name__))
        self.warned_for_t_total_at_progress = progress
    # end warning
    return ret
```
| 06-11-2019 06:54:23 | 06-11-2019 06:54:23 |
transformers | 670 | closed | warmup for BertAdam | https://github.com/huggingface/pytorch-pretrained-BERT/blob/ee0308f79ded65dac82c53dfb03e9ff7f06aeee4/examples/run_classifier.py#L860
BertAdam() can update learning rate by itself.
Why update learning rate manually here? | 06-11-2019 05:46:15 | 06-11-2019 05:46:15 | Because we don't use BertAdam in fp16 mode but the optimizer of NVIDIA's apex library.<|||||>OK thank you! |
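Paraphrasing the idea behind that answer (this is a standalone sketch, not an excerpt of run_classifier.py): the apex optimizer has no built-in warmup/decay schedule, so the script recomputes the learning rate itself at every step, which is what the manual update is doing.
```python
import torch

def warmup_linear(progress, warmup=0.1):
    # linear warmup to the peak LR, then linear decay back towards zero
    return progress / warmup if progress < warmup else max((1.0 - progress) / (1.0 - warmup), 0.0)

params = [torch.nn.Parameter(torch.zeros(2))]       # stand-in for the model parameters
optimizer = torch.optim.Adam(params, lr=0.0)        # stand-in for apex's FusedAdam
learning_rate, warmup_proportion, num_train_steps = 2e-5, 0.1, 1000

for global_step in range(num_train_steps):
    # ... forward and backward pass would go here ...
    lr_this_step = learning_rate * warmup_linear(global_step / num_train_steps, warmup_proportion)
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr_this_step            # the manual update BertAdam would otherwise do
    optimizer.step()
    optimizer.zero_grad()
```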
transformers | 669 | closed | `get_final_text` bug when dealing with chinese sentence | Hi,
I set `max_answer_length` to `30`, but I still got really long answers, so I print the `tok_text`, `orig_text` and `final_text` in function `write_predictions`.
```
tok_text: 权 健 公 司 可 能 涉 及 的 刑 事 罪 名 是 否 仅 仅 是 [UNK] 虚 假 广 告 罪
orig_text: 根据相关法律,权健公司可能涉及的刑事罪名是否仅仅是“虚假广告罪”“组织、领导传销活动罪”两个罪名,应该说仍有不少需要进一步深入调查的空间
final_text: 根据相关法律,权健公司可能涉及的刑事罪名是否仅仅是“虚假广告罪”“组织、领导传销活动罪”两个罪名,应该说仍有不少需要进一步深入调查的空间
```
From `tok_text` we can see that the answer should be `权健公司可能涉及的刑事罪名是否仅仅是“虚假广告罪`, however, `final_text` we got turned out to be much longer.
| 06-10-2019 08:14:17 | 06-10-2019 08:14:17 | Perhaps it's a problem with the tokenizer... After stripping, the lengths of the two sequences changed, so `orig_text` is returned...<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 668 | closed | apply Whole Word Masking technique | apply Whole Word Masking technique.
referred to [link](https://github.com/google-research/bert/blob/master/create_pretraining_data.py) | 06-10-2019 03:21:08 | 06-10-2019 03:21:08 | Nice, thanks @jeonsworld |
transformers | 667 | closed | When I use bert-large-uncased to load BERT, a RuntimeError occurs, but bert-base-uncased is OK | using BertModel.from_pretrained(path of bert-large-uncased) caused the error
RuntimeError: $ Torch: invalid memory size -- maybe an overflow? at ..\aten\src\TH\THGeneral.cpp:188
But using BertModel.from_pretrained(path of bert-base-uncased) works.
| 06-10-2019 02:45:39 | 06-10-2019 02:45:39 | Yes, probably overflow. Try a smaller batch size?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 666 | closed | GPT2 generating repetitive text | I was trying to use the pretrained GPT2LMHeadModel for generating texts by feeding some initial English words. But it is always generating repetitive texts.
Input: All
Output: All All the same, the same, the same, the same, the same, the same, the same, the same, the same, the same, the same, the same,
Here is my code:
```python
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from pytorch_pretrained_BERT.pytorch_pretrained_bert.modeling_gpt2 import GPT2LMHeadModel
from pytorch_pretrained_BERT.pytorch_pretrained_bert.tokenization_gpt2 import GPT2Tokenizer
from pytorch_pretrained_BERT.pytorch_pretrained_bert.optimization_openai import OpenAIAdam
from tqdm import tqdm
import torch.optim as optim
import random
import time
import os
import sys
import argparse
from pathlib import Path
from torch.utils.data import Dataset, TensorDataset, DataLoader, SequentialSampler, RandomSampler

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained("gpt2")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
df = pd.read_csv("gpt_test.csv", sep="\t")
df = df.values
# context = tokenizer.encode(context)
model.to(device)

source = []
generated = []
for l in tqdm(range(len(df))):
    source.append(str(df[l, -2]))
    context = tokenizer.encode(str(df[l, -2]))
    past = None
    for i in range(40):
        input_ids = torch.tensor([context])
        input_ids = input_ids.to(device)
        pred, _ = model(input_ids=input_ids)
        predictions = torch.argmax(pred[0, -1, :]).item()
        context.append(predictions)
        if predictions == 2:
            break
    generated_text = tokenizer.decode(context)
    generated.append(generated_text)

df1 = pd.DataFrame({'Source': source, 'Generated': generated})
df1.to_csv("./result_with_gpt.csv", sep="\t")
```
Can someone point out the mistake? I will be highly grateful if the response is fast. | 06-08-2019 09:01:22 | 06-08-2019 09:01:22 | Have you tried the provided GPT-2 generation example? It's here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_gpt2.py<|||||>> Have you tried the provided GPT-2 generation example? It's here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_gpt2.py
Hey, I encountered the same issue. I tried using the example you provided but it tends to produce repetitive text much more often than earlier versions of the library as well (from around 1-2 months back). Thank you very much for all the work! <|||||>> Have you tried the provided GPT-2 generation example? It's here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_gpt2.py
I tried the example. It is working properly. I think with `torch.argmax` there is chance of repetitive text generation. If we sample using `torch.multinomial`, there is always some variation.
<|||||>`torch.argmax` is basically top_k with 1 which is very bad for creating "human like" sentences. A better way to sample is using Nucleus Sampling [https://arxiv.org/abs/1904.09751](url).
Not sure whether this is implemented in PyTorch yet.
EDIT: I have found this code from @thomwolf [https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317](url) that implements Nucleus sampling.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
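For readers who land here, a minimal sketch of the idea (plain PyTorch, not the gist above and not a library API): keep only the k most likely next-token logits, then sample with `torch.multinomial` instead of taking the argmax; nucleus (top-p) sampling does the same thing with a cumulative-probability cutoff instead of a fixed k.
```python
import torch

def sample_next_token(logits, top_k=40, temperature=1.0):
    # logits: 1-D tensor of scores over the vocabulary for the next position
    logits = logits / temperature
    if top_k > 0:
        kth_best = torch.topk(logits, top_k)[0][-1]
        logits = logits.masked_fill(logits < kth_best, float('-inf'))
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()  # sample, don't argmax
```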
<|||||> generating repetitive text when using GPU, but this not happens when using CPU, anyone knows how to solve this weird issue?<|||||>> when using GPU, but this not happens when using CPU, anyone knows how to solve this weird issue?
I am facing the same issue. Can someone lead in this problem? |
transformers | 665 | closed | GPT-2 medium and large release? | I presume the below model is GPT-2 small.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/ee0308f79ded65dac82c53dfb03e9ff7f06aeee4/pytorch_pretrained_bert/modeling_gpt2.py#L42
When do you plan on supporting the medium (already released by OpenAI) and large versions (not released by OpenAI) of GPT-2?
Thanks! | 06-08-2019 02:04:14 | 06-08-2019 02:04:14 | Take a look at the `attention` branch @g-karthik:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/attention/pytorch_pretrained_bert/modeling_gpt2.py#L42-L45<|||||>Thanks @julien-c, I had not looked at the file in the `attention` branch!<|||||>What is the recommended hardware setup for fine-tuning GPT2 medium? |
transformers | 664 | closed | Padding in GPT-2 | How do I add padding in GPT-2?
I get something like this when I add zero in front to pad the sequences, but then I found out that 0 is actually not "[PAD]" but "!".
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1639, 481]
The zeros change the result quite a lot; it doesn't totally ruin it, but it makes the results less precise, and the order of the most frequently predicted words is usually altered.
So how do we add padding there?
I have tried to follow the docs, but I didn't find a way to add something analogous to BERT's attention mask.
 | 06-07-2019 02:09:05 | 06-07-2019 02:09:05 | No padding is implemented in GPT-2; you have to implement it yourself if you want it, e.g. by adding a special token (see the sketch after this list), but note that:
- GPT-2 doesn't like left side padding (doesn't mix well with a causal transformer having absolute positions)
- right-side padding is often not necessary (the causal mask means that right context is ignored anyway).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
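A minimal illustration of the right-side padding mentioned above (the pad id here is an arbitrary placeholder, not an official pad token of the model or the library's API):
```python
import torch
from pytorch_pretrained_bert import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
pad_id = 0  # arbitrary placeholder; with right-side padding and a causal model,
            # padded positions never influence the real tokens to their left

encoded = [tokenizer.encode(t) for t in ["Hello world", "A longer example sentence"]]
max_len = max(len(ids) for ids in encoded)
input_ids = torch.tensor([ids + [pad_id] * (max_len - len(ids)) for ids in encoded])
lengths = torch.tensor([len(ids) for ids in encoded])
# After a forward pass, read predictions at the last *real* position of each row, e.g.:
# last_logits = logits[torch.arange(input_ids.size(0)), lengths - 1]
```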
<|||||>> * GPT-2 doesn't like left side padding (doesn't mix well with a causal transformer having absolute positions)
To those stumbling on this issue, this doesn't seem to be a problem anymore. [#3021](https://github.com/huggingface/transformers/issues/3021#issuecomment-1232149031)
|
transformers | 663 | closed | Accumulation | 06-06-2019 15:12:33 | 06-06-2019 15:12:33 | ||
transformers | 662 | closed | MRPC / SQuAD stuck in "Running training" | Hi there!
I am stuck since days.
ubuntu 19.04 (tried 18.04 also)
NVIDIA-SMI 418.74 Driver Version: 418.74
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
>>> import torch; torch.cuda.current_device(); torch.cuda.device_count(); torch.cuda.get_device_name(0); torch.cuda.get_device_name(1); torch.cuda.is_available(); exit()
0
2
'GeForce GTX 1080'
'GeForce GTX 1070 Ti'
True
Anaconda
tried Python 3.7 and now 3.6 (update: and 3.5 don't work also)
tried WITH APEX and now without
conda list
# packages in environment at /home/andreas/anaconda3/envs/pytorchbert:
#
# Name Version Build Channel
atomicwrites 1.3.0 pypi_0 pypi
attrs 19.1.0 pypi_0 pypi
blas 1.0 mkl
blis 0.2.4 pypi_0 pypi
boto3 1.9.162 pypi_0 pypi
botocore 1.12.162 pypi_0 pypi
bzip2 1.0.6 h14c3975_5
ca-certificates 2019.5.15 0
certifi 2019.3.9 py36_0
cffi 1.12.3 py36h2e261b9_0
chardet 3.0.4 pypi_0 pypi
cmake 3.14.0 h52cb24c_0
cudatoolkit 10.0.130 0
cudnn 7.6.0 cuda10.0_0 anaconda
cymem 2.0.2 pypi_0 pypi
docutils 0.14 pypi_0 pypi
en-core-web-sm 2.1.0 pypi_0 pypi
expat 2.2.6 he6710b0_0
freetype 2.9.1 h8a8886c_1
ftfy 5.5.1 pypi_0 pypi
google-pasta 0.1.7 pypi_0 pypi
idna 2.8 pypi_0 pypi
importlib-metadata 0.17 pypi_0 pypi
intel-openmp 2019.3 199
jmespath 0.9.4 pypi_0 pypi
joblib 0.13.2 pypi_0 pypi
jpeg 9b h024ee3a_2
jsonschema 3.0.1 pypi_0 pypi
krb5 1.16.1 h173b8e3_7
libcurl 7.64.1 h20c2e04_0
libedit 3.1.20181209 hc058e9b_0
libffi 3.2.1 hd88cf55_4
libgcc-ng 8.2.0 hdf63c60_1
libgfortran-ng 7.3.0 hdf63c60_0
libpng 1.6.37 hbc83047_0
libssh2 1.8.2 h1ba5d50_0
libstdcxx-ng 8.2.0 hdf63c60_1
libtiff 4.0.10 h2733197_2
mkl 2019.3 199
mkl-include 2019.3 199
mkl_fft 1.0.12 py36ha843d7b_0
mkl_random 1.0.2 py36hd81dba3_0
more-itertools 7.0.0 pypi_0 pypi
murmurhash 1.0.2 pypi_0 pypi
ncurses 6.1 he6710b0_1
ninja 1.9.0 py36hfd86e86_0
numpy 1.16.4 py36h7e9f1db_0
numpy-base 1.16.4 py36hde5b4d6_0
olefile 0.46 py36_0
openssl 1.1.1c h7b6447c_1
packaging 19.0 pypi_0 pypi
pandas 0.24.2 py36he6710b0_0
pillow 6.0.0 py36h34e0f95_0
pip 19.1.1 py36_0
plac 0.9.6 pypi_0 pypi
pluggy 0.12.0 pypi_0 pypi
preshed 2.0.1 pypi_0 pypi
py 1.8.0 pypi_0 pypi
pycparser 2.19 py36_0
pyparsing 2.4.0 pypi_0 pypi
pyrsistent 0.15.2 pypi_0 pypi
pytest 4.6.2 pypi_0 pypi
python 3.6.8 h0371630_0
python-dateutil 2.8.0 py36_0
pytorch 1.1.0 py3.6_cuda10.0.130_cudnn7.5.1_0 pytorch
pytz 2019.1 py_0
readline 7.0 h7b6447c_5
regex 2019.6.5 pypi_0 pypi
requests 2.22.0 pypi_0 pypi
rhash 1.3.8 h1ba5d50_0
s3transfer 0.2.1 pypi_0 pypi
scikit-learn 0.21.2 pypi_0 pypi
scipy 1.2.1 py36h7c811a0_0
setuptools 41.0.1 py36_0
six 1.12.0 py36_0
sklearn 0.0 pypi_0 pypi
spacy 2.1.4 pypi_0 pypi
sqlite 3.28.0 h7b6447c_0
srsly 0.0.5 pypi_0 pypi
tb-nightly 1.14.0a20190605 pypi_0 pypi
tf-estimator-nightly 1.14.0.dev2019060601 pypi_0 pypi
tf-nightly-gpu 1.14.1.dev20190606 pypi_0 pypi
thinc 7.0.4 pypi_0 pypi
tk 8.6.8 hbc83047_0
torch 1.1.0 pypi_0 pypi
torchvision 0.3.0 py36_cu10.0.130_1 pytorch
tqdm 4.32.1 pypi_0 pypi
urllib3 1.25.3 pypi_0 pypi
wasabi 0.2.2 pypi_0 pypi
wcwidth 0.1.7 pypi_0 pypi
wheel 0.33.4 py36_0
wrapt 1.11.1 pypi_0 pypi
xz 5.2.4 h14c3975_4
yaml 0.1.7 had09818_2
zipp 0.5.1 pypi_0 pypi
zlib 1.2.11 h7b6447c_3
zstd 1.3.7 h0b5b093_0
Failed test:
========
python -m pytest -sv tests/
tests/modeling_gpt2_test.py::GPT2ModelTest::test_config_to_json_file PASSED
tests/modeling_gpt2_test.py::GPT2ModelTest::test_config_to_json_string PASSED
tests/modeling_gpt2_test.py::GPT2ModelTest::test_default PASSED
tests/modeling_gpt2_test.py::GPT2ModelTest::test_model_from_pretrained SKIPPED
tests/modeling_openai_test.py::OpenAIGPTModelTest::test_config_to_json_file PASSED
tests/modeling_openai_test.py::OpenAIGPTModelTest::test_config_to_json_string PASSED
tests/modeling_openai_test.py::OpenAIGPTModelTest::test_default PASSED
tests/modeling_openai_test.py::OpenAIGPTModelTest::test_model_from_pretrained SKIPPED
tests/modeling_test.py::BertModelTest::test_config_to_json_file PASSED
tests/modeling_test.py::BertModelTest::test_config_to_json_string PASSED
tests/modeling_test.py::BertModelTest::test_default PASSED
tests/modeling_test.py::BertModelTest::test_model_from_pretrained SKIPPED
tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_config_to_json_file PASSED
tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_config_to_json_string PASSED
tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_default PASSED
tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_model_from_pretrained SKIPPED
tests/optimization_test.py::OptimizationTest::test_adam PASSED
tests/optimization_test.py::ScheduleInitTest::test_bert_sched_init PASSED
tests/optimization_test.py::ScheduleInitTest::test_openai_sched_init PASSED
tests/optimization_test.py::WarmupCosineWithRestartsTest::test_it [0. 0. 0. 0. 0.]
[1. 1. 1. 1. 1.]
PASSED
tests/tokenization_gpt2_test.py::GPT2TokenizationTest::test_full_tokenizer PASSED
100%|███████████████████████████████████████| 1042301/1042301 [00:01<00:00, 741907.79B/s]
100%|█████████████████████████████████████████| 456318/456318 [00:00<00:00, 704099.11B/s]
PASSED
tests/tokenization_openai_test.py::OpenAIGPTTokenizationTest::test_full_tokenizer PASSED
tests/tokenization_openai_test.py::OpenAIGPTTokenizationTest::test_tokenizer_from_pretrained SKIPPED
tests/tokenization_test.py::TokenizationTest::test_basic_tokenizer_lower PASSED
tests/tokenization_test.py::TokenizationTest::test_basic_tokenizer_no_lower PASSED
tests/tokenization_test.py::TokenizationTest::test_chinese PASSED
tests/tokenization_test.py::TokenizationTest::test_full_tokenizer PASSED
tests/tokenization_test.py::TokenizationTest::test_is_control PASSED
tests/tokenization_test.py::TokenizationTest::test_is_punctuation PASSED
tests/tokenization_test.py::TokenizationTest::test_is_whitespace PASSED
tests/tokenization_test.py::TokenizationTest::test_tokenizer_from_pretrained SKIPPED
tests/tokenization_test.py::TokenizationTest::test_wordpiece_tokenizer PASSED
tests/tokenization_transfo_xl_test.py::TransfoXLTokenizationTest::test_full_tokenizer building vocab from /tmp/transfo_xl_tokenizer_test.txt
final vocab size 9
PASSED
tests/tokenization_transfo_xl_test.py::TransfoXLTokenizationTest::test_full_tokenizer_lower PASSED
tests/tokenization_transfo_xl_test.py::TransfoXLTokenizationTest::test_full_tokenizer_no_lower PASSED
tests/tokenization_transfo_xl_test.py::TransfoXLTokenizationTest::test_tokenizer_from_pretrained SKIPPED
=================================== warnings summary ====================================
/home/andreas/anaconda3/envs/pytorchbert/lib/python3.6/site-packages/_pytest/mark/structures.py:337
/home/andreas/anaconda3/envs/pytorchbert/lib/python3.6/site-packages/_pytest/mark/structures.py:337: PytestUnknownMarkWarning: Unknown pytest.mark.slow - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
PytestUnknownMarkWarning,
-- Docs: https://docs.pytest.org/en/latest/warnings.html
Used script:
=========
export GLUE_DIR=/data/glue_data
export TASK_NAME=MRPC
python run_classifier.py \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/$TASK_NAME \
--bert_model bert-base-uncased \
--max_seq_length 128 \
--train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
06/05/2019 12:06:17 - INFO - __main__ - device: cuda n_gpu: 2, distributed training: False, 16-bits training: False
06/05/2019 12:06:17 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/andreas/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
06/05/2019 12:06:17 - INFO - __main__ - LOOKING AT /data/glue_data/MRPC/train.tsv
06/05/2019 12:06:18 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /home/andreas/.pytorch_pretrained_bert/distributed_-1/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
06/05/2019 12:06:18 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /home/andreas/.pytorch_pretrained_bert/distributed_-1/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmp_0dlskh7
06/05/2019 12:06:21 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
06/05/2019 12:06:23 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
06/05/2019 12:06:23 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
06/05/2019 12:06:26 - INFO - __main__ - Writing example 0 of 3668
06/05/2019 12:06:26 - INFO - __main__ - *** Example ***
06/05/2019 12:06:26 - INFO - __main__ - guid: train-1
06/05/2019 12:06:26 - INFO - __main__ - tokens: [CLS] am ##ro ##zi accused his brother , whom he called " the witness " , of deliberately di ##stor ##ting his evidence . [SEP] referring to him as only " the witness " , am ##ro ##zi accused his brother of deliberately di ##stor ##ting his evidence . [SEP]
06/05/2019 12:06:26 - INFO - __main__ - input_ids: 101 2572 3217 5831 5496 2010 2567 1010 3183 2002 2170 1000 1996 7409 1000 1010 1997 9969 4487 23809 3436 2010 3350 1012 102 7727 2000 2032 2004 2069 1000 1996 7409 1000 1010 2572 3217 5831 5496 2010 2567 1997 9969 4487 23809 3436 2010 3350 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - label: 1 (id = 1)
06/05/2019 12:06:26 - INFO - __main__ - *** Example ***
06/05/2019 12:06:26 - INFO - __main__ - guid: train-2
06/05/2019 12:06:26 - INFO - __main__ - tokens: [CLS] yu ##ca ##ip ##a owned dominic ##k ' s before selling the chain to safe ##way in 1998 for $ 2 . 5 billion . [SEP] yu ##ca ##ip ##a bought dominic ##k ' s in 1995 for $ 69 ##3 million and sold it to safe ##way for $ 1 . 8 billion in 1998 . [SEP]
06/05/2019 12:06:26 - INFO - __main__ - input_ids: 101 9805 3540 11514 2050 3079 11282 2243 1005 1055 2077 4855 1996 4677 2000 3647 4576 1999 2687 2005 1002 1016 1012 1019 4551 1012 102 9805 3540 11514 2050 4149 11282 2243 1005 1055 1999 2786 2005 1002 6353 2509 2454 1998 2853 2009 2000 3647 4576 2005 1002 1015 1012 1022 4551 1999 2687 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - label: 0 (id = 0)
06/05/2019 12:06:26 - INFO - __main__ - *** Example ***
06/05/2019 12:06:26 - INFO - __main__ - guid: train-3
06/05/2019 12:06:26 - INFO - __main__ - tokens: [CLS] they had published an advertisement on the internet on june 10 , offering the cargo for sale , he added . [SEP] on june 10 , the ship ' s owners had published an advertisement on the internet , offering the explosives for sale . [SEP]
06/05/2019 12:06:26 - INFO - __main__ - input_ids: 101 2027 2018 2405 2019 15147 2006 1996 4274 2006 2238 2184 1010 5378 1996 6636 2005 5096 1010 2002 2794 1012 102 2006 2238 2184 1010 1996 2911 1005 1055 5608 2018 2405 2019 15147 2006 1996 4274 1010 5378 1996 14792 2005 5096 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - label: 1 (id = 1)
06/05/2019 12:06:26 - INFO - __main__ - *** Example ***
06/05/2019 12:06:26 - INFO - __main__ - guid: train-4
06/05/2019 12:06:26 - INFO - __main__ - tokens: [CLS] around 03 ##35 gm ##t , tab shares were up 19 cents , or 4 . 4 % , at a $ 4 . 56 , having earlier set a record high of a $ 4 . 57 . [SEP] tab shares jumped 20 cents , or 4 . 6 % , to set a record closing high at a $ 4 . 57 . [SEP]
06/05/2019 12:06:26 - INFO - __main__ - input_ids: 101 2105 6021 19481 13938 2102 1010 21628 6661 2020 2039 2539 16653 1010 2030 1018 1012 1018 1003 1010 2012 1037 1002 1018 1012 5179 1010 2383 3041 2275 1037 2501 2152 1997 1037 1002 1018 1012 5401 1012 102 21628 6661 5598 2322 16653 1010 2030 1018 1012 1020 1003 1010 2000 2275 1037 2501 5494 2152 2012 1037 1002 1018 1012 5401 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - label: 0 (id = 0)
06/05/2019 12:06:26 - INFO - __main__ - *** Example ***
06/05/2019 12:06:26 - INFO - __main__ - guid: train-5
06/05/2019 12:06:26 - INFO - __main__ - tokens: [CLS] the stock rose $ 2 . 11 , or about 11 percent , to close friday at $ 21 . 51 on the new york stock exchange . [SEP] pg & e corp . shares jumped $ 1 . 63 or 8 percent to $ 21 . 03 on the new york stock exchange on friday . [SEP]
06/05/2019 12:06:26 - INFO - __main__ - input_ids: 101 1996 4518 3123 1002 1016 1012 2340 1010 2030 2055 2340 3867 1010 2000 2485 5958 2012 1002 2538 1012 4868 2006 1996 2047 2259 4518 3863 1012 102 18720 1004 1041 13058 1012 6661 5598 1002 1015 1012 6191 2030 1022 3867 2000 1002 2538 1012 6021 2006 1996 2047 2259 4518 3863 2006 5958 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - label: 1 (id = 1)
06/05/2019 12:06:28 - INFO - __main__ - ***** Running training *****
06/05/2019 12:06:28 - INFO - __main__ - Num examples = 3668
06/05/2019 12:06:28 - INFO - __main__ - Batch size = 32
06/05/2019 12:06:28 - INFO - __main__ - Num steps = 342
Epoch: 0%| | 0/3 [00:00<?, ?it/s]
At this point the script is stuck.
Once I managed to ctrl-c twice and got this error:
threading.py", line 1048, in _wait_for_tstate_lock elif lock.acquire(block, timeout):
I should mention, that I am usually a windows user and just installed ubuntu to practice machine learning
Best regards
Andreas | 06-06-2019 11:19:44 | 06-06-2019 11:19:44 | Update: specify device works for at least 1 GPU
export CUDA_VISIBLE_DEVICES=0
python run_classifier.py \
more than 1 GPU still not working:
export CUDA_VISIBLE_DEVICES=0,1
python run_classifier.py \
<|||||>@AndreasFdev Your distributed training setting is False.<|||||>Problem:
P2P GPU traffic fails when the IOMMU is enabled, unless the cards are behind a PLX switch.
Solution:
To disable IOMMU, edit /etc/default/grub:
#GRUB_CMDLINE_LINUX="" <----- Original commented
GRUB_CMDLINE_LINUX="iommu=soft" <------ Change
Source:
https://github.com/pytorch/pytorch/issues/1637
Thanks to all who had a look |
transformers | 661 | closed | How to load a existing model | 06-06-2019 08:01:35 | 06-06-2019 08:01:35 | Follow the instructions in the readme?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
|
transformers | 660 | closed | Recommended batch size and epochs for finetuning on large data | In the original paper, BERT model is fine-tuned on downstream NLP tasks, where the number of instances for each task is in the order of thousands to hundreds of thousands. In my case, I have about 5 million samples. I'm curious whether there are recommended batch size and epochs for such training size? I'm fine-tuning bert-base-multilingual on 4 GPUs and there is a lot of unused GPU memory with the default batch size of 32. Even after increasing it to 128 there is still free available memory. | 06-06-2019 03:00:42 | 06-06-2019 03:00:42 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@okgrammer Larger batch size often means lower accuracy but faster epochs. You can try it by doing several runs of varying batch size while keeping other params constant.
See, especially, https://arxiv.org/pdf/1804.07612.pdf<|||||>> In the original paper, BERT model is fine-tuned on downstream NLP tasks, where the number of instances for each task is in the order of thousands to hundreds of thousands. In my case, I have about 5 million samples. I'm curious whether there are recommended batch size and epochs for such training size? I'm fine-tuning bert-base-multilingual on 4 GPUs and there is a lot of unused GPU memory with the default batch size of 32. Even after increasing it to 128 there is still free available memory.
I have exactly the same issue. Can anyone help?
The pretraining is really slow with more than 90% GPU memory available. No matter how I increase the batch size, the GPU memory usage is minimal. |
transformers | 659 | closed | Whole Word Masking Models update | Recently Google updated their TF implementation (`https://github.com/google-research/bert`) with Whole Word Masking Models that masks whole random word instead of just random wordpieces, which results in a performance gain.
Just wondering if this will be implemented here?
Thanks. | 06-04-2019 16:41:02 | 06-04-2019 16:41:02 | It's not yet but thanks for the pointer, we can probably add it fairly easily. I'll have a look.<|||||>+10000 This would be very helpful!<|||||>Hi,
I converted the cased and uncased whole-word-masking models using the command line tool. If you're interested in adding these to the repository, I've uploaded them to [this](https://www.kaggle.com/bkkaggle/bert-large-whole-word-masking) kaggle dataset. <|||||>Is this resolved? These seem to be available at head, and I don't see anything immediately wrong when I try them...<|||||>Yes they are working fine, I've added them to master last week.
They will be advertised in the next release.
When fine-tuned with run_squad they give pretty nice results: `exact_match: 86.91, f1: 93.15`.
I've included a version fine-tuned on SQuAD as well.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Does Whole Word Masking support bert base as well? |
transformers | 658 | closed | SQuAD 1.1 very low evaluation score when using `--fp16` | I'm replicating SQuAD 1.1 https://github.com/huggingface/pytorch-pretrained-BERT/tree/v0.6.2#squad on latest release `v0.6.2`.
My setup:
* GeForce RTX 2080 Ti
* Driver Version: 418.43
* CUDA Version: 10.1
* Linux Ubuntu 18.10
* pytorch 1.1.0 (installed via conda: py3.7_cuda10.0.130_cudnn7.5.1_0)
* latest `apex` package
I'm on latest release:
```
$ git status
HEAD detached at v0.6.2
```
-------------
I'm replicating fine tuning bert on SQuAD 1.1. When executing without `--fp16` I'm getting expected result:
```
$ python evaluate-v1.1.py dev-v1.1.json /tmp/debug_squad/predictions.json
{"exact_match": 81.58940397350993, "f1": 88.6984251786611}
```
Full command:
```
export SQUAD_DIR=squad_11_data
python run_squad.py \
--bert_model bert-base-uncased \
--do_train \
--do_predict \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--train_batch_size 8 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
Same experiment with `--fp16` I'm getting very poor results:
```
$ python evaluate-v1.1.py dev-v1.1.json /tmp/debug_squad_fp16_apex/predictions.json
{"exact_match": 0.47303689687795647, "f1": 8.678859681492447}
```
Full command:
```
export SQUAD_DIR=squad_11_data
python run_squad.py \
--bert_model bert-base-uncased \
--do_train \
--do_predict \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--train_batch_size 8 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad_fp16/ \
--fp16
```
I remember seeing information about gradient overflow several times (sorry, I don't have more details; I have lost the output logs).
How to get decent results when using `--fp16`? | 06-04-2019 14:10:49 | 06-04-2019 14:10:49 | I took example code from https://github.com/huggingface/pytorch-pretrained-BERT#fine-tuning-bert-large-on-gpus (which has additional option `--loss_scale 128`). Still getting very low test scores:
```
$ python evaluate-v1.1.py dev-v1.1.json ../output/debug_squad_fp16/predictions.json
{"exact_match": 0.5771050141911069, "f1": 8.853750220358535}
```
Is there any known bug in current (on `v0.6.2`) PyTorch BERT implementation or is this my setup?
---
**UPDATE**
Probably there is a bug in current stable `v0.6.2` version. When running same command on latest `master` branch I'm getting good results:
```
$ python evaluate-v1.1.py dev-v1.1.json ../output/debug_squad_fp16/predictions.json
{"exact_match": 81.45695364238411, "f1": 88.71433452234619}
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 657 | closed | How to use different learning rates in the classifier example. | HI,
I am trying to use different learning rates for the bert and classifier. I am assuming that I can just say model.parameters and classifier.parameters like below.
```python
optimizer_grouped_parameters = [
    {'params': model.bert.parameters(), 'lr': 0.001},
    {'params': model.classifier.parameters(), 'lr': 0.01},
    {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
    {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
```
Can someone confirm whether this is the correct way of using it, and also explain why no weight decay is specified for particular layers?
Thanks,
Vishnu | 06-04-2019 09:02:16 | 06-04-2019 09:02:16 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Did anyone conduct different learning rate in different layers when fine-tuning BERT?
Thanks. <|||||>[This paper](https://arxiv.org/abs/1905.05583) suggests a "layer-wise decreasing learning rate" improves BERT for text classification tasks, but I can't see any option for it in the [AdamW](https://huggingface.co/transformers/main_classes/optimizer_schedules.html#adamw-pytorch) optimizer. If it is helpful for text classification, it would be great to see it implemented.<|||||>Did anyone conduct different learning rate in different layers when fine-tuning BERT?
Thanks<|||||>You can do this in PyTorch (not sure about tf) and my understanding from this thread is that `huggingface`'s `AdamW` is now equivalent to PT's `AdamW` (see https://github.com/huggingface/transformers/issues/3407) so it should be equivalent - it would be great to get confirmation of this from someone more familiar with the huggingface codebase.
See here for PT multiple rates: https://discuss.pytorch.org/t/how-to-set-a-different-learning-rate-for-a-single-layer-in-a-network/48552/4
<|||||>As an update to the above - it actually _is_ possible to use the `huggingface` `AdamW` directly with different learning rates.
Say you wanted to train your new parameters at x10 the learning rate of the pre-trained bert-variant parameters (in this case held as `model.bert`) you would do:
```python
from transformers import AdamW
# define model etc.
...
pretrained = model.bert.parameters()
# Get names of pretrained parameters (including `bert.` prefix)
pretrained_names = [f'bert.{k}' for (k, v) in model.bert.named_parameters()]
new_params= [v for k, v in model.named_parameters() if k not in pretrained_names]
optimizer = AdamW(
[{'params': pretrained}, {'params': new_params, 'lr': learning_rate * 10}],
lr=learning_rate,
)
``` |
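On the layer-wise question asked above, here is a rough sketch of layer-wise learning-rate decay built from parameter groups. The 0.95 factor is arbitrary, and the name patterns assume the usual `bert.encoder.layer.N.` / `bert.embeddings.` parameter naming of a `BertForSequenceClassification`-style model; adapt them to your own model.
```python
from transformers import AdamW, BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
base_lr = 2e-5
decay = 0.95
num_layers = model.config.num_hidden_layers

groups = []
for name, param in model.named_parameters():
    if '.encoder.layer.' in name:
        layer = int(name.split('.encoder.layer.')[1].split('.')[0])
        lr = base_lr * decay ** (num_layers - 1 - layer)   # deeper layers get smaller rates
    elif '.embeddings.' in name:
        lr = base_lr * decay ** num_layers
    else:
        lr = base_lr                                       # pooler and task head keep the full rate
    groups.append({'params': [param], 'lr': lr})

optimizer = AdamW(groups, lr=base_lr)
```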
transformers | 656 | closed | Use of GPT for multilingual LM | Is there any way of using the openai-gpt module for multilingual language modelling? | 06-03-2019 08:33:13 | 06-03-2019 08:33:13 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 655 | closed | Finish torchhub interfaces | Adding GPT2 and Transformer XL compatibilities for torchhub.
Fix some typos in docs.
@thomwolf could you have a look on the doc changes in `modeling_transfo_xl.py` more specifically?
Otherwise, I think it should be good. | 06-01-2019 21:47:05 | 06-01-2019 21:47:05 | |
transformers | 654 | closed | use of special tokens in gpt2? | 
I am actually new to this field. I am trying to use GPT-2 for a sequence classification task in which I am adding "<|endoftext|>" after each sequence and using the last hidden state for classification. My doubt is: what is the use of special tokens such as "<|endoftext|>" if the GPT2Tokenizer does not recognize them?
Thank you, and any suggestion on the classification task is appreciated!
| 05-31-2019 18:23:42 | 05-31-2019 18:23:42 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 653 | closed | Different Results from version 0.4.0 to version 0.5.0 | Hi, I found that the results after training are different between version 0.4.0 and version 0.5.0. I have fixed all initialization to reproduce the results. I also tested versions 0.2.0 and 0.3.0; their results are the same as version 0.4.0, but from version 0.5.0+ the results are different. I am wondering whether you trained a new model, so that the weights changed? | 05-31-2019 09:12:52 | 05-31-2019 09:12:52 | Hi, no we didn't change the weights. Can you share a sample on which the results are different?<|||||>Hi @thomwolf , thanks for your quick reply. I found that even version 0.4.0 is different from versions 0.2.0 and 0.3.0. I trained the model on v0.4.0, and then I tried to load the model using v0.2.0; here is the mismatch of keys:
```
RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
Missing key(s) in state_dict: "bert.embeddings.LayerNorm.gamma", "bert.embeddings.LayerNorm.beta",...
Unexpected key(s) in state_dict: "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias",...
```
This also appears when I trained the model on v0.5.0+ and tried to load the model using v0.2.0 and v0.3.0.
However, the first step loss are the same to v0.2.0, v0.3.0 and v0.4.0, loss=0.7228, but the first step loss are different to v0.5.0+ whose loss is 0.7091. And the final converge results are different too. I have set a seed to reproduce the results.<|||||>Oh, sorry, I found the mismatch problem is from my loading scripts (the 2nd method in [Serialization best-practices](https://github.com/huggingface/pytorch-pretrained-BERT#serialization-best-practices)), because I didn't use the mapping from old_keys to new_keys as you did in 'from_pretrained()' function.<|||||>Hi @thomwolf , I have found where the differences are. It is the different 'init_bert_weights' that makes the results different.
In version 0.2.0, 0.3.0 and 0.4.0, you use 'normal_' to initialize the 'BertLayerNorm':
```
def init_bert_weights(self, module):
""" Initialize the weights.
"""
if isinstance(module, (nn.Linear, nn.Embedding)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
elif isinstance(module, BertLayerNorm):
module.beta.data.normal_(mean=0.0, std=self.config.initializer_range)
module.gamma.data.normal_(mean=0.0, std=self.config.initializer_range)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
```
but in version 0.5.0+, you use 'zeros_' and 'ones_' to initialize 'BertLayerNorm':
```
def init_bert_weights(self, module):
""" Initialize the weights.
"""
if isinstance(module, (nn.Linear, nn.Embedding)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
elif isinstance(module, BertLayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
```
By the way, after correctly mapping from old_keys to new_keys, the old-version pretrained model could be loaded by the new version with no difference in results! Thank you for sharing such great work with us! |
transformers | 652 | closed | RuntimeError: CUDA error: device-side assert triggered | I got this error when using simple_lm_finetuning.py to continue training a BERT model. Could anyone help? Thanks a lot.
Here is the CUDA and Python trace. I confirm that my input max_length does not exceed **max_position_embeddings**.
```
/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [329,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [329,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
```
```
Loading Train Dataset input_lm.txt
Traceback (most recent call last):
File "simple_lm_finetuning.py", line 646, in <module>
main()
File "simple_lm_finetuning.py", line 592, in main
loss = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/jianfeng.ps/bert-mrc/pytorch_pretrained_bert/modeling.py", line 783, in forward
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/jianfeng.ps/bert-mrc/pytorch_pretrained_bert/modeling.py", line 714, in forward
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/jianfeng.ps/bert-mrc/pytorch_pretrained_bert/modeling.py", line 261, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 118, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1454, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA error: device-side assert triggered
``` | 05-31-2019 05:53:27 | 05-31-2019 05:53:27 | Rerun with environmental variable `CUDA_LAUNCH_BLOCKING=1` and see what line it crashed on.
This is almost always an out-of-bounds error on some embeddings lookup. Usually positional embeddings, but it could be word embeddings or segment embeddings.<|||||>HI @stephenroller , I do set environmental variable `CUDA_LAUNCH_BLOCKING=1` and get the previous log. I will check my word embeddings or segment embeddings.<|||||>Then it’s definitely that you’ve got a bad index into the positional embeddings.<|||||>But when I removed the positional embeddings, it still posts the error.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> But when I removed the positional embeddings, it still posts the error.
I met the same problem. Did you find how to solve it?<|||||>I met the same problem...<|||||>> I met the same problem...
I solved it myself. I met the problem when I distill the big model,but the vocab size of teacher model and student model is different. I modify the vocab size and it works.<|||||>> > I met the same problem...
>
> I solved it myself. I met the problem when I distill the big model,but the vocab size of teacher model and student model is different. I modify the vocab size and it works.
I'm experiencing the same problem. Can you please elaborate on what to do ? |
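Following the out-of-bounds explanation above, a quick sanity check on a batch often pinpoints which embedding table is being indexed past its size. The attribute names assume a standard `BertModel`-style module held under `model.bert`; treat this as an illustration, not project code.
```python
def check_batch(input_ids, segment_ids, model):
    emb = model.bert.embeddings  # BertEmbeddings submodule
    assert input_ids.max().item() < emb.word_embeddings.num_embeddings, \
        "token id >= vocab_size (tokenizer/model vocab mismatch?)"
    assert input_ids.size(1) <= emb.position_embeddings.num_embeddings, \
        "sequence longer than max_position_embeddings"
    assert segment_ids.max().item() < emb.token_type_embeddings.num_embeddings, \
        "segment id >= type_vocab_size"
```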
transformers | 651 | closed | Add GPT* compatibility to torchhub | I'll add GPT2 for torchhub later. | 05-31-2019 05:11:17 | 05-31-2019 05:11:17 | Amazing, thanks a lot @VictorSanh! |
transformers | 650 | closed | default in __init__s for classification BERT models | 05-30-2019 19:48:06 | 05-30-2019 19:48:06 | ||
transformers | 649 | closed | fine-tuning BERT, next sentence prediction loss is not decreasing | I ran simple_lm_finetuning and monitored the loss of next_sentence_loss and masked_lm_loss. The masked-token loss converges, but the next_sentence_loss is not decreasing.
So far I have tried tuning the learning rate and switching to optimizers from pytorch.optim; I checked the input_ids and the input looks good... I also ignored the loss from the masked tokens and trained only next sentence prediction, but it didn't work.
Did anyone face the same problem? Or where should I check to make it work? Thanks.
| 05-30-2019 13:07:23 | 05-30-2019 13:07:23 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>same not working for me. NSP loss is not converging even though MLM loss is converging. |
transformers | 648 | closed | [Dropout] why there is no dropout for the dev and eval? | I do not see any dropout layer after `get_pooled_output()` in the tf version referred to [here](https://github.com/google-research/bert/blob/master/run_classifier.py#L590). Why do you add a dropout layer in your implemention? | 05-30-2019 13:00:50 | 05-30-2019 13:00:50 | @thomwolf Thanks!<|||||>line 604<|||||>> line 604
Thanks so much. my mistake.
Do you know why there is no dropout for the dev and eval?
<|||||>first of all, no one uses dropout at evaluation stage as it's a regularizer. The difference of implementation is due to the fact that a dropout layer in pytorch behaves differently once you turn on a evaluation model (i.e., model.eval()). So even though it has a dropout layer it doesn't take any effect (p=0) at evaluation.<|||||>> first of all, no one uses dropout at evaluation stage as it's a regularizer. The difference of implementation is due to the fact that a dropout layer in pytorch behaves differently once you turn on a evaluation model (i.e., model.eval()). So even though it has a dropout layer it doesn't take any effect (p=0) at evaluation.
Thanks for helping me clarify this problem. |
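A tiny illustration of that behaviour in plain PyTorch:
```python
import torch
import torch.nn as nn

dropout = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

dropout.train()        # training mode: roughly half the values are zeroed, the rest scaled by 2
print(dropout(x))

dropout.eval()         # what model.eval() sets: dropout becomes the identity
print(dropout(x))      # identical to x
```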
transformers | 647 | closed | No softmax activation in BertForTokenClassification | The BertForTokenClassification class has the `classifier` member, which is a linear layer.
In the `forward` function, it treats its output as probabilities (cross entropy for the loss), but there's no softmax. Is there a reason for that? | 05-30-2019 11:16:34 | 05-30-2019 11:16:34 | It's because `nn.CrossEntropyLoss` already has a Softmax integrated in the module:
https://pytorch.org/docs/stable/nn.html?highlight=crossentropy#torch.nn.CrossEntropyLoss<|||||>I see it now. Thanks! |
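A small check of that equivalence (generic PyTorch, random values):
```python
import torch
import torch.nn as nn

logits = torch.randn(4, 9)             # raw scores from the linear classifier (tokens x labels)
labels = torch.randint(0, 9, (4,))

ce = nn.CrossEntropyLoss()(logits, labels)
nll = nn.NLLLoss()(torch.log_softmax(logits, dim=-1), labels)  # explicit (log-)softmax path
assert torch.allclose(ce, nll)
```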
transformers | 646 | closed | Fix link in README | Link was not working. Fixed. | 05-30-2019 05:02:12 | 05-30-2019 05:02:12 | Thanks! |
transformers | 645 | closed | BertAdam's get_lr() not return correct learning rate | When the initial `params` given to `BertAdam` contain a Parameter with `requires_grad=False`, `step()` will just continue the loop at line 251. After `step()`, when I want to use `get_lr()` to get the current learning rate, the state of this Parameter is an empty dict, so the function just returns `[0]` at line 230. I think this is unexpected; maybe more checking of the grad is needed somewhere. | 05-29-2019 08:34:48 | 05-29-2019 08:34:48 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
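Until that is handled inside the optimizer, a simple workaround is to hand `BertAdam` only the trainable parameters, so every parameter it manages gets optimizer state on the first step and `get_lr()` reports the scheduled rate. A sketch, where the frozen embeddings are just an example:
```python
from pytorch_pretrained_bert import BertForSequenceClassification, BertAdam

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
for p in model.bert.embeddings.parameters():
    p.requires_grad = False                       # example of frozen parameters

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = BertAdam(trainable, lr=5e-5, warmup=0.1, t_total=1000)
```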
transformers | 644 | closed | RuntimeError: cublas runtime error : an internal operation failed at /pytorch/aten/src/THC/THCBlas.cu:258 | I implemented my model following the implementation of the examples; after my model runs for several batches, the error shown in the title occurs.
The whole trace is as follows:
Traceback (most recent call last):
File "train.py", line 549, in <module>
train()
File "/media/***/***/***/***/***.py", line 120, in forward
topic_representation = self.bertRepresentation(topic_ids,topic_type_ids,topic_mask)
File "/media/***/***/***/***/***.py", line 113, in bertRepresentation
_, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/Downloads/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 736, in forward
output_all_encoded_layers=output_all_encoded_layers)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/Downloads/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 409, in forward
hidden_states = layer_module(hidden_states, attention_mask)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/Downloads/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 394, in forward
attention_output = self.attention(hidden_states, attention_mask)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/Downloads/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 352, in forward
self_output = self.self(input_tensor, attention_mask)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/Downloads/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 303, in forward
mixed_query_layer = self.query(hidden_states)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 67, in forward
return F.linear(input, self.weight, self.bias)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1354, in linear
output = input.matmul(weight.t())
RuntimeError: cublas runtime error : an internal operation failed at /pytorch/aten/src/THC/THCBlas.cu:258
In the trace, self.bert is the function that calls the BERT model. I am looking forward to suggestions from all of you. I have googled this problem and found no method that could solve it.
I also run my model using CPU, although it is very slow, no errors. | 05-29-2019 02:51:41 | 05-29-2019 02:51:41 | Python version is 3.6, the cuda version is 10.1.105, cudnn version is 7.51, and pytorch version is 1.0.1, platform is ubuntu 14.04. And GPU is Titan GTX 1080Ti with 11g memory.
Thanks all of you!<|||||>I am running into almost exactly the same issue. python3.6 cuda 10.0 ubuntu 18.04 and a 1080ti as well. Not sure what it is coming from. I see lots of google results from 2080ti's but those looked like architecture issues. <|||||>I tried running with CUDA_LAUNCH_BLOCKING=1 because others have stated this will get us a more accurate error. This led to RuntimeError: Creating MTGP constants failed. at /pytorch/aten/src/THC/THCTensorRandom.cu:33
. Doing some searching as to why this might be. Some people said indexing error somewhere. seems to be happening in the dropout area though so I dont see how that is possible. <|||||>https://github.com/pytorch/pytorch/issues/20489
https://github.com/pytorch/pytorch/pull/20886
These look related<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Python version is 3.6, the cuda version is 10.0 , cudnn version is 7.5, and pytorch version is 1.1.0, platform is ubuntu 16.04. And GPU is Titan GTX 2080Ti.
Solve this problem. |
transformers | 643 | closed | FileNotFoundError: [Errno 2] No such file or directory: 'uncased_L-12_H-768_A-12\\pytorch_model.bin' | I was just trying to get familiar with the pytorch implementation of BERT. I tried with the examles mentioned in the README file. The statement : **tokenizer = BertTokenizer.from_pretrained(BERT_PRETRAINED_PATH,do_lower_case=True)** works perfectly but when I try the same with **model = BertForMaskedLM.from_pretrained(BERT_PRETRAINED_PATH)**, it shows the error : **FileNotFoundError: [Errno 2] No such file or directory: 'uncased_L-12_H-768_A-12\\pytorch_model.bin'** Can someone point out where I am going wrong? The pretrained bert weights are available in the same directory and the **BERT_PRETRAINED_PATH** is **Path("uncased_L-12_H-768_A-12")** | 05-28-2019 17:57:49 | 05-28-2019 17:57:49 | I got the same problem. Did you solve the problem?<|||||>> I got the same problem. Did you solve the problem?
Yes. Actually the project folder of this implementation does not contain the `pytorch_model.bin` file. For loading the actual pretrained model, you have to use `BertModel.from_pretrained('bert-base-uncased').` Here as the parameter, you can mention name of any one of the bert variations available. After fine-tuning, you can save the `state_dict()` of the model in the `pytorch_model.bin` file and use it later. |
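A minimal sketch of that load/fine-tune/save/reload flow (the directory name is a placeholder):
```python
import torch
from pytorch_pretrained_bert import BertModel

model = BertModel.from_pretrained('bert-base-uncased')
# ... fine-tune ...
torch.save(model.state_dict(), 'my_model_dir/pytorch_model.bin')   # placeholder path

# later: rebuild the architecture and load the fine-tuned weights
model = BertModel.from_pretrained('bert-base-uncased')
model.load_state_dict(torch.load('my_model_dir/pytorch_model.bin'))
```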
transformers | 642 | closed | Performing optimization on CPU | In README.md, it is explained that the optimization step was performed on CPU to train a SQuAD model.
> perform the optimization step on CPU to store Adam's averages in RAM.
https://github.com/huggingface/pytorch-pretrained-BERT#fine-tuning-bert-large-on-gpus
Is it still supported in `run_squad.py`?
If I understand correctly, the current implementation always places the Adam's averages on GPUs if we train the model using GPUs. | 05-28-2019 15:55:17 | 05-28-2019 15:55:17 | Oh yes you are right, we removed that, I'll update the readme<|||||>Thank you for your reply!
Because Adam's averages (i.e., `next_m` and `next_v`) are large, it is very helpful to support performing optimization step on CPU to reduce GPU memory.
Therefore, I would like to know how you implemented this.
Did you move the parameters to CPU (e.g., `grad.to(torch.device('cpu'))`) in `BertAdam.step` function?<|||||>@ikuyamada In the old repository, the model parameters on gpu obtained by forward and backward are copied to the optimizer that stores the model parameters on cpu, and the updated model parameters are returned to the model on gpu at each training step.
Check https://github.com/huggingface/pytorch-pretrained-BERT/blob/v0.3.0/examples/run_squad.py#L682
@thomwolf I have questions. Why did you decide to delete `optimize_on_cpu`? Does cpu and gpu parameter transfer affect training speed?
<|||||>@ryonakamura thank you for the pointer!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
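The removed `optimize_on_cpu` code is linked above; as a rough, generic illustration of the same idea (keep a CPU master copy of the weights so Adam's `m`/`v` buffers live in RAM), assuming a single GPU and a stand-in model:
```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(10, 2).to(device)                     # stand-in for the real model

cpu_params = [p.detach().cpu().float() for p in model.parameters()]
optimizer = torch.optim.Adam(cpu_params, lr=3e-5)       # Adam state stays in RAM

loss = model(torch.randn(4, 10, device=device)).sum()
loss.backward()

for cpu_p, gpu_p in zip(cpu_params, model.parameters()):
    cpu_p.grad = gpu_p.grad.detach().cpu().float()      # move gradients to RAM
optimizer.step()
optimizer.zero_grad()
with torch.no_grad():
    for cpu_p, gpu_p in zip(cpu_params, model.parameters()):
        gpu_p.copy_(cpu_p.to(device))                   # push updated weights back
model.zero_grad()
```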
transformers | 641 | closed | The prediction accuracy for the masked token is ZERO when using the pretrained model. Does it make sense? | Hi, this question might be not relavent to the code but to BERT, I still hope I can find someone to help me out.
I ran the pretrained BertForPreTraining model and tested it on my own text data. Because BERT has knowledge about language, I expect it to be able to predict the masked tokens with reasonable accuracy; however, the accuracy is zero. I ran the model in model.eval() mode. Does it make sense that it predicts the masked tokens with zero accuracy?
Thanks for the help. | 05-28-2019 15:54:39 | 05-28-2019 15:54:39 | |
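A quick sanity check along these lines (standard `pytorch_pretrained_bert` usage; the sentence is arbitrary) usually shows that the pretrained model does recover masked tokens, which would point to the evaluation pipeline (masking, label alignment, vocabulary) rather than the model itself:
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

tokens = ['[CLS]', 'the', 'cat', 'sat', 'on', 'the', '[MASK]', '.', '[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    scores = model(input_ids)                  # (1, seq_len, vocab_size)
predicted = scores[0, tokens.index('[MASK]')].argmax().item()
print(tokenizer.convert_ids_to_tokens([predicted]))   # e.g. ['mat']
```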
transformers | 640 | closed | Support latest multi language bert fine tune | **Affected function**: fine tune example file
**Update summary**:
- Fix issue of bert-base-multilingual not found by fixing uncased version name in argument dict
- Add support for cased version by adding the right name into argument dict | 05-27-2019 09:30:15 | 05-27-2019 09:30:15 | Great, thanks! |
transformers | 639 | closed | Isn't it too few activations? | Hello, I have a question. In the BertLayer part of the model we see that in the BertAttention module we do attention (a nonlinear operation) and selfOutput (a linear transformation, as it is dense + BN). Then we do BertIntermediate, starting with a linear transformation (which means we have dense, BN, dense transformations going one after another). Why don't we have a nonlinear activation between BertAttention and BertIntermediate in BertLayer? | 05-24-2019 15:44:16 | 05-24-2019 15:44:16 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
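For later readers: the nonlinearity lives inside BertIntermediate itself (a GELU right after its dense layer), so the block is attention → add & LayerNorm → dense + GELU → dense → add & LayerNorm. A schematic using current PyTorch modules (not the repository's implementation):
```python
import torch
import torch.nn as nn

class SchematicBertLayer(nn.Module):
    def __init__(self, hidden=768, intermediate=3072, heads=12):
        super().__init__()
        self.attention = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.attn_out = nn.Linear(hidden, hidden)
        self.norm1 = nn.LayerNorm(hidden)
        self.intermediate = nn.Linear(hidden, intermediate)
        self.output = nn.Linear(intermediate, hidden)
        self.norm2 = nn.LayerNorm(hidden)

    def forward(self, x):
        a, _ = self.attention(x, x, x)
        x = self.norm1(x + self.attn_out(a))             # BertAttention + BertSelfOutput
        h = nn.functional.gelu(self.intermediate(x))     # BertIntermediate: dense + GELU
        return self.norm2(x + self.output(h))            # BertOutput
```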
transformers | 638 | closed | GPT-2 Tokenizer error! | I tried to use GPT-2 to encode with `text = "This story gets more ridiculous by the hour! And, I love that people are sending these guys dildos in the mail now. But… if they really think there's a happy ending in this for any of them, I think they're even more deluded than all of the jokes about them assume."` but it error.

| 05-24-2019 07:30:29 | 05-24-2019 07:30:29 | Maybe you can find the solution at #537 <|||||>> Maybe you can find the solution at #537
Thank you for your support! It works for me! |
transformers | 637 | closed | run_squad.py F1 and EM score are differ from tensorflow version | I use squad1.1 data from https://www.kaggle.com/stanfordu/stanford-question-answering-dataset
I ran convert_tf_checkpoint_to_pytorch.py (using google's uncased_L-12_H-768_A-12 model) and run_squad.py. result was
```
{"exact_match": 73.30179754020814, "f1": 82.10116863001393}
```
Alse I ran run_squad_hvd.py from https://github.com/lambdal/bert (it wraps original tensorflow BERT with horovod) and result was
```
{"exact_match": 80.66225165562913, "f1": 88.09365604437863}
```
bert_config.json is the same.
My parameters are:
pytorch ver
```
--train_batch_size", default=32,
--learning_rate", default=5e-5.
--max_seq_length", default=384,
--doc_stride", default=128,
--max_query_length", default=64
--predict_batch_size",default=8
```
tensorflow ver
```
--train_batch_size=8,
--learning_rate=5e-5,
--num_train_epochs=3.0.
--max_seq_length=384,
--doc_stride=128,
--max_query_length=64,
--predict_batch_size=8
```
I wonder:
1. If the source pretrained model and the parameters are the same, should the PyTorch and TensorFlow versions score about the same, around F1: 88 and EM: 80? Is that right?
2. If 1. is true, then my parameters or procedure must be wrong. Where should I look?
| 05-24-2019 05:34:50 | 05-24-2019 05:34:50 | I also can't get the same result on my squad dataset<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 636 | closed | Training Dataset of GPT-2 | What is the training dataset for the pre-trained GPT-2 Model? | 05-23-2019 20:20:45 | 05-23-2019 20:20:45 | The pretrained model is trained on the WebText dataset. It's a collection of documents from outgoing Reddit links that have above 3 karma (ensures quality).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 635 | closed | GPT2 - support data type torch.DoubleTensor for Position embedding | Hi,
Could the team add support for floating data types for the position embedding? Currently, it only allows torch.LongTensor between 0 and config.n_positions - 1, and it must be of the same shape as the input. I see this as a restriction, since one may want to use floating values between e.g. 0-2 to represent the sequence.
What would be even better is if the team could add built-in position encoding techniques like the one used in the Attention Is All You Need paper, where they use cos and sin:
PE(pos,2i) = sin(pos/10000**(2i/dmodel))
PE(pos,2i+1) = cos(pos/10000**(2i/dmodel))
@thomwolf Could you please share your thoughts with me on that?
Regards,
Adrian.
| 05-23-2019 09:19:24 | 05-23-2019 09:19:24 | Hi, we won't be able to add these feature as they would render the PyTorch model not compatible with the pretrained model open-sourced by OpenAI.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 634 | closed | convert_tf_checkpoint_to_pytorch get different result? | I'm using this repo to do Chinese NER on MSRA dataset, when I use the pretrained model `bert-base-chinese ` , the result is very good, it can reach 0.93+ f1 on test set in first epoch. But when I used `convert_tf_checkpoint_to_pytorch` to convert the original bert released checkpoint `chinese_L-12_H-768_A-12` to pytorch_model.bin, the result is very bad, it can only reach 0.5 f1 score on test set. Is there anything I do wrong?
And when I used `BertForPreTraining.from_pretrained`, the result is alright. | 05-23-2019 08:50:45 | 05-23-2019 08:50:45 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 633 | closed | bert->onnx ->caffe2 weird error | So, I'm really not sure if I should post this here, but I'm having this problem with the pretrained BERT for sequence classification. In particular, when I try to consume the ONNX version of the model with Caffe2, I get this output:
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/onnx/workspace.py", line 63, in f
return getattr(workspace, attr)(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 250, in RunNet
StringifyNetName(name), num_iter, allow_fail,
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 211, in CallWithExceptionIntercept
return func(*args, **kwargs)
RuntimeError: [enforce fail at pow_op.h:100] A.sizes() == B.sizes(). [4, 512, 768] vs []. Dimension mismatch - did you forget to set broadcast=1?
Error from operator:
input: "222" input: "223" output: "224" name: "" type: "Pow" device_option { device_type: 1 device_id: 3 }frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f9c8cdaf441 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: c10::ThrowEnforceNotMet(char const*, int, char const*, std::string const&, void const*) + 0x49 (0x7f9c8cdaf259 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #2: <unknown function> + 0x2b63861 (0x7f9c44eed861 in /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2_gpu.so)
frame #3: <unknown function> + 0x15a3555 (0x7f9c4392d555 in /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2_gpu.so)
frame #4: caffe2::SimpleNet::Run() + 0x161 (0x7f9c396ac101 in /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so)
frame #5: caffe2::Workspace::RunNet(std::string const&) + 0x3a (0x7f9c396e35aa in /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so)
frame #6: <unknown function> + 0x4e38a (0x7f9bbe6fd38a in /usr/local/lib/python3.6/dist-packages/caffe2/python/caffe2_pybind11_state_gpu.cpython-36m-x86_64-linux-gnu.so)
frame #7: <unknown function> + 0x9368e (0x7f9bbe74268e in /usr/local/lib/python3.6/dist-packages/caffe2/python/caffe2_pybind11_state_gpu.cpython-36m-x86_64-linux-gnu.so)
frame #8: PyCFunction_Call + 0xf9 (0x4aeb29 in /usr/bin/python3)
frame #9: _PyEval_EvalFrameDefault + 0x7e42 (0x54d092 in /usr/bin/python3)
frame #10: /usr/bin/python3() [0x543f21]
frame #11: /usr/bin/python3() [0x54421f]
frame #12: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #13: /usr/bin/python3() [0x543f21]
frame #14: PyEval_EvalCodeEx + 0x6d (0x544cfd in /usr/bin/python3)
frame #15: /usr/bin/python3() [0x485857]
frame #16: PyObject_Call + 0x60 (0x4557a0 in /usr/bin/python3)
frame #17: _PyEval_EvalFrameDefault + 0x19e8 (0x546c38 in /usr/bin/python3)
frame #18: /usr/bin/python3() [0x543f21]
frame #19: /usr/bin/python3() [0x54421f]
frame #20: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #21: /usr/bin/python3() [0x543f21]
frame #22: /usr/bin/python3() [0x54421f]
frame #23: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #24: /usr/bin/python3() [0x5432b1]
frame #25: /usr/bin/python3() [0x544447]
frame #26: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #27: /usr/bin/python3() [0x5432b1]
frame #28: /usr/bin/python3() [0x544447]
frame #29: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #30: /usr/bin/python3() [0x543f21]
frame #31: PyEval_EvalCodeEx + 0x6d (0x544cfd in /usr/bin/python3)
frame #32: /usr/bin/python3() [0x485857]
frame #33: PyObject_Call + 0x60 (0x4557a0 in /usr/bin/python3)
frame #34: _PyEval_EvalFrameDefault + 0x19e8 (0x546c38 in /usr/bin/python3)
frame #35: /usr/bin/python3() [0x543f21]
frame #36: PyEval_EvalCodeEx + 0x6d (0x544cfd in /usr/bin/python3)
frame #37: /usr/bin/python3() [0x485857]
frame #38: PyObject_Call + 0x60 (0x4557a0 in /usr/bin/python3)
frame #39: _PyEval_EvalFrameDefault + 0x19e8 (0x546c38 in /usr/bin/python3)
frame #40: /usr/bin/python3() [0x5432b1]
frame #41: /usr/bin/python3() [0x544447]
frame #42: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #43: /usr/bin/python3() [0x5432b1]
frame #44: /usr/bin/python3() [0x544447]
frame #45: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #46: /usr/bin/python3() [0x5432b1]
frame #47: _PyFunction_FastCallDict + 0x236 (0x54d8c6 in /usr/bin/python3)
frame #48: _PyObject_FastCallDict + 0x1ef (0x455acf in /usr/bin/python3)
frame #49: _PyObject_Call_Prepend + 0xcb (0x455bcb in /usr/bin/python3)
frame #50: PyObject_Call + 0x60 (0x4557a0 in /usr/bin/python3)
frame #51: /usr/bin/python3() [0x4c9d13]
frame #52: _PyObject_FastCallDict + 0xa2 (0x455982 in /usr/bin/python3)
frame #53: /usr/bin/python3() [0x544075]
frame #54: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #55: /usr/bin/python3() [0x5432b1]
frame #56: /usr/bin/python3() [0x544447]
frame #57: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #58: /usr/bin/python3() [0x5432b1]
frame #59: /usr/bin/python3() [0x544447]
frame #60: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #61: /usr/bin/python3() [0x5432b1]
frame #62: _PyFunction_FastCallDict + 0x236 (0x54d8c6 in /usr/bin/python3)
frame #63: _PyObject_FastCallDict + 0x1ef (0x455acf in /usr/bin/python3)
Does any of you know if the pretrained model is using something not supported by Caffe2?
I have also tried several tensor shapes (like (1, 512), (1, 128), (1, 512, 768)) in both long and float, with no luck. I also used (4, 512), (4, 128), (4, 512, 768) just in case, since the input I used when exporting the ONNX file had 4 samples.
Any pointers would be highly appreciated :)
| 05-23-2019 06:59:07 | 05-23-2019 06:59:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@maeotaku were you ever able to figure this out? I'd be curious to see what numbers you were seeing when running in caffe2.
If you didn't figure this out, seems similar this [pytorch issue](https://github.com/pytorch/pytorch/issues/18475) |
transformers | 632 | closed | run_classifier.py:TypeError: forward() missing 1 required positional argument: 'input_ids' | when I use my dataset, forward() missing 1 required positional argument: 'input_ids' ,but I can get "input_ids, input_mask, segment_ids, label_ids" in batch. | 05-22-2019 13:55:40 | 05-22-2019 13:55:40 | I ran into the same error and fixed it by using this instead:
[Pull Request](https://github.com/huggingface/pytorch-pretrained-BERT/pull/604)
Haven't tried my full dataset yet but on a slice, it worked well!
Edit: On the full dataset I still get the error.
Edit2:
Change the train data loader to drop the last "incomplete" batch as a workaround:
> train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=args.train_batch_size, drop_last=True)<|||||>Same error. I solved this problem by adding "CUDA_VISIBLE_DEVICES=0" to the command line.
I guess this problem is caused by "DataParallel", which splits the batch and sends the chunks to the GPUs. In some cases the batch size is less than the number of GPUs; as a consequence, the model replica on the extra GPU just gets an empty input.
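A small illustration of that guess (assuming 2 visible GPUs; the shapes are made up, and this only mimics what `DataParallel`'s scatter roughly does):

```python
import torch

last_batch = torch.randn(1, 128)              # "incomplete" last batch, smaller than n_gpu
chunks = torch.chunk(last_batch, 2, dim=0)    # roughly what DataParallel's scatter does
print([c.shape for c in chunks])              # only one chunk comes back, so one replica
                                              # gets called without the input_ids argument
```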
BTW, you should use @AlanHassen's solution in the training stage.<|||||>Same error when using my dataset: File "run_classifier.py", TypeError: forward() missing 1 required positional argument: 'input_ids'.
Add:
I changed the default eval_batch_size from 8 to 12, and that avoids the error.
You can change the eval_batch_size to some other number.
Of course, this bug should eventually be fixed in the code.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 631 | closed | from_pretrained | I have a question regarding the from_pretrained method since I experienced a bit unexpected behaviour. It's regarding how to save a BERT classifier model as a whole.
I am experimenting with a classifier on top of BERT on the stack overflow question / tags dataset which contains 20 classes of 40.000 text samples. I trained a classifier on 80% of the data for 3 epochs and saved the model in the following and recommended way:
```
def save_model(model):
    # Save a trained model, configuration and tokenizer
    # (os, torch and tokenizer come from the surrounding training script)
    model_to_save = model.module if hasattr(model, 'module') else model  # Only save the model itself
    # If we save using the predefined names, we can load using `from_pretrained`
    output_model_file = os.path.join("./", "pytorch_model.bin")
    output_config_file = os.path.join("./", "bert_config.json")
    torch.save(model_to_save.state_dict(), output_model_file)
    model_to_save.config.to_json_file(output_config_file)
    tokenizer.save_vocabulary("./")
```
This gave me a model .bin file and a config file.
The training accuracy was around 90% after the last epoch on 32,000 training samples, leaving 8,000 samples for evaluation. I then instantiated a new BERT model with the from_pretrained method with state_dict as False and ran the evaluation, which surprisingly gave these results:
{'eval_loss': 9.04939697444439, 'eval_accuracy': 0.036875}
I ran through the from_pretrained method and saw that the .bin file is a PyTorch dump of a BertForPreTraining instance which I presume means that the classifier weights are not saved when saving this way?
Then it would probably be necessary to pass the state_dict parameter when loading a classifier model with from_pretrained? If so, is it necessary to pass two files (.bin and state_dict as model_state) or is there another way to do this? | 05-22-2019 11:23:23 | 05-22-2019 11:23:23 | Did you try to load the model following the best-practices indicated here: https://github.com/huggingface/pytorch-pretrained-BERT#serialization-best-practices<|||||>All but load the tokenizer from the vocab file. Think that would make a large difference?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
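For completeness, here is the loading side that matches the serialization best-practices linked above, so the classifier head is restored with the right class (a minimal sketch; `output_dir` and `num_labels` are placeholders for your own values):

```python
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification

output_dir = "./"   # directory containing pytorch_model.bin, bert_config.json and vocab.txt
num_labels = 20     # must match the number of classes used at fine-tuning time

model = BertForSequenceClassification.from_pretrained(output_dir, num_labels=num_labels)
tokenizer = BertTokenizer.from_pretrained(output_dir, do_lower_case=True)
model.eval()
```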
transformers | 630 | closed | Update run_squad.py | Indentation change so that the output "nbest_predictions.json" is not empty. | 05-22-2019 09:31:20 | 05-22-2019 09:31:20 | |
transformers | 629 | closed | Is loss.mean() needed? | In `run_classifier.py`, there is a:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_classifier.py#L841-L842
However, a couple of lines higher, the logits are already flattened
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_classifier.py#L834-L839
So I assume the loss returned will only be a single-element tensor, correct? I also tested this code on 2 gpus and there indeed is only one single-element tensor. | 05-22-2019 06:22:41 | 05-22-2019 06:22:41 | This line is used when people use multi-gpu in a single python process (parallel instead of distributed). This is not the recommended setting (distributed is usually faster).
I wrote a blog post on this (parallel/distributed and the like) a few months ago: https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255<|||||>So on a multi-GPU machine, if we run `run_classifier.py`, by default it uses the distributed setting, correct? I'm assuming the answer is yes because when I ran this code on a 2 gpu machine, the `loss` was already a single-element tensor before the `.mean()` call.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 628 | closed | IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) | A simple call to BertModel does not work well here.
Here is a minimal code example:
```
from pytorch_pretrained_bert.modeling import BertModel
from pytorch_pretrained_bert.tokenization import BertTokenizer
import torch
embed = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
sentence = "the red cube is at your left"
tokens = ["[CLS]"] + tokenizer.tokenize(sentence) + ["[SEP]"]
input_ids = torch.tensor(tokenizer.convert_tokens_to_ids(tokens))
print(input_ids)
embed(input_ids)
```
I obtained the following error:
> tensor([ 101, 1996, 2417, 14291, 2003, 2012, 2115, 2187, 102])
> ---------------------------------------------------------------------------
> IndexError Traceback (most recent call last)
> <ipython-input-3-66d7a2bcfb96> in <module>
> 11
> 12 print(input_ids)
> ---> 13 embed(input_ids)
>
> ~/.pyenv/versions/3.6.7/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
> 491 result = self._slow_forward(*input, **kwargs)
> 492 else:
> --> 493 result = self.forward(*input, **kwargs)
> 494 for hook in self._forward_hooks.values():
> 495 hook_result = hook(self, input, result)
>
> ~/src/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py in forward(self, input_ids, token_type_ids, attention_mask, output_all_encoded_layers)
> 731 extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
> 732
> --> 733 embedding_output = self.embeddings(input_ids, token_type_ids)
> 734 encoded_layers = self.encoder(embedding_output,
> 735 extended_attention_mask,
>
> ~/.pyenv/versions/3.6.7/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
> 491 result = self._slow_forward(*input, **kwargs)
> 492 else:
> --> 493 result = self.forward(*input, **kwargs)
> 494 for hook in self._forward_hooks.values():
> 495 hook_result = hook(self, input, result)
>
> ~/src/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py in forward(self, input_ids, token_type_ids)
> 262
> 263 def forward(self, input_ids, token_type_ids=None):
> --> 264 seq_length = input_ids.size(1)
> 265 position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
> 266 position_ids = position_ids.unsqueeze(0).expand_as(input_ids)
>
> IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
I am using Python 3.6.7 and the code on the master branch. | 05-21-2019 16:57:57 | 05-21-2019 16:57:57 | Oops... I forgot to "batch" the input...
Here is a working sample:
```
from pytorch_pretrained_bert.modeling import BertModel
from pytorch_pretrained_bert.tokenization import BertTokenizer
import torch
embed = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
sentence = "the red cube is at your left"
tokens = ["[CLS]"] + tokenizer.tokenize(sentence) + ["[SEP]"]
input_ids = torch.tensor(tokenizer.convert_tokens_to_ids(tokens))
embed(input_ids.unsqueeze(0))
```<|||||>> Oops... I forgot to "batch" the input...
Thank you so much, you helped me a lot. |
transformers | 627 | closed | BERT QnA is not matching correct answer when document is in QnA format | I have BERT trained on SQuAD (and untrained as well). My documents contain questions and long answers. When we ask a question exactly as it appears in the document and let BERT find it, BERT returns some arbitrary answer from some other page of the document.
What can be wrong? | 05-21-2019 10:54:34 | 05-21-2019 10:54:34 | have you tried using your own data to train the model rather than using the squad1.1 or squad 2.0 data?
I am doing QnA system as well, I have my own data and I split them into train, dev and test data, then use the train and dev data to train the model, eventually it works ok on the test data.
because I am building the QnA system for my own task, not squad task, so I trained the model by using own data.
<|||||>Thanks for reply and confirming that it works. @mushro00om May I know how much is your training data? (Building a large QnA training corpus is a challenge). <|||||>@SandeepBhutani
My training data contain 140k questions, my dev data contain 5k questions, I haven't tried using the whole 140k to train yet because its a bit too large, I took 50k out of 140k to train, and it took roughly 3 hours.<|||||>@mushro00om : Thats a huge training data set. Unfortunately creating this corpus takes a lot of effort. So we are trying to ask same question that is there as it is in the document, based on vanilla bert-uncased (or squad trained too). <|||||>@SandeepBhutani Wish you good luck !<|||||>Hey @SandeepBhutani, I am facing the exact same issue. Did you solve it? I would like to talk to you about the same. <|||||>Hi.. Not yet
<|||||>@SandeepBhutani is there any way we can connect? I would like to talk to you about this..<|||||>Please email me your contact details at [email protected]
<|||||>@SandeepBhutani, I sent my details.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 626 | closed | How to use run_squad.py to produce multiple answers for a question? | Hello,
I am using run_squad.py to build my own question answering system, the problem is that, I want the system can output multiple answers for a question.
The number of answers can be 0, or one, or multiple if possible, how can I do to the code to achieve this? Thank you | 05-21-2019 10:03:32 | 05-21-2019 10:03:32 | soloved<|||||>Can you share how did you solve that problem?<|||||>@armheb
Sure, some questions in my dataset have multiple answers, some have one answer, some no answer.
Firstly, I added a for loop in the "read_squad_example" method to allow the code to read all answers for each question and build N SquadExamples per question, where N is the number of answers. (This is for my case; you don't have to do it. I need all the answers, while the original SQuAD code only reads the first answer of each question even when the question has multiple answers.)
run_squad.py produces an "nbest_predictions.json" file in which the model provides the top 20 possible answers for each question, with probabilities, so I simply pick some of those answers according to their probabilities.
However, I have to admit that the performance eventually isn't that good. It works, just not that well, but I think it can be improved.<|||||>@mushro00om
Hi,
Can you give sample codes for how you used your model for prediction given a text corpus and a question?<|||||>@Swathygsb Hi, sorry for late reply. Actually the code script I used is not this Pytorch version, I used the Tensorflow version provided by Google, because it is much more easier and they provide very clear guidance, Here is the link:
https://github.com/google-research/bert
The most of the code remained unchanged, I basically modified the read_squad_examples method to allow process multiple answers (in my task, a question may have more than one answer, the original code can only process one answer for each question).
So if all your questions have only one particular answer, you can simply follow the guidance, or if your questions may have more than one answer, you can give me your email and i can send my code to you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 625 | closed | Tried to visualize the CLS Token embeddings after fine-tuning on SST-2 using t-SNE, but no clear clustered visualizations of positive and negative sentences ! | I have used run_classifier.py to finetune the model on SST-2 data, and used this model in the extract_features.py to extract the embeddings of some sentences(fed only sentences-input.txt). Later used these features from .jsonl file and used the vectors of layer -2, corresponding to CLS token and tried to visualize using t-SNE to some see clear separations between the positive and negative sentences. But i could not get any clear clusters.
so, my questions are:
Q1: Does the CLS token after fine-tuning represent the entire sentence, so that one can use it on downstream tasks?
Q2: What is the best way to verify that the CLS token after fine-tuning carries the sentence representation? (For example, I tried to visualize it using t-SNE.)
Q3: I used those CLS token vectors in scikit-learn (naive Bayes) models as well, but I got an accuracy of around 50%, while BERT uses the same vectors in its evaluation and achieves 93% accuracy. How is this possible? Is my approach to checking the CLS token vectors wrong?
The following figure shows the visualizations of CLS vectors using t-SNE along with corresponding labels of the sentences (vectors from -2 layer are used for plot)
It would be great if @thomwolf have a look at this issue too.
looking forward for suggestions from all folks around here!
best,
Tinya

| 05-20-2019 16:20:45 | 05-20-2019 16:20:45 | Hi @rsc90,
The `BertForSequenceClassification` model use a [linear layer](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L994) on top of Bert's `pooled_output` which is a [small feed-forward layer with a tanh activation](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L417-L429).
I would rather use the output of Bert's `pooled_output` or the last hidden-layer for what you are doing. Why do you use layer -2?<|||||>Hello @thomwolf ,
Thank you. I had done with last layer as well but even the clusters were not clear as shown in below fig.
I had read that last layer would be sometimes biased so i didn't, but well i experimented that as well.

Ok. could you let me know how to collect this pooled output for sentence representations after finetuning ?
best,
tinya<|||||>You can try to initialize a `BertModel` from your fine-tuned `BertForSequenceClassification` model (I hope you fine-tuned a model otherwise it's normal the representations are not adapted to your task).
Just do `model = BertModel.from_pretrained('path-to-your-fine-tuned-model')`.
And then use the pooled output of your `BertModel`.
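Roughly like this (a minimal sketch; the path and sentence are placeholders, and the tuple unpacking follows `BertModel`'s documented outputs):

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

model = BertModel.from_pretrained('path-to-your-fine-tuned-model')
tokenizer = BertTokenizer.from_pretrained('path-to-your-fine-tuned-model', do_lower_case=True)
model.eval()

tokens = ["[CLS]"] + tokenizer.tokenize("this movie was great") + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    encoded_layers, pooled_output = model(input_ids, output_all_encoded_layers=False)
# pooled_output: [batch_size, hidden_size] sentence-level vector to feed to t-SNE / sklearn
```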
Still not sure what you are trying to do here in general by the way.<|||||>Yeah idid the same, like:
1. I used the run_classifier.py on SST-2 dataset, saved the model**( fine_tuned_model)**
2. Used **fine_tuned_model** in extract_features.py and collected this output.jsonl (as you said here )
3. From json file i plot the vectors corresponding to CLS embeddings using t-SNE
The intention of the experiment is: if the CLS tokens were carrying sentence representations for downstream tasks, then I was expecting something like the representations we get when we plot MNIST data using t-SNE. (I just want to make sure the CLS token carries the complete sentence representation after fine-tuning on a downstream task; if so, why am I not getting separate clusters?)
Please correct me, if i am missing something or do you suggest some other experiments to verify my thoughts.
Many thanks,
tinya
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Any updates on this issue? |
transformers | 624 | closed | tokenization_gpt2.py - on python 2 you can use backports.functools_lru_cache package from pypi | See https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_gpt2.py#L28. Instead of not using `lru_cache` you can use this package. | 05-20-2019 13:17:14 | 05-20-2019 13:17:14 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 623 | closed | Integration with a retriever Model | How can I integrate BERT to a retriever model? | 05-20-2019 04:05:35 | 05-20-2019 04:05:35 | Have a look at the ParlAI library and in particular [these great models based on BERT](https://github.com/facebookresearch/ParlAI/pull/1331) by @samhumeau.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hey @rajo19,
if you want to do Question Answering at scale with a Retriever + Reader pipeline, it might be worth checking out our new [haystack](https://github.com/deepset-ai/haystack/) project. It builds upon transformers and you can use all the QA models from [here](https://huggingface.co/models) as a reader. <|||||>Very interesting, thanks for sharing @tholor! cc @mfuntowicz |
transformers | 622 | closed | In run_classifier.py, is "warmup_proportion" a fraction or percentage? | In `run_classifier.py`, the arg parameter `--warmup_proportion` [help doc](https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_classifier.py#L628) says, "E.g., 0.1 = 10%% of training.". Is it actually a percentage such that `0.1` => `0.1%` => `0.001`, which is indeed `10%%` as stated in the help doc? But throughout the code, it seems `0.1` is just fraction which is `0.1`, not `0.001`. Please clarify and fix the help doc if it is wrong. | 05-20-2019 00:43:26 | 05-20-2019 00:43:26 | Found the same problem. `0.1` means `10%` in [Google's TensorFlow implementation](https://github.com/google-research/bert/blob/d66a146741588fb208450bde15aa7db143baaa69/run_classifier.py#L92).<|||||>It's a fraction of total training like indicated in the help doc: `0.1 = 10% of training`<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
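To make the "fraction" reading above concrete, the linear-warmup schedule behaves roughly like this (a sketch of the idea, not a verbatim copy of `optimization.py`): with `warmup_proportion=0.1`, the learning rate ramps up over the first 10% of total training progress and then decays.

```python
def warmup_linear_sketch(progress, warmup=0.1):
    # progress = global_step / t_total, i.e. a fraction of total training in [0, 1]
    if progress < warmup:
        return progress / warmup       # linear ramp-up over the first `warmup` fraction
    return max(0.0, 1.0 - progress)    # then linear decay

lr = 5e-5
print([round(lr * warmup_linear_sketch(p), 7) for p in (0.0, 0.05, 0.1, 0.5, 1.0)])
```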
transformers | 621 | closed | Question on duplicated sentence | Hi. I wonder whether they are unnecessary duplicated sentence or not.
When I run in "test mode", similar sentences are called twice.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L908
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L913
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L1042
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L1044
Thanks. | 05-19-2019 06:01:58 | 05-19-2019 06:01:58 | Looks like a minor bug. However, it seems that simply removing this `else` statement may cause some problems according to the previous code logic.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 620 | closed | Convert pytorch models back to tensorflow | Added a file that converts pytorch models that have been trained/finetuned back to tensorflow. This currently supports the base BERT models (uncased/cased); conversion for other BERT models will be added in the future. | 05-18-2019 21:20:32 | 05-18-2019 21:20:32 | Changed filename from convert_hf_checkpoint_to_tf.py to convert_pytorch_checkpoint_to_tf.py for consistency.<|||||>I use this to convert the fine-tuned pytorch model to TF and convert this converted TF back to pytorch model. The prediction result seems incorrect with the converted-converted pytorch model. <|||||>I compare the dumped stat_dict, it seems the differences lie in :
encoder.layer.{d}.attention.self.query.weight
encoder.layer.{d}.attention.self.key.weight
encoder.layer.{d}.attention.self.value.weight<|||||>What model are you trying to convert @Qiuzhuang? Per the docstring, only the BertModel is currently supported. I will add support other models in the near future.<|||||>Hi @chrislarson1, I use BertModel as follows:
BertModel(config).from_pretrained(pretrain_model_dir)
where pretrain_model_dir is the (domain pretraining + task-specific classifier) training model dir.<|||||>I wrote some sample test code that reads the original PyTorch model and the TF-converted-back-to-PyTorch model via BertModel.

<|||||>@chrislarson1 we need to transpose attention Q/K/V weight as well, here is the fixing:
if any(attention_weight in var_name for attention_weight in ["dense.weight", "attention.self.query", "attention.self.key", "attention.self.value"]):
torch_tensor = torch_tensor.T
tf_tensor = assign_tf_var(tensor=torch_tensor, name=tf_name)<|||||>I am able to train pytorch fine-tuned model and then convert to tensorflow model for serving purpose. E.g. using bert-as-service. The results are consistent.<|||||>Thanks @Qiuzhuang, your change has been added.<|||||>Thanks, @chrislarson1 and @Qiuzhuang.
This feature is interesting indeed.
I think it could be nice to add a check that output of the PyTorch and TensorFlow models are identical (on at least one example). For the Bert model, I did a simple notebook (see the notebook folder) but it can also be a script or a test.
Do you think you can add something like that @chrislarson1 ?<|||||># [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=h1) Report
> Merging [#620](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/3763f8944dc3fef8afb0c525a2ced8a04889c14f?src=pr&el=desc) will **decrease** coverage by `0.79%`.
> The diff coverage is `0%`.
[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #620 +/- ##
=========================================
- Coverage 68.23% 67.43% -0.8%
=========================================
Files 18 19 +1
Lines 3976 4023 +47
=========================================
Hits 2713 2713
- Misses 1263 1310 +47
```
| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [...retrained\_bert/convert\_pytorch\_checkpoint\_to\_tf.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvY29udmVydF9weXRvcmNoX2NoZWNrcG9pbnRfdG9fdGYucHk=) | `0% <0%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=footer). Last update [3763f89...716cc1c](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>No problem @thomwolf, I've added a notebook that checks correctness.<|||||>Ok great, let's merge it!<|||||>Hi,
I am trying to write a converter for RobertaForSequenceClassification to tensorflow using this script as a guide. I had a question regarding this.
Why did we take a transpose of the layers here at all? Is it because of tensorflow treating its layers differently than pytorch?
Also, if the dense.weight layers are being transposed, then in s sequence classification model, will the out_proj layer need to be transposed as well?<|||||>Hi @justachetan, the answer lies in how linear transformations are represented in pytorch vs. tensorflow; they are not the same. In Pytorch, weights in a network **often** get wrapped in the torch.nn.Linear class, which store transposed versions of the weights that would get saved in tensorflow for an equivalent projection (see the example below).
```
>>> import numpy as np
>>> import tensorflow as tf
>>> import torch
>>> from torch.functional import F
>>> x = np.ones([10, 4])
>>> W = np.ones([5, 4])
>>> tf.matmul(x, W.T).shape
(10, 5)
>>> F.linear(torch.Tensor(x), torch.Tensor(W)).detach().numpy().shape
(10, 5)
``` |
transformers | 619 | closed | Custom data, gradient explosion, accuracy is 0 | Hi,
I have 16,000+ labels to predict using the sequence classifier. I tried running the code with BertAdam (no gradient clipping) and a low LR of 1e-5, but my loss doesn't improve and the accuracy stays at zero. Gradient clipping doesn't help either. I've checked my inputs and they are correct. Any help is welcome.
model: base uncased | 05-18-2019 19:39:59 | 05-18-2019 19:39:59 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 618 | closed | Loss function of run_classifier.py takes in 2 inputs of different dimensions. | I am having an error here https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L836
In this line, `loss = loss_fct(logits.view(-1, num_labels), label_ids.view(-1))`, suppose we have 2 labels (entailment vs. not_entailment like QNLI task), then,
`logits` is already in dimension batch_size x num_labels. Using `logits.view(-1, num_labels)` will convert logits into an array of length (2 times batch_size) x 1.
So, this logits does not match `label_ids.view(-1)` which is batch_size x 1.
Does anyone else see this error when running the code ?
Thanks.
| 05-18-2019 10:37:57 | 05-18-2019 10:37:57 | Closing issue, because I pass in the num_labels as 1 instead of 2 for the QNLI task. I was thinking that giving 1 label is enough because the 2nd label can be inferred from the 1st one. |
transformers | 617 | closed | How to get the softmax probabilities from the TransfoXLLMModel | ```
A tuple of (last_hidden_state, new_mems)
`softmax_output`: output of the (adaptive) softmax:
if target is None:
Negative log likelihood of shape [batch_size, sequence_length]
else:
log probabilities of tokens, shape [batch_size, sequence_length, n_tokens]
`new_mems`: list (num layers) of updated mem states at the entry of each layer
each mem state is a torch.FloatTensor of size [self.config.mem_len, batch_size, self.config.d_model]
Note that the first two dimensions are transposed in `mems` with regards to `input_ids` and `target`
```
How do I get the negative log likelihood of a given sentence, say by modifying the example:
```
# Tokenized input
text_1 = "Who was Jim Henson ?"
text_2 = "Jim Henson was a puppeteer"
tokenized_text_1 = tokenizer.tokenize(text_1)
tokenized_text_2 = tokenizer.tokenize(text_2)
# Convert token to vocabulary indices
indexed_tokens_1 = tokenizer.convert_tokens_to_ids(tokenized_text_1)
indexed_tokens_2 = tokenizer.convert_tokens_to_ids(tokenized_text_2)
# Convert inputs to PyTorch tensors
tokens_tensor_1 = torch.tensor([indexed_tokens_1])
tokens_tensor_2 = torch.tensor([indexed_tokens_2])
# Load pre-trained model (weights)
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
model.eval()
# If you have a GPU, put everything on cuda
tokens_tensor_1 = tokens_tensor_1.to('cuda')
tokens_tensor_2 = tokens_tensor_2.to('cuda')
model.to('cuda')
with torch.no_grad():
# Predict all tokens
predictions_1, mems_1 = model(tokens_tensor_1)
# We can re-use the memory cells in a subsequent call to attend a longer context
predictions_2, mems_2 = model(tokens_tensor_2, mems=mems_1)
```
Now when I print the shape of predictions_1 I get `torch.Size([1, 5, 267735])`.
This is in relation to #477. Also, one final thing: is there any way I could speed up the model loading and inference time in eval mode? I just need to use this as a reward in an RL setup, but it takes a tremendous amount of time to evaluate a sentence. Is there any way I could speed up the process? | 05-17-2019 10:51:10 | 05-17-2019 10:51:10 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
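For the negative log likelihood question above, passing a `target` to the model is the route for getting per-token likelihood values; here is a sketch continuing the snippet in the issue (the exact shape and semantics of the return value should be checked against your installed version, since the docstring quoted above and the observed output shape don't quite line up):

```python
with torch.no_grad():
    # same inputs as in the snippet above, but now with target set
    nll_1, mems_1 = model(tokens_tensor_1, target=tokens_tensor_1)  # per-token values, [batch, seq]
    sentence_nll = nll_1.sum().item()
print(sentence_nll)
```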
transformers | 616 | closed | TransfoXLModel and TransforXLLMModel have the same example | Can someone help me understand how the outputs would vary, and could someone give an example for the latter? | 05-17-2019 09:40:14 | 05-17-2019 09:40:14 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 615 | closed | Couldn't import '''BertPreTrainedModel''' | I installed this lib by '''pip install pytorch-pretrained-bert''', and there are no problems when run the examples. However, when I import '''BertPreTrainedModel''', I use '''from pytorch_pretrained_bert import BertPreTrainedModel''', error occurs.
I want to write a new class like '''BertForSequenceClassification''', and in order to use the facility of constructing the instance of the class by "'from_pretrained('bert-base-uncased')''' . My new class need to inherit the class '''BertPreTrainedModel'''.
In another way, we need the config file to get the instance of the class like '''BertForSequenceClassification''', however, I couldn't find the file 'bert_config.json'.
Are there other ways to write a new class like '''BertForSequenceClassification'''?
Thanks all of you! | 05-17-2019 08:30:10 | 05-17-2019 08:30:10 | I guess you have to clone the repository. You just need to add all the classes you want to import in this line:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/pytorch_pretrained_bert/__init__.py#L12
Then you can install from source:
`pip install --editable .`
Of course since you download all the code, you can directly add your own class in `/pytorch_pretrained_bert/modeling_openai.py` then install from source.<|||||>So sorry that I am not familiar with the lib installation in python.
Thanks for your suggestions. I know how to solve it.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 614 | closed | global grad norm clipping (#581) | (see #581 )
- norm-based gradient clipping was being done per param group
- when more than one param group, this is different from global norm grad clipping | 05-15-2019 17:01:26 | 05-15-2019 17:01:26 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 613 | closed | Learning from scratch not working | I'm using simple_lm_learning as it was, except I didn't use the model from pretrained, but a new BertForPreTraining model with the same config as bert-base, how come it's not learning anything and predicting the same token 1996 ("the") for every output? | 05-15-2019 15:22:40 | 05-15-2019 15:22:40 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 612 | closed | How to use the fine tuned model for classification (CoLa) task? | How to use the fine-tuned model for classification (CoLa) task?
I do not see the argument `--do_predict`, in `/examples/run_classifier.py`.
However, `--do_predict` exists in the original implementation of the Bert.
The fine-tuned model is getting saved in the BERT_OUTPUT_DIR as `pytorch_model.bin`, but is there a simple way to reuse it through the command line? | 05-15-2019 14:18:16 | 05-15-2019 14:18:16 | Can you please confirm that the "/examples/run_classifier.py" file is indeed an example for simple sentence classification?
It looks like the code here uses the "BertForSequenceClassification" model where the tf model uses the "BertModel" (line 577 here https://github.com/google-research/bert/blob/master/run_classifier.py) - why is it different?<|||||>> How to use the fine-tuned model for classification (CoLa) task?
>
> I do not see the argument `--do_predict`, in `/examples/run_classifier.py`.
>
> However, `--do_predict` exists in the original implementation of the Bert.
>
> The fine-tuned model is getting saving in the BERT_OUTPUT_DIR as `pytorch_model.bin`, but is there a simple way to reuse it through the command line?
I got an solution of QNLI task in GLUE. You can add an arg-parser (--do_predict) and [these lines](https://github.com/weidafeng/NLU2019/blob/master/model/run_classifier.py#L743:#L792), and run this command:
```bash
python ./model/run_classifier.py \
--task_name QNLI \
--do_predict \
--do_lower_case \
--data_dir ./glue_data/QNLI \
--bert_model bert-base-uncased \
--output_dir ./glue_data/QNLI/results **# path to your trained model**
```
You will get the QNLI.tsv file as you want. Hope this works for you.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 611 | closed | extract_features | Traceback (most recent call last):
File "extract_features.py", line 297, in <module>
main()
File "extract_features.py", line 230, in main
tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)
File "/home/py36/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 197, in from_pretrained
tokenizer = cls(resolved_vocab_file, *inputs, **kwargs)
File "/home/py36/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 97, in __init__
self.vocab = load_vocab(vocab_file)
File "/home/py36/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 56, in load_vocab
token = reader.readline()
File "/home/py36/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
Hello, the pre-trained model I downloaded from https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz, what caused the above error? | 05-15-2019 12:35:46 | 05-15-2019 12:35:46 | Were you able to fix this issue? If yes, can you please share your solution?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 610 | closed | t_total | Traceback (most recent call last):
File "finetune_on_pregenerated.py", line 333, in <module>
main()
File "finetune_on_pregenerated.py", line 321, in main
optimizer.step()
File "/home/py36/lib/python3.6/site-packages/pytorch_pretrained_bert/optimization.py", line 290, in step
lr_scheduled *= group['schedule'].get_lr(state['step'])
File "/home/py36/lib/python3.6/site-packages/pytorch_pretrained_bert/optimization.py", line 61, in get_lr
progress = float(step) / self.t_total
ZeroDivisionError: float division by zero
Excuse me, what is the cause of this situation? Why is t_total equal to 0? | 05-15-2019 08:56:33 | 05-15-2019 08:56:33 |
I found the reason. When the data is relatively small, this happens. After I added the data, it is normal now. |
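For anyone else hitting this: the ZeroDivisionError means t_total came out as 0. The examples derive it with integer division, so a dataset smaller than one effective batch gives 0 optimization steps. A hedged sketch of the computation and a simple guard (the numbers are illustrative):

```python
train_examples = list(range(10))          # e.g. only 10 training instances
train_batch_size, gradient_accumulation_steps, num_train_epochs = 32, 1, 3

steps = int(len(train_examples) / train_batch_size / gradient_accumulation_steps) * num_train_epochs
print(steps)                               # 0 -> division by zero inside the warmup schedule
steps = max(1, steps)                      # simple guard for tiny datasets
```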
transformers | 609 | closed | t_total | 
| 05-15-2019 08:51:35 | 05-15-2019 08:51:35 | ok |
transformers | 608 | closed | when using multiple GPUs, `loss.mean()` may have subtle bias | The problem is that, when the input is distributed to multiple GPUs, the input on each GPU may have different `batch_size`.
For example, if you have 2 GPUs and the total batch_size is 13, then the `batch_size` for each GPU will be 7 and 6 respectively, `loss.mean()` will not give the exact loss. Although it may have little influence on the training of the model, it is not the exact result.
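A tiny numerical illustration of the bias (the loss values are made up):

```python
import torch

per_gpu_mean_loss = torch.tensor([0.50, 0.80])   # what each DataParallel replica returns
per_gpu_batch_size = torch.tensor([7.0, 6.0])    # 13 examples split across 2 GPUs

plain_mean = per_gpu_mean_loss.mean()                                                    # 0.6500
exact_mean = (per_gpu_mean_loss * per_gpu_batch_size).sum() / per_gpu_batch_size.sum()   # ~0.6385
print(plain_mean.item(), exact_mean.item())
```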
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_squad.py#L1006-L1007 | 05-14-2019 14:57:04 | 05-14-2019 14:57:04 | You could fix it with something like this
```
batch_size = torch.tensor(data.shape[1]).to(device)
dist.all_reduce(batch_size, op=dist.ReduceOp.SUM)
dist.all_reduce(loss, op=dist.ReduceOp.SUM)
mean_loss = loss/batch_size
```<|||||>Thanks for your solution. I don't think it could fix it, because the `loss` returned by each GPU is already averaged over its `batch_size`.<|||||>@Lvzhh the examples (like `run_squad.py`) only work in two settings for multi-gpu:
- using `DataParallel` (not distributed, triggered when you simply run the script on a multi-gpu server): in this case the batch size should be a multiple of the number of gpu or an error is thrown. This is the case in which this `loss = loss.mean()` will be triggered. No risk of differing batch sizes here.
- using `DistributedDataParallel` with a *single* GPU assigned to each process (triggered when you run the example using PyTorch `distributed.launch` script for instance): this only works when 1 GPU is assigned to each process (see [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_squad.py#L853-L857)). In this case `loss = loss.mean()` is not triggered. There is no averaging of the loss over the GPU. What happens here is that it's the gradients which are averaged (read PyTorch [DDP doc](https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel) for instance). The batch size should be the same for each script so no problem here also.
So I don't think the problem you are describing will show up in practice.
<|||||>@thomwolf I'm using the first multi-gpu setting `DataParallel`. And I tried to find some code in `run_squad.py` to ensure that the `batch_size` be a multiple of the number of gpu, but I didn't find.
And I just run this command [here](https://github.com/huggingface/pytorch-pretrained-BERT#squad) with `--train_batch_size 13` using 2 GPUs, and print the size of `start_logits` and its device, the first dimension (`batch_size`) on each GPUs are actually different (7 vs 6).
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/pytorch_pretrained_bert/modeling.py#L1204-L1206<|||||>Oh you are right, this error is not there anymore.
We should probably just check for this again, I'll fix the examples.
`DataParallel` is in the process of being replaced by `DistributedDataParallel` which is pretty much always faster due to releasing the GiL so maybe you should try DDP.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 607 | closed | How to check the vocab size of bert large and bert small? | 05-12-2019 22:20:02 | 05-12-2019 22:20:02 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
|
transformers | 606 | closed | How can we import cased bert model? | I notice that the author only shows us how to use uncased model, could anyone show me how to import cased model in both BertModel and BertTokenClassifier model? Thanks | 05-12-2019 22:18:30 | 05-12-2019 22:18:30 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 605 | closed | Why use self.apply(self.init_bert_weights) in inheriting classes? | self.apply(self.init_bert_weights) is already used in the BertModel class, so why do we still need to call self.apply(self.init_bert_weights) in all inheriting models such as the BertForTokenClassification model? | 05-12-2019 22:16:25 | 05-12-2019 22:16:25 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 604 | closed | Fixing issue "Training beyond specified 't_total' steps with schedule 'warmup_linear'" reported in #556 | Fixing the issues reported in https://github.com/huggingface/pytorch-pretrained-BERT/issues/556
Reason for issue was that num_optimzation_steps was computed from example size, which is different from actual size of dataloader when an example is chunked into multiple instances.
Solution in this pull request is to compute num_optimization_steps directly from len(data_loader). | 05-11-2019 22:34:28 | 05-11-2019 22:34:28 | Looks good.<|||||>Ok merging thanks, sorry for the delay! |
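In code terms, the change amounts to something like this (a sketch with illustrative names and a dummy dataset, not the exact diff):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.zeros(100, 8))        # 100 pregenerated training instances
train_dataloader = DataLoader(dataset, batch_size=16)
gradient_accumulation_steps, num_epochs = 1, 3

# steps derived from the dataloader that actually yields the batches,
# instead of from len(train_examples)
num_train_optimization_steps = int(len(train_dataloader) / gradient_accumulation_steps) * num_epochs
print(num_train_optimization_steps)   # 21 (= ceil(100/16) * 3)
```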
transformers | 603 | closed | Using BERT as feature extractor | Following file extract_features.py, I use bert-large-uncased and it outputs 4 layer outputs for each token(word). Since I want to use it as feature extractor of entire sentence, which values should I use? Or is there any other processing we should do(like concatenate output for last token)? | 05-11-2019 14:18:43 | 05-11-2019 14:18:43 | i have same question <|||||>How best to fine-tune or pool BERT is an open question in bertology :p
["How to Fine-Tune BERT for Text Classification?"](https://arxiv.org/pdf/1905.05583.pdf) has a comprehensive overview. Look at table 3 specifically which found that taking the max of the last 4 layers achieves the best performance (in their testing configuration with some fine tuning), this would be my guess.
You can also look at the approaches taken by [bert-as-a-service](https://github.com/hanxiao/bert-as-service#q-what-are-the-available-pooling-strategies) which are explained pretty well, in this repo they use MEAN as the default pooling strategy.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 602 | closed | Different GPT-2 outputs with mixed precision vs single precision | When using GPT-2 with mixed precision, the generated text is different from that produced by running it normally. This is true for both conditional and unconditional generation, and for top_k=1 (deterministic) and top_k=40. Typically the mixed precision and single precision outputs agree for a number of tokens and then begin to disagree (sometimes early, sometimes late).
Using GPT-2 with mixed precision would be useful to take advantage of the tensor cores on V100 and T4 GPUs.
Testing by calling `model.half()` on GPT2LMHeadModel tends to start producing incorrect outputs early, while instead using Apex's AMP usually produces correct outputs for a little longer but still generally deviates. My tests were on the 117M model, with Apex installed.
It surprises me that the top_k=1 results often differ, sometimes very early in the sequence. They only take the largest logits, so this means the ranking of the logits is different.
I think the cause is compounding errors in the "past" tensor used by the attention function. Each time a new token is generated, its past has some error in it. When subsequent token generations then use those values (in higher attention layers), their own pasts have _more_ error. And so on, up through 16 layers for 117M or 24 for 345M. For cases where the top 2 logit values are almost the same, those 16 steps of error might be enough to change which one is larger and thereby change even the top_k=1 output. I haven't verified this idea yet.
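As a toy illustration of that near-tie effect (a sketch with made-up numbers, not actual GPT-2 weights): when two logits are closer together than the rounding error introduced by half precision, even the top_k=1 choice can flip.
```python
import numpy as np

rng = np.random.RandomState(0)
hidden = rng.randn(768).astype(np.float32)
w = (rng.randn(2, 768) * 0.02).astype(np.float32)
w[1] = w[0] + rng.randn(768).astype(np.float32) * 1e-4  # two almost-tied output rows

logits_fp32 = w @ hidden
logits_fp16 = (w.astype(np.float16) @ hidden.astype(np.float16)).astype(np.float32)

print(logits_fp32, logits_fp32.argmax())
print(logits_fp16, logits_fp16.argmax())  # with a tie this tight, the argmax can differ
```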
I'm not sure if this necessarily means the outputs will be _qualitatively worse_, but that's a hard thing to measure. | 05-11-2019 00:38:21 | 05-11-2019 00:38:21 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@AdamDanielKing did you manage to fix the divergence somehow? <|||||>Was this finally an issue? It seems important and it was closed due to inactivity. @AdamDanielKing did you work it around?<|||||>@Damiox While sampling with mixed precision gives different results, they seem to still be of high quality. I've been using mixed precision on [talktotransformer.com](https://talktotransformer.com) for at least 6-7 months now and the quality has been excellent.<|||||>I am not using the "past" output at the moment, just doing topk=30. I ran into this issue as I am trying to improve my inferences times.
I have apex installed, and using the gpt2-medium model I get in average per sentence an inference time of 11ms for ~50 tokens within a batch size of 30 sentences on Tesla T4. Bigger batches aren't increasing the throughput. I just tried turning on fp16 into my model via ".half()" and it seems to be 3x faster. Is it possible? I am wondering whether that is fine or I need to do anything else (e.g. initializing apex, re-training my model with fp16). I feel I may be losing something. What do you think?<|||||>Talktotransformer.com just uses Apex to cast the model to fp16--no
retraining. I use the opt level that casts the weights themselves as well (allows larger batch sizes). It seems to work well.
If you're getting a 3x improvement from calling .half() instead of initializing with Apex, that is strange and I can't imagine why. I've found that this method diverges more quickly from fp32 than using Apex, so I haven't tested much with it.
On a separate note, I'd consider trying cheaper GPUs. It may be counter-intuitive, but I've found for example on Google that cheap P4 GPUs give greater throughput _for their cost_ than a T4 even though they don't have tensor cores. I think this is for the following reason: in generating each token after the first, only the values for a single position are being computed at each iteration, which is very little computation for the amount of memory being used. I think this results in a lot of time being spent on GPU-CPU round trips rather than actual work. Batch size becomes more important than GPU speed.
<|||||>I haven't tried initializing apex explicitly in my code. I believe I just thought that was being done by `GPT2LMHeadModel` behind the scenes, but it doesn't look like... I had installed it because I saw the following warning when running my inference app: `Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex`. Does this mean that the gpt2-medium is compiled with apex support? I'm not certain about that.
Should I try doing the amp.initialize() with the opt level in my inference app? Are you using O1? From https://nvidia.github.io/apex/amp.html#opt-levels-and-properties looks like I shouldn't be explicitly calling `half()`, so probably I should try initializing apex. What do you think?
Thanks in advance for all your help!<|||||>I had a question related to this: would the outputs in generation with GPT-2 change if the batch size changes?<|||||>Currently generation only allows `batch_size=1` |
transformers | 601 | closed | How to reduce embedding size from 768? | I am using simple_lm_finetuning.py to fine tune BERT. However I want to get smaller embeddings. Where can I change this? | 05-10-2019 10:03:17 | 05-10-2019 10:03:17 | You need to retrain it with your embeddings replacing `BertModel.embeddings.word_embeddings` and the model size being your embeddings size.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
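A hedged sketch of the retraining route described in the answer above (this builds a randomly initialised model with a smaller hidden size, so it has to be trained rather than loaded from `bert-base-uncased`; all numbers are placeholders):
```python
from pytorch_pretrained_bert import BertConfig, BertModel

config = BertConfig(
    vocab_size_or_config_json_file=30522,
    hidden_size=256,          # embedding / hidden dimension instead of 768
    num_hidden_layers=12,
    num_attention_heads=8,    # must divide hidden_size
    intermediate_size=1024,
)
model = BertModel(config)
print(model.embeddings.word_embeddings)  # Embedding(30522, 256)
```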
|
transformers | 600 | closed | Fine tuning time did not change much after freezing layers | Hi,
I am using simple_lm_finetuning.py to fine-tune the model. I wanted to freeze all parameters from the very beginning up to the start of the 12th transformer layer, so I iterated over the parameters by name with a counter, took the counter value corresponding to the beginning of the 12th layer, and used that value to freeze all layers before the 12th layer. You can understand it better from this piece of code I added to simple_lm_finetuning.py.
```
ctr = 0
for name, param in model.named_parameters():
    ctr += 1
    #print(ctr)
    #print(name)
    if ctr < 183:  # 183 is where the 12th transformer layer starts
        param.requires_grad = False
```
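A name-based variant of the same freezing logic, which avoids the hard-coded counter (a sketch; the prefixes assume the parameter naming of the BERT models in this repo, e.g. `bert.encoder.layer.11` for the 12th layer):
```python
for name, param in model.named_parameters():
    keep_trainable = (
        name.startswith('bert.encoder.layer.11')  # 12th (last) transformer layer
        or name.startswith('bert.pooler')
        or name.startswith('cls.')                # pre-training heads
    )
    if not keep_trainable:
        param.requires_grad = False
```
Note that freezing only removes the corresponding gradient and optimizer work; the forward pass through the frozen layers still runs on every step, which puts a floor on how much time can be saved.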
With an Nvidia K80 and the original simple_lm_finetuning.py, one epoch required about 37-38 hours. After adding this piece of code, it required about 28 hours. Since I have frozen all parameters from the beginning up to the 12th layer, I was expecting a larger reduction in time. Where am I wrong?
I am open to other suggestions of fine-tuning methods that require less computation time. | 05-10-2019 09:48:26 | 05-10-2019 09:48:26 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 599 | closed | BERT tokenizer - set special tokens | Hi,
I was wondering whether the team could expand BERT so that fine-tuning with newly defined special tokens would be possible - just like the GPT allows.
@thomwolf Could you share your thought with me on that?
Regards,
Adrian. | 05-10-2019 08:38:43 | 05-10-2019 08:38:43 | Hi Adrian, BERT already has a few unused tokens that can be used similarly to the `special_tokens` of GPT/GPT-2.
For more details see https://github.com/google-research/bert/issues/9#issuecomment-434796704 and issue #405 for instance.<|||||>In case we use an unused special token from the vocabulary, is it enough to finetune a classification task or do we need to train an embedding from scratch? Did anyone already do this?
Two different and somehow related questions I had when looking into the implementation:
1) The Bert paper mentions a (learned) positional embedding. How is this implemented here? examples/extract_features/convert_examples_to_features() defines tokens (representation), input_type_ids (the difference between the first and second sequence) and an input_mask (distinguishing padding/real tokens) but no positional embedding. Is this done internally?
2) Can I use a special token as input_type_ids for Bert? In the classification example, only values of [0,1] are possible and I'm wondering what would happen if I would choose a special token instead? Is this possible with a pretrained embedding or do i need to retrain the whole embedding as a consequence?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
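Following up on the suggestion above about the reserved entries, a hedged sketch of repurposing one of them (this assumes the `bert-base-uncased` vocabulary, which contains placeholders such as `[unused0]`; their embeddings are untrained and only become meaningful after fine-tuning):
```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Use [unused0] as a custom marker token, e.g. to delimit a field in the input.
tokens = ['[CLS]', '[unused0]', 'hello', 'world', '[SEP]']
input_ids = tokenizer.convert_tokens_to_ids(tokens)
print(input_ids)
```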
|
transformers | 598 | closed | Updating learning rate with special warm up in examples | Updating examples by removing division to num_train_optimization_steps for new WarmupLinearSchedule.
Fixes #566 | 05-09-2019 15:17:42 | 05-09-2019 15:17:42 | Oh great thanks Burc! |
transformers | 597 | closed | GPT-2 (medium size model, special_tokens, fine-tuning, attention) + repo code coverage metric | Superseded #560.
Improvements to GPT-2:
- add special tokens
- tested fine-tuning
- add medium size model
Improvements to GPT/GPT-2:
- option to extract attention weights.
Add code coverage | 05-08-2019 20:38:56 | 05-08-2019 20:38:56 | # [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=h1) Report
> :exclamation: No coverage uploaded for pull request base (`master@f9cde97`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
> The diff coverage is `81%`.
[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #597 +/- ##
=========================================
Coverage ? 67.04%
=========================================
Files ? 18
Lines ? 3835
Branches ? 0
=========================================
Hits ? 2571
Misses ? 1264
Partials ? 0
```
| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [pytorch\_pretrained\_bert/tokenization\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX29wZW5haS5weQ==) | `81.34% <0%> (ø)` | |
| [pytorch\_pretrained\_bert/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `82.44% <20%> (ø)` | |
| [pytorch\_pretrained\_bert/modeling\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfb3BlbmFpLnB5) | `78.3% <68.51%> (ø)` | |
| [pytorch\_pretrained\_bert/modeling\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `79.04% <73.11%> (ø)` | |
| [pytorch\_pretrained\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `88.57% <98.09%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=footer). Last update [f9cde97...35e6baa](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi, is the script run_openai.py updating weights of all layers or just the last few? Thanks in advance! |
transformers | 596 | closed | [Question] Cross-lingual sentence representations | Hi,
Would it be possible to integrate also a BERT model for cross-lingual sentence representations?
Something like, for example, the `XNLI-15` model in [https://github.com/facebookresearch/XLM](https://github.com/facebookresearch/XLM).
Thanks! | 05-08-2019 12:42:27 | 05-08-2019 12:42:27 | Hi @shoegazerstella well XLM is already pretty much as powerful as BERT and focused on cross-lingual sentence representations so I would go directly for it instead of BERT.<|||||>Thanks @thomwolf,
Are you considering integrating something for cross-lingual representations in the `pytorch-pretrained-BERT` library in the near future?<|||||>Not in the short-term |
transformers | 595 | closed | Unclear error message when unable to cache the model | I encountered the following error:
```
[2019-05-07 11:06:51,904: ERROR/ForkPoolWorker-1] Model name 'bert-base-uncased'
was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased,
bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased,
bert-base-chinese).
We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz'
was a path or url but couldn't find any file associated to this path or url.
```
After some debugging, I found that the root cause of the issue was the fact that the application is unable to cache the model in the home directory. It was a simple I/O error rather than an issue with the model name or file downloading, as the message suggests. I think it would be worth it to handle this case with an appropriate message, and what's more important - throwing an exception.
In my case, I did get the error logs, but the application initiated "successfully" - with the tokenizer and model set to `None`. If the library is not able to load the model for any reason, I'd expect it to throw an exception rather than just (almost) silently return a `None`. | 05-07-2019 12:12:06 | 05-07-2019 12:12:06 | Yes, this error message hides several potential sources, I'll see if I can disentangle the error messages :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This is still an issue. I suggest to improve the message and raise an exception if unable to load any of the models, instead of silently returning `None`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Still an issue, as far as I know.<|||||>@czyzby Hello,How do you solve the cache problem? I have the same problem but can't fix it<|||||>@twothousand To be honest, I don't remember, but I think the directory did not exist or lacked write permission. Are you sure the cache is causing the problem? If you changed the cache directory, make sure the folder exists and has appropriate permissions - otherwise I'd debug model loading and see which exception is being ignored.
@thomwolf It seems that it's still an issue, will you look into proper error handling?<|||||>Yes, it should have been improved on the latest release 2.1.1 (with the merge of #1480).
Maybe open a new issue with clear details of the current issue? |
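In the meantime, a hedged sketch of defensive loading for the situation described in this report: point the cache at a directory that is known to be writable, and fail loudly instead of carrying a `None` around.
```python
from pytorch_pretrained_bert import BertTokenizer, BertModel

cache_dir = '/tmp/bert_cache'  # any writable location
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', cache_dir=cache_dir)
model = BertModel.from_pretrained('bert-base-uncased', cache_dir=cache_dir)
if tokenizer is None or model is None:
    raise RuntimeError('Could not load bert-base-uncased; check network access and cache permissions')
```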
transformers | 594 | closed | size mismatch for lm_head.decoder.weight | Hi, I'm new to this.
First I started a fine-tuning job:
```
export ROC_STORIES_DIR=roc/
python run_openai_gpt.py \
--model_name openai-gpt \
--do_train \
--do_eval \
--train_dataset $ROC_STORIES_DIR/cloze_test_val__spring2016\ -\ cloze_test_ALL_val.csv \
--eval_dataset $ROC_STORIES_DIR/cloze_test_test__spring2016\ -\ cloze_test_ALL_test.csv \
--output_dir ../roc_gpt \
--train_batch_size 16 \
```
I have the following files in the output folder:
```
config.json eval_results.txt merges.txt pytorch_model.bin special_tokens.txt vocab.json
```
However, when I run:
```
python run_gpt2.py --model_name_or_path=../roc_gpt/
```
I get this error:
```
(env) [wwilson@b-user-wwilson-m ~/persistent-disk/notebooks/pytorch-pretrained-BERT/examples]$ python run_gpt2.py --model_name_or_path=../roc_gpt/
Namespace(batch_size=-1, length=-1, model_name_or_path='../roc_gpt/', nsamples=1, seed=0, temperature=1.0, top_k=0, unconditional=False)
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.tokenization_gpt2 - loading special tokens file ../roc_gpt/special_tokens.txt
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.tokenization_gpt2 - loading vocabulary file ../roc_gpt/vocab.json
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.tokenization_gpt2 - loading merges file ../roc_gpt/merges.txt
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.tokenization_gpt2 - Special tokens {'_start_': 40478, '_delimiter_': 40479, '_classify_': 40480}
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.modeling_gpt2 - loading weights file ../roc_gpt/pytorch_model.bin
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.modeling_gpt2 - loading configuration file ../roc_gpt/config.json
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.modeling_gpt2 - Model config {
"afn": "gelu",
"attn_pdrop": 0.1,
"embd_pdrop": 0.1,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"n_ctx": 512,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 512,
"n_special": 3,
"resid_pdrop": 0.1,
"vocab_size": 40478
}
05/07/2019 09:58:26 - INFO - pytorch_pretrained_bert.modeling_gpt2 - Weights of GPT2LMHeadModel not initialized from pretrained model: ['transformer.wte.weight', 'transformer.wpe.weight', 'transformer.ln_f.weight', 'transformer.ln_f.bias']
05/07/2019 09:58:26 - INFO - pytorch_pretrained_bert.modeling_gpt2 - Weights from pretrained model not used in GPT2LMHeadModel: ['multiple_choice_head.linear.weight', 'multiple_choice_head.linear.bias', 'transformer.tokens_embed.weight', 'transformer.positions_embed.weight']
Traceback (most recent call last):
File "run_gpt2.py", line 129, in <module>
run_model()
File "run_gpt2.py", line 77, in run_model
model = GPT2LMHeadModel.from_pretrained(args.model_name_or_path)
File "/mnt/notebooks/notebooks/pytorch-pretrained-BERT/env/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling_gpt2.py", line 475, in from_pretrained
"Error(s) in loading state_dict for {}:\n\t{}".format(model.__class__.__name__, "\n\t".join(error_msgs))
RuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel:
size mismatch for lm_head.decoder.weight: copying a param with shape torch.Size([40481, 768]) from checkpoint, the shape in current model is torch.Size([40478, 768]).
```
I'm guessing the first script is for gpt and the second one is for gpt2?
Should I adjust the 2nd script to load the same classes as the training script in order to use the fine-tuned model?
thanks | 05-07-2019 10:05:22 | 05-07-2019 10:05:22 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
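Regarding the original question: run_openai_gpt.py fine-tunes the OpenAI GPT model, so one option is to load the resulting checkpoint back with the matching GPT classes rather than the GPT-2 ones used by run_gpt2.py. A hedged sketch:
```python
from pytorch_pretrained_bert import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

output_dir = '../roc_gpt/'  # the --output_dir used during fine-tuning
tokenizer = OpenAIGPTTokenizer.from_pretrained(output_dir)
model = OpenAIGPTLMHeadModel.from_pretrained(output_dir)
model.eval()
```
The sampling loop in run_gpt2.py would still need adapting, since the two models use different tokenizers.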
<|||||>```Traceback (most recent call last):
File "question-generation/interact.py", line 238, in <module>
run()
File "question-generation/interact.py", line 144, in run
model = GPT2LMHeadModel.from_pretrained(args.model_checkpoint)
File "/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/modeling_gpt2.py", line 475, in from_pretrained
"Error(s) in loading state_dict for {}:\n\t{}".format(model.__class__.__name__, "\n\t".join(error_msgs))
RuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel:
size mismatch for transformer.wte.weight: copying a param with shape torch.Size([50265, 768]) from checkpoint, the shape in current model is torch.Size([50257, 768]).
size mismatch for lm_head.decoder.weight: copying a param with shape torch.Size([50265, 768]) from checkpoint, the shape in current model is torch.Size([50257, 768]).
```
I am having a similar issue? Any idea how to solve this? @Wingie <|||||>+1<|||||>+1<|||||>+1
<|||||>> ```
> File "question-generation/interact.py", line 238, in <module>
> run()
> File "question-generation/interact.py", line 144, in run
> model = GPT2LMHeadModel.from_pretrained(args.model_checkpoint)
> File "/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/modeling_gpt2.py", line 475, in from_pretrained
> "Error(s) in loading state_dict for {}:\n\t{}".format(model.__class__.__name__, "\n\t".join(error_msgs))
> RuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel:
> size mismatch for transformer.wte.weight: copying a param with shape torch.Size([50265, 768]) from checkpoint, the shape in current model is torch.Size([50257, 768]).
> size mismatch for lm_head.decoder.weight: copying a param with shape torch.Size([50265, 768]) from checkpoint, the shape in current model is torch.Size([50257, 768]).
> ```
>
> I am having a similar issue? Any idea how to solve this? @Wingie
Were you able to fix this? I am having this issue, with the same sizes as well.<|||||>same here!
|
transformers | 593 | closed | Embedding' object has no attribute 'shape' | While running the script to convert the Tensorflow checkpoints to Pytorch Model.
Model path: https://github.com/naver/biobert-pretrained/releases/download/v1.0-pubmed-pmc/biobert_pubmed_pmc.tar.gz
python pytorch_pretrained_BERT/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py \
--tf_checkpoint_path pubmed_pmc_470k/biobert_model.ckpt \
--bert_config_file pubmed_pmc_470k/bert_config.json \
--pytorch_dump_path pytorch_model | 05-07-2019 09:29:43 | 05-07-2019 09:29:43 | @Dhanachandra Same issue with another pre-trained BERT model.
Have you managed to solve that?<|||||>> @Dhanachandra Same issue with another pre-trained BERT model.
> Have you managed to solve that?
@Dhanachandra I've just found the solution: this exception occurs at "modeling.py" module. It's because of tensorflow->pytorch transformation. You need to find rows where 'shape' is used in modeling.py (you'll see its path in error logs) and delete it (it's somewhere in try... assert ... except..., just delete it).<|||||>You can use the following code:
```python
tf_path = 'pubmed_pmc_470k/biobert_model.ckpt'
config_path = 'pubmed_pmc_470k/bert_config.json'
pytorch_dump_path = 'pytorch_model/pytorch_model.bin'
# Save pytorch-model
import os
import re
import argparse
import tensorflow as tf
import torch
import numpy as np
from pytorch_pretrained_bert import BertConfig, BertForPreTraining
def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path):
config_path = os.path.abspath(bert_config_file)
tf_path = os.path.abspath(tf_checkpoint_path)
print("Converting TensorFlow checkpoint from {} with config at {}".format(tf_path, config_path))
# Load weights from TF model
init_vars = tf.train.list_variables(tf_path)
excluded = ['BERTAdam','_power','global_step']
init_vars = list(filter(lambda x:all([True if e not in x[0] else False for e in excluded]),init_vars))
names = []
arrays = []
for name, shape in init_vars:
print("Loading TF weight {} with shape {}".format(name, shape))
array = tf.train.load_variable(tf_path, name)
names.append(name)
arrays.append(array)
# Initialise PyTorch model
config = BertConfig.from_json_file(bert_config_file)
print("Building PyTorch model from configuration: {}".format(str(config)))
model = BertForPreTraining(config)
for name, array in zip(names, arrays):
name = name.split('/')
# adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v
# which are not required for using pretrained model
if any(n in ["adam_v", "adam_m", "global_step"] for n in name):
print("Skipping {}".format("/".join(name)))
continue
pointer = model
for m_name in name:
if re.fullmatch(r'[A-Za-z]+_\d+', m_name):
l = re.split(r'_(\d+)', m_name)
else:
l = [m_name]
if l[0] == 'kernel' or l[0] == 'gamma':
pointer = getattr(pointer, 'weight')
elif l[0] == 'output_bias' or l[0] == 'beta':
pointer = getattr(pointer, 'bias')
elif l[0] == 'output_weights':
pointer = getattr(pointer, 'weight')
else:
pointer = getattr(pointer, l[0])
if len(l) >= 2:
num = int(l[1])
pointer = pointer[num]
if m_name[-11:] == '_embeddings':
pointer = getattr(pointer, 'weight')
elif m_name == 'kernel':
array = np.transpose(array)
try:
assert pointer.shape == array.shape
except AssertionError as e:
e.args += (pointer.shape, array.shape)
raise
print("Initialize PyTorch weight {}".format(name))
pointer.data = torch.from_numpy(array)
# Save pytorch-model
print("Save PyTorch model to {}".format(pytorch_dump_path))
torch.save(model.state_dict(), pytorch_dump_path)
convert_tf_checkpoint_to_pytorch(tf_path, config_path, pytorch_dump_path)
```
<|||||>@DmLitov4 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 592 | closed | Can the use of [SEP] reduce the information extraction between the sentences? | Hello. I know that [CLS] means the start of a sentence and [SEP] makes BERT know the second sentence has begun. [SEP] can’t stop one sentence from extracting information from another sentence. However, I have a question.
If I have 2 sentences, which are s1 and s2, and our fine-tuning task is the same. In one way, I add special tokens and the input looks like [CLS]+s1+[SEP] + s2 + [SEP]. In another, I make the input look like [CLS] + s1 + s2 + [SEP]. When I input them to BERT respectively, what is the difference between them? Will the s1 in second one integrate more information from s2 than the s1 in first one does? Will the token embeddings change a lot between the 2 methods?
Thanks for any help! | 05-07-2019 05:13:00 | 05-07-2019 05:13:00 | I think so. Ultimately you should have s1 and s2 in input, your [CLS] + s1 + s2 + [SEP] will be equivalent to `[CLS] + s1 + [SEP]` in `[CLS] + s1 + [SEP] + s2 + [SEP]` where `s1` now is the concatenation of `s1` and `s2`. I don't think that's what you want to do. |
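To make the first packing in the question concrete, a sketch of the standard two-segment layout (the segment ids, together with [SEP], are what mark where the second sentence starts):
```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

s1 = tokenizer.tokenize("the cat sat on the mat")
s2 = tokenizer.tokenize("it was very happy")

# [CLS] s1 [SEP] s2 [SEP], with segment id 0 for the first part and 1 for the second
tokens = ['[CLS]'] + s1 + ['[SEP]'] + s2 + ['[SEP]']
segment_ids = [0] * (len(s1) + 2) + [1] * (len(s2) + 1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)

print(tokens)
print(segment_ids)
```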
transformers | 591 | closed | What is the use of [SEP]? | Hello. I know that [CLS] means the start of a sentence and [SEP] makes BERT know the second sentence has begun. [SEP] can’t stop one sentence from extracting information from another sentence. However, I have a question.
If I have 2 sentences, which are s1 and s2., and our fine-tuning task is the same. In one way, I add special tokens and the input looks like [CLS]+s1+[SEP] + s2 + [SEP]. In another, I make the input look like [CLS] + s1 + s2 + [SEP]. When I input them to BERT respectively, what is the difference between them? Will the s1 in second one integrate more information from s2 than the s1 in first one does? Will the token embeddings change a lot between the 2 methods?
Thanks for any help! | 05-07-2019 04:12:16 | 05-07-2019 04:12:16 | @RomanShen What is your observation on your question |
transformers | 590 | closed | Fix for computing t_total in examples | Examples had wrongly computed t_total, resulting in warning messages (Issue #556 )
Added fixes in several examples but:
- only tested MRPC in `run_classifier.py` so far
- `finetune_on_pregenerated.py` still needs fixing (not sure why lines 221-227 are as they are) | 05-06-2019 15:27:41 | 05-06-2019 15:27:41 | Closed in favour of #604 |
transformers | 589 | closed | Can't save converted checkpoint | Thank you for creating the pytorch version of BERT. But there is a problem when I use the convert_tf_checkpoint_to_pytorch script: I can't find any files created under the pytorch_dump_path. | 05-06-2019 13:34:01 | 05-06-2019 13:34:01 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 588 | closed | installation error | Hi, I am getting an error after following the installation instructions stated in the README.
My output's error message is here:
> error: command '/usr/bin/nvcc' failed with exit status 1
> error
> Cleaning up...
> Removing source in /tmp/pip-req-build-837wsq53
> Removed build tracker '/tmp/pip-req-tracker-txkml2po'
> Command "/home/ubuntu/anaconda3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-req-build-837wsq53/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" --cpp_ext --cuda_ext install --record /tmp/pip-record-qpb_35xo/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-req-build-837wsq53/
>
Whole output is here:
> error: command '/usr/bin/nvcc' failed with exit status 1
> error
> Cleaning up...
> Removing source in /tmp/pip-req-build-837wsq53
> Removed build tracker '/tmp/pip-req-tracker-txkml2po'
> Command "/home/ubuntu/anaconda3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-req-build-837wsq53/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" --cpp_ext --cuda_ext install --record /tmp/pip-record-qpb_35xo/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-req-build-837wsq53/
> Exception information:
> Traceback (most recent call last):
> File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pip/_internal/cli/base_command.py", line 143, in main
> status = self.run(options, args)
> File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pip/_internal/commands/install.py", line 366, in run
> use_user_site=options.use_user_site,
> File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pip/_internal/req/__init__.py", line 49, in install_given_reqs
> **kwargs
> File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pip/_internal/req/req_install.py", line 791, in install
> spinner=spinner,
> File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pip/_internal/utils/misc.py", line 705, in call_subprocess
> % (command_desc, proc.returncode, cwd))
> pip._internal.exceptions.InstallationError: Command "/home/ubuntu/anaconda3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-req-build-837wsq53/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" --cpp_ext --cuda_ext install --record /tmp/pip-record-qpb_35xo/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-req-build-837wsq53/
> 1 location(s) to search for versions of pip:
> * https://pypi.org/simple/pip/
> Getting page https://pypi.org/simple/pip/
> Starting new HTTPS connection (1): pypi.org:443
> https://pypi.org:443 "GET /simple/pip/ HTTP/1.1" 200 11064
> Analyzing links from page https://pypi.org/simple/pip/
> Found link https://files.pythonhosted.org/packages/3d/9d/1e313763bdfb6a48977b65829c6ce2a43eaae29ea2f907c8bbef024a7219/pip-0.2.tar.gz#sha256=88bb8d029e1bf4acd0e04d300104b7440086f94cc1ce1c5c3c31e3293aee1f81 (from https://pypi.org/simple/pip/), version: 0.2
> Found link https://files.pythonhosted.org/packages/18/ad/c0fe6cdfe1643a19ef027c7168572dac6283b80a384ddf21b75b921877da/pip-0.2.1.tar.gz#sha256=83522005c1266cc2de97e65072ff7554ac0f30ad369c3b02ff3a764b962048da (from https://pypi.org/simple/pip/),version: 0.2.1
> Found link https://files.pythonhosted.org/packages/17/05/f66144ef69b436d07f8eeeb28b7f77137f80de4bf60349ec6f0f9509e801/pip-0.3.tar.gz#sha256=183c72455cb7f8860ac1376f8c4f14d7f545aeab8ee7c22cd4caf79f35a2ed47 (from https://pypi.org/simple/pip/), version: 0.3
> Found link https://files.pythonhosted.org/packages/0a/bb/d087c9a1415f8726e683791c0b2943c53f2b76e69f527f2e2b2e9f9e7b5c/pip-0.3.1.tar.gz#sha256=34ce534f17065c78f980702928e988a6b6b2d8a9851aae5f1571a1feb9bb58d8 (from https://pypi.org/simple/pip/),version: 0.3.1
> Found link https://files.pythonhosted.org/packages/cf/c3/153571aaac6cf999f4bb09c019b1ff379b7b599ea833813a41c784eec995/pip-0.4.tar.gz#sha256=28fc67558874f71fddda7168f73595f1650523dce3bc5bf189713ecdfc1e456e (from https://pypi.org/simple/pip/), version: 0.4
> Found link https://files.pythonhosted.org/packages/8d/c7/f05c87812fa5d9562ecbc5f4f1fc1570444f53c81c834a7f662af406e3c1/pip-0.5.tar.gz#sha256=328d8412782f22568508a0d0c78a49c9920a82e44c8dfca49954fe525c152b2a (from https://pypi.org/simple/pip/), version: 0.5
> Found link https://files.pythonhosted.org/packages/9a/aa/f536b6d14fe03343367da2ff44eee28f340ae650cd017ca088b6be13084a/pip-0.5.1.tar.gz#sha256=e27650538c41fe1007a41abd4cfd0f905b822622cbe1f8e7e09d1215af207694 (from https://pypi.org/simple/pip/),version: 0.5.1
> Found link https://files.pythonhosted.org/packages/db/e6/fdf7be8a17b032c533d3f91e91e2c63dd81d3627cbe4113248a00c2d39d8/pip-0.6.tar.gz#sha256=4cf47db6815b2f435d1f44e1f35ff04823043f6161f7df9aec71a123b0c47f0d (from https://pypi.org/simple/pip/), version: 0.6
> Found link https://files.pythonhosted.org/packages/91/cd/105f4d3c75d0ae18e12623acc96f42168aaba408dd6e43c4505aa21f8e37/pip-0.6.1.tar.gz#sha256=efe47e84ffeb0ea4804f9858b8a94bebd07f5452f907ebed36d03aed06a9f9ec (from https://pypi.org/simple/pip/),version: 0.6.1
> Found link https://files.pythonhosted.org/packages/1c/c7/c0e1a9413c37828faf290f29a85a4d6034c145cc04bf1622ba8beb662ad8/pip-0.6.2.tar.gz#sha256=1c1a504d7e70d2c24246f95bd16e3d5fcec740fd144df69a407bf65a2ee67586 (from https://pypi.org/simple/pip/),version: 0.6.2
> Found link https://files.pythonhosted.org/packages/3f/af/c4b9d49fb0f286996b28dbc0955c3ad359794697eb98e0e69863908070b0/pip-0.6.3.tar.gz#sha256=1a6df71eb29b98cba11bde6d6a0d8c6dd8b0518e74ceb71fb31ea4fbb42fd313 (from https://pypi.org/simple/pip/),version: 0.6.3
> Found link https://files.pythonhosted.org/packages/ec/7a/6fe91ff0079ad0437830957c459d52f3923e516f5b453218f2a93d09a427/pip-0.7.tar.gz#sha256=ceaea0b9e494d893c8a191895301b79c1db33e41f14d3ad93e3d28a8b4e9bf27 (from https://pypi.org/simple/pip/), version: 0.7
> Found link https://files.pythonhosted.org/packages/a5/63/11303863c2f5e9d9a15d89fcf7513a4b60987007d418862e0fb65c09fff7/pip-0.7.1.tar.gz#sha256=f54f05aa17edd0036de433c44892c8fedb1fd2871c97829838feb995818d24c3 (from https://pypi.org/simple/pip/),version: 0.7.1
> Found link https://files.pythonhosted.org/packages/cd/a9/1debaa96bbc1005c1c8ad3b79fec58c198d35121546ea2e858ce0894268a/pip-0.7.2.tar.gz#sha256=98df2eb779358412bbbae75980171ae85deebc846d87e244d086520b1212da09 (from https://pypi.org/simple/pip/),version: 0.7.2
> Found link https://files.pythonhosted.org/packages/74/54/f785c327fb3d163560a879b36edae5c78ee07806be282c9d4807f6be7dd1/pip-0.8.tar.gz#sha256=9017e4484a212dd4e1a43dd9f039dd7fc8338d4eea1c339d5ae1c80726de5b0f (from https://pypi.org/simple/pip/), version: 0.8
> Found link https://files.pythonhosted.org/packages/5c/79/5e8381cc3078bae92166f2ba96de8355e8c181926505ba8882f7b099a500/pip-0.8.1.tar.gz#sha256=7176a87f35675f6468341212f3b959bb51d23ea66eb1c3692bf746c45c716fa2 (from https://pypi.org/simple/pip/),version: 0.8.1
> Found link https://files.pythonhosted.org/packages/17/3e/0a98ab032991518741e7e712a719633e6ae160f51b3d3e855194530fd308/pip-0.8.2.tar.gz#sha256=f80a3549c048bc3bbcb47844826e9c7c6fcd87e77b92bef0d9e66d1b397c4962 (from https://pypi.org/simple/pip/),version: 0.8.2
> Found link https://files.pythonhosted.org/packages/f7/9a/943fc6d879ed7220bac2e7e53096bfe78abec88d77f2f516400e0129679e/pip-0.8.3.tar.gz#sha256=1be2e18edd38aa75b5e4ef38a99ec33ba9247177cfcb4a6d2d2b3e73430e3001 (from https://pypi.org/simple/pip/),version: 0.8.3
> Found link https://files.pythonhosted.org/packages/24/33/6eb675fb6db7b71d69d6928b33dea61b8bf5cfe1e5649be70ec84ce2fc09/pip-1.0.tar.gz#sha256=34ba07e2d14ba86d5088ba896ac80bed845a9b276ab8acb279b8d99bc77fec8e (from https://pypi.org/simple/pip/), version: 1.0
> Found link https://files.pythonhosted.org/packages/10/d9/f584e6107ef98ad7eaaaa5d0f756bfee12561fa6a4712ffdb7209e0e1fd4/pip-1.0.1.tar.gz#sha256=37d2f18213d3845d2038dd3686bc71fc12bb41ad66c945a8b0dfec2879f3497b (from https://pypi.org/simple/pip/),version: 1.0.1
> Found link https://files.pythonhosted.org/packages/16/90/5e6f80364d8a656f60681dfb7330298edef292d43e1499bcb3a4c71ff0b9/pip-1.0.2.tar.gz#sha256=a6ed9b36aac2f121c01a2c9e0307a9e4d9438d100a407db701ac65479a3335d2 (from https://pypi.org/simple/pip/),version: 1.0.2
> Found link https://files.pythonhosted.org/packages/25/57/0d42cf5307d79913a082c5c4397d46f3793bc35e1138a694136d6e31be99/pip-1.1.tar.gz#sha256=993804bb947d18508acee02141281c77d27677f8c14eaa64d6287a1c53ef01c8 (from https://pypi.org/simple/pip/), version: 1.1
> Found link https://files.pythonhosted.org/packages/ba/c3/4e1f892f41aaa217fe0d1f827fa05928783349c69f3cc06fdd68e112678a/pip-1.2.tar.gz#sha256=2b168f1987403f1dc6996a1f22a6f6637b751b7ab6ff27e78380b8d6e70aa314 (from https://pypi.org/simple/pip/), version: 1.2
> Found link https://files.pythonhosted.org/packages/c3/a2/a63244da32afd9ce9a8ca1bd86e71610039adea8b8314046ebe5047527a6/pip-1.2.1.tar.gz#sha256=12a9302acfca62cdc7bc5d83386cac3e0581db61ac39acdb3a4e766a16b88eb1 (from https://pypi.org/simple/pip/),version: 1.2.1
> Found link https://files.pythonhosted.org/packages/00/45/69d4f2602b80550bfb26cfd2f62c2f05b3b5c7352705d3766cd1e5b27648/pip-1.3.tar.gz#sha256=d6a13c5be316cb21a0243047c7f163f47e88973ebccff8d32e63ca1bf4d9321c (from https://pypi.org/simple/pip/), version: 1.3
> Found link https://files.pythonhosted.org/packages/5b/ce/f5b98104f1c10d868936c25f7c597f492d4371aa9ad5fb61a94954ee7208/pip-1.3.1.tar.gz#sha256=145eaa5d1ea1b062663da1f3a97780d7edea4c63c68a37c463b1deedf7bb4957 (from https://pypi.org/simple/pip/),version: 1.3.1
> Found link https://files.pythonhosted.org/packages/5f/d0/3b3958f6a58783bae44158b2c4c7827ae89abaecdd4bed12cff402620b9a/pip-1.4.tar.gz#sha256=1fd43cbf07d95ddcecbb795c97a1674b3ddb711bb4a67661284a5aa765aa1b97 (from https://pypi.org/simple/pip/), version: 1.4
> Found link https://files.pythonhosted.org/packages/3f/f8/da390e0df72fb61d176b25a4b95262e3dcc14bda0ad25ac64d56db38b667/pip-1.4.1.tar.gz#sha256=4e7a06554711a624c35d0c646f63674b7f6bfc7f80221bf1eb1f631bd890d04e (from https://pypi.org/simple/pip/),version: 1.4.1
> Found link https://files.pythonhosted.org/packages/4f/7d/e53bc80667378125a9e07d4929a61b0bd7128a1129dbe6f07bb3228652a3/pip-1.5.tar.gz#sha256=25f81d1a0e55d3b1709818dd57fdfb954b028f229f09bd69cb0bc80a8e03e048 (from https://pypi.org/simple/pip/), version: 1.5
> Found link https://files.pythonhosted.org/packages/44/5d/1dca53b5de6d287e7eb99bd174bb022eb6cb0d6ca6e19ca6b16655dde8c2/pip-1.5.1-py2.py3-none-any.whl#sha256=00960db3b0b8724dd37fe37cfb9c72ecb8f59fab9db7d17c5c1e89a1adab49ce (from https://pypi.org/simple/pip/), version: 1.5.1
> Found link https://files.pythonhosted.org/packages/21/3f/d86a600c9b2f41a75caacf768a24130f343def97652de2345da15ef7911f/pip-1.5.1.tar.gz#sha256=e60e936fbc101d56668c6134c1f2b5b40fcbec8b4fc4ca7fc34842b6b4c5c130 (from https://pypi.org/simple/pip/),version: 1.5.1
> Found link https://files.pythonhosted.org/packages/3d/1f/227d77d5e9ed2df5162de4ba3616799a351eccb1ecd668ae824dd26153a1/pip-1.5.2-py2.py3-none-any.whl#sha256=6903909ccdcdbc3297b74118590e71344d6d262827acd1f5c0e2fcfce9807499 (from https://pypi.org/simple/pip/), version: 1.5.2
> Found link https://files.pythonhosted.org/packages/ed/94/391a003107f6ec997c314199d03bff1c105af758ee490e3255353574487b/pip-1.5.2.tar.gz#sha256=2a8a3e08e652d3a40edbb39264bf01f8ff3c32520a79113357cca1f30533f738 (from https://pypi.org/simple/pip/),version: 1.5.2
> Found link https://files.pythonhosted.org/packages/df/e9/bdb53d44fad1465b43edaf6bc7dd3027ed5af81405cc97603fdff0721ebb/pip-1.5.3-py2.py3-none-any.whl#sha256=f0037aed3ce6cf96b9e9117d42e967a74bea9ebe19088a2fdea5de93d5762fee (from https://pypi.org/simple/pip/), version: 1.5.3
> Found link https://files.pythonhosted.org/packages/55/de/671a48ad313c808623041fc475f7c8f7610401d9f573f06b40eeb84e74e3/pip-1.5.3.tar.gz#sha256=dc53b4d28b88556a37cd73052b6d1d08cc644c6724e37c4d38a2e3c03c5440b2 (from https://pypi.org/simple/pip/),version: 1.5.3
> Found link https://files.pythonhosted.org/packages/a9/9a/9aa19fe00de4c025562e5fb3796ff8520165a7dd1a5662c6ec9816e1ae99/pip-1.5.4-py2.py3-none-any.whl#sha256=fb7282556a42e84464f2e963a859ac4012d8134ba6218b70c1d82d145fcfa82f (from https://pypi.org/simple/pip/), version: 1.5.4
> Found link https://files.pythonhosted.org/packages/78/d8/6e58a7130d457edadb753a0ea5708e411c100c7e94e72ad4802feeef735c/pip-1.5.4.tar.gz#sha256=70208a250bb4afdbbdd74c3ac35d4ab9ba1eb6852d02567a6a87f2f5104e30b9 (from https://pypi.org/simple/pip/),version: 1.5.4
> Found link https://files.pythonhosted.org/packages/ce/c2/10d996b9c51b126a9f0bb9e14a9edcdd5c88888323c0685bb9b392b6c47c/pip-1.5.5-py2.py3-none-any.whl#sha256=fe7a5808190067b2598d85def9b83db46e5d64a00848ad843e107c36e1db4ae6 (from https://pypi.org/simple/pip/), version: 1.5.5
> Found link https://files.pythonhosted.org/packages/88/01/a442fde40bd9aaf837612536f16ab751fac628807fd718690795b8ade77d/pip-1.5.5.tar.gz#sha256=4b7f5124364ae9b5ba833dcd8813a84c1c06fba1d7c8543323c7af4b33188eca (from https://pypi.org/simple/pip/),version: 1.5.5
> Found link https://files.pythonhosted.org/packages/3f/08/7347ca4021e7fe0f1ab8f93cbc7d2a7a7350012300ad0e0227d55625e2b8/pip-1.5.6-py2.py3-none-any.whl#sha256=fbc1351ffedf09ca7560428758845a88d648b9730b63ce9e5df53a7c89f039a4 (from https://pypi.org/simple/pip/), version: 1.5.6
> Found link https://files.pythonhosted.org/packages/45/db/4fb9a456b4ec4d3b701456ef562b9d72d76b6358e0c1463d17db18c5b772/pip-1.5.6.tar.gz#sha256=b1a4ae66baf21b7eb05a5e4f37c50c2706fa28ea1f8780ce8efe14dcd9f1726c (from https://pypi.org/simple/pip/),version: 1.5.6
> Found link https://files.pythonhosted.org/packages/dc/7c/21191b5944b917b66e4e4e06d74f668d814b6e8a3ff7acd874479b6f6b3d/pip-6.0-py2.py3-none-any.whl#sha256=5ec6732505bd8be49fe1f8ad557b88253ffb085736396df4d6bea753fc2a8f2c (from https://pypi.org/simple/pip/), version: 6.0
> Found link https://files.pythonhosted.org/packages/38/fd/065c66a88398f240e344fdf496b9707f92d75f88eedc3d10ff847b28a657/pip-6.0.tar.gz#sha256=6103897f1bb68d3f933edd60f3e3830c4ea6b8abf7a4b500db148921b11f6c9b (from https://pypi.org/simple/pip/), version: 6.0
> Found link https://files.pythonhosted.org/packages/e9/7a/cdbc1a12ed52410d557e48d4646f4543e9e991ff32d2374dc6db849aa617/pip-6.0.1-py2.py3-none-any.whl#sha256=322aea7d1f7b9ee68ad87ac4704cad5df97f77e70668c0bd18f964c5daa78173 (from https://pypi.org/simple/pip/), version: 6.0.1
> Found link https://files.pythonhosted.org/packages/4d/c3/8675b90cd89b9b222062f4f6c7e9d48b0387f5b35cbf747a74403a883e56/pip-6.0.1.tar.gz#sha256=fa2f7c68da4a405d673aa38542f9df009d60026db4f532429ac9cbfbda1f959d (from https://pypi.org/simple/pip/),version: 6.0.1
> Found link https://files.pythonhosted.org/packages/71/3c/b5a521e5e99cfff091e282231591f21193fd80de079ec5fb8ed9c6614044/pip-6.0.2-py2.py3-none-any.whl#sha256=7d17b0f267f7c9cd17cd2924bbbe2b4a3d407322c0e09084ca3f1295c1fed50d (from https://pypi.org/simple/pip/), version: 6.0.2
> Found link https://files.pythonhosted.org/packages/4c/5a/f9e8e3de0153282c7cb54a9b991af225536ac914bac858ca664cf883bb3e/pip-6.0.2.tar.gz#sha256=6fa90667706a679e3dc75b27a51fddafa64401c45e96f8ae6c20978183290077 (from https://pypi.org/simple/pip/),version: 6.0.2
> Found link https://files.pythonhosted.org/packages/73/cb/3eebf42003791df29219a3dfa1874572aa16114b44c9b1b0ac66bf96e8c0/pip-6.0.3-py2.py3-none-any.whl#sha256=b72655b6ac6aef1c86dd07f51e8ace8d7aabd6a1c4ff88db87155276fa32a073 (from https://pypi.org/simple/pip/), version: 6.0.3
> Found link https://files.pythonhosted.org/packages/ce/63/8d99ae60d11ae1a65f5d4fc39a529a598bd3b8e067132210cb0c4d9e9f74/pip-6.0.3.tar.gz#sha256=b091a35f5fa0faffac0b27b97e1e1e93ffe63b463c2ea8dbde0c1fb987933614 (from https://pypi.org/simple/pip/),version: 6.0.3
> Found link https://files.pythonhosted.org/packages/c5/0e/c974206726542bc495fc7443dd97834a6d14c2f0cba183fcfcd01075225a/pip-6.0.4-py2.py3-none-any.whl#sha256=8dfd95de29a7a3bb1e7d368cc83d566938eb210b04d553ebfe5e3a422f4aec65 (from https://pypi.org/simple/pip/), version: 6.0.4
> Found link https://files.pythonhosted.org/packages/02/a1/c90f19910ee153d7a0efca7216758121118d7e93084276541383fe9ca82e/pip-6.0.4.tar.gz#sha256=1dbbff9c369e510c7468ab68ba52c003f68f83c99c2f8259acd51099e8799f1e (from https://pypi.org/simple/pip/),version: 6.0.4
> Found link https://files.pythonhosted.org/packages/e9/1b/c6a375a337fb576784cdea3700f6c3eaf1420f0a01458e6e034cc178a84a/pip-6.0.5-py2.py3-none-any.whl#sha256=b2c20e3a2a43b2bbb1d19ad98be27eccc7b0f0ece016da602ccaa757a862b0e2 (from https://pypi.org/simple/pip/), version: 6.0.5
> Found link https://files.pythonhosted.org/packages/19/f2/58628768f618c8c9fea878e0fb97730c0b8a838d3ab3f325768bf12dac94/pip-6.0.5.tar.gz#sha256=3bf42d28be9085ab2e9aecfd69a6da2d31563fe833304bf71a620a30c38ab8a2 (from https://pypi.org/simple/pip/),version: 6.0.5
> Found link https://files.pythonhosted.org/packages/64/fc/4a49ccb18f55a0ceeb76e8d554bd4563217117492997825d194ed0017cc1/pip-6.0.6-py2.py3-none-any.whl#sha256=fb04f8afe1ba57626783f0c8e2f3d46bbaebaa446fcf124f434e968a2fee595e (from https://pypi.org/simple/pip/), version: 6.0.6
> Found link https://files.pythonhosted.org/packages/f6/ce/d9e4e178b66c766c117f62ddf4fece019ef9d50127a8926d2f60300d615e/pip-6.0.6.tar.gz#sha256=3a14091299dcdb9bab9e9004ae67ac401f2b1b14a7c98de074ca74fdddf4bfa0 (from https://pypi.org/simple/pip/),version: 6.0.6
> Found link https://files.pythonhosted.org/packages/7a/8e/2bbd4fcf3ee06ee90ded5f39ec12f53165dfdb9ef25a981717ad38a16670/pip-6.0.7-py2.py3-none-any.whl#sha256=93a326304c7db749896bcef822bbbac1ab29dad5651c6d732e245975239890e6 (from https://pypi.org/simple/pip/), version: 6.0.7
> Found link https://files.pythonhosted.org/packages/52/85/b160ebdaa84378df6bb0176d4eed9f57edca662446174eead7a9e2e566d6/pip-6.0.7.tar.gz#sha256=35a5a43ac6b7af83ed47ea5731a365f43d350a3a7267e039e5f06b61d42ab3c2 (from https://pypi.org/simple/pip/),version: 6.0.7
> Found link https://files.pythonhosted.org/packages/63/65/55b71647adec1ad595bf0e5d76d028506dfc002df30c256f022ff7a660a5/pip-6.0.8-py2.py3-none-any.whl#sha256=3c22b0a8ff92727bd737a82f72700790591f177541df08c07bc1f90d6b72ac19 (from https://pypi.org/simple/pip/), version: 6.0.8
> Found link https://files.pythonhosted.org/packages/ef/8a/e3a980bc0a7f791d72c1302f65763ed300f2e14c907ac033e01b44c79e5e/pip-6.0.8.tar.gz#sha256=0d58487a1b7f5be2e5e965c11afbea1dc44ecec8069de03491a4d0d6c85f4551 (from https://pypi.org/simple/pip/),version: 6.0.8
> Found link https://files.pythonhosted.org/packages/24/fb/8a56a46243514681e569bbafd8146fa383476c4b7c725c8598c452366f31/pip-6.1.0-py2.py3-none-any.whl#sha256=435a018f6d29e34d4f901bf4e6860d8a5fa1816b68d62008c18ca062a306db31 (from https://pypi.org/simple/pip/), version: 6.1.0
> Found link https://files.pythonhosted.org/packages/6c/84/432eb60bbcb414b9cdfcb135d5f4925e253c74e7d6916ada79990d6cc1a0/pip-6.1.0.tar.gz#sha256=89f120e2ab3d25ab70c36eb28ad4f280fc9ba71736e74d3055f609c1f9173768 (from https://pypi.org/simple/pip/),version: 6.1.0
> Found link https://files.pythonhosted.org/packages/67/f0/ba0fb41dbdbfc4aa3e0c16b40269aca6b9e3d59cacdb646218aa2e9b1d2c/pip-6.1.1-py2.py3-none-any.whl#sha256=a67e54aa0f26b6d62ccec5cc6735eff205dd0fed075f56ac3d3111e91e4467fc (from https://pypi.org/simple/pip/), version: 6.1.1
> Found link https://files.pythonhosted.org/packages/bf/85/871c126b50b8ee0b9819e8a63b614aedd264577e73478caedcd447e8f28c/pip-6.1.1.tar.gz#sha256=89f3b626d225e08e7f20d85044afa40f612eb3284484169813dc2d0631f2a556 (from https://pypi.org/simple/pip/),version: 6.1.1
> Found link https://files.pythonhosted.org/packages/5a/9b/56d3c18d0784d5f2bbd446ea2dc7ffa7476c35e3dc223741d20cfee3b185/pip-7.0.0-py2.py3-none-any.whl#sha256=309c48399c7d68501a10ef206abd6e5c541fedbf84b95435d9063bd454b39df7 (from https://pypi.org/simple/pip/), version: 7.0.0
> Found link https://files.pythonhosted.org/packages/c6/16/6475b142927ca5d03e3b7968efa5b0edd103e4684ecfde181a25f6fa2505/pip-7.0.0.tar.gz#sha256=7b46bfc1b95494731de306a688e2a7bc056d7fa7ad27e026908fb2ae67fed23d (from https://pypi.org/simple/pip/),version: 7.0.0
> Found link https://files.pythonhosted.org/packages/5a/10/bb7a32c335bceba636aa673a4c977effa1e73a79f88856459486d8d670cf/pip-7.0.1-py2.py3-none-any.whl#sha256=d26b8573ba1ac1ec99a9bdbdffee2ff2b06c7790815211d0eb4dc1462a089705 (from https://pypi.org/simple/pip/), version: 7.0.1
> Found link https://files.pythonhosted.org/packages/4a/83/9ae4362a80739657e0c8bb628ea3fa0214a9aba7c8590dacc301ea293f73/pip-7.0.1.tar.gz#sha256=cfec177552fdd0b2d12b72651c8e874f955b4c62c1c2c9f2588cbdc1c0d0d416 (from https://pypi.org/simple/pip/),version: 7.0.1
> Found link https://files.pythonhosted.org/packages/64/7f/7107800ae0919a80afbf1ecba21b90890431c3ee79d700adac3c79cb6497/pip-7.0.2-py2.py3-none-any.whl#sha256=83c869c5ab7113866e2d69641ec470d47f0faae68ca4550a289a4d3db515ad65 (from https://pypi.org/simple/pip/), version: 7.0.2
> Found link https://files.pythonhosted.org/packages/75/b1/66532c273bca0133e42c3b4540a1609289f16e3046f1830f18c60794d661/pip-7.0.2.tar.gz#sha256=ba28fa60b573a9444e7b78ccb3b0f261d1f66f46d20403f9dce37b18a6aed405 (from https://pypi.org/simple/pip/),version: 7.0.2
> Found link https://files.pythonhosted.org/packages/96/76/33a598ae42dd0554207d83c7acc60e3b166dbde723cbf282f1f73b7a127c/pip-7.0.3-py2.py3-none-any.whl#sha256=7b1cb03e827d58d2d05e68ea96a9e27487ed4b0afcd951ac6e40847ce94f0738 (from https://pypi.org/simple/pip/), version: 7.0.3
> Found link https://files.pythonhosted.org/packages/35/59/5b23115758ba0f2fc465c459611865173ef006202ba83f662d1f58ed2fb8/pip-7.0.3.tar.gz#sha256=b4c598825a6f6dc2cac65968feb28e6be6c1f7f1408493c60a07eaa731a0affd (from https://pypi.org/simple/pip/),version: 7.0.3
> Found link https://files.pythonhosted.org/packages/f7/c0/9f8dac88326609b4b12b304e8382f64f7d5af7735a00d2fac36cf135fc30/pip-7.1.0-py2.py3-none-any.whl#sha256=80c29f899d3a00a448d65f8158544d22935baec7159af8da1a4fa1490ced481d (from https://pypi.org/simple/pip/), version: 7.1.0
> Found link https://files.pythonhosted.org/packages/7e/71/3c6ece07a9a885650aa6607b0ebfdf6fc9a3ef8691c44b5e724e4eee7bf2/pip-7.1.0.tar.gz#sha256=d5275ba3221182a5dd1b6bcfbfc5ec277fb399dd23226d6fa018048f7e0f10f2 (from https://pypi.org/simple/pip/),version: 7.1.0
> Found link https://files.pythonhosted.org/packages/1c/56/094d563c508917081bccff365e4f621ba33073c1c13aca9267a43cfcaf13/pip-7.1.1-py2.py3-none-any.whl#sha256=ce13000878d34c1178af76cb8cf269e232c00508c78ed46c165dd5b0881615f4 (from https://pypi.org/simple/pip/), version: 7.1.1
> Found link https://files.pythonhosted.org/packages/3b/bb/b3f2a95494fd3f01d3b3ae530e7c0e910dc25e88e30787b0a5e10cbc0640/pip-7.1.1.tar.gz#sha256=b22fe3c93a13fc7c04f145a42fd2ad50a9e3e1b8a7eed2e2b1c66e540a0951da (from https://pypi.org/simple/pip/),version: 7.1.1
> Found link https://files.pythonhosted.org/packages/b2/d0/cd115fe345dd6f07ec1c780020a7dfe74966fceeb171e0f20d1d4905b0b7/pip-7.1.2-py2.py3-none-any.whl#sha256=b9d3983b5cce04f842175e30169d2f869ef12c3546fd274083a65eada4e9708c (from https://pypi.org/simple/pip/), version: 7.1.2
> Found link https://files.pythonhosted.org/packages/d0/92/1e8406c15d9372084a5bf79d96da3a0acc4e7fcf0b80020a4820897d2a5c/pip-7.1.2.tar.gz#sha256=ca047986f0528cfa975a14fb9f7f106271d4e0c3fe1ddced6c1db2e7ae57a477 (from https://pypi.org/simple/pip/),version: 7.1.2
> Found link https://files.pythonhosted.org/packages/00/ae/bddef02881ee09c6a01a0d6541aa6c75a226a4e68b041be93142befa0cd6/pip-8.0.0-py2.py3-none-any.whl#sha256=262ed1823eb7fbe3f18a9bedb4800e59c4ab9a6682aff8c37b5ee83ea840910b (from https://pypi.org/simple/pip/), version: 8.0.0
> Found link https://files.pythonhosted.org/packages/e3/2d/03c014d11e66628abf2fda5ca00f779cbe7b5292c5cd13d42a95b94aa9b8/pip-8.0.0.tar.gz#sha256=90112b296152f270cb8dddcd19b7b87488d9e002e8cf622e14c4da9c2f6319b1 (from https://pypi.org/simple/pip/),version: 8.0.0
> Found link https://files.pythonhosted.org/packages/45/9c/6f9a24917c860873e2ce7bd95b8f79897524353df51d5d920cd6b6c1ec33/pip-8.0.1-py2.py3-none-any.whl#sha256=dedaac846bc74e38a3253671f51a056331ffca1da70e3f48d8128f2aa0635bba (from https://pypi.org/simple/pip/), version: 8.0.1
> Found link https://files.pythonhosted.org/packages/ea/66/a3d6187bd307159fedf8575c0d9ee2294d13b1cdd11673ca812e6a2dda8f/pip-8.0.1.tar.gz#sha256=477c50b3e538a7ac0fa611fb8b877b04b33fb70d325b12a81b9dbf3eb1158a4d (from https://pypi.org/simple/pip/),version: 8.0.1
> Found link https://files.pythonhosted.org/packages/e7/a0/bd35f5f978a5e925953ce02fa0f078a232f0f10fcbe543d8cfc043f74fda/pip-8.0.2-py2.py3-none-any.whl#sha256=249a6f3194be8c2e8cb4d4be3f6fd16a9f1e3336218caffa8e7419e3816f9988 (from https://pypi.org/simple/pip/), version: 8.0.2
> Found link https://files.pythonhosted.org/packages/ce/15/ee1f9a84365423e9ef03d0f9ed0eba2fb00ac1fffdd33e7b52aea914d0f8/pip-8.0.2.tar.gz#sha256=46f4bd0d8dfd51125a554568d646fe4200a3c2c6c36b9f2d06d2212148439521 (from https://pypi.org/simple/pip/),version: 8.0.2
> Found link https://files.pythonhosted.org/packages/ae/d4/2b127310f5364610b74c28e2e6a40bc19e2d3c9a9a4e012d3e333e767c99/pip-8.0.3-py2.py3-none-any.whl#sha256=b0335bc837f9edb5aad03bd43d0973b084a1cbe616f8188dc23ba13234dbd552 (from https://pypi.org/simple/pip/), version: 8.0.3
> Found link https://files.pythonhosted.org/packages/22/f3/14bc87a4f6b5ec70b682765978a6f3105bf05b6781fa97e04d30138bd264/pip-8.0.3.tar.gz#sha256=30f98b66f3fe1069c529a491597d34a1c224a68640c82caf2ade5f88aa1405e8 (from https://pypi.org/simple/pip/),version: 8.0.3
> Found link https://files.pythonhosted.org/packages/1e/c7/78440b3fb882ed001e6e12d8770bd45e73d6eced4e57f7c072b829ce8a3d/pip-8.1.0-py2.py3-none-any.whl#sha256=a542b99e08002ead83200198e19a3983270357e1cb4fe704247990b5b35471dc (from https://pypi.org/simple/pip/), version: 8.1.0
> Found link https://files.pythonhosted.org/packages/3c/72/6981d5adf880adecb066a1a1a4c312a17f8d787a3b85446967964ac66d55/pip-8.1.0.tar.gz#sha256=d8faa75dd7d0737b16d50cd0a56dc91a631c79ecfd8d38b80f6ee929ec82043e (from https://pypi.org/simple/pip/),version: 8.1.0
> Found link https://files.pythonhosted.org/packages/31/6a/0f19a7edef6c8e5065f4346137cc2a08e22e141942d66af2e1e72d851462/pip-8.1.1-py2.py3-none-any.whl#sha256=44b9c342782ab905c042c207d995aa069edc02621ddbdc2b9f25954a0fdac25c (from https://pypi.org/simple/pip/), version: 8.1.1
> Found link https://files.pythonhosted.org/packages/41/27/9a8d24e1b55bd8c85e4d022da2922cb206f183e2d18fee4e320c9547e751/pip-8.1.1.tar.gz#sha256=3e78d3066aaeb633d185a57afdccf700aa2e660436b4af618bcb6ff0fa511798 (from https://pypi.org/simple/pip/),version: 8.1.1
> Found link https://files.pythonhosted.org/packages/9c/32/004ce0852e0a127f07f358b715015763273799bd798956fa930814b60f39/pip-8.1.2-py2.py3-none-any.whl#sha256=6464dd9809fb34fc8df2bf49553bb11dac4c13d2ffa7a4f8038ad86a4ccb92a1 (from https://pypi.org/simple/pip/), version: 8.1.2
> Found link https://files.pythonhosted.org/packages/e7/a8/7556133689add8d1a54c0b14aeff0acb03c64707ce100ecd53934da1aa13/pip-8.1.2.tar.gz#sha256=4d24b03ffa67638a3fa931c09fd9e0273ffa904e95ebebe7d4b1a54c93d7b732 (from https://pypi.org/simple/pip/),version: 8.1.2
> Found link https://files.pythonhosted.org/packages/3f/ef/935d9296acc4f48d1791ee56a73781271dce9712b059b475d3f5fa78487b/pip-9.0.0-py2.py3-none-any.whl#sha256=c856ac18ca01e7127456f831926dc67cc7d3ab663f4c13b1ec156e36db4de574 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.0
> Found link https://files.pythonhosted.org/packages/5e/53/eaef47e5e2f75677c9de0737acc84b659b78a71c4086f424f55346a341b5/pip-9.0.0.tar.gz#sha256=f62fb70e7e000e46fce12aaeca752e5281a5446977fe5a75ab4189a43b3f8793 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.0
> Found link https://files.pythonhosted.org/packages/b6/ac/7015eb97dc749283ffdec1c3a88ddb8ae03b8fad0f0e611408f196358da3/pip-9.0.1-py2.py3-none-any.whl#sha256=690b762c0a8460c303c089d5d0be034fb15a5ea2b75bdf565f40421f542fefb0 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.1
> Found link https://files.pythonhosted.org/packages/11/b6/abcb525026a4be042b486df43905d6893fb04f05aac21c32c638e939e447/pip-9.0.1.tar.gz#sha256=09f243e1a7b461f654c26a725fa373211bb7ff17a9300058b205c61658ca940d (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.1
> Found link https://files.pythonhosted.org/packages/e7/f9/e801dcea22886cd513f6bd2e8f7e581bd6f67bb8e8f1cd8e7b92d8539280/pip-9.0.2-py2.py3-none-any.whl#sha256=b135491ddb061f39719b8472d8abb59c613816a2b86069c332db74d1cd208ab2 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.2
> Found link https://files.pythonhosted.org/packages/e5/8f/3fc66461992dc9e9fcf5e005687d5f676729172dda640df2fd8b597a6da7/pip-9.0.2.tar.gz#sha256=88110a224e9d30e5d76592a0b2130ef10e7e67a6426e8617bb918fffbfe91fe5 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.2
> Found link https://files.pythonhosted.org/packages/ac/95/a05b56bb975efa78d3557efa36acaf9cf5d2fd0ee0062060493687432e03/pip-9.0.3-py2.py3-none-any.whl#sha256=c3ede34530e0e0b2381e7363aded78e0c33291654937e7373032fda04e8803e5 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.3
> Found link https://files.pythonhosted.org/packages/c4/44/e6b8056b6c8f2bfd1445cc9990f478930d8e3459e9dbf5b8e2d2922d64d3/pip-9.0.3.tar.gz#sha256=7bf48f9a693be1d58f49f7af7e0ae9fe29fd671cde8a55e6edca3581c4ef5796 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.3
> Found link https://files.pythonhosted.org/packages/4b/5a/8544ae02a5bd28464e03af045e8aabde20a7b02db1911a9159328e1eb25a/pip-10.0.0b1-py2.py3-none-any.whl#sha256=dbd5d24cd461be23429625085a36cc8732cbcac4d2aaf673031f80f6ac07d844 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0b1
> Found link https://files.pythonhosted.org/packages/aa/6d/ffbb86abf18b750fb26f27eda7c7732df2aacaa669c420d2eb2ad6df3458/pip-10.0.0b1.tar.gz#sha256=8d6e63d8b99752e4b53f272b66f9cd7b59e2b288e9a863a61c48d167203a2656 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0b1
> Found link https://files.pythonhosted.org/packages/97/72/1d514201e7d7fc7fff5aac3de9c7b892cd72fb4bf23fd983630df96f7412/pip-10.0.0b2-py2.py3-none-any.whl#sha256=79f55588912f1b2b4f86f96f11e329bb01b25a484e2204f245128b927b1038a7 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0b2
> Found link https://files.pythonhosted.org/packages/32/67/572f642e6e42c580d3154964cfbab7d9322c23b0f417c6c01fdd206a2777/pip-10.0.0b2.tar.gz#sha256=ad6adec2150ce4aed8f6134d9b77d928fc848dbcb887fb1a455988cf99da5cae (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0b2
> Found link https://files.pythonhosted.org/packages/62/a1/0d452b6901b0157a0134fd27ba89bf95a857fbda64ba52e1ca2cf61d8412/pip-10.0.0-py2.py3-none-any.whl#sha256=86a60a96d85e329962a9e6f6af612cbc11106293dbc83f119802b5bee9874cf3 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0
> Found link https://files.pythonhosted.org/packages/e0/69/983a8e47d3dfb51e1463c1e962b2ccd1d74ec4e236e232625e353d830ed2/pip-10.0.0.tar.gz#sha256=f05a3eeea64bce94e85cc6671d679473d66288a4d37c3fcf983584954096b34f (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0
> Found link https://files.pythonhosted.org/packages/0f/74/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4/pip-10.0.1-py2.py3-none-any.whl#sha256=717cdffb2833be8409433a93746744b59505f42146e8d37de6c62b430e25d6d7 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.1
> Found link https://files.pythonhosted.org/packages/ae/e8/2340d46ecadb1692a1e455f13f75e596d4eab3d11a57446f08259dee8f02/pip-10.0.1.tar.gz#sha256=f2bd08e0cd1b06e10218feaf6fef299f473ba706582eb3bd9d52203fdbd7ee68 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.1
> Found link https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl#sha256=070e4bf493c7c2c9f6a08dd797dd3c066d64074c38e9e8a0fb4e6541f266d96c (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 18.0
> Found link https://files.pythonhosted.org/packages/69/81/52b68d0a4de760a2f1979b0931ba7889202f302072cc7a0d614211bc7579/pip-18.0.tar.gz#sha256=a0e11645ee37c90b40c46d607070c4fd583e2cd46231b1c06e389c5e814eed76 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 18.0
> Found link https://files.pythonhosted.org/packages/c2/d7/90f34cb0d83a6c5631cf71dfe64cc1054598c843a92b400e55675cc2ac37/pip-18.1-py2.py3-none-any.whl#sha256=7909d0a0932e88ea53a7014dfd14522ffef91a464daaaf5c573343852ef98550 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 18.1
> Found link https://files.pythonhosted.org/packages/45/ae/8a0ad77defb7cc903f09e551d88b443304a9bd6e6f124e75c0fbbf6de8f7/pip-18.1.tar.gz#sha256=c0a292bd977ef590379a3f05d7b7f65135487b67470f6281289a94e015650ea1 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 18.1
> Found link https://files.pythonhosted.org/packages/60/64/73b729587b6b0d13e690a7c3acd2231ee561e8dd28a58ae1b0409a5a2b20/pip-19.0-py2.py3-none-any.whl#sha256=249ab0de4c1cef3dba4cf3f8cca722a07fc447b1692acd9f84e19c646db04c9a (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0
> Found link https://files.pythonhosted.org/packages/11/31/c483614095176ddfa06ac99c2af4171375053b270842c7865ca0b4438dc1/pip-19.0.tar.gz#sha256=c82bf8bc00c5732f0dd49ac1dea79b6242a1bd42a5012e308ed4f04369b17e54 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0
> Found link https://files.pythonhosted.org/packages/46/dc/7fd5df840efb3e56c8b4f768793a237ec4ee59891959d6a215d63f727023/pip-19.0.1-py2.py3-none-any.whl#sha256=aae79c7afe895fb986ec751564f24d97df1331bb99cdfec6f70dada2f40c0044 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.1
> Found link https://files.pythonhosted.org/packages/c8/89/ad7f27938e59db1f0f55ce214087460f65048626e2226531ba6cb6da15f0/pip-19.0.1.tar.gz#sha256=e81ddd35e361b630e94abeda4a1eddd36d47a90e71eb00f38f46b57f787cd1a5 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.1
> Found link https://files.pythonhosted.org/packages/d7/41/34dd96bd33958e52cb4da2f1bf0818e396514fd4f4725a79199564cd0c20/pip-19.0.2-py2.py3-none-any.whl#sha256=6a59f1083a63851aeef60c7d68b119b46af11d9d803ddc1cf927b58edcd0b312 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.2
> Found link https://files.pythonhosted.org/packages/4c/4d/88bc9413da11702cbbace3ccc51350ae099bb351febae8acc85fec34f9af/pip-19.0.2.tar.gz#sha256=f851133f8b58283fa50d8c78675eb88d4ff4cde29b6c41205cd938b06338e0e5 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.2
> Found link https://files.pythonhosted.org/packages/d8/f3/413bab4ff08e1fc4828dfc59996d721917df8e8583ea85385d51125dceff/pip-19.0.3-py2.py3-none-any.whl#sha256=bd812612bbd8ba84159d9ddc0266b7fbce712fc9bc98c82dee5750546ec8ec64 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.3
> Found link https://files.pythonhosted.org/packages/36/fa/51ca4d57392e2f69397cd6e5af23da2a8d37884a605f9e3f2d3bfdc48397/pip-19.0.3.tar.gz#sha256=6e6f197a1abfb45118dbb878b5c859a0edbdd33fd250100bc015b67fded4b9f2 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.3
> Found link https://files.pythonhosted.org/packages/f9/fb/863012b13912709c13cf5cfdbfb304fa6c727659d6290438e1a88df9d848/pip-19.1-py2.py3-none-any.whl#sha256=8f59b6cf84584d7962d79fd1be7a8ec0eb198aa52ea864896551736b3614eee9 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.1
> Found link https://files.pythonhosted.org/packages/51/5f/802a04274843f634469ef299fcd273de4438386deb7b8681dd059f0ee3b7/pip-19.1.tar.gz#sha256=d9137cb543d8a4d73140a3282f6d777b2e786bb6abb8add3ac5b6539c82cd624 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.1 | 05-06-2019 12:31:47 | 05-06-2019 12:31:47 | Which commands are you running?<|||||>I am so sorry that I took your time, I accidentally posted this here. |