tomaarsen HF Staff committed on
Commit a32ac11 · verified · 1 Parent(s): 0b2d947

Update README.md

Files changed (1): README.md +5 -4
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 license: apache-2.0
 ---
-# Cross-Encoder for Quora Duplicate Questions Detection
+# Cross-Encoder for SQuAD (QNLI)
 This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
 
 ## Training Data
@@ -15,7 +15,8 @@ For performance results of this model, see [SBERT.net Pre-trained Cross-Encoder]
 Pre-trained models can be used like this:
 ```python
 from sentence_transformers import CrossEncoder
-model = CrossEncoder('model_name')
+
+model = CrossEncoder('cross-encoder/qnli-distilroberta-base')
 scores = model.predict([('Query1', 'Paragraph1'), ('Query2', 'Paragraph2')])
 
 #e.g.
@@ -28,8 +29,8 @@ You can use the model also directly with Transformers library (without SentenceT
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 import torch
 
-model = AutoModelForSequenceClassification.from_pretrained('model_name')
-tokenizer = AutoTokenizer.from_pretrained('model_name')
+model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/qnli-distilroberta-base')
+tokenizer = AutoTokenizer.from_pretrained('cross-encoder/qnli-distilroberta-base')
 
 features = tokenizer(['How many people live in Berlin?', 'What is the size of New York?'], ['Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
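The Transformers snippet in the changed README ends after tokenization; a sentence-level cross-encoder like this one typically emits a single logit per (question, paragraph) pair, which is squashed to a (0, 1) score with a sigmoid. A minimal sketch of that final step, using made-up logit values in place of real `model(**features).logits` output:

```python
import math

def sigmoid(logit: float) -> float:
    """Map a raw cross-encoder logit to a score in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical logits standing in for model(**features).logits:
# one value per (question, paragraph) pair from the README example.
logits = [4.2, -3.1]
scores = [sigmoid(x) for x in logits]

# A score near 1 suggests the paragraph answers the question;
# a score near 0 suggests it does not.
for logit, score in zip(logits, scores):
    print(f"logit={logit:+.1f} -> score={score:.3f}")
```

Note that `CrossEncoder.predict` in sentence-transformers applies this activation for single-logit models itself, so the sigmoid is only needed when working with the raw Transformers logits.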