Link to paper (#47)
Commit: aa37cced63635d18c302683b0aab74102a33f096
Co-authored-by: Niels Rogge <[email protected]>
README.md CHANGED

```diff
@@ -1,25 +1,6 @@
 ---
-library_name: transformers
-license: cc-by-nc-4.0
-tags:
-- xlm-roberta
-- eva02
-- clip
-- feature-extraction
-- sentence-similarity
-- retrieval
-- multimodal
-- multi-modal
-- crossmodal
-- cross-modal
-- mteb
-- clip-benchmark
-- vidore
-- transformers
-- sentence-transformers
-- onnx
-- safetensors
-- transformers.js
+base_model:
+- jinaai/xlm-roberta-flash-implementation
 language:
 - multilingual
 - af
@@ -115,9 +96,28 @@ language:
 - xh
 - yi
 - zh
+library_name: transformers
+license: cc-by-nc-4.0
+tags:
+- xlm-roberta
+- eva02
+- clip
+- feature-extraction
+- sentence-similarity
+- retrieval
+- multimodal
+- multi-modal
+- crossmodal
+- cross-modal
+- mteb
+- clip-benchmark
+- vidore
+- transformers
+- sentence-transformers
+- onnx
+- safetensors
+- transformers.js
 inference: false
-base_model:
-- jinaai/xlm-roberta-flash-implementation
 ---
@@ -135,6 +135,7 @@ base_model:
 <b>Jina CLIP v2: Multilingual Multimodal Embeddings for Texts and Images</b>
 </p>
 
+This model is based on the paper [jina-clip-v2: Multilingual Multimodal Embeddings for Text and Images](https://huggingface.co/papers/2412.08802).
 
 ## Quick Start
 
```
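The commit reorders the model card's YAML frontmatter (metadata keys such as `base_model`, `language`, `tags`, `inference`). After hand-editing a frontmatter block like this, it helps to confirm the block is still delimited correctly and to see its top-level keys. Below is a minimal sketch of such a check; the helper `frontmatter_keys` is hypothetical (not part of this commit or any Hugging Face library) and uses only naive line parsing rather than a full YAML parser.

```python
def frontmatter_keys(readme_text: str) -> list[str]:
    """Return top-level YAML keys from a model card's frontmatter block.

    Naive sketch: assumes the frontmatter opens on line 1 with '---' and
    closes with the next '---'; does not handle nested mappings or quoting.
    """
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return []  # no frontmatter block at the top of the file
    keys = []
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the frontmatter
            break
        # top-level keys start at column 0 and look like "name: ..."
        if line and not line.startswith((" ", "-", "#")) and ":" in line:
            keys.append(line.split(":", 1)[0])
    return keys


# Abbreviated stand-in for the post-commit frontmatter above
card = """---
base_model:
- jinaai/xlm-roberta-flash-implementation
language:
- multilingual
inference: false
---
# jina-clip-v2
"""
print(frontmatter_keys(card))  # ['base_model', 'language', 'inference']
```

For real model cards, the `huggingface_hub` library's `ModelCard` class parses this metadata properly; the sketch above is only for quick local sanity checks.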