Update README.md
README.md CHANGED

@@ -24,8 +24,8 @@ The sparse version of GPT-J 6B is a pruned variant derived from the original [GP
 | \\(n_{heads}\\) | 16 |
 | \\(d_{head}\\) | 256 |
 | \\(n_{ctx}\\) | 2048 |
-| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3)
-| Positional Encoding |
+| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
+| Positional Encoding | Rotary Position Embedding (RoPE) |
 | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
 <figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
 <p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
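For reference, and assuming the checkpoint is loadable with the Hugging Face `transformers` library (as the upstream GPT-J 6B is), a minimal sketch to cross-check the hyperparameters in the table above against the default `GPTJConfig`:

```python
# Minimal sketch: the default GPTJConfig mirrors the table above.
# Assumes the `transformers` package is installed; the sparse/pruned variant
# is expected to share these architectural hyperparameters with GPT-J 6B.
from transformers import GPTJConfig

config = GPTJConfig()  # defaults correspond to GPT-J 6B

print(config.n_head)                    # 16   -> n_heads
print(config.n_embd // config.n_head)   # 256  -> d_head
print(config.n_positions)               # 2048 -> n_ctx
print(config.vocab_size)                # 50400 -> padded embedding size (GPT-2 tokenizer uses 50257)
print(config.rotary_dim)                # 64   -> RoPE dimensions
```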