weiweiz1 committed
Commit c8254c9 · 1 Parent(s): 9c90a40

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -24,8 +24,8 @@ The sparse version of GPT-J 6B is a pruned variant derived from the original [GP
  | \\(n_{heads}\\) | 16 |
  | \\(d_{head}\\) | 256 |
  | \\(n_{ctx}\\) | 2048 |
- | \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
- | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
+ | \\(n_{vocab}\\) | 50257/50400&dagger; (same tokenizer as GPT-2/3) |
+ | Positional Encoding | Rotary Position Embedding RoPE |
  | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
  <figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self attention block.</p>
  <p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
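For context on the positional-encoding row in the table above, here is a minimal sketch of Rotary Position Embedding (RoPE) applied to one attention head, using the dimensions listed (d_head = 256, 64 rotary dimensions, context length up to 2048). The function name `apply_rope` and the half-split channel pairing are illustrative assumptions: the GPT-J reference implementation in mesh-transformer-jax pairs interleaved even/odd channels, but the rotation principle is the same.

```python
import numpy as np

def apply_rope(x, rotary_dims=64, base=10000.0):
    """Illustrative RoPE: rotate the first `rotary_dims` channels of x by position.

    x: (seq_len, d_head) query or key vectors for one head; channels beyond
    `rotary_dims` pass through unchanged. The half-split pairing used here is
    an assumption (the reference implementation interleaves even/odd channels).
    """
    seq_len, _ = x.shape
    half = rotary_dims // 2
    # One frequency per rotated channel pair, geometrically spaced as in the RoPE paper.
    inv_freq = 1.0 / (base ** (np.arange(0, rotary_dims, 2) / rotary_dims))
    angles = np.outer(np.arange(seq_len), inv_freq)  # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)

    x_rot, x_pass = x[:, :rotary_dims], x[:, rotary_dims:]
    x1, x2 = x_rot[:, :half], x_rot[:, half:]
    # 2-D rotation of each channel pair by its position-dependent angle.
    rotated = np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
    return np.concatenate([rotated, x_pass], axis=-1)

q = np.random.randn(2048, 256)          # one head over the full 2048-token context
q_rope = apply_rope(q, rotary_dims=64)  # only the first 64 of 256 dims are rotated
```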