feat: add tag for hfjob build
README.md
@@ -4,6 +4,8 @@ tags:
 - kernel
 ---
 
+
+
 # Flash Attention
 
 Flash Attention is a fast and memory-efficient implementation of the attention mechanism, designed to work with large models and long sequences. This is a Hugging Face compliant kernel build of Flash Attention.
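For context on the "Hugging Face compliant kernel build" line above: builds like this are typically consumed through the Hugging Face `kernels` library rather than compiled locally. A minimal sketch follows, assuming the build is published under a repo id such as `kernels-community/flash-attn` and exposes a fused attention forward entry point; both the repo id and the `mha_fwd` name are illustrative assumptions, not stated in this commit.

```python
# Minimal sketch of loading a Hub-hosted kernel build via the `kernels` library.
# Assumptions: the repo id "kernels-community/flash-attn" and the `mha_fwd`
# entry point are illustrative; check the kernel's README for the real names.
import torch
from kernels import get_kernel

# Fetches the prebuilt binary matching the local torch/CUDA setup from the Hub.
flash_attn = get_kernel("kernels-community/flash-attn")

# (batch, seqlen, heads, head_dim) half-precision tensors on GPU.
q = torch.randn(1, 2048, 8, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 2048, 8, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 2048, 8, 64, dtype=torch.float16, device="cuda")

# Hypothetical call to the fused attention forward pass.
out = flash_attn.mha_fwd(q, k, v, is_causal=True)
```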