drbh (HF Staff) committed
Commit 4762963 · verified · 1 Parent(s): a741640

feat: add tag for hfjob build

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -4,6 +4,8 @@ tags:
 - kernel
 ---
 
+![Status](https://hubwebhook.dholtz.com/shield?repo=kernels-community/flash-attn)
+
 # Flash Attention
 
 Flash Attention is a fast and memory-efficient implementation of the attention mechanism, designed to work with large models and long sequences. This is a Hugging Face compliant kernel build of Flash Attention.