Update README.md
README.md
CHANGED
@@ -6,67 +6,24 @@ colorTo: gray
sdk: static
pinned: false
---

**Removed (previous README):**

Welcome to ***GEM_Testing_Arsenal***, where groundbreaking research meets practical power! This repository unveils a novel architecture for On-Device Language Models (ODLMs), straight from our paper, ["Fragile Mastery: Are domain-specific trade-offs undermining On-Device Language Models?"](./link_to_be_insterted). With just a few lines of code, our custom `gem_trainer.py` script lets you train ODLMs that are more accurate than ever, tracking accuracy and loss as you go.

---

### Highlights:
- **Next-Level ODLMs**: Boosts accuracy with a new architecture from our research.
- **Easy Training**: Call `run_gem_pipeline` to train on your dataset in minutes.
- **Live Metrics**: Get accuracy and loss results as training unfolds.
- **Flexible Design**: Works with any compatible dataset—plug and play!
---
### Prerequisites:
To dive in, you’ll need:
- **Python** `3.8+`
- Required libraries (go through [quick start](#quick-start) below 👇)
- **Git** *(to clone the repo)*
---
### Quick Start:

```
git clone https://github.com/Firojpaudel/GEM.git
```
Create a new Python file and run code like the following:

```python
from datasets import load_dataset
from gem_trainer import run_gem_pipeline

# Load a compatible Hugging Face dataset; banking77 is the example used here.
dataset = load_dataset("banking77")

# Train and evaluate with GEM (assumed call signature; see gem_trainer.py for
# the exact interface).
results = run_gem_pipeline(dataset)

print(results) # See accuracy and loss
```
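The highlights above promise that the pipeline is dataset-agnostic, so here is the same flow with a different Hugging Face dataset swapped in. The `run_gem_pipeline(dataset)` call simply mirrors the snippet above; its exact signature is an assumption, so check [`gem_trainer.py`](./gem_trainer.py) for the real interface.

```python
from datasets import load_dataset
from gem_trainer import run_gem_pipeline

# Swap in any compatible dataset; "ag_news" is just an illustrative choice.
dataset = load_dataset("ag_news")

# Assumed call signature, mirroring the quick-start snippet above.
results = run_gem_pipeline(dataset)
print(results)  # accuracy and loss on the new dataset
```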
> ***Boom—your ODLM is training with boosted accuracy!***
---
### Customizing Training:
`run_gem_pipeline` keeps it simple, but you can tweak it! Dive into [`gem_trainer.py`](./gem_trainer.py) to adjust epochs, batch size, or other settings to fit your needs.
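As a purely hypothetical illustration (the real names and defaults are whatever [`gem_trainer.py`](./gem_trainer.py) defines), these are the kinds of settings to look for and edit:

```python
# Hypothetical example only: the actual variable names and defaults are
# whatever gem_trainer.py defines, so treat these as placeholders to look for.
NUM_EPOCHS = 3        # passes over the training split
BATCH_SIZE = 16       # examples per optimizer step
LEARNING_RATE = 2e-5  # peak learning rate

print(f"epochs={NUM_EPOCHS}, batch_size={BATCH_SIZE}, lr={LEARNING_RATE}")
```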
---
### Contributing 💓
Got ideas to make this even better? We’re all ears!
- Fork the repo.
- Branch off (`git checkout -b your-feature`).
- Submit a pull request with your magic.
---
Edit this `README.md` markdown file to author your organization card.

**Added (new README):**
## Welcome to GEM Space
Greetings from GEM Space, the heart of innovation behind our upcoming paper, "FRAGILE MASTERY: ARE DOMAIN-SPECIFIC TRADE-OFFS UNDERMINING ON-DEVICE LANGUAGE MODELS?". We’re thrilled to invite you into our world of edge AI exploration! This repository, GEM_Testing_Arsenal, is a cornerstone of our efforts to redefine On-Device Language Models (ODLMs) through the Generalized Edge Model (GEM). Keep an eye out for the paper link once it’s published!

---

## About Our Paper
***Abstract***
The deployment of On-Device Language Models (ODLMs) on resource-constrained edge devices demands a delicate balance of efficiency, memory, power, and linguistic skill across diverse tasks. In "FRAGILE MASTERY", we explore the trade-offs between domain-specific optimization and cross-domain robustness, introducing the Generalized Edge Model (GEM). GEM integrates specialization and generalization using a Sparse Cross-Attention Router (SCAR), achieving a cross-domain F1 score of 0.89 with sub-100ms latency on platforms like Raspberry Pi 4 and Pixel 6. Across 47 benchmarks spanning eight domains—healthcare, legal, finance, STEM, and more—GEM boosts general-task performance by 7% over GPT-4 Lite while matching domain-specific results. With new metrics like the Domain Specialization Index (DSI) and a balanced distillation framework cutting catastrophic forgetting by 43%, this work offers a robust foundation for edge AI. [Paper link coming soon!]
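To make the routing idea concrete: the sketch below is *not* the paper's SCAR implementation (the paper is not yet linked here), only a toy PyTorch illustration with invented names of what sparse cross-attention routing over a few domain experts can look like. Each token attends to learned per-domain queries, only the top-k domains are kept, and the selected experts' outputs are mixed.

```python
# Toy illustration only: invented module and names, NOT the paper's actual
# SCAR design. It just shows sparse routing of tokens to a few domain experts.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySparseCrossAttentionRouter(nn.Module):
    def __init__(self, hidden_dim: int, num_domains: int, top_k: int = 2):
        super().__init__()
        # One learned "domain query" per domain, plus one tiny expert per domain.
        self.domain_queries = nn.Parameter(torch.randn(num_domains, hidden_dim))
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_domains)]
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, hidden). Score every token against every domain query.
        scores = torch.einsum("bsh,dh->bsd", x, self.domain_queries)
        weights = F.softmax(scores, dim=-1)
        # Sparsify: keep only the top-k domains per token, then renormalize.
        top_vals, top_idx = weights.topk(self.top_k, dim=-1)
        sparse = torch.zeros_like(weights).scatter(-1, top_idx, top_vals)
        sparse = sparse / sparse.sum(dim=-1, keepdim=True)
        # Mix the selected domain experts' outputs for each token.
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (b, s, h, d)
        return torch.einsum("bshd,bsd->bsh", expert_out, sparse)


router = ToySparseCrossAttentionRouter(hidden_dim=64, num_domains=8, top_k=2)
tokens = torch.randn(2, 16, 64)  # a fake batch of token embeddings
print(router(tokens).shape)      # torch.Size([2, 16, 64])
```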
***Our Vision***
At GEM Space, we’re on a mission to revolutionize edge intelligence. We’re striving to build On-Device Language Models that:
- **Thrive Under Constraints**: Deliver exceptional accuracy and speed on low-power devices—from smartphones to custom NPUs—without compromise.
- **Master the Balance**: Seamlessly blend domain-specific expertise (think healthcare diagnostics or financial analysis) with robust, cross-domain adaptability.
- **Empower the Edge**: Bring advanced AI to the fingertips of real-world applications, making it fast, practical, and accessible wherever it’s needed.

The **GEM_Testing_Arsenal** embodies this ambition—a testing ground for GEM, our pioneering architecture designed to make ODLMs smarter, leaner, and more versatile. We’re here to push the limits of what’s possible and inspire a new era of edge AI innovation.

***Join the Journey***

We’re building more than models—we’re building a movement. Stay tuned for our paper, explore GEM Space, and join us in shaping the future of on-device intelligence. Reach out at [[email protected]](mailto:[email protected]) with ideas or questions!