Commit 8a35da4 · 1 parent: a767b82

hsiung committed: update README.md

Signed-off-by: Lei Hsiung <[email protected]>

Files changed (1):
  1. README.md (+18 −14)
README.md CHANGED
@@ -1,25 +1,29 @@
 ---
 title: NeuralFuse
-emoji: 🧠
+emoji:
 colorFrom: yellow
 colorTo: indigo
 sdk: static
 pinned: false
 ---
 
-# Nerfies
+# NeuralFuse
 
-This is the repository that contains source code for the [Nerfies website](https://nerfies.github.io).
+Official project page of the paper "[NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes](https://arxiv.org/abs/2306.16869)."
 
-If you find Nerfies useful for your work please cite:
-```
-@article{park2021nerfies
-  author = {Park, Keunhong and Sinha, Utkarsh and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Seitz, Steven M. and Martin-Brualla, Ricardo},
-  title = {Nerfies: Deformable Neural Radiance Fields},
-  journal = {ICCV},
-  year = {2021},
-}
-```
+![NeuralFuse](static/images/teaser.png)
 
-# Website License
-<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
+Deep neural networks (DNNs) have become ubiquitous in machine learning, but their energy consumption remains problematically high. An effective strategy for reducing such consumption is supply-voltage reduction, but if done too aggressively, it can lead to accuracy degradation. This is due to random bit-flips in static random access memory (SRAM), where model parameters are stored. To address this challenge, we have developed NeuralFuse, a novel add-on module that handles the energy-accuracy tradeoff in low-voltage regimes by learning input transformations and using them to generate error-resistant data representations, thereby protecting DNN accuracy in both nominal and low-voltage scenarios. As well as being easy to implement, NeuralFuse can be readily applied to DNNs with limited access, such as cloud-based APIs that are accessed remotely or non-configurable hardware. Our experimental results demonstrate that, at a 1% bit-error rate, NeuralFuse can reduce SRAM access energy by up to 24% while recovering accuracy by up to 57%. To the best of our knowledge, this is the first approach to addressing low-voltage-induced bit errors that requires no model retraining.
+
+
+## Citation
+If you find this helpful for your research, please cite our paper as follows:
+
+@inproceedings{sun2024neuralfuse,
+  title     = {{NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes}},
+  author    = {Hao-Lun Sun and Lei Hsiung and Nandhini Chandramoorthy and Pin-Yu Chen and Tsung-Yi Ho},
+  booktitle = {Advances in Neural Information Processing Systems},
+  publisher = {Curran Associates, Inc.},
+  volume    = {37},
+  year      = {2024}
+}
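For context on the fault model the updated README describes (random bit-flips in SRAM-stored parameters under low voltage), the sketch below simulates injecting independent per-bit errors into 8-bit weights at a given bit-error rate. It is a minimal, hypothetical illustration, not code from the NeuralFuse repository; the function name and interface are assumptions for demonstration only.

```python
import numpy as np

def flip_bits(weights_int8: np.ndarray, ber: float, seed: int = 0) -> np.ndarray:
    """Flip each bit of each int8 weight independently with probability `ber`.

    Illustrative fault-injection sketch (not the paper's implementation).
    """
    rng = np.random.default_rng(seed)
    w = weights_int8.view(np.uint8).copy()        # reinterpret the raw bytes
    # Bernoulli(ber) draw for each of the 8 bit positions of every byte.
    flips = rng.random((w.size, 8)) < ber
    # Collapse the per-bit flips into one XOR mask per byte.
    masks = (flips * (1 << np.arange(8))).sum(axis=1).astype(np.uint8)
    return (w.reshape(-1) ^ masks).reshape(w.shape).view(np.int8)

# At a 1% per-bit error rate, each int8 value is corrupted with
# probability 1 - 0.99**8 ≈ 7.7%.
w = np.zeros(10_000, dtype=np.int8)
w_faulty = flip_bits(w, ber=0.01)
print((w_faulty != w).mean())  # ≈ 0.077
```

This kind of perturbation, applied to a deployed model's weights, is what makes aggressive voltage scaling degrade accuracy, and it is the regime in which the paper reports NeuralFuse recovering accuracy without retraining the protected model.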