---
language:
- "py"
license:
- "mit"
pretty_name: "W3SA Solidity Access Control Benchmark"
---
|
# W3SA - Solidity Access Control Benchmark

This benchmark includes 18 test cases of high-severity access control vulnerabilities, derived from real-world contracts audited through Code4rena competitions. Credits to Zhuo for the initial data scraping and curation. We add a Python wrapper and a standardized evaluation framework for evaluating AI models.
|
|
|
## Project Statistics

The vulnerabilities come from 12 different projects; only the files containing the bugs are included in the benchmark.

| Number of Projects | Total Number of Files | Total Number of Bugs |
|--------------------|-----------------------|----------------------|
| 12                 | 16                    | 18                   |
|
|
|
<img src="./resources/benchmark.png" alt="Solidity Access Control Benchmark" style="border: 2px solid black; border-radius: 15px;" width="500">
|
|
|
|
|
## Repo Structure

The benchmark is contained in the `benchmark` directory and consists of:

- A set of project folders, each containing one or more `.sol` files
- The `ground_truth.csv` file, which contains the labelled vulnerabilities for all projects (a loading sketch follows the directory tree below)
|
|
|
```
├── README.md
├── benchmark
│   ├── contracts/
│   └── ground_truth.csv
├── bm_src
│   ├── eval.py
│   ├── llm.py
│   ├── metrics.py
│   └── util.py
├── experiment.py
..
```
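If you want to inspect the labels programmatically, here is a minimal sketch that loads `ground_truth.csv` with `pandas` (assuming `pandas` is available in your environment; no assumptions are made about the column names, they are simply printed):

```python
import pandas as pd

# Load the labelled vulnerabilities for all projects.
ground_truth = pd.read_csv("benchmark/ground_truth.csv")

# Print the schema and the first few labelled entries to see how bugs are recorded.
print(ground_truth.columns.tolist())
print(ground_truth.head())
```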
|
|
|
## Set up

- Install the `uv` package manager if it is not already available
- Run `uv sync`
|
|
|
## Run an experiment

Launch your experiment by running:

```bash
uv run experiment.py --model o3-mini
```

A `tqdm` progress bar tracks progress and, at the end of the experiment, the result metrics are printed out.
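The metric definitions live in `bm_src/metrics.py`. As a hedged illustration only (the actual matching and metric logic in `metrics.py` may differ), precision, recall, and F1 over predicted versus labelled vulnerability locations could be computed like this:

```python
# Hypothetical sketch of detection metrics; the real logic lives in bm_src/metrics.py
# and may differ (e.g. in how a predicted finding is matched to a labelled bug).
def detection_metrics(predicted: set[str], labelled: set[str]) -> dict[str, float]:
    true_positives = len(predicted & labelled)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(labelled) if labelled else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: the model flags two locations, one of which matches a labelled bug.
print(detection_metrics({"Vault.sol:withdraw", "Pool.sol:swap"}, {"Vault.sol:withdraw"}))
```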
|
|
|
|
|
## Contact Us

For questions, suggestions, or to learn more about Almanax.ai, reach out to us at https://www.almanax.ai/contact
|