---
title: Aidapal Space
emoji: 😻
colorFrom: pink
colorTo: purple
sdk: gradio
sdk_version: 5.25.2
app_file: app.py
pinned: false
---

# Aidapal Space

This is a space to try out the
[Aidapal](https://huggingface.co/AverageBusinessUser/aidapal) model, which
attempts to infer a function name, a comment/description, and suitable variable
names from the Hex-Rays decompiler output of a function. More information is
available in this [blog post](https://www.atredis.com/blog/2024/6/3/how-to-train-your-large-language-model).

## TODO / Issues

* We currently use `transformers`, which de-quantizes the GGUF. This is easy but inefficient. Can we use llama.cpp or Ollama with ZeroGPU?
* The model often returns a markdown `json` code-fence prefix. [Is this something I am doing wrong?](https://github.com/atredispartners/aidapal/issues/12) For now we strip it from the response to enable JSON parsing.
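
The fence-stripping mentioned above can be sketched as follows. This is a minimal illustration, not the Space's actual code; the helper name `extract_json` is hypothetical:

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Strip an optional markdown code fence (```json ... ```) from a
    model response, then parse the remaining text as JSON."""
    text = raw.strip()
    # Remove a leading ```json (or bare ```) fence marker, if present.
    text = re.sub(r"^```(?:json)?\s*", "", text)
    # Remove a trailing ``` fence marker, if present.
    text = re.sub(r"\s*```$", "", text)
    return json.loads(text)
```

Responses with or without the fence parse the same way, so the helper is safe to apply unconditionally.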