Spaces: Running on Zero
Commit 33dfc83 · Parent: c92251d
Try to remove markdown
README.md CHANGED
@@ -19,4 +19,4 @@ names, when given the output of Hex-Rays decompiler output of a function. More
 ## TODO / Issues
 
 * We currently use `transformers` which de-quantizes the gguf. This is easy but inefficient. Can we use llama.cpp or Ollama with zerogpu?
-* Model returns the markdown json prefix often. Is this something I am doing wrong?
+* Model returns the markdown json prefix often. Is this something I am doing wrong? Currently we strip it to enable JSON parsing.
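The commit strips the literal prefix `"``` json\n"` with a fixed-length slice, which breaks if the model emits a variant such as ` ```json ` (no space) or appends a closing fence. A more tolerant sketch, assuming the raw model text is available as a string (the helper name `strip_markdown_fence` is hypothetical, not part of the app):

```python
import re

def strip_markdown_fence(text: str) -> str:
    """Hypothetical helper: drop a leading ``` fence (with optional
    language tag, e.g. "``` json" or "```json") and a trailing ```
    fence, instead of a fixed output[9:] slice."""
    text = text.strip()
    # Leading fence line, e.g. "``` json\n" or "```json\n" or "```\n"
    text = re.sub(r"^```[ \t]*\w*[ \t]*\n", "", text)
    # Trailing closing fence on its own line
    text = re.sub(r"\n```[ \t]*$", "", text)
    return text
```

This leaves already-clean output untouched, so it can be applied unconditionally before JSON parsing.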
app.py CHANGED
@@ -70,7 +70,10 @@ def predict(code):
     )
     print(f"Pipe out: {repr(pipe_out)}")
 
-
+    raw_output = pipe_out[0]["generated_text"]
+    output = raw_output
+    if output.startswith("``` json\n"):
+        output = output[9:]
 
     json_output = json.dumps([])
     try:
@@ -81,7 +84,7 @@ def predict(code):
 
     print(f"JSON output: {repr(json_output)}")
 
-    return json_output,
+    return json_output, raw_output
 
 
 demo = gr.Interface(
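The parsing flow these hunks add can be sketched end-to-end: strip the prefix, then fall back to an empty JSON list when the payload does not parse, matching the `json.dumps([])` default visible in the diff. The function name `parse_model_output` is an assumption for illustration; the `[9:]` slice mirrors the commit (`len("``` json\n") == 9`):

```python
import json

def parse_model_output(raw_output: str) -> tuple[str, str]:
    # Mirror the commit: drop the literal "``` json\n" prefix if present.
    output = raw_output
    if output.startswith("``` json\n"):
        output = output[9:]

    # Default to an empty JSON list, as in the diff's json.dumps([]).
    json_output = json.dumps([])
    try:
        # Round-trip through loads/dumps to confirm the payload is valid JSON.
        json_output = json.dumps(json.loads(output))
    except json.JSONDecodeError:
        pass  # keep the empty-list default

    return json_output, raw_output
```

Returning `raw_output` alongside the parsed JSON, as the commit does, lets the Gradio UI show the unmodified model text for debugging even when parsing fails.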