Commit 901c32e · Parent(s): 6feae38
fix: update branding from InfraBench to InferBench in app.py for consistency across the platform
app.py
CHANGED
@@ -18,10 +18,10 @@ df = df[["Model"] + [col for col in df.columns.tolist() if col not in ["URL", "P
 with gr.Blocks("ParityError/Interstellar") as demo:
     gr.Markdown(
         """
-        <h1 style="margin: 0;">
+        <h1 style="margin: 0;">InferBench - A Leaderboard for Inference Providers</h1>
         <br>
         <div style="margin-bottom: 20px;">
-        <p>Welcome to
+        <p>Welcome to InferBench, the ultimate leaderboard for evaluating inference providers. Our platform focuses on key metrics such as cost, quality, and compression to help you make informed decisions. Whether you're a developer, researcher, or business looking to optimize your inference processes, InferBench provides the insights you need to choose the best provider for your needs.</p>
         </div>
         """
     )
@@ -59,8 +59,8 @@ with gr.Blocks("ParityError/Interstellar") as demo:
     # 💜 About Pruna AI
     We are Pruna AI, an open source AI optimisation engine and we simply make your models cheaper, faster, smaller, greener!

-    # 📊 About
-
+    # 📊 About InferBench
+    InferBench is a leaderboard for inference providers, focusing on cost, quality, and compression.
     Over the past few years, we’ve observed outstanding progress in image generation models fueled by ever-larger architectures.
     Due to their size, state-of-the-art models such as FLUX take more than 6 seconds to generate a single image on a high-end H100 GPU.
     While compression techniques can reduce inference time, their impact on quality often remains unclear.
@@ -89,8 +89,8 @@ with gr.Blocks("ParityError/Interstellar") as demo:
     gr.Markdown(
         """
         ```bibtex
-        @article{
-        title={
+        @article{InferBench,
+        title={InferBench: A Leaderboard for Inference Providers},
         author={PrunaAI},
         year={2025},
         howpublished={\\url{https://huggingface.co/spaces/PrunaAI/InferBench}}