Hugging Face Space: seanpedrickcase/llm_topic_modelling
3 contributors · History: 25 commits

Latest commit b0e08c8 by seanpedrickcase, 2 months ago: "Changed default requirements to CPU version of llama cpp. Added Gemini Flash 2.0 to model list. Output files should contain only final files."
| Name | Size | Last commit | Updated |
|---|---|---|---|
| .github/ | – | First commit | 5 months ago |
| tools/ | – | Changed default requirements to CPU version of llama cpp. Added Gemini Flash 2.0 to model list. Output files should contain only final files. | 2 months ago |
| .dockerignore | 137 Bytes | First commit | 5 months ago |
| .gitignore | 137 Bytes | First commit | 5 months ago |
| Dockerfile | 1.91 kB | Topic deduplication/merging now separated from summarisation. Gradio upgrade | 3 months ago |
| README.md | 2.65 kB | Updated intro and readme to link to datasets | 5 months ago |
| app.py | 25.8 kB | Changed default requirements to CPU version of llama cpp. Added Gemini Flash 2.0 to model list. Output files should contain only final files. | 2 months ago |
| requirements.txt | 437 Bytes | Changed default requirements to CPU version of llama cpp. Added Gemini Flash 2.0 to model list. Output files should contain only final files. | 2 months ago |
| requirements_aws.txt | 369 Bytes | Topic deduplication/merging now separated from summarisation. Gradio upgrade | 3 months ago |
| requirements_gpu.txt | 633 Bytes | Changed default requirements to CPU version of llama cpp. Added Gemini Flash 2.0 to model list. Output files should contain only final files. | 2 months ago |

All files are marked "Safe" by the Hugging Face file scanner.