Spaces: TamisAI / inference-api-g1
Duplicated from TamisAI/inference-lamp-api
Branch: main
1 contributor · History: 79 commits
Latest commit: 9f9c6d5 by alexfremont · Update model lookup to use filename instead of ID in get_model function · 5 days ago
Directories:
api/ · Improve model unloading with explicit GPU memory cleanup and CUDA cache clearing · 6 days ago (see the unloading sketch below)
architecture/ · Refactor API architecture with modular design and database integration · 10 days ago
config/ · Add model management endpoints and database fetch functionality · 8 days ago (see the endpoints sketch below)
db/ · Disable prepared statement cache for pgbouncer compatibility · 6 days ago (see the pool sketch below)
models/ · Update model lookup to use filename instead of ID in get_model function · 5 days ago (see the lookup sketch below)
schemas/ · Refactor API architecture with modular design and database integration · 10 days ago
steps/ · Refactor API architecture with modular design and database integration · 10 days ago
utils/ · Add timestamps to memory monitoring logs and display outputs · 6 days ago (see the monitoring sketch after the file list)
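
The api/ commit names a concrete unloading technique: drop the Python reference, run the garbage collector, then clear the CUDA cache. A minimal sketch of how this is commonly done with PyTorch; the unload_model function and the _loaded_models registry are assumptions, not code from this repo:

```python
import gc

import torch

# Hypothetical registry of models resident in GPU memory, keyed by filename.
_loaded_models: dict[str, torch.nn.Module] = {}

def unload_model(filename: str) -> None:
    """Drop a model and explicitly reclaim its GPU memory."""
    model = _loaded_models.pop(filename, None)
    if model is None:
        return
    model.to("cpu")               # move the weights off the GPU first
    del model                     # drop the last Python reference
    gc.collect()                  # collect reference cycles holding tensors
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the CUDA driver
```

Without the explicit empty_cache() call, PyTorch keeps freed blocks in its caching allocator, so tools like nvidia-smi continue to report the memory as in use.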
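The config/ commit describes model management endpoints backed by database fetches. A hypothetical sketch of such a FastAPI router; the paths, the record shape, and the in-memory stand-in for the database layer are all invented for illustration:

```python
from fastapi import APIRouter, HTTPException

router = APIRouter(prefix="/models", tags=["models"])

# Hypothetical stand-in for the database fetch layer.
_MODELS = {
    "lamp-v1.safetensors": {"filename": "lamp-v1.safetensors", "loaded": False},
}

@router.get("/")
async def list_models() -> list[dict]:
    """List every model record known to the API."""
    return list(_MODELS.values())

@router.get("/{filename}")
async def get_model_info(filename: str) -> dict:
    """Fetch a single model record by filename."""
    record = _MODELS.get(filename)
    if record is None:
        raise HTTPException(status_code=404, detail="model not found")
    return record
```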
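The db/ commit points at a well-known pitfall: pgbouncer in transaction pooling mode cannot track per-connection prepared statements, so asyncpg's statement cache must be turned off. A sketch of the usual workaround, assuming the project uses asyncpg (the DSN is a placeholder):

```python
import asyncpg

async def create_pool(
    dsn: str = "postgresql://user:pass@pgbouncer:6432/db",
) -> asyncpg.Pool:
    # pgbouncer reuses server connections across clients, so a statement
    # prepared on one logical connection may not exist on the next.
    # Disabling the cache avoids "prepared statement does not exist" errors.
    return await asyncpg.create_pool(dsn, statement_cache_size=0)
```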
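The latest commit switches get_model to look records up by filename rather than numeric ID. A hypothetical sketch of the change; the table and column names are invented, and the asyncpg usage is an assumption carried over from the db/ commit:

```python
import asyncpg

async def get_model(pool: asyncpg.Pool, filename: str) -> asyncpg.Record | None:
    # Key the lookup on the filename stored on disk instead of the database ID,
    # so callers can reference checkpoints by name.
    async with pool.acquire() as conn:
        return await conn.fetchrow(
            "SELECT * FROM models WHERE filename = $1",  # hypothetical schema
            filename,
        )
```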
Files:
.gitattributes · 1.52 kB · initial commit · 7 months ago
.gitignore · 347 Bytes · first commit for API · 7 months ago
Dockerfile · 832 Bytes · Merge Gradio UI into FastAPI app and standardize port to 7860 · 10 days ago (see the mounting sketch below)
README.md · 276 Bytes · Update README.md · 18 days ago
docker-compose.yml · 205 Bytes · Merge Gradio UI into FastAPI app and standardize port to 7860 · 10 days ago
main.py · 8.75 kB · Remove periodic memory status updates and related helper function · 6 days ago
requirements.txt · 299 Bytes · Add system monitoring features and memory usage tracking for loaded models · 6 days ago (see the monitoring sketch below)
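
The Dockerfile and docker-compose.yml commits describe serving the Gradio UI from inside the FastAPI app on port 7860, the port Hugging Face Spaces routes traffic to. A minimal sketch using Gradio's mount_gradio_app; the /ui path and the echo demo are placeholders:

```python
import gradio as gr
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

# Placeholder UI; the real Space presumably exposes inference controls.
demo = gr.Interface(fn=lambda text: text, inputs="text", outputs="text")

# Serve the Gradio UI from the same process instead of a second server.
app = gr.mount_gradio_app(app, demo, path="/ui")

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=7860)  # the port Spaces expects
```

Mounting keeps a single server and a single exposed port per container, which matches the "standardize port to 7860" wording.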
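The utils/ and requirements.txt commits mention timestamped memory monitoring for loaded models. A sketch of one common approach, using psutil for process memory and torch for GPU memory; the function name and log format are assumptions:

```python
from datetime import datetime, timezone

import psutil
import torch

def memory_status_line() -> str:
    """Build a timestamped one-line summary of process and GPU memory."""
    timestamp = datetime.now(timezone.utc).isoformat()
    rss_mb = psutil.Process().memory_info().rss / 1024**2
    line = f"[{timestamp}] RSS: {rss_mb:.1f} MiB"
    if torch.cuda.is_available():
        allocated_mb = torch.cuda.memory_allocated() / 1024**2
        line += f" | CUDA allocated: {allocated_mb:.1f} MiB"
    return line

print(memory_status_line())
```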