Spaces: TamisAI/inference-api-g1
Duplicated from TamisAI/inference-lamp-api
Running on CPU Upgrade
1 contributor
History: 69 commits
Latest commit by alexfremont: Disable prepared statement cache for pgbouncer compatibility (db280f4, 10 days ago)
api/: Remove DELETE endpoint for model unload, keep POST alternative only (10 days ago)
architecture/: Refactor API architecture with modular design and database integration (13 days ago)
config/: Add model management endpoints and database fetch functionality (12 days ago)
db/: Disable prepared statement cache for pgbouncer compatibility (10 days ago)
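The db/ commit message points at a well-known PgBouncer pitfall: in transaction-pooling mode, PgBouncer hands the same server connection to many clients, so server-side prepared statements cached by async Postgres drivers (such as asyncpg) break with "prepared statement does not exist" errors. A minimal sketch of the usual fix, assuming an asyncpg-style driver (the actual driver and settings used by this Space are not shown here):

```python
# Connection settings for Postgres behind PgBouncer (transaction pooling).
# asyncpg caches prepared statements per connection by default; with
# PgBouncer sharing server connections, that cache must be disabled.
def connect_kwargs_for_pgbouncer(dsn: str) -> dict:
    """Return asyncpg.connect()-style kwargs safe for PgBouncer pooling.

    The dsn is a placeholder; statement_cache_size=0 is the asyncpg knob
    that turns off the prepared-statement cache.
    """
    return {"dsn": dsn, "statement_cache_size": 0}

kwargs = connect_kwargs_for_pgbouncer("postgresql://user:pass@pgbouncer:6432/db")
```

These kwargs would be passed straight to the driver's connect call; the key point is that the cache size is zero, not tuned down.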
models/: Refactor model loading to store metadata alongside pipelines in model_pipelines dict (10 days ago)
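The models/ commit describes keeping each loaded pipeline and its metadata together in a single `model_pipelines` dict rather than in parallel structures. A hedged sketch of that shape, with illustrative field names not taken from the repo:

```python
import time

# One registry keyed by model id; each entry holds the loaded pipeline
# object together with its metadata, so lookups never need a second dict.
model_pipelines: dict[str, dict] = {}

def register_pipeline(model_id: str, pipeline: object, **metadata) -> None:
    """Store a loaded pipeline alongside its metadata (fields are illustrative)."""
    model_pipelines[model_id] = {
        "pipeline": pipeline,
        "loaded_at": time.time(),
        **metadata,
    }

def get_pipeline(model_id: str):
    """Return just the pipeline object for a model id, or None if not loaded."""
    entry = model_pipelines.get(model_id)
    return entry["pipeline"] if entry else None
```

Unloading a model then reduces to a single `model_pipelines.pop(model_id, None)`, which drops the pipeline and its metadata in one step.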
schemas/: Refactor API architecture with modular design and database integration (13 days ago)
steps/: Refactor API architecture with modular design and database integration (13 days ago)
utils/: Refactor API architecture with modular design and database integration (13 days ago)
.gitattributes (1.52 kB): initial commit (7 months ago)
.gitignore (347 Bytes): first commit for API (7 months ago)
Dockerfile (832 Bytes): Merge Gradio UI into FastAPI app and standardize port to 7860 (13 days ago)
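The Dockerfile's last commit merges the Gradio UI into the FastAPI app and standardizes on port 7860, which is the port Hugging Face expects a Docker Space to listen on. A plausible shape for that setup, as a config sketch only (the actual Dockerfile contents are not shown on this page):

```Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Spaces routes traffic to port 7860; serve the FastAPI app (with the
# Gradio UI mounted inside it) on that single port.
EXPOSE 7860
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
```

Serving one process on one port is what lets the API and the Gradio UI share the Space without a reverse proxy.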
README.md (276 Bytes): Update README.md (22 days ago)
docker-compose.yml (205 Bytes): Merge Gradio UI into FastAPI app and standardize port to 7860 (13 days ago)
main.py (8.28 kB): Improve model info formatting for cleaner display in API responses (10 days ago)
requirements.txt (228 Bytes): Replace gradio import syntax and remove unnecessary whitespace (10 days ago)