---
title: GDG Aranjuez - Backend inference endpoints docker
emoji: 📊
colorFrom: yellow
colorTo: red
sdk: docker
pinned: false
license: apache-2.0
short_description: Backend with Docker, LangChain, FastAPI and inference endpoints
app_port: 7860
---

# SmolLM2 Backend

This project implements a FastAPI service that uses LangChain and LangGraph to generate text with the Qwen2.5-72B-Instruct model via Hugging Face inference endpoints.
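
At its core, the service wires a LangChain chat model to a small LangGraph graph with per-thread memory. The following is a minimal sketch of that wiring, not the project's actual code; the node name, graph shape, and parameter choices are assumptions based on the description above:

```python
import os

from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph

# Chat model backed by a Hugging Face inference endpoint (sketch).
llm = ChatHuggingFace(
    llm=HuggingFaceEndpoint(
        repo_id="Qwen/Qwen2.5-72B-Instruct",
        huggingfacehub_api_token=os.getenv("HUGGINGFACE_TOKEN") or os.getenv("HF_TOKEN"),
    )
)

def call_model(state: MessagesState) -> dict:
    # Append the model's reply to the conversation state.
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("model", call_model)
builder.add_edge(START, "model")

# MemorySaver keeps conversation history per thread_id, which is what the
# /generate endpoint's `thread_id` field selects.
graph = builder.compile(checkpointer=MemorySaver())
```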

## Configuration

### In HuggingFace Spaces

This project is designed to run in HuggingFace Spaces. To configure it:

1. Create a new Space in HuggingFace with the Docker SDK
2. Configure the `HUGGINGFACE_TOKEN` (or `HF_TOKEN`) environment variable in the Space configuration:
   - Go to the "Settings" tab of your Space
   - Scroll down to the "Repository secrets" section
   - Add a new secret named `HUGGINGFACE_TOKEN` with your token as the value
   - Save the changes

### Local development

For local development:

1. Clone this repository
2. Create a `.env` file in the project root with your HuggingFace token (see the sketch after this list for one way to load it):
    ```
    HUGGINGFACE_TOKEN=your_token_here
    ```
3. Install the dependencies:
    ```
    pip install -r requirements.txt
    ```
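
A minimal sketch of loading the token locally, assuming the project uses the `python-dotenv` package (an assumption; the app may instead read the environment directly):

```python
import os

from dotenv import load_dotenv  # from the python-dotenv package

load_dotenv()  # reads .env in the project root into the process environment

# Accept either variable name, matching the Spaces configuration above.
hf_token = os.getenv("HUGGINGFACE_TOKEN") or os.getenv("HF_TOKEN")
if not hf_token:
    raise RuntimeError("Set HUGGINGFACE_TOKEN or HF_TOKEN")
```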

## Local execution

```bash
uvicorn app:app --reload
```

The API will be available at `http://localhost:8000`.

## Endpoints

### GET `/`

Welcome endpoint that returns a greeting message.

### POST `/generate`

Endpoint to generate text using the language model.

**Request parameters:**
```json
{
  "query": "Your question here",
  "thread_id": "optional_thread_identifier"
}
```

**Response:**
```json
{
  "generated_text": "Text generated by the model",
  "thread_id": "thread identifier"
}
```
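
A sketch of how the handler might map onto the graph from the wiring sketch near the top (illustrative, not the project's actual code; `graph` refers to the compiled graph above, so paste the two sketches together to run them):

```python
import uuid
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    query: str
    thread_id: Optional[str] = None  # optional, per the spec above

class GenerateResponse(BaseModel):
    generated_text: str
    thread_id: str

@app.post("/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    # Fall back to a fresh thread id when the client omits one.
    thread_id = req.thread_id or str(uuid.uuid4())
    # `graph` is the compiled LangGraph from the sketch near the top.
    result = graph.invoke(
        {"messages": [("user", req.query)]},
        config={"configurable": {"thread_id": thread_id}},
    )
    return GenerateResponse(
        generated_text=result["messages"][-1].content,
        thread_id=thread_id,
    )
```

For a quick test against a locally running server (port 8000, as above):

```python
import requests

resp = requests.post(
    "http://localhost:8000/generate",
    json={"query": "What is LangGraph?", "thread_id": "demo-thread"},
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```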

## Docker

To run the application in a Docker container:

```bash
# Build the image
docker build -t smollm2-backend .

# Run the container
docker run -p 8000:8000 --env-file .env smollm2-backend
```

Note that when the image runs as a HuggingFace Space, the app is served on port 7860 instead (see `app_port` in the metadata above).

## API documentation

The interactive API documentation is available at:
- Swagger UI: `http://localhost:8000/docs`
- ReDoc: `http://localhost:8000/redoc`