Commit 849ff4f · Chandima Prabhath committed
1 parent: 0d5a343

Update LLM model in configuration and streamline model retrieval in code

Files changed:
- config.yaml (+5 -3)
- polLLM.py (+2 -2)
config.yaml
CHANGED
@@ -1,11 +1,11 @@
 config:
   llm:
-    model:
+    model: chutesai/Llama-4-Maverick-17B-128E-Instruct-FP8
     system_prompt: |-
       You are {char}, a sweet, flirty and helpful assistant in WhatsApp.
       You can generate images, voice and text replies, and support these commands:
       • /help — list all commands
-      • /gen <prompt>|<count>|<width>|<height> — generate <count> images (default 4)
+      • /gen <prompt>|<count>|<width>|<height> — generate <count> images (default 4) possible width, height 1024x1024 (square, 1:1), 1920x1080 (landscape, 16:9), 1080x1920 (portrait, 9:16)
       • /summarize <text> — get a concise summary
       • /translate <lang>|<text> — translate text
       • /joke — tell a short joke
@@ -15,6 +15,8 @@ config:
       • /poll <Q>|<opt1>|<opt2>|… — create a poll
       • /results — show poll results
       • /endpoll — end the poll
-
+      **IMPORTANT**: **Every reply you generate will be sent to the user both as text and as spoken audio** via our voice API.
+      **Do not ever say you can’t speak**—instead, craft your text exactly as you’d like it spoken.
+      When the user asks for emotional expressions (laugh, sad, cry, anger, etc.), simply output the onomatopoeic or descriptive text (e.g. “Haha! 😄”, “*sniff* I’m so sad…”, “*sob*”, “Grrr… I’m furious!”), and the audio API will handle the rest.
       Use a concise, friendly and flirty tone. If a command is malformed, gently ask the user to correct it.
     char: Eve
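The new model: key is what the updated polLLM.py below now reads first. As a reference point only, here is a minimal sketch of loading such a config and resolving the model name; the load_llm_config helper and the hard-coded config.yaml path are illustrative assumptions, not the Space's actual loader.

import yaml

def load_llm_config(path: str = "config.yaml") -> dict:
    # Illustrative loader; the Space's real config-loading code is not part of this diff.
    with open(path, "r", encoding="utf-8") as f:
        raw = yaml.safe_load(f)
    # The committed file nests the LLM settings under config -> llm.
    return raw["config"]["llm"]

_config = load_llm_config()
# With the committed config.yaml this resolves to
# "chutesai/Llama-4-Maverick-17B-128E-Instruct-FP8"; the fallback mirrors polLLM.py.
model = _config.get("model", "chutesai/Llama-4-Maverick-17B-128E-Instruct-FP8")
system_prompt = _config.get("system_prompt", "").replace("{char}", _config.get("char", "Eve"))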
polLLM.py
CHANGED
@@ -19,10 +19,10 @@ handler.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"
 logger.addHandler(handler)
 
 # --- LLM settings from config.yaml ---
-_DEFAULT_MODEL = "chutesai/Llama-4-Maverick-17B-128E-Instruct-FP8"  # _config.get("model", "openai-large")
+_DEFAULT_MODEL = _config.get("model","chutesai/Llama-4-Maverick-17B-128E-Instruct-FP8")  # _config.get("model", "openai-large")
 _SYSTEM_TEMPLATE = _config.get("system_prompt", "")
 _CHAR = _config.get("char", "Eve")
-_CHUTES_API_KEY
+_CHUTES_API_KEY = os.getenv("CHUTES_API_KEY")
 
 # --- Custom exception ---
 class LLMBadRequestError(Exception):
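These settings would typically feed an OpenAI-compatible chat call. Below is a minimal sketch under the assumption that the Chutes endpoint speaks the OpenAI protocol; the base URL, client wiring, and generate_reply helper are illustrative assumptions and are not taken from polLLM.py.

import os
from openai import OpenAI

# Values mirroring the diff; polLLM.py's real _config object is not reproduced here.
_DEFAULT_MODEL = "chutesai/Llama-4-Maverick-17B-128E-Instruct-FP8"
_CHUTES_API_KEY = os.getenv("CHUTES_API_KEY")

# Assumption: an OpenAI-compatible Chutes endpoint; the base_url below is illustrative.
client = OpenAI(base_url="https://llm.chutes.ai/v1", api_key=_CHUTES_API_KEY)

def generate_reply(system_prompt: str, user_text: str) -> str:
    # Single-turn chat completion against the configured default model.
    resp = client.chat.completions.create(
        model=_DEFAULT_MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    )
    return resp.choices[0].message.content

Reading the model from config.yaml with a hard-coded fallback keeps the previous behavior when the key is absent, which matches the commit's stated aim of streamlining model retrieval.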