Chandima Prabhath committed
Commit 5c8a7b9 · 1 Parent(s): 3b132e6

Remove the max_tokens parameter from the LLM request in generate_llm for a cleaner API call.

Files changed (1):
  1. polLLM.py +0 -1
polLLM.py CHANGED
@@ -63,7 +63,6 @@ def generate_llm(
         model=model,
         messages=messages,
         seed=seed,
-        max_tokens=4000,
     )
     text = resp.choices[0].message.content.strip()
     logger.debug("LLM response received")
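
For context, here is a minimal sketch of how the call site in generate_llm reads after this commit, assuming an OpenAI-style chat-completions client; everything outside the diffed lines (the client setup, the function signature, the default model name, the return) is illustrative and not taken from polLLM.py:

import logging

from openai import OpenAI

logger = logging.getLogger(__name__)
client = OpenAI()  # hypothetical setup; not shown in the diff

def generate_llm(messages, model="gpt-4o-mini", seed=None):
    # max_tokens=4000 was dropped in this commit, so the provider's
    # default completion-length limit now applies.
    resp = client.chat.completions.create(
        model=model,
        messages=messages,
        seed=seed,
    )
    text = resp.choices[0].message.content.strip()
    logger.debug("LLM response received")
    return text

With max_tokens removed, responses are no longer capped at 4000 tokens by this call; the model's or provider's default limit governs output length instead.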
 