fibonacciai committed c1214bf · verified · 1 parent: be013d3

Update README.md

---
sdk: static
pinned: false
---

# Persian-llm-fibonacci-1-7b-chat.P1_0 🌟

## Description 📄
**Persian-llm-fibonacci-1-7b-chat.P1_0** is a **1.7-billion-parameter language model (LLM)** designed specifically for **Persian-language chat and text interaction**. Developed as part of the **FibonacciAI** project, it is optimized to generate fluent, natural Persian text, making it well suited to conversational AI applications.

Built on an advanced language-model architecture (GPT-style), it handles tasks such as chat, content generation, question answering, and more. 🚀

---

## Use Cases 💡
- **Chatbots**: Create intelligent Persian-language chatbots. 🤖
- **Content Generation**: Generate creative and contextually relevant Persian text. 📝
- **Question Answering**: Provide natural and accurate answers to user queries. ❓
- **Machine Translation**: Translate text to and from Persian. 🌍

---

## How to Use 🛠️
To use this model, you can leverage the `transformers` library. Here's a quick example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model_name = "fibonacciai/Persian-llm-fibonacci-1-7b-chat.P1_0"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Generate a response to an input text
input_text = "سلام، چطوری؟"  # "Hello, how are you?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)

# Decode the generated token IDs back to text
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
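For quick experiments, the same checkpoint can also be driven through the `transformers` `pipeline` helper. This is a minimal sketch, assuming the repo id above resolves to a standard causal-LM checkpoint; the `chat` helper function is illustrative, not part of the model's API:

```python
from transformers import pipeline


def chat(prompt: str,
         model_id: str = "fibonacciai/Persian-llm-fibonacci-1-7b-chat.P1_0") -> str:
    """Generate a Persian reply using a text-generation pipeline.

    Note: downloads the model weights on first call.
    """
    generator = pipeline("text-generation", model=model_id)
    # Sampling with a moderate temperature tends to give less repetitive chat replies.
    result = generator(prompt, max_new_tokens=50, do_sample=True, temperature=0.7)
    return result[0]["generated_text"]


# Example (downloads the checkpoint on first run):
# print(chat("سلام، چطوری؟"))  # "Hello, how are you?"
```

`pipeline` wraps the tokenizer/model/decode steps from the previous example into a single call, which is convenient for demos but offers less control over generation than calling `model.generate` directly.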