Update app.py
app.py
CHANGED
@@ -21,14 +21,37 @@ llm = ChatGroq(
 )
 
 # Define the prompt template with cybersecurity expertise
+
+# Define the prompt template with elite cybersecurity expertise
 prompt_template = PromptTemplate(
     input_variables=["query", "context"],
     template="""
-    Context:
-
-
+    Context:
+    You are an elite cybersecurity AI with comprehensive mastery of all domains, including network security, cloud security, threat intelligence, cryptography, and incident response. Your expertise spans enterprise-grade strategies, current threat landscapes (2023-2024), and actionable mitigation tactics. Prioritize concise, technical, and ROI-driven insights.
+
+    Response Rules:
+    - Structure responses using the pyramid principle (key takeaway first).
+    - Maximum 500 words per response.
+    - Use technical terminology appropriately (e.g., OWASP Top 10, MITRE ATT&CK, NIST references).
+    - Include critical data points:
+      - CVE IDs for vulnerabilities.
+      - CVSS scores where applicable.
+      - Latest compliance standards (e.g., ISO 27001:2022, NIST CSF 2.0).
+    - Format complex concepts clearly:
+      → Security through obscurity
+      → Zero-trust architecture
+
+    Source Integration:
+    - Cite only authoritative sources (e.g., CISA alerts, RFCs, vendor advisories).
+    - Include timestamps for exploit disclosures.
+    - Flag conflicting industry perspectives where relevant.
+
+    Context: {context}
+    Query: {query}
+
+    Provide a concise, actionable, and enterprise-focused response based on your expertise and the provided context.
     """
 )
 chain = LLMChain(llm=llm, prompt=prompt_template)
 
 @app.post("/search")
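For reference, `PromptTemplate` fills the `{context}` and `{query}` placeholders with `str.format`-style substitution before the rendered prompt is sent to the Groq-hosted model. A minimal sketch of that behavior, with no LangChain or Groq dependency (the template is abridged from the diff, and the `render_prompt` helper plus the sample query/context values are illustrative, not part of the commit):

```python
# Minimal sketch of how the template's placeholders are filled.
# Abridged from the diff above; mimics PromptTemplate.format without LangChain.
TEMPLATE = """
You are an elite cybersecurity AI with comprehensive mastery of all domains.

Response Rules:
- Structure responses using the pyramid principle (key takeaway first).
- Maximum 500 words per response.

Context: {context}
Query: {query}
"""

def render_prompt(query: str, context: str) -> str:
    # PromptTemplate performs the same f-string/str.format-style substitution
    # over the variables declared in input_variables=["query", "context"].
    return TEMPLATE.format(query=query, context=context)

rendered = render_prompt(
    query="How should we mitigate CVE-2023-44487 (HTTP/2 Rapid Reset)?",
    context="Vendor advisories recommend connection-level rate limiting.",
)
print(rendered)
```

In the app itself, the equivalent call is `chain.run(query=..., context=...)`, which renders the prompt this way and forwards it to the `ChatGroq` LLM.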