Mishig committed on
[docs] docs are not live yet (#1181)
README.md CHANGED
@@ -16,15 +16,23 @@ load_balancing_strategy: random
+ A chat interface using open source models, e.g. OpenAssistant or Llama. It is a SvelteKit app and it powers the [HuggingChat app on hf.co/chat](https://huggingface.co/chat).
+
+ 0. [No Setup Deploy](#no-setup-deploy)
+ 1. [Setup](#setup)
+ 2. [Launch](#launch)
+ 3. [Web Search](#web-search)
+ 4. [Text Embedding Models](#text-embedding-models)
+ 5. [Extra parameters](#extra-parameters)
+ 6. [Common issues](#common-issues)
+ 7. [Deploying to a HF Space](#deploying-to-a-hf-space)
+ 8. [Building](#building)
+
+ ## No Setup Deploy
+
+ If you don't want to configure, set up, and launch your own Chat UI yourself, you can use this option as a fast-deploy alternative.
+
+ You can deploy your own customized Chat UI instance with any supported [LLM](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending) of your choice on [Hugging Face Spaces](https://huggingface.co/spaces). To do so, use the chat-ui template [available here](https://huggingface.co/new-space?template=huggingchat/chat-ui-template).

Set `HF_TOKEN` in [Space secrets](https://huggingface.co/docs/hub/spaces-overview#managing-secrets-and-environment-variables) to deploy a model with gated access or a model in a private repository. It is also compatible with the [Inference for PROs](https://huggingface.co/blog/inference-pro) curated list of powerful models with higher rate limits. Make sure to create your personal token first in your [User Access Tokens settings](https://huggingface.co/settings/tokens).
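The "No Setup Deploy" steps in the added README lines (duplicate the template Space, then set `HF_TOKEN` in the Space secrets) can also be driven from code. Below is a minimal sketch, assuming the `huggingface_hub` client and its `HfApi.duplicate_space` method, which in recent versions accepts a `secrets` payload; the Space name and token values are placeholders, not values from this commit.

```python
# Sketch: duplicate the chat-ui template Space and inject HF_TOKEN as a
# Space secret in one call. Placeholder names throughout; requires
# `pip install huggingface_hub` before deploy() can actually run.

TEMPLATE_SPACE = "huggingchat/chat-ui-template"  # template linked in the README


def build_secrets(hf_token: str) -> list[dict]:
    # Shape expected by HfApi.duplicate_space(secrets=...):
    # a list of {"key": ..., "value": ...} entries.
    return [{"key": "HF_TOKEN", "value": hf_token}]


def deploy(target_space: str, hf_token: str):
    # Lazy import so build_secrets() stays usable without the package.
    from huggingface_hub import HfApi

    api = HfApi(token=hf_token)
    return api.duplicate_space(
        from_id=TEMPLATE_SPACE,
        to_id=target_space,  # e.g. "your-username/my-chat-ui" (placeholder)
        private=True,
        secrets=build_secrets(hf_token),
    )


# Usage (needs a real token from your User Access Tokens settings):
# deploy("your-username/my-chat-ui", "hf_...")
```

The secret is set at duplication time rather than afterwards, so the Space never builds without `HF_TOKEN` present.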