Commit ec76585

Updated FAQ plus miscellaneous updates. (#364)
### What problem does this PR solve?

### Type of change

- [x] Documentation Update
1 parent 826ad6a commit ec76585

4 files changed (+28, -20 lines)

README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -56,7 +56,7 @@
 ## 📌 Latest Features
 
 - 2024-04-11 Support [Xinference](./docs/xinference.md) for local LLM deployment.
-- 2024-04-10 Add a new layout recognization model to the 'Laws' method.
+- 2024-04-10 Add a new layout recognization model for analyzing Laws documentation.
 - 2024-04-08 Support [Ollama](./docs/ollama.md) for local LLM deployment.
 - 2024-04-07 Support Chinese UI.
 
````

README_ja.md

Lines changed: 9 additions & 5 deletions
````diff
@@ -53,6 +53,13 @@
 - 複数の想起と融合された再ランク付け。
 - 直感的な API によってビジネスとの統合がシームレスに。
 
+## 📌 最新の機能
+
+- 2024-04-11 ローカル LLM デプロイメント用に [Xinference](./docs/xinference.md) をサポートします。
+- 2024-04-10 メソッド「Laws」に新しいレイアウト認識モデルを追加します。
+- 2024-04-08 [Ollama](./docs/ollama.md) を使用した大規模モデルのローカライズされたデプロイメントをサポートします。
+- 2024-04-07 中国語インターフェースをサポートします。
+
 ## 🔎 システム構成
 
 <div align="center" style="margin-top:20px;margin-bottom:20px;">
@@ -170,12 +177,9 @@ $ chmod +x ./entrypoint.sh
 $ docker compose up -d
 ```
 
-## 🆕 最新の新機能
+## 📚 ドキュメンテーション
 
-- 2024-04-11 ローカル LLM デプロイメント用に [Xinference](./docs/xinference.md) をサポートします。
-- 2024-04-10 メソッド「Laws」に新しいレイアウト認識モデルを追加します。
-- 2024-04-08 [Ollama](./docs/ollama.md) を使用した大規模モデルのローカライズされたデプロイメントをサポートします。
-- 2024-04-07 中国語インターフェースをサポートします。
+- [FAQ](./docs/faq.md)
 
 ## 📜 ロードマップ
 
````

README_zh.md

Lines changed: 9 additions & 5 deletions
````diff
@@ -53,6 +53,13 @@
 - 基于多路召回、融合重排序。
 - 提供易用的 API,可以轻松集成到各类企业系统。
 
+## 📌 新增功能
+
+- 2024-04-11 支持用 [Xinference](./docs/xinference.md) 本地化部署大模型。
+- 2024-04-10 为‘Laws’版面分析增加了底层模型。
+- 2024-04-08 支持用 [Ollama](./docs/ollama.md) 本地化部署大模型。
+- 2024-04-07 支持中文界面。
+
 ## 🔎 系统架构
 
 <div align="center" style="margin-top:20px;margin-bottom:20px;">
@@ -170,12 +177,9 @@ $ chmod +x ./entrypoint.sh
 $ docker compose up -d
 ```
 
-## 🆕 最近新特性
+## 📚 技术文档
 
-- 2024-04-11 支持用 [Xinference](./docs/xinference.md) for local LLM deployment.
-- 2024-04-10 为‘Laws’版面分析增加了模型。
-- 2024-04-08 支持用 [Ollama](./docs/ollama.md) 对大模型进行本地化部署。
-- 2024-04-07 支持中文界面。
+- [FAQ](./docs/faq.md)
 
 ## 📜 路线图
 
````

docs/faq.md

Lines changed: 9 additions & 9 deletions
````diff
@@ -45,14 +45,14 @@ This feature and the related APIs are still in development. Contributions are we
 
 ### How to increase the length of RAGFlow responses?
 
-Adjust the **Max Tokens** slider in **Model Setting**:
-
-![](https://github.com/infiniflow/ragflow/assets/93570324/6a9c3577-6f5c-496a-9b8d-bee7f98a9c3c)
+1. Right click the desired dialog to display the **Chat Configuration** window.
+2. Switch to the **Model Setting** tab and adjust the **Max Tokens** slider to get the desired length.
+3. Click **OK** to confirm your change.
 
 
 ### What does Empty response mean? How to set it?
 
-You limit what the system responds to what you specify in Empty response if nothing is retrieved from your knowledge base. If you do not specify anything in Empty response, you let your LLM improvise, giving it a chance to hallucinate.
+You limit what the system responds to what you specify in **Empty response** if nothing is retrieved from your knowledge base. If you do not specify anything in **Empty response**, you let your LLM improvise, giving it a chance to hallucinate.
 
 ### Can I set the base URL for OpenAI somewhere?
 
````
````diff
@@ -70,9 +70,9 @@ You can use Ollama to deploy local LLM. See [here](https://github.com/infiniflow
 
 ### How to configure RAGFlow to respond with 100% matched results, rather than utilizing LLM?
 
-In Configuration, choose **Q&A** as the chunk method:
-
-![](https://github.com/infiniflow/ragflow/assets/93570324/b119f201-ddc2-425f-ab6d-e82fa7b7ce8c)
+1. Click the **Knowledge Base** tab in the middle top of the page.
+2. Right click the desired knowledge base to display the **Configuration** dialogue.
+3. Choose **Q&A** as the chunk method and click **Save** to confirm your change.
 
 ## Debugging
 
````
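The context line of the hunk above points to Ollama for local LLM deployment. As a rough companion sketch, not part of the commit, this is what bringing up a local model might look like before pointing RAGFlow at it (the model name `llama2` is an illustrative assumption; 11434 is Ollama's default port; see ./docs/ollama.md for the RAGFlow-side setup):

```bash
# Start the Ollama server in the background (or in a separate terminal)
ollama serve &

# Pull and smoke-test a model -- the model name here is an assumption
ollama pull llama2
ollama run llama2 "Say hello"

# Check the HTTP endpoint RAGFlow would talk to (11434 is Ollama's default)
curl http://localhost:11434/api/tags
```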
````diff
@@ -136,15 +136,15 @@ $ docker ps
 es:
   hosts: 'http://es01:9200'
 ```
-- - If you run RAGFlow outside of Docker, verify the ES host setting in **conf/service_conf.yml** using:
+- If you run RAGFlow outside of Docker, verify the ES host setting in **conf/service_conf.yml** using:
 ```bash
 curl http://<IP_OF_ES>:<PORT_OF_ES>
 ```
 
 
 ### How to handle `{"data":null,"retcode":100,"retmsg":"<NotFound '404: Not Found'>"}`?
 
-Your IP address or port number may be incorrect. If you are using the default configurations, enter http://<IP_OF_YOUR_MACHINE> (NOT `localhost`, NOT 9380, AND NO PORT NUMBER REQUIRED!) in your browser. This should work.
+Your IP address or port number may be incorrect. If you are using the default configurations, enter http://<IP_OF_YOUR_MACHINE> (**NOT `localhost`, NOT 9380, AND NO PORT NUMBER REQUIRED!**) in your browser. This should work.
 
 
 
````
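Pulling the two checks above together, a hedged command-line sanity pass might look like this (the container name `ragflow-server` and the host `127.0.0.1` are assumptions, not part of the commit; the no-port rule in the 404 answer implies the default HTTP port 80):

```bash
# es01 resolves only inside the compose network, so test ES from a container;
# ragflow-server is an assumed name -- check `docker ps` for the real one
docker exec ragflow-server curl -s http://es01:9200

# Outside Docker, use the ES host and port from conf/service_conf.yml
curl -s http://127.0.0.1:9200

# Per the 404 answer: browse to the machine's address with no explicit port,
# i.e. plain HTTP on port 80 rather than 9380
curl -sI http://127.0.0.1/
```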
