feat: refactor model manager

* chore: mv model icon
* fix: model icon
* fix: model icon
* feat: refactor model manager
* fix: model icon
* fix: model icon
* feat: refactor model manager

See merge request: !905
Author: 徐兆楠
Date: 2025-07-24 13:12:44 +00:00
Parent: 12f7762797
Commit: 9b3814e2c5
114 changed files with 2888 additions and 4982 deletions


@@ -0,0 +1,107 @@
id: 2004
name: DeepSeek-V3
icon_uri: default_icon/deepseek_v2.png
icon_url: ""
description:
  zh: deepseek 模型简介
  en: deepseek model description
default_parameters:
  - name: temperature
    label:
      zh: 生成随机性
      en: Temperature
    desc:
      zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
      en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
    type: float
    min: "0"
    max: "1"
    default_val:
      balance: "0.8"
      creative: "1"
      default_val: "1.0"
      precise: "0.3"
    precision: 1
    options: []
    style:
      widget: slider
      label:
        zh: 生成随机性
        en: Generation diversity
  - name: max_tokens
    label:
      zh: 最大回复长度
      en: Response max length
    desc:
      zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
      en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
    type: int
    min: "1"
    max: "4096"
    default_val:
      default_val: "4096"
    options: []
    style:
      widget: slider
      label:
        zh: 输入及输出设置
        en: Input and output settings
  - name: response_format
    label:
      zh: 输出格式
      en: Response format
    desc:
      zh: '- **文本**: 使用普通文本格式回复\n- **JSON**: 将引导模型使用JSON格式输出'
      en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **JSON**: Uses JSON format for replies'
    type: int
    min: ""
    max: ""
    default_val:
      default_val: "0"
    options:
      - label: Text
        value: "0"
      - label: JSON Object
        value: "1"
    style:
      widget: radio_buttons
      label:
        zh: 输入及输出设置
        en: Input and output settings
meta:
  name: DeepSeek-V3
  protocol: deepseek
  capability:
    function_call: false
    input_modal:
      - text
    input_tokens: 128000
    json_mode: false
    max_tokens: 128000
    output_modal:
      - text
    output_tokens: 16384
    prefix_caching: false
    reasoning: false
    prefill_response: false
  conn_config:
    base_url: ""
    api_key: ""
    timeout: 0s
    model: ""
    temperature: 0.7
    frequency_penalty: 0
    presence_penalty: 0
    max_tokens: 4096
    top_p: 1
    top_k: 0
    stop: []
    openai: null
    claude: null
    ark: null
    deepseek:
      response_format_type: text
    qwen: null
    gemini: null
    custom: {}
  status: 0
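
For reference, a template like the one above is only data until something decodes it. Below is a minimal sketch of how such a file could be loaded, assuming a Go consumer and the gopkg.in/yaml.v3 package; the struct names, the subset of fields, and the deepseek_v3.yaml file name are illustrative and not taken from this repository.

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// I18nText mirrors the zh/en pairs used for labels and descriptions.
type I18nText struct {
	ZH string `yaml:"zh"`
	EN string `yaml:"en"`
}

// ParamOption is one entry of an options list (e.g. Text / JSON Object).
type ParamOption struct {
	Label string `yaml:"label"`
	Value string `yaml:"value"`
}

// DefaultParameter describes one tunable parameter such as temperature.
type DefaultParameter struct {
	Name       string            `yaml:"name"`
	Label      I18nText          `yaml:"label"`
	Desc       I18nText          `yaml:"desc"`
	Type       string            `yaml:"type"`
	Min        string            `yaml:"min"`
	Max        string            `yaml:"max"`
	DefaultVal map[string]string `yaml:"default_val"`
	Precision  int               `yaml:"precision"`
	Options    []ParamOption     `yaml:"options"`
}

// ModelConfig covers only the top-level fields needed for this sketch;
// meta and conn_config are omitted.
type ModelConfig struct {
	ID                int64              `yaml:"id"`
	Name              string             `yaml:"name"`
	IconURI           string             `yaml:"icon_uri"`
	DefaultParameters []DefaultParameter `yaml:"default_parameters"`
}

func main() {
	// Hypothetical file name; not the path used by the repository.
	data, err := os.ReadFile("deepseek_v3.yaml")
	if err != nil {
		panic(err)
	}

	var cfg ModelConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}

	// Print each parameter with its per-mode defaults, e.g.
	// temperature: default="1.0" range=[0, 1]
	for _, p := range cfg.DefaultParameters {
		fmt.Printf("%s: default=%q range=[%s, %s]\n",
			p.Name, p.DefaultVal["default_val"], p.Min, p.Max)
	}
}

Note that default_val is a string-keyed map rather than a single scalar: the temperature entry carries per-preset values (balance, creative, precise) alongside default_val, so any loader has to treat it as a map of strings instead of a float.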