feat: refactor model manager

* chore: mv model icon
* fix: model icon
* fix: model icon
* feat: refactor model manager
* fix: model icon
* fix: model icon
* feat: refactor model manager

See merge request: !905
Author: 徐兆楠
Date: 2025-07-24 13:12:44 +00:00
Parent: 12f7762797
Commit: 9b3814e2c5
114 changed files with 2888 additions and 4982 deletions


@@ -0,0 +1,201 @@
id: 100
name: test_model
icon_uri: default_icon/test_icon_uri.png
icon_url: test_icon_url
description:
  zh: test_description
  en: test_description
default_parameters:
  - name: temperature
    label:
      zh: 生成随机性
      en: Temperature
    desc:
      zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性,反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
      en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
    type: float
    min: "0"
    max: "1"
    default_val:
      balance: "0.8"
      creative: "1"
      default_val: "1.0"
      precise: "0.3"
    precision: 1
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: max_tokens
    label:
      zh: 最大回复长度
      en: Response max length
    desc:
      zh: 控制模型输出的 Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
      en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
    type: int
    min: "1"
    max: "4096"
    default_val:
      default_val: "4096"
    options: []
    style:
      widget: slider
      label:
        zh: 输入及输出设置
        en: Input and output settings
  - name: top_p
    label:
      zh: Top P
      en: Top P
    desc:
      zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择,直到这些词汇的总概率累积达到 Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
      en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
    type: float
    min: "0"
    max: "1"
    default_val:
      default_val: "0.7"
    precision: 2
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: frequency_penalty
    label:
      zh: 重复语句惩罚
      en: Frequency penalty
    desc:
      zh: '- **frequency penalty**: 当该值为正时,会阻止模型频繁使用相同的词汇和短语,从而增加输出内容的多样性。'
      en: '**Frequency Penalty**: When positive, it discourages the model from repeating the same words and phrases, thereby increasing the diversity of the output.'
    type: float
    min: "-2"
    max: "2"
    default_val:
      default_val: "0"
    precision: 2
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: presence_penalty
    label:
      zh: 重复主题惩罚
      en: Presence penalty
    desc:
      zh: '- **presence penalty**: 当该值为正时,会阻止模型频繁讨论相同的主题,从而增加输出内容的多样性。'
      en: '**Presence Penalty**: When positive, it prevents the model from discussing the same topics repeatedly, thereby increasing the diversity of the output.'
    type: float
    min: "-2"
    max: "2"
    default_val:
      default_val: "0"
    precision: 2
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: response_format
    label:
      zh: 输出格式
      en: Response format
    desc:
      zh: '- **文本**: 使用普通文本格式回复\n- **Markdown**: 将引导模型使用 Markdown 格式输出回复\n- **JSON**: 将引导模型使用 JSON 格式输出'
      en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
    type: int
    min: ""
    max: ""
    default_val:
      default_val: "0"
    options:
      - label: Text
        value: "0"
      - label: Markdown
        value: "1"
      - label: JSON
        value: "2"
    style:
      widget: radio_buttons
      label:
        zh: 输入及输出设置
        en: Input and output settings
meta:
  name: test_model
  protocol: test_protocol
  capability:
    function_call: true
    input_modal:
      - text
      - image
      - audio
      - video
    input_tokens: 1024
    json_mode: true
    max_tokens: 2048
    output_modal:
      - text
      - image
      - audio
      - video
    output_tokens: 1024
    prefix_caching: false
    reasoning: false
    prefill_response: false
  conn_config:
    base_url: https://localhost:1234/chat/completion
    api_key: qweasdzxc
    timeout: 10s
    model: model_name
    temperature: 0.7
    frequency_penalty: 0
    presence_penalty: 0
    max_tokens: 2048
    top_p: 0
    top_k: 0
    stop:
      - bye
    enable_thinking: false
    openai:
      by_azure: true
      api_version: "2024-10-21"
      response_format:
        type: text
        jsonschema: null
    claude:
      by_bedrock: true
      access_key: bedrock_ak
      secret_access_key: bedrock_secret_ak
      session_token: bedrock_session_token
      region: bedrock_region
    ark:
      region: region
      access_key: ak
      secret_key: sk
      retry_times: 123
      custom_header:
        key: val
    deepseek:
      response_format_type: text
    qwen: null
    gemini:
      backend: 0
      project: ""
      location: ""
      api_version: ""
      headers:
        key_1:
          - val_1
          - val_2
      timeout_ms: 0
      include_thoughts: true
      thinking_budget: null
    custom: {}
  status: 0
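In the template above, each parameter's `default_val` map keys defaults by generation preset (`balance`, `creative`, `precise`), with the inner `default_val` entry serving as the general fallback. A minimal sketch of how a consumer of this template might resolve a preset's default value — plain dicts stand in for the parsed YAML, and `resolve_default` is a hypothetical helper, not an API from this repository:

```python
def resolve_default(param: dict, preset: str = "default_val") -> str:
    """Return the default for a generation preset, falling back to
    the inner 'default_val' key when the preset is absent."""
    defaults = param["default_val"]
    return defaults.get(preset, defaults["default_val"])


# Mirrors the temperature entry in the template above.
temperature = {
    "name": "temperature",
    "type": "float",
    "min": "0",
    "max": "1",
    "default_val": {
        "balance": "0.8",
        "creative": "1",
        "default_val": "1.0",
        "precise": "0.3",
    },
}

print(resolve_default(temperature, "precise"))  # 0.3
print(resolve_default(temperature))             # 1.0
```

Note that the values are strings in the YAML; a real consumer would presumably cast them according to the parameter's `type` field (`float` here).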