feat: refactor model manager

* chore: mv model icon
* fix: model icon
* fix: model icon
* feat: refactor model manager
* fix: model icon
* fix: model icon
* feat: refactor model manager

See merge request: !905
Author: 徐兆楠
Date: 2025-07-24 13:12:44 +00:00
parent 12f7762797
commit 9b3814e2c5
114 changed files with 2888 additions and 4982 deletions

@@ -1,94 +0,0 @@
id: 2002 # model entity id; data with the same id will not be overwritten
name: Doubao Model
description: test doubao description
meta:
id: 102
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **Markdown**: 将引导模型使用Markdown格式输出回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'Markdown'
-
value: '2'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
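Each of the deleted files in this batch declares a per-model parameter schema: every entry carries a name, bilingual label/desc strings, a type, a min/max range, a display precision, a widget style, and a default_val map whose optional creative/balance/precise keys act as presets layered over the plain default. Below is a minimal sketch of how such a file could be consumed, assuming PyYAML and the original indented YAML on disk; the file name and helper names are hypothetical, not the project's actual API:

import yaml

def resolve_default(param: dict, preset: str = "default_val") -> str:
    # Fall back to the plain default when a preset (creative/balance/precise) is absent.
    defaults = param["default_val"]
    return defaults.get(preset, defaults["default_val"])

def clamp(param: dict, value):
    # Clamp into [min, max] and round to the declared precision (numeric params only).
    lo, hi = float(param["min"]), float(param["max"])
    v = min(max(float(value), lo), hi)
    return round(v, int(param["precision"])) if param["type"] == "float" else int(v)

with open("model_2002.yaml", encoding="utf-8") as f:  # hypothetical path
    cfg = yaml.safe_load(f)

for p in cfg["default_parameters"]:
    if "min" in p:  # response_format declares options instead of a numeric range
        print(p["name"], clamp(p, resolve_default(p, "balance")))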

@@ -1,66 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Lite
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity

@@ -1,66 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Pro-256k
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity

@@ -1,66 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Pro-32k
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity

@@ -1,90 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Thinking-Pro
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,90 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Thinking-Vision-Pro
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,90 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Vision-Lite
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,90 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Vision-Pro
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,90 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-Seed-1.6-Flash
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,90 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-Seed-1.6-Thinking
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,90 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-Seed-1.6
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,66 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Deepseek-R1-VolcEngine
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity

@@ -1,66 +0,0 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Deepseek-V3-VolcEngine
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity

@@ -1,132 +0,0 @@
id: 100 # model entity id; data with the same id will not be overwritten
name: test_model
description: test_description
meta:
id: 0
scenario: 1
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: frequency_penalty
label:
zh: 重复语句惩罚
en: Frequency penalty
desc:
zh: '- **frequency penalty**: 当该值为正时,会阻止模型频繁使用相同的词汇和短语,从而增加输出内容的多样性。'
en: '**Frequency Penalty**: When positive, it discourages the model from repeating the same words and phrases, thereby increasing the diversity of the output.'
type: float
min: '-2'
max: '2'
precision: 2
default_val:
default_val: '0'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: presence_penalty
label:
zh: 重复主题惩罚
en: Presence penalty
desc:
zh: '- **presence penalty**: 当该值为正时,会阻止模型频繁讨论相同的主题,从而增加输出内容的多样性'
en: '**Presence Penalty**: When positive, it prevents the model from discussing the same topics repeatedly, thereby increasing the diversity of the output.'
type: float
min: '-2'
max: '2'
precision: 2
default_val:
default_val: '0'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **Markdown**: 将引导模型使用Markdown格式输出回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'Markdown'
-
value: '2'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
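The response_format entries above store their choices as an options list of value/label pairs rendered as radio_buttons, so the persisted value is the stringified enum ('0', '1', '2') rather than the label. A hedged sketch of validating a submitted value against the declared options (the helper name is illustrative, not the project's actual API):

def validate_choice(param: dict, value: str) -> str:
    # Accept only values that appear in the parameter's declared options list.
    allowed = {opt["value"] for opt in param.get("options", [])}
    if allowed and value not in allowed:
        raise ValueError(f"{param['name']}: {value!r} is not one of {sorted(allowed)}")
    return value

# e.g. validate_choice(response_format_param, '1') returns '1' (Markdown)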

@@ -1,47 +0,0 @@
id: 2006 # model entity id; data with the same id will not be overwritten
name: Claude-3.5-Sonnet
description: test claude description
meta:
id: 106
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,72 +0,0 @@
id: 2004 # model entity id; data with the same id will not be overwritten
name: DeepSeek-V3
description: test deepseek description
meta:
id: 104
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON Object'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,90 +0,0 @@
id: 2007 # model entity id; data with the same id will not be overwritten
name: Gemini-2.5-Flash
description: test gemini description
meta:
id: 107
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '2'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,47 +0,0 @@
id: 2003 # model entity id; data with the same id will not be overwritten
name: Gemma-3
description: test gemma-3 description
meta:
id: 103
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,131 +0,0 @@
id: 2001 # model entity id; data with the same id will not be overwritten
name: GPT-4o
description: test gpt-4o description
meta:
id: 101
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: frequency_penalty
label:
zh: 重复语句惩罚
en: Frequency penalty
desc:
zh: '- **frequency penalty**: 当该值为正时,会阻止模型频繁使用相同的词汇和短语,从而增加输出内容的多样性。'
en: '**Frequency Penalty**: When positive, it discourages the model from repeating the same words and phrases, thereby increasing the diversity of the output.'
type: float
min: '-2'
max: '2'
precision: 2
default_val:
default_val: '0'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: presence_penalty
label:
zh: 重复主题惩罚
en: Presence penalty
desc:
zh: '- **presence penalty**: 当该值为正时,会阻止模型频繁讨论相同的主题,从而增加输出内容的多样性'
en: '**Presence Penalty**: When positive, it prevents the model from discussing the same topics repeatedly, thereby increasing the diversity of the output.'
type: float
min: '-2'
max: '2'
precision: 2
default_val:
default_val: '0'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **Markdown**: 将引导模型使用Markdown格式输出回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'Markdown'
-
value: '2'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

@@ -1,66 +0,0 @@
id: 2005 # model entity id; data with the same id will not be overwritten
name: Qwen3-32B
description: test qwen description
meta:
id: 105
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.95'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity

@@ -1,41 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 102 # model id; data with the same id will not be overwritten
name: Doubao # model meta name
icon_uri: doubao_v2.png
description:
zh: 豆包模型简介 # default model description
en: doubao model description
protocol: ark # model connection protocol
capability: # basic model capabilities
function_call: true # whether the model supports function call
input_modal: # supported input modalities
- text
- image
input_tokens: 128000 # input token limit
output_modal: # supported output modalities
- text
output_tokens: 16384 # output token limit
max_tokens: 128000 # maximum token count
json_mode: false # whether json mode is supported
prefix_caching: false # whether prefix caching is supported
reasoning: false # whether reasoning is supported
prefill_response: false # whether prefill (response continuation) is supported
conn_config: # model connection parameters
api_key: '' # REQUIRED: api_key
model: '' # REQUIRED: model name
temperature: 0.1 # default temperature
frequency_penalty: 0 # default frequency_penalty
presence_penalty: 0 # default presence_penalty
max_tokens: 4096 # default max_tokens
top_p: 0.7 # default top_p
top_k: 0 # default top_k
# protocol-specific configuration below; only fill in the block matching the protocol above
ark: # OPTIONAL
region: ''
access_key: ''
secret_key: ''
status: 1
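The ark templates in this batch all share one layout: generic metadata (id, name, icon_uri, description), a capability block of feature flags and token limits, a conn_config block whose api_key and model fields are marked REQUIRED, and a protocol-specific block (here ark) holding region and access/secret keys. A minimal loader sketch under the same assumptions as above (PyYAML; the function and path names are hypothetical):

import yaml

REQUIRED_CONN_FIELDS = ("api_key", "model")  # marked REQUIRED in the template comments

def load_model_template(path: str) -> dict:
    # Parse the template and reject it when a REQUIRED conn_config field is empty.
    with open(path, encoding="utf-8") as f:
        cfg = yaml.safe_load(f)
    conn = cfg.get("conn_config") or {}
    missing = [k for k in REQUIRED_CONN_FIELDS if not conn.get(k)]
    if missing:
        raise ValueError(f"{cfg.get('name', path)}: missing conn_config fields: {missing}")
    return cfg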

@@ -1,40 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; data with the same id will not be overwritten
name: doubao-1.5-lite # model meta name
icon_uri: doubao_v2.png
description:
zh: 'Doubao-1.5-lite全新一代轻量版模型极致响应速度效果与时延均达到全球一流水平。' # default model description
en: 'Doubao-1.5-lite, the new generation lightweight model, delivers ultra-fast response speed with both performance and latency reaching world-class standards.'
protocol: ark # model connection protocol
capability: # basic model capabilities
function_call: true # whether the model supports function call
input_modal: # supported input modalities
- text
input_tokens: 20000 # input token limit
output_modal: # supported output modalities
- text
output_tokens: 12000 # output token limit
max_tokens: 32000 # maximum token count
json_mode: false # whether json mode is supported
prefix_caching: true # whether prefix caching is supported
reasoning: false # whether reasoning is supported
prefill_response: false # whether prefill (response continuation) is supported
conn_config: # model connection parameters
api_key: '' # REQUIRED: api_key
model: '' # REQUIRED: model name
temperature: 0.1 # default temperature
frequency_penalty: 0 # default frequency_penalty
presence_penalty: 0 # default presence_penalty
max_tokens: 4096 # default max_tokens
top_p: 0.7 # default top_p
top_k: 0 # default top_k
# protocol-specific configuration below; only fill in the block matching the protocol above
ark: # OPTIONAL
region: ''
access_key: ''
secret_key: ''
status: 1

@@ -1,40 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; data with the same id will not be overwritten
name: doubao-1.5-pro-256k # model meta name
icon_uri: doubao_v2.png
description:
zh: 'doubao-1.5-pro-256k基于doubao-1.5-Pro全面升级版整体效果大幅提升10%。更高性能、更大窗口、超高性价比,适用于更广泛的应用场景。' # default model description
en: 'doubao-1.5-pro-256k is a fully upgraded version based on doubao-1.5-Pro, with an overall performance improvement of 10%. It offers higher performance, a larger context window, and exceptional cost-effectiveness, making it suitable for a wider range of application scenarios.'
protocol: ark # model connection protocol
capability: # basic model capabilities
function_call: true # whether the model supports function call
input_modal: # supported input modalities
- text
input_tokens: 96000 # input token limit
output_modal: # supported output modalities
- text
output_tokens: 12000 # output token limit
max_tokens: 256000 # maximum token count
json_mode: false # whether json mode is supported
prefix_caching: false # whether prefix caching is supported
reasoning: false # whether reasoning is supported
prefill_response: false # whether prefill (response continuation) is supported
conn_config: # model connection parameters
api_key: '' # REQUIRED: api_key
model: '' # REQUIRED: model name
temperature: 0.1 # default temperature
frequency_penalty: 0 # default frequency_penalty
presence_penalty: 0 # default presence_penalty
max_tokens: 4096 # default max_tokens
top_p: 0.7 # default top_p
top_k: 0 # default top_k
# protocol-specific configuration below; only fill in the block matching the protocol above
ark: # OPTIONAL
region: ''
access_key: ''
secret_key: ''
status: 1

@@ -1,40 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; data with the same id will not be overwritten
name: doubao-1.5-pro-32k # model meta name
icon_uri: doubao_v2.png
description:
zh: 'Doubao-1.5-pro全新一代主力模型性能全面升级在知识、代码、推理、等方面表现卓越。' # default model description
en: 'Doubao-1.5-pro, the new generation flagship model, features comprehensive performance upgrades and excels in areas such as knowledge, coding, and reasoning.'
protocol: ark # model connection protocol
capability: # basic model capabilities
function_call: true # whether the model supports function call
input_modal: # supported input modalities
- text
input_tokens: 96000 # input token limit
output_modal: # supported output modalities
- text
output_tokens: 12000 # output token limit
max_tokens: 128000 # maximum token count
json_mode: false # whether json mode is supported
prefix_caching: true # whether prefix caching is supported
reasoning: false # whether reasoning is supported
prefill_response: false # whether prefill (response continuation) is supported
conn_config: # model connection parameters
api_key: '' # REQUIRED: api_key
model: '' # REQUIRED: model name
temperature: 0.1 # default temperature
frequency_penalty: 0 # default frequency_penalty
presence_penalty: 0 # default presence_penalty
max_tokens: 4096 # default max_tokens
top_p: 0.7 # default top_p
top_k: 0 # default top_k
# protocol-specific configuration below; only fill in the block matching the protocol above
ark: # OPTIONAL
region: ''
access_key: ''
secret_key: ''
status: 1

@@ -1,41 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; data with the same id will not be overwritten
name: doubao-1.5-thinking-pro # model meta name
icon_uri: doubao_v2.png
description:
zh: 'doubao-1.5 全新深度思考模型,在数学、编程、科学推理等专业领域及创意写作等通用任务中表现突出,在 AIME 2024、Codeforces、GPQA 等多项权威基准上达到或接近业界第一梯队水平。' # default model description
en: "doubao-1.5 is a brand-new deep thinking model that excels in specialized fields such as mathematics, programming, and scientific reasoning, as well as in general tasks like creative writing. It achieves or approaches the industry's top-tier level on multiple authoritative benchmarks including AIME 2024, Codeforces, and GPQA."
protocol: ark # model connection protocol
capability: # basic model capabilities
function_call: true # whether the model supports function call
input_modal: # supported input modalities
- text
- image
input_tokens: 96000 # input token limit
output_modal: # supported output modalities
- text
output_tokens: 16000 # output token limit
max_tokens: 128000 # maximum token count
json_mode: true # whether json mode is supported
prefix_caching: false # whether prefix caching is supported
reasoning: true # whether reasoning is supported
prefill_response: false # whether prefill (response continuation) is supported
conn_config: # model connection parameters
api_key: '' # REQUIRED: api_key
model: '' # REQUIRED: model name
temperature: 0.1 # default temperature
frequency_penalty: 0 # default frequency_penalty
presence_penalty: 0 # default presence_penalty
max_tokens: 4096 # default max_tokens
top_p: 0.7 # default top_p
top_k: 0 # default top_k
# protocol-specific configuration below; only fill in the block matching the protocol above
ark: # OPTIONAL
region: ''
access_key: ''
secret_key: ''
status: 1

@@ -1,42 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; data with the same id will not be overwritten
name: doubao-1.5-thinking-vision-pro # model meta name
icon_uri: doubao_v2.png
description:
zh: 'doubao-1-5-thinking-vision-pro 全新视觉深度思考模型,具备更强的通用多模态理解和推理能力,在 59 个公开评测基准中的 37 个上取得 SOTA 表现。' # default model description
en: 'doubao-1-5-thinking-vision-pro is a brand-new visual deep thinking model, featuring stronger general multimodal understanding and reasoning abilities, achieving SOTA performance on 37 out of 59 public evaluation benchmarks.'
protocol: ark # model connection protocol
capability: # basic model capabilities
function_call: true # whether the model supports function call
input_modal: # supported input modalities
- text
- image
- video
input_tokens: 96000 # input token limit
output_modal: # supported output modalities
- text
output_tokens: 16000 # output token limit
max_tokens: 128000 # maximum token count
json_mode: true # whether json mode is supported
prefix_caching: false # whether prefix caching is supported
reasoning: true # whether reasoning is supported
prefill_response: false # whether prefill (response continuation) is supported
conn_config: # model connection parameters
api_key: '' # REQUIRED: api_key
model: '' # REQUIRED: model name
temperature: 0.1 # default temperature
frequency_penalty: 0 # default frequency_penalty
presence_penalty: 0 # default presence_penalty
max_tokens: 4096 # default max_tokens
top_p: 0.7 # default top_p
top_k: 0 # default top_k
# protocol-specific configuration below; only fill in the block matching the protocol above
ark: # OPTIONAL
region: ''
access_key: ''
secret_key: ''
status: 1

@@ -1,41 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; data with the same id will not be overwritten
name: doubao-1.5-vision-lite # model meta name
icon_uri: doubao_v2.png
description:
zh: 'doubao-1.5-vision-lite极具性价比的多模态大模型支持任意分辨率和极端长宽比图像识别增强视觉推理、文档识别、细节信息理解和指令遵循能力。' # default model description
en: 'doubao-1.5-vision-lite is a highly cost-effective multimodal large model that supports image recognition at any resolution and extreme aspect ratios, enhancing visual reasoning, document recognition, detailed information comprehension, and instruction-following capabilities.'
protocol: ark # model connection protocol
capability: # basic model capabilities
function_call: false # whether the model supports function call
input_modal: # supported input modalities
- text
- image
input_tokens: 96000 # input token limit
output_modal: # supported output modalities
- text
output_tokens: 16000 # output token limit
max_tokens: 128000 # maximum token count
json_mode: false # whether json mode is supported
prefix_caching: false # whether prefix caching is supported
reasoning: false # whether reasoning is supported
prefill_response: false # whether prefill (response continuation) is supported
conn_config: # model connection parameters
api_key: '' # REQUIRED: api_key
model: '' # REQUIRED: model name
temperature: 0.1 # default temperature
frequency_penalty: 0 # default frequency_penalty
presence_penalty: 0 # default presence_penalty
max_tokens: 4096 # default max_tokens
top_p: 0.7 # default top_p
top_k: 0 # default top_k
# protocol-specific configuration below; only fill in the block matching the protocol above
ark: # OPTIONAL
region: ''
access_key: ''
secret_key: ''
status: 1

@@ -1,42 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; data with the same id will not be overwritten
name: doubao-1.5-vision-pro # model meta name
icon_uri: doubao_v2.png
description:
zh: 'doubao-1.5-vision-pro全新升级的多模态大模型支持任意分辨率和极端长宽比图像识别增强视觉推理、文档识别、细节信息理解和指令遵循能力。' # default model description
en: 'doubao-1.5-vision-pro is a newly upgraded multimodal large model that supports image recognition at any resolution and extreme aspect ratios, enhancing visual reasoning, document recognition, detailed information comprehension, and instruction-following capabilities.'
protocol: ark # model connection protocol
capability: # basic model capabilities
function_call: true # whether the model supports function call
input_modal: # supported input modalities
- text
- image
- video
input_tokens: 96000 # input token limit
output_modal: # supported output modalities
- text
output_tokens: 16000 # output token limit
max_tokens: 128000 # maximum token count
json_mode: true # whether json mode is supported
prefix_caching: false # whether prefix caching is supported
reasoning: true # whether reasoning is supported
prefill_response: false # whether prefill (response continuation) is supported
conn_config: # model connection parameters
api_key: '' # REQUIRED: api_key
model: '' # REQUIRED: model name
temperature: 0.1 # default temperature
frequency_penalty: 0 # default frequency_penalty
presence_penalty: 0 # default presence_penalty
max_tokens: 4096 # default max_tokens
top_p: 0.7 # default top_p
top_k: 0 # default top_k
# protocol-specific configuration below; only fill in the block matching the protocol above
ark: # OPTIONAL
region: ''
access_key: ''
secret_key: ''
status: 1
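Note how json_mode, reasoning, prefix_caching, and function_call vary between otherwise similar templates (compare doubao-1.5-vision-lite above with doubao-1.5-vision-pro here), which suggests callers gate features on these flags rather than on model names. An illustrative check, with the same caveats as the sketches above (hypothetical helper):

def supports(cfg: dict, feature: str) -> bool:
    # Gate behavior on capability flags such as 'function_call', 'json_mode', 'reasoning'.
    return bool(cfg.get("capability", {}).get(feature, False))

# e.g. only request JSON output when supports(cfg, "json_mode") is True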

@@ -1,42 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; data with the same id will not be overwritten
name: doubao-seed-1.6-flash # model meta name
icon_uri: doubao_v2.png
description:
zh: '有极致推理速度的多模态深度思考模型;同时支持文本和视觉理解。文本理解能力超过上一代 Lite 系列模型,视觉理解比肩友商 Pro 系列模型。' # default model description
en: 'A multimodal deep thinking model with extreme reasoning speed; it supports both text and visual understanding. Its text comprehension surpasses the previous generation Lite series models, while its visual understanding rivals competitor Pro series models.'
protocol: ark # model connection protocol
capability: # basic model capabilities
function_call: true # whether the model supports function call
input_modal: # supported input modalities
- text
- image
- video
input_tokens: 224000 # input token limit
output_modal: # supported output modalities
- text
output_tokens: 32000 # output token limit
max_tokens: 256000 # maximum token count
json_mode: true # whether json mode is supported
prefix_caching: true # whether prefix caching is supported
reasoning: true # whether reasoning is supported
prefill_response: false # whether prefill (response continuation) is supported
conn_config: # model connection parameters
api_key: '' # REQUIRED: api_key
model: '' # REQUIRED: model name
temperature: 0.1 # default temperature
frequency_penalty: 0 # default frequency_penalty
presence_penalty: 0 # default presence_penalty
max_tokens: 4096 # default max_tokens
top_p: 0.7 # default top_p
top_k: 0 # default top_k
# protocol-specific configuration below; only fill in the block matching the protocol above
ark: # OPTIONAL
region: ''
access_key: ''
secret_key: ''
status: 1

@@ -1,42 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; data with the same id will not be overwritten
name: doubao-seed-1.6-thinking # model meta name
icon_uri: doubao_v2.png
description:
zh: '在思考能力上进行了大幅强化, 对比 doubao 1.5 代深度理解模型,在编程、数学、逻辑推理等基础能力上进一步提升, 支持视觉理解。' # default model description
en: 'Thinking capabilities are significantly enhanced compared to the doubao 1.5 generation deep understanding model, with further improvements in fundamental skills such as programming, mathematics, and logical reasoning, plus support for visual understanding.'
protocol: ark # model connection protocol
capability: # basic model capabilities
function_call: true # whether the model supports function call
input_modal: # supported input modalities
- text
- image
- video
input_tokens: 224000 # input token limit
output_modal: # supported output modalities
- text
output_tokens: 16000 # output token limit
max_tokens: 256000 # maximum token count
json_mode: true # whether json mode is supported
prefix_caching: true # whether prefix caching is supported
reasoning: true # whether reasoning is supported
prefill_response: false # whether prefill (response continuation) is supported
conn_config: # model connection parameters
api_key: '' # REQUIRED: api_key
model: '' # REQUIRED: model name
temperature: 0.1 # default temperature
frequency_penalty: 0 # default frequency_penalty
presence_penalty: 0 # default presence_penalty
max_tokens: 4096 # default max_tokens
top_p: 0.7 # default top_p
top_k: 0 # default top_k
# protocol-specific configuration below; only fill in the block matching the protocol above
ark: # OPTIONAL
region: ''
access_key: ''
secret_key: ''
status: 1


@@ -1,42 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; entries with the same id are not overwritten
name: doubao-seed-1.6 # model meta name
icon_uri: doubao_v2.png
description:
  zh: '全新多模态深度思考模型,同时支持 thinking、non-thinking、auto三种思考模式。其中 non-thinking 模型对比 doubao-1.5-pro-32k-250115 模型大幅提升。' # default model description
  en: 'A brand-new multimodal deep thinking model that supports three thinking modes: thinking, non-thinking, and auto. In non-thinking mode, it improves significantly over the doubao-1.5-pro-32k-250115 model.'
protocol: ark # model connection protocol
capability: # basic model capabilities
  function_call: true # whether the model supports function call
  input_modal: # supported input modalities
    - text
    - image
    - video
  input_tokens: 224000 # input token limit
  output_modal: # supported output modalities
    - text
  output_tokens: 32000 # output token limit
  max_tokens: 256000 # maximum token count
  json_mode: true # whether json mode is supported
  prefix_caching: true # whether prefix caching is supported
  reasoning: true # whether reasoning is supported
  prefill_response: false # whether response prefill (continuation) is supported
conn_config: # model connection parameters
  api_key: '' # REQUIRED: api_key
  model: '' # REQUIRED: model name
  temperature: 0.1 # default temperature
  frequency_penalty: 0 # default frequency_penalty
  presence_penalty: 0 # default presence_penalty
  max_tokens: 4096 # default max_tokens
  top_p: 0.7 # default top_p
  top_k: 0 # default top_k
  # Protocol-specific settings below; configure only the block matching the protocol
  ark: # OPTIONAL
    region: ''
    access_key: ''
    secret_key: ''
status: 1


@@ -1,40 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; entries with the same id are not overwritten
name: deepseek-r1-ve # model meta name
icon_uri: deepseek_v2.png
description:
  zh: 'deepseek-r1 是由深度求索推出的深度思考模型。在后训练阶段大规模使用了强化学习技术,在仅有极少标注数据的情况下,极大提升了模型推理能力。在数学、代码、自然语言推理等任务上,性能比肩 OpenAI o1 正式版。' # default model description
  en: "deepseek-r1 is a deep thinking model launched by DeepSeek. It extensively employs reinforcement learning during the post-training phase, significantly enhancing the model's reasoning ability with very limited annotated data. In tasks such as mathematics, coding, and natural language reasoning, its performance rivals that of the official OpenAI o1 version."
protocol: ark # model connection protocol
capability: # basic model capabilities
  function_call: true # whether the model supports function call
  input_modal: # supported input modalities
    - text
  input_tokens: 96000 # input token limit
  output_modal: # supported output modalities
    - text
  output_tokens: 32000 # output token limit
  max_tokens: 128000 # maximum token count
  json_mode: false # whether json mode is supported
  prefix_caching: true # whether prefix caching is supported
  reasoning: true # whether reasoning is supported
  prefill_response: false # whether response prefill (continuation) is supported
conn_config: # model connection parameters
  api_key: '' # REQUIRED: api_key
  model: '' # REQUIRED: model name
  temperature: 0.1 # default temperature
  frequency_penalty: 0 # default frequency_penalty
  presence_penalty: 0 # default presence_penalty
  max_tokens: 4096 # default max_tokens
  top_p: 0.7 # default top_p
  top_k: 0 # default top_k
  # Protocol-specific settings below; configure only the block matching the protocol
  ark: # OPTIONAL
    region: ''
    access_key: ''
    secret_key: ''
status: 1


@@ -1,40 +0,0 @@
# ark model template
# model list: https://www.volcengine.com/docs/82379/1330310
# get api_key: https://www.volcengine.com/docs/82379/1399008#b00dee71
# get region: https://www.volcengine.com/docs/82379/1319853#%E8%AE%BE%E7%BD%AE%E5%9C%B0%E5%9F%9F%E5%92%8C%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D
# get ak/sk: https://www.volcengine.com/docs/82379/1319853#%E4%BD%BF%E7%94%A8access-key%E9%89%B4%E6%9D%83
id: 65536 # model id; entries with the same id are not overwritten
name: deepseek-v3-ve # model meta name
icon_uri: deepseek_v2.png
description:
  zh: 'deepseek-v3 由深度求索公司自研的MoE模型,多项评测成绩超越了 qwen2.5-72b 和 llama-3.1-405b 等开源模型,并在性能上和世界顶尖的闭源模型 gpt-4o 及 claude-3.5-Sonnet 不分伯仲。' # default model description
  en: "deepseek-v3 is a MoE model independently developed by DeepSeek. Its performance in multiple evaluations surpasses open-source models such as qwen2.5-72b and llama-3.1-405b, and it competes on par with world-leading closed-source models like gpt-4o and claude-3.5-Sonnet."
protocol: ark # model connection protocol
capability: # basic model capabilities
  function_call: true # whether the model supports function call
  input_modal: # supported input modalities
    - text
  input_tokens: 96000 # input token limit
  output_modal: # supported output modalities
    - text
  output_tokens: 16000 # output token limit
  max_tokens: 128000 # maximum token count
  json_mode: false # whether json mode is supported
  prefix_caching: true # whether prefix caching is supported
  reasoning: false # whether reasoning is supported
  prefill_response: false # whether response prefill (continuation) is supported
conn_config: # model connection parameters
  api_key: '' # REQUIRED: api_key
  model: '' # REQUIRED: model name
  temperature: 0.1 # default temperature
  frequency_penalty: 0 # default frequency_penalty
  presence_penalty: 0 # default presence_penalty
  max_tokens: 4096 # default max_tokens
  top_p: 0.7 # default top_p
  top_k: 0 # default top_k
  # Protocol-specific settings below; configure only the block matching the protocol
  ark: # OPTIONAL
    region: ''
    access_key: ''
    secret_key: ''
status: 1


@@ -1,75 +0,0 @@
id: 0 # model meta id; entries with the same id are not overwritten
name: test_model # model display name
icon_uri: test_icon_uri # model display icon uri
icon_url: test_icon_url # model display icon url
description:
  zh: test_description # default model description
  en: test_description
protocol: test_protocol # model connection protocol, see: backend/infra/contract/chatmodel/protocol.go
capability: # basic model capabilities
  function_call: true # whether the model supports function call
  input_modal: # supported input modalities
    - text
    - image
    - audio
    - video
  input_tokens: 1024 # input token limit
  output_modal: # supported output modalities
    - text
    - image
    - audio
    - video
  output_tokens: 1024 # output token limit
  max_tokens: 2048 # maximum token count
  json_mode: true # whether json mode is supported
  prefix_caching: false # whether prefix caching is supported
  reasoning: false # whether reasoning is supported
  prefill_response: false # whether response prefill (continuation) is supported
conn_config: # model connection parameters
  base_url: https://localhost:1234/chat/completion
  api_key: qweasdzxc
  timeout: 100 # nanoseconds
  model: model_name # model name
  temperature: 0.7 # default temperature
  frequency_penalty: 0 # default frequency_penalty
  presence_penalty: 0 # default presence_penalty
  max_tokens: 2048 # default max_tokens
  top_p: 0 # default top_p
  top_k: 0 # default top_k
  enable_thinking: false
  stop:
    - bye
  # Protocol-specific settings below; configure only the block matching the protocol
  openai:
    by_azure: true
    api_version: "2024-10-21"
    response_format:
      type: text
  claude:
    by_bedrock: true
    access_key: bedrock_ak
    secret_access_key: bedrock_secret_ak
    session_token: bedrock_session_token
    region: bedrock_region
  ark:
    region: region
    access_key: ak
    secret_key: sk
    retry_times: 123
    custom_header:
      key: val
  deepseek:
    response_format_type: text
  gemini:
    backend: 0
    project: ''
    location: ''
    api_version: ''
    headers:
      key_1:
        - val_1
        - val_2
    timeout: 0
    include_thoughts: true
    thinking_budget: null
status: 1
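The test_model entry above deliberately fills in every protocol-specific block at once for test coverage; in a real entry, only the block matching protocol is read. A minimal sketch for an openai-protocol model, assuming placeholder endpoint and key values:

protocol: openai
conn_config:
  base_url: https://example.openai.azure.com # placeholder endpoint (assumption)
  api_key: YOUR_OPENAI_API_KEY # placeholder credential (assumption)
  model: gpt-4o
  openai: # only the block matching the protocol is configured
    by_azure: true
    api_version: "2024-10-21"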


@@ -1,40 +0,0 @@
id: 106 # model id; entries with the same id are not overwritten
name: Claude-3.5-Sonnet # model meta name
icon_uri: claude_v2.png # model display icon uri
icon_url: '' # model display icon url
description:
  zh: claude 模型简介 # default model description
  en: claude model description
protocol: claude # model connection protocol, see: backend/infra/contract/chatmodel/protocol.go
capability: # basic model capabilities
  function_call: true # whether the model supports function call
  input_modal: # supported input modalities
    - text
    - image
  input_tokens: 128000 # input token limit
  output_modal: # supported output modalities
    - text
  output_tokens: 16384 # output token limit
  max_tokens: 128000 # maximum token count
  json_mode: false # whether json mode is supported
  prefix_caching: false # whether prefix caching is supported
  reasoning: false # whether reasoning is supported
  prefill_response: false # whether response prefill (continuation) is supported
conn_config: # model connection parameters
  base_url: '' # REQUIRED: base_url
  api_key: '' # REQUIRED: api_key
  model: '' # REQUIRED: model
  temperature: 0.7 # default temperature
  frequency_penalty: 0 # default frequency_penalty
  presence_penalty: 0 # default presence_penalty
  max_tokens: 4096 # default max_tokens
  top_p: 1 # default top_p
  top_k: 0 # default top_k
  # Protocol-specific settings below; configure only the block matching the protocol
  claude: # OPTIONAL
    by_bedrock: false # true if using bedrock service
    access_key: '' # access key
    secret_access_key: '' # secret access key
    session_token: '' # session_token
    region: '' # region
status: 1


@@ -1,35 +0,0 @@
id: 104 # model id; entries with the same id are not overwritten
name: DeepSeek-V3 # model meta name
icon_uri: deepseek_v2.png # model display icon uri
icon_url: '' # model display icon url
description:
  zh: deepseek 模型简介
  en: deepseek model description
protocol: deepseek # model connection protocol, see: backend/infra/contract/chatmodel/protocol.go
capability: # basic model capabilities
  function_call: false # whether the model supports function call
  input_modal: # supported input modalities
    - text
  input_tokens: 128000 # input token limit
  output_modal: # supported output modalities
    - text
  output_tokens: 16384 # output token limit
  max_tokens: 128000 # maximum token count
  json_mode: false # whether json mode is supported
  prefix_caching: false # whether prefix caching is supported
  reasoning: false # whether reasoning is supported
  prefill_response: false # whether response prefill (continuation) is supported
conn_config: # model connection parameters
  base_url: '' # REQUIRED: base_url
  api_key: '' # REQUIRED: api_key
  model: '' # REQUIRED: model
  temperature: 0.7 # default temperature
  frequency_penalty: 0 # default frequency_penalty
  presence_penalty: 0 # default presence_penalty
  max_tokens: 4096 # default max_tokens
  top_p: 1 # default top_p
  top_k: 0 # default top_k
  # Protocol-specific settings below; configure only the block matching the protocol
  deepseek: # OPTIONAL
    response_format_type: text # response format
status: 1


@@ -1,48 +0,0 @@
id: 107 # model id; entries with the same id are not overwritten
name: Gemini-2.5-Flash # model meta name
icon_uri: gemini_v2.png # model display icon uri
icon_url: '' # model display icon url
description:
  zh: gemini 模型简介 # default model description
  en: gemini model description
protocol: gemini # model connection protocol, see: backend/infra/contract/chatmodel/protocol.go
capability: # basic model capabilities
  function_call: true # whether the model supports function call
  input_modal: # supported input modalities
    - text
    - image
    - audio
    - video
  input_tokens: 1048576 # input token limit
  output_modal: # supported output modalities
    - text
  output_tokens: 65536 # output token limit
  max_tokens: 1114112 # maximum token count
  json_mode: true # whether json mode is supported
  prefix_caching: true # whether prefix caching is supported
  reasoning: true # whether reasoning is supported
  prefill_response: true # whether response prefill (continuation) is supported
conn_config: # model connection parameters
  base_url: '' # REQUIRED: base_url
  api_key: '' # REQUIRED: api_key
  model: gemini-2.5-flash # REQUIRED: model
  temperature: 0.7 # default temperature
  frequency_penalty: 0 # default frequency_penalty
  presence_penalty: 0 # default presence_penalty
  max_tokens: 4096 # default max_tokens
  top_p: 1 # default top_p
  top_k: 0 # default top_k
  # Protocol-specific settings below; configure only the block matching the protocol
  gemini:
    backend: 0
    project: ''
    location: ''
    api_version: ''
    headers:
      key_1:
        - val_1
        - val_2
    timeout: 0
    include_thoughts: true
    thinking_budget: null
status: 1


@@ -1,31 +0,0 @@
id: 103 # model id; entries with the same id are not overwritten
name: Gemma-3 # model meta name
icon_uri: ollama.png # model display icon uri
icon_url: '' # model display icon url
description:
  zh: ollama 模型简介
  en: ollama model description
protocol: ollama # model connection protocol, see: backend/infra/contract/chatmodel/protocol.go
capability: # basic model capabilities
  function_call: true # whether the model supports function call
  input_modal: # supported input modalities
    - text
  input_tokens: 128000 # input token limit
  output_modal: # supported output modalities
    - text
  output_tokens: 16384 # output token limit
  max_tokens: 128000 # maximum token count
  json_mode: false # whether json mode is supported
  prefix_caching: false # whether prefix caching is supported
  reasoning: false # whether reasoning is supported
  prefill_response: false # whether response prefill (continuation) is supported
conn_config: # model connection parameters
  base_url: '' # REQUIRED: base_url
  model: '' # REQUIRED: model
  temperature: 0.6 # default temperature
  frequency_penalty: 0 # default frequency_penalty
  presence_penalty: 0 # default presence_penalty
  max_tokens: 4096 # default max_tokens
  top_p: 0.95 # default top_p
  top_k: 20 # default top_k
status: 1


@@ -1,39 +0,0 @@
id: 101 # model id; entries with the same id are not overwritten
name: GPT-4o # model meta name
icon_uri: openai_v2.png # model display icon uri
icon_url: '' # model display icon url
description:
  zh: gpt 模型简介
  en: Multi-modal, 320ms, 88.7% MMLU, excels in education, customer support, health, and entertainment. # default model description
protocol: openai # model connection protocol, see: backend/infra/contract/chatmodel/protocol.go
capability: # basic model capabilities
  function_call: true # whether the model supports function call
  input_modal: # supported input modalities
    - text
    - image
  input_tokens: 128000 # input token limit
  output_modal: # supported output modalities
    - text
  output_tokens: 16384 # output token limit
  max_tokens: 128000 # maximum token count
  json_mode: false # whether json mode is supported
  prefix_caching: false # whether prefix caching is supported
  reasoning: false # whether reasoning is supported
  prefill_response: false # whether response prefill (continuation) is supported
conn_config: # model connection parameters
  base_url: '' # REQUIRED: base_url
  api_key: '' # REQUIRED: api_key
  model: '' # REQUIRED: model
  temperature: 0.7 # default temperature
  frequency_penalty: 0 # default frequency_penalty
  presence_penalty: 0 # default presence_penalty
  max_tokens: 4096 # default max_tokens
  top_p: 1 # default top_p
  top_k: 0 # default top_k
  # Protocol-specific settings below; configure only the block matching the protocol
  openai: # OPTIONAL
    by_azure: true # true if using azure openai
    api_version: '' # azure api version
    response_format: # response format
      type: text
status: 1


@@ -1,36 +0,0 @@
id: 105 # model id; entries with the same id are not overwritten
name: Qwen3-32B # model meta name
icon_uri: qwen_v2.png # model display icon uri
icon_url: '' # model display icon url
description:
  zh: 通义千问模型 # default model description
  en: qwen model description
protocol: qwen # model connection protocol, see: backend/infra/contract/chatmodel/protocol.go
capability: # basic model capabilities
  function_call: true # whether the model supports function call
  input_modal: # supported input modalities
    - text
  input_tokens: 128000 # input token limit
  output_modal: # supported output modalities
    - text
  output_tokens: 16384 # output token limit
  max_tokens: 128000 # maximum token count
  json_mode: false # whether json mode is supported
  prefix_caching: false # whether prefix caching is supported
  reasoning: false # whether reasoning is supported
  prefill_response: false # whether response prefill (continuation) is supported
conn_config: # model connection parameters
  base_url: '' # REQUIRED: base_url
  api_key: '' # REQUIRED: api_key
  model: '' # REQUIRED: model
  temperature: 0.7 # default temperature
  frequency_penalty: 0 # default frequency_penalty
  presence_penalty: 0 # default presence_penalty
  max_tokens: 4096 # default max_tokens
  top_p: 1 # default top_p
  top_k: 0 # default top_k
  # Protocol-specific settings below; configure only the block matching the protocol
  qwen: # OPTIONAL
    response_format: # response format
      type: text
status: 1


@@ -0,0 +1,133 @@
id: 2002
name: Doubao Model
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
zh: 豆包模型简介
en: doubao model description
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **Markdown**: 将引导模型使用Markdown格式输出回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: Markdown
value: "1"
- label: JSON
value: "2"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: Doubao
protocol: ark
capability:
function_call: true
input_modal:
- text
- image
input_tokens: 128000
json_mode: false
max_tokens: 128000
output_modal:
- text
output_tokens: 16384
prefix_caching: false
reasoning: false
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,108 @@
id: 65536
name: Doubao-1.5-Lite
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
zh: Doubao-1.5-lite全新一代轻量版模型极致响应速度效果与时延均达到全球一流水平。
en: Doubao-1.5-lite, the new generation lightweight model, delivers ultra-fast response speed with both performance and latency reaching world-class standards.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
meta:
name: doubao-1.5-lite
protocol: ark
capability:
function_call: true
input_modal:
- text
input_tokens: 20000
json_mode: false
max_tokens: 32000
output_modal:
- text
output_tokens: 12000
prefix_caching: true
reasoning: false
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,108 @@
id: 65536
name: Doubao-1.5-Pro-256k
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
zh: doubao-1.5-pro-256k基于doubao-1.5-Pro全面升级版整体效果大幅提升10%。更高性能、更大窗口、超高性价比,适用于更广泛的应用场景。
en: doubao-1.5-pro-256k is a fully upgraded version based on doubao-1.5-Pro, with an overall performance improvement of 10%. It offers higher performance, a larger context window, and exceptional cost-effectiveness, making it suitable for a wider range of application scenarios.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
meta:
name: doubao-1.5-pro-256k
protocol: ark
capability:
function_call: true
input_modal:
- text
input_tokens: 96000
json_mode: false
max_tokens: 256000
output_modal:
- text
output_tokens: 12000
prefix_caching: false
reasoning: false
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,108 @@
id: 65536
name: Doubao-1.5-Pro-32k
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
zh: Doubao-1.5-pro全新一代主力模型性能全面升级在知识、代码、推理、等方面表现卓越。
en: Doubao-1.5-pro, the new generation flagship model, features comprehensive performance upgrades and excels in areas such as knowledge, coding, and reasoning.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
meta:
name: doubao-1.5-pro-32k
protocol: ark
capability:
function_call: true
input_modal:
- text
input_tokens: 96000
json_mode: false
max_tokens: 128000
output_modal:
- text
output_tokens: 12000
prefix_caching: true
reasoning: false
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,131 @@
id: 65536
name: Doubao-1.5-Thinking-Pro
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
zh: doubao-1.5 全新深度思考模型,在数学、编程、科学推理等专业领域及创意写作等通用任务中表现突出,在 AIME 2024、Codeforces、GPQA 等多项权威基准上达到或接近业界第一梯队水平。
  en: doubao-1.5 is a brand-new deep thinking model that excels in specialized fields such as mathematics, programming, and scientific reasoning, as well as general tasks like creative writing. It achieves or approaches the industry's top-tier level on multiple authoritative benchmarks including AIME 2024, Codeforces, and GPQA.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: JSON
value: "1"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: doubao-1.5-thinking-pro
protocol: ark
capability:
function_call: true
input_modal:
- text
- image
input_tokens: 96000
json_mode: true
max_tokens: 128000
output_modal:
- text
output_tokens: 16000
prefix_caching: false
reasoning: true
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,132 @@
id: 65536
name: Doubao-1.5-Thinking-Vision-Pro
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
zh: doubao-1-5-thinking-vision-pro 全新视觉深度思考模型,具备更强的通用多模态理解和推理能力,在 59 个公开评测基准中的 37 个上取得 SOTA 表现。
en: doubao-1-5-thinking-vision-pro is a brand-new visual deep thinking model, featuring stronger general multimodal understanding and reasoning abilities, achieving SOTA performance on 37 out of 59 public evaluation benchmarks.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: JSON
value: "1"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: doubao-1.5-thinking-vision-pro
protocol: ark
capability:
function_call: true
input_modal:
- text
- image
- video
input_tokens: 96000
json_mode: true
max_tokens: 128000
output_modal:
- text
output_tokens: 16000
prefix_caching: false
reasoning: true
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,131 @@
id: 65536
name: Doubao-1.5-Vision-Lite
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
zh: doubao-1.5-vision-lite极具性价比的多模态大模型支持任意分辨率和极端长宽比图像识别增强视觉推理、文档识别、细节信息理解和指令遵循能力。
en: doubao-1.5-vision-lite is a highly cost-effective multimodal large model that supports image recognition at any resolution and extreme aspect ratios, enhancing visual reasoning, document recognition, detailed information comprehension, and instruction-following capabilities.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: JSON
value: "1"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: doubao-1.5-vision-lite
protocol: ark
capability:
function_call: false
input_modal:
- text
- image
input_tokens: 96000
json_mode: false
max_tokens: 128000
output_modal:
- text
output_tokens: 16000
prefix_caching: false
reasoning: false
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,132 @@
id: 65536
name: Doubao-1.5-Vision-Pro
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
zh: doubao-1.5-vision-pro全新升级的多模态大模型支持任意分辨率和极端长宽比图像识别增强视觉推理、文档识别、细节信息理解和指令遵循能力。
en: doubao-1.5-vision-pro is a newly upgraded multimodal large model that supports image recognition at any resolution and extreme aspect ratios, enhancing visual reasoning, document recognition, detailed information comprehension, and instruction-following capabilities.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: JSON
value: "1"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: doubao-1.5-vision-pro
protocol: ark
capability:
function_call: true
input_modal:
- text
- image
- video
input_tokens: 96000
json_mode: true
max_tokens: 128000
output_modal:
- text
output_tokens: 16000
prefix_caching: false
reasoning: true
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,132 @@
id: 65536
name: Doubao-Seed-1.6-Flash
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
zh: 有极致推理速度的多模态深度思考模型;同时支持文本和视觉理解。文本理解能力超过上一代 Lite 系列模型,视觉理解比肩友商 Pro 系列模型。
en: A multimodal deep thinking model with extreme reasoning speed; it supports both text and visual understanding. Its text comprehension surpasses the previous generation Lite series models, while its visual understanding rivals competitor Pro series models.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: JSON
value: "1"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: doubao-seed-1.6-flash
protocol: ark
capability:
function_call: true
input_modal:
- text
- image
- video
input_tokens: 224000
json_mode: true
max_tokens: 256000
output_modal:
- text
output_tokens: 32000
prefix_caching: true
reasoning: true
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,132 @@
id: 65536
name: Doubao-Seed-1.6-Thinking
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
zh: 在思考能力上进行了大幅强化, 对比 doubao 1.5 代深度理解模型,在编程、数学、逻辑推理等基础能力上进一步提升, 支持视觉理解。
en: Significantly enhanced in thinking capabilities, compared to the doubao 1.5 generation deep understanding model, with further improvements in fundamental skills such as programming, mathematics, and logical reasoning, and support for visual understanding.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: JSON
value: "1"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: doubao-seed-1.6-thinking
protocol: ark
capability:
function_call: true
input_modal:
- text
- image
- video
input_tokens: 224000
json_mode: true
max_tokens: 256000
output_modal:
- text
output_tokens: 16000
prefix_caching: true
reasoning: true
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,132 @@
id: 65536
name: Doubao-Seed-1.6
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
zh: 全新多模态深度思考模型,同时支持 thinking、non-thinking、auto三种思考模式。其中 non-thinking 模型对比 doubao-1.5-pro-32k-250115 模型大幅提升。
  en: 'A brand-new multimodal deep thinking model that supports three thinking modes: thinking, non-thinking, and auto. In non-thinking mode, it improves significantly over the doubao-1.5-pro-32k-250115 model.'
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: JSON
value: "1"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: doubao-seed-1.6
protocol: ark
capability:
function_call: true
input_modal:
- text
- image
- video
input_tokens: 224000
json_mode: true
max_tokens: 256000
output_modal:
- text
output_tokens: 32000
prefix_caching: true
reasoning: true
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,108 @@
id: 65536
name: Deepseek-R1-VolcEngine
icon_uri: default_icon/deepseek_v2.png
icon_url: ""
description:
zh: deepseek-r1 是由深度求索推出的深度思考模型。在后训练阶段大规模使用了强化学习技术,在仅有极少标注数据的情况下,极大提升了模型推理能力。在数学、代码、自然语言推理等任务上,性能比肩 OpenAI o1 正式版。
  en: deepseek-r1 is a deep thinking model launched by DeepSeek. It extensively employs reinforcement learning during the post-training phase, significantly enhancing the model's reasoning ability with very limited annotated data. In tasks such as mathematics, coding, and natural language reasoning, its performance rivals that of the official OpenAI o1 version.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
meta:
name: deepseek-r1-ve
protocol: ark
capability:
function_call: true
input_modal:
- text
input_tokens: 96000
json_mode: false
max_tokens: 128000
output_modal:
- text
output_tokens: 32000
prefix_caching: true
reasoning: true
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,108 @@
id: 65536
name: Deepseek-V3-VolcEngine
icon_uri: default_icon/deepseek_v2.png
icon_url: ""
description:
zh: deepseek-v3 由深度求索公司自研的MoE模型多项评测成绩超越了 qwen2.5-72b 和 llama-3.1-405b 等开源模型,并在性能上和世界顶尖的闭源模型 gpt-4o 及 claude-3.5-Sonnet 不分伯仲。
  en: deepseek-v3 is a MoE model independently developed by DeepSeek. Its performance in multiple evaluations surpasses open-source models such as qwen2.5-72b and llama-3.1-405b, and it competes on par with world-leading closed-source models like gpt-4o and claude-3.5-Sonnet.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
meta:
name: deepseek-v3-ve
protocol: ark
capability:
function_call: true
input_modal:
- text
input_tokens: 96000
json_mode: false
max_tokens: 128000
output_modal:
- text
output_tokens: 16000
prefix_caching: true
reasoning: false
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.1
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.7
top_k: 0
stop: []
openai: null
claude: null
ark:
region: ""
access_key: ""
secret_key: ""
retry_times: null
custom_header: {}
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,201 @@
id: 100
name: test_model
icon_uri: default_icon/test_icon_uri.png
icon_url: test_icon_url
description:
zh: test_description
en: test_description
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: frequency_penalty
label:
zh: 重复语句惩罚
en: Frequency penalty
desc:
zh: '- **frequency penalty**: 当该值为正时,会阻止模型频繁使用相同的词汇和短语,从而增加输出内容的多样性。'
en: '**Frequency Penalty**: When positive, it discourages the model from repeating the same words and phrases, thereby increasing the diversity of the output.'
type: float
min: "-2"
max: "2"
default_val:
default_val: "0"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: presence_penalty
label:
zh: 重复主题惩罚
en: Presence penalty
desc:
zh: '- **presence penalty**: 当该值为正时,会阻止模型频繁讨论相同的主题,从而增加输出内容的多样性'
en: '**Presence Penalty**: When positive, it prevents the model from discussing the same topics repeatedly, thereby increasing the diversity of the output.'
type: float
min: "-2"
max: "2"
default_val:
default_val: "0"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **Markdown**: 将引导模型使用Markdown格式输出回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: Markdown
value: "1"
- label: JSON
value: "2"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: test_model
protocol: test_protocol
capability:
function_call: true
input_modal:
- text
- image
- audio
- video
input_tokens: 1024
json_mode: true
max_tokens: 2048
output_modal:
- text
- image
- audio
- video
output_tokens: 1024
prefix_caching: false
reasoning: false
prefill_response: false
conn_config:
base_url: https://localhost:1234/chat/completion
api_key: qweasdzxc
timeout: 10s
model: model_name
temperature: 0.7
frequency_penalty: 0
presence_penalty: 0
max_tokens: 2048
top_p: 0
top_k: 0
stop:
- bye
enable_thinking: false
openai:
by_azure: true
api_version: "2024-10-21"
response_format:
type: text
jsonschema: null
claude:
by_bedrock: true
access_key: bedrock_ak
secret_access_key: bedrock_secret_ak
session_token: bedrock_session_token
region: bedrock_region
ark:
region: region
access_key: ak
secret_key: sk
retry_times: 123
custom_header:
key: val
deepseek:
response_format_type: text
qwen: null
gemini:
backend: 0
project: ""
location: ""
api_version: ""
headers:
key_1:
- val_1
- val_2
timeout_ms: 0
include_thoughts: true
thinking_budget: null
custom: {}
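# status: 0 presumably marks the model as available; the enum values are not
# defined in this file (assumption).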
status: 0


@@ -0,0 +1,90 @@
id: 2006
name: Claude-3.5-Sonnet
icon_uri: default_icon/claude_v2.png
icon_url: ""
description:
zh: claude 模型简介
en: claude model description
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: Claude-3.5-Sonnet
protocol: claude
capability:
function_call: true
input_modal:
- text
- image
input_tokens: 128000
json_mode: false
max_tokens: 128000
output_modal:
- text
output_tokens: 16384
prefix_caching: false
reasoning: false
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.7
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 1
top_k: 0
stop: []
openai: null
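# by_bedrock: false means the direct Anthropic API is used; when true, the AWS
# credential fields below presumably take effect instead (assumption).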
claude:
by_bedrock: false
access_key: ""
secret_access_key: ""
session_token: ""
region: ""
ark: null
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,107 @@
id: 2004
name: DeepSeek-V3
icon_uri: default_icon/deepseek_v2.png
icon_url: ""
description:
zh: deepseek 模型简介
en: deepseek model description
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: JSON Object
value: "1"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: DeepSeek-V3
protocol: deepseek
capability:
function_call: false
input_modal:
- text
input_tokens: 128000
json_mode: false
max_tokens: 128000
output_modal:
- text
output_tokens: 16384
prefix_caching: false
reasoning: false
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.7
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 1
top_k: 0
stop: []
openai: null
claude: null
ark: null
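# response_format_type presumably maps to DeepSeek's text / json_object response
# modes, matching the Text and JSON Object options above (assumption).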
deepseek:
response_format_type: text
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,139 @@
id: 2007
name: Gemini-2.5-Flash
icon_uri: default_icon/gemini_v2.png
icon_url: ""
description:
zh: gemini 模型简介
en: gemini model description
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: JSON
value: "2"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: Gemini-2.5-Flash
protocol: gemini
capability:
function_call: true
input_modal:
- text
- image
- audio
- video
input_tokens: 1048576
json_mode: true
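# max_tokens appears to be the combined window: input_tokens (1048576) + output_tokens (65536) = 1114112.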
max_tokens: 1114112
output_modal:
- text
output_tokens: 65536
prefix_caching: true
reasoning: true
prefill_response: true
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: gemini-2.5-flash
temperature: 0.7
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 1
top_k: 0
stop: []
openai: null
claude: null
ark: null
deepseek: null
qwen: null
gemini:
backend: 0
project: ""
location: ""
api_version: ""
headers:
key_1:
- val_1
- val_2
timeout_ms: 0
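# include_thoughts / thinking_budget presumably map to Gemini's reasoning ("thinking")
# controls; a null budget would leave the model default in place (assumption).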
include_thoughts: true
thinking_budget: null
custom: {}
status: 0


@@ -0,0 +1,84 @@
id: 2003
name: Gemma-3
icon_uri: default_icon/ollama.png
icon_url: ""
description:
zh: ollama 模型简介
en: ollama model description
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: Gemma-3
protocol: ollama
capability:
function_call: true
input_modal:
- text
input_tokens: 128000
json_mode: false
max_tokens: 128000
output_modal:
- text
output_tokens: 16384
prefix_caching: false
reasoning: false
prefill_response: false
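# All provider-specific sections below are null for the ollama protocol; presumably a
# local server only needs base_url and model (assumption).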
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.6
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 0.95
top_k: 20
stop: []
openai: null
claude: null
ark: null
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,171 @@
id: 2001
name: GPT-4o
icon_uri: default_icon/openai_v2.png
icon_url: ""
description:
zh: gpt 模型简介
en: Multi-modal, 320ms, 88.7% MMLU, excels in education, customer support, health, and entertainment.
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.7"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: frequency_penalty
label:
zh: 重复语句惩罚
en: Frequency penalty
desc:
zh: '- **frequency penalty**: 当该值为正时,会阻止模型频繁使用相同的词汇和短语,从而增加输出内容的多样性。'
en: '**Frequency Penalty**: When positive, it discourages the model from repeating the same words and phrases, thereby increasing the diversity of the output.'
type: float
min: "-2"
max: "2"
default_val:
default_val: "0"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: presence_penalty
label:
zh: 重复主题惩罚
en: Presence penalty
desc:
zh: '- **presence penalty**: 当该值为正时,会阻止模型频繁讨论相同的主题,从而增加输出内容的多样性。'
en: '**Presence Penalty**: When positive, it prevents the model from discussing the same topics repeatedly, thereby increasing the diversity of the output.'
type: float
min: "-2"
max: "2"
default_val:
default_val: "0"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **Markdown**: 将引导模型使用Markdown格式输出回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
type: int
min: ""
max: ""
default_val:
default_val: "0"
options:
- label: Text
value: "0"
- label: Markdown
value: "1"
- label: JSON
value: "2"
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
meta:
name: GPT-4o
protocol: openai
capability:
function_call: true
input_modal:
- text
- image
input_tokens: 128000
json_mode: false
max_tokens: 128000
output_modal:
- text
output_tokens: 16384
prefix_caching: false
reasoning: false
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.7
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 1
top_k: 0
stop: []
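# by_azure: true presumably routes requests through Azure OpenAI; api_version is left
# empty here and would need to be set for a real Azure deployment (assumption).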
openai:
by_azure: true
api_version: ""
response_format:
type: text
jsonschema: null
claude: null
ark: null
deepseek: null
qwen: null
gemini: null
custom: {}
status: 0


@@ -0,0 +1,106 @@
id: 2005
name: Qwen3-32B
icon_uri: default_icon/qwen_v2.png
icon_url: ""
description:
zh: 通义千问模型
en: qwen model description
default_parameters:
- name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: "0"
max: "1"
default_val:
balance: "0.8"
creative: "1"
default_val: "1.0"
precise: "0.3"
precision: 1
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
- name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
type: int
min: "1"
max: "4096"
default_val:
default_val: "4096"
options: []
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
- name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: "0"
max: "1"
default_val:
default_val: "0.95"
precision: 2
options: []
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
meta:
name: Qwen3-32B
protocol: qwen
capability:
function_call: true
input_modal:
- text
input_tokens: 128000
json_mode: false
max_tokens: 128000
output_modal:
- text
output_tokens: 16384
prefix_caching: false
reasoning: false
prefill_response: false
conn_config:
base_url: ""
api_key: ""
timeout: 0s
model: ""
temperature: 0.7
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
top_p: 1
top_k: 0
stop: []
openai: null
claude: null
ark: null
deepseek: null
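# Unlike the flat deepseek setting, qwen carries a structured response_format object
# (type plus an optional jsonschema); inferred from the fixture shape (assumption).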
qwen:
response_format:
type: text
jsonschema: null
gemini: null
custom: {}
status: 0