feat: manually mirror opencoze's code from bytedance

Change-Id: I09a73aadda978ad9511264a756b2ce51f5761adf
Author: fanlv
Date: 2025-07-20 17:36:12 +08:00
Commit: 890153324f
14811 changed files with 1923430 additions and 0 deletions


@@ -0,0 +1,94 @@
id: 2002 # model entity id; data with the same id will not be overwritten
name: Doubao Model
description: test doubao description
meta:
id: 102
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **Markdown**: 将引导模型使用Markdown格式输出回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'Markdown'
-
value: '2'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
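The config above stores every numeric value as a quoted string and nests preset defaults (`creative`, `balance`, `precise`) under `default_val`. A minimal sketch of how a consumer might resolve those defaults, assuming the YAML has already been deserialized into the equivalent dict (the `resolve_default` helper is hypothetical, not part of the repository):

```python
# The YAML above deserializes to a structure like this dict. Parameter names,
# preset keys, and the string-typed values come from the config; the helper
# function is illustrative only.
config = {
    "default_parameters": [
        {
            "name": "temperature",
            "type": "float",
            "default_val": {
                "default_val": "1.0",
                "creative": "1",
                "balance": "0.8",
                "precise": "0.3",
            },
        },
        {
            "name": "max_tokens",
            "type": "int",
            "default_val": {"default_val": "4096"},
        },
    ]
}

def resolve_default(config: dict, param: str, preset: str = "default_val"):
    """Return the typed value for `param`, falling back to the plain
    `default_val` when the requested preset is not defined."""
    for p in config["default_parameters"]:
        if p["name"] == param:
            vals = p["default_val"]
            raw = vals.get(preset, vals["default_val"])
            # Values are stored as YAML strings; cast by the declared type.
            return float(raw) if p["type"] == "float" else int(raw)
    raise KeyError(param)

print(resolve_default(config, "temperature", "balance"))  # 0.8
print(resolve_default(config, "max_tokens", "creative"))  # falls back: 4096
```

Note the fallback behavior: `max_tokens` declares no `creative` preset, so the helper returns its base `default_val` instead.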


@@ -0,0 +1,66 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Lite
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity


@@ -0,0 +1,66 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Pro-256k
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity


@@ -0,0 +1,66 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Pro-32k
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity


@@ -0,0 +1,90 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Thinking-Pro
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings


@@ -0,0 +1,90 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Thinking-Vision-Pro
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings


@@ -0,0 +1,90 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Vision-Lite
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings


@@ -0,0 +1,90 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-1.5-Vision-Pro
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings


@@ -0,0 +1,90 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-Seed-1.6-Flash
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings


@@ -0,0 +1,90 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-Seed-1.6-Thinking
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings


@@ -0,0 +1,90 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Doubao-Seed-1.6
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings


@@ -0,0 +1,66 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Deepseek-R1-VolcEngine
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity


@@ -0,0 +1,66 @@
id: 65536 # model entity id; data with the same id will not be overwritten
name: Deepseek-V3-VolcEngine
meta:
id: 65536
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
label_en: ''
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity


@@ -0,0 +1,132 @@
id: 100 # model entity id; data with the same id will not be overwritten
name: test_model
description: test_description
meta:
id: 0
scenario: 1
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: frequency_penalty
label:
zh: 重复语句惩罚
en: Frequency penalty
desc:
zh: '- **frequency penalty**: 当该值为正时,会阻止模型频繁使用相同的词汇和短语,从而增加输出内容的多样性。'
en: '**Frequency Penalty**: When positive, it discourages the model from repeating the same words and phrases, thereby increasing the diversity of the output.'
type: float
min: '-2'
max: '2'
precision: 2
default_val:
default_val: '0'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: presence_penalty
label:
zh: 重复主题惩罚
en: Presence penalty
desc:
zh: '- **presence penalty**: 当该值为正时,会阻止模型频繁讨论相同的主题,从而增加输出内容的多样性'
en: '**Presence Penalty**: When positive, it prevents the model from discussing the same topics repeatedly, thereby increasing the diversity of the output.'
type: float
min: '-2'
max: '2'
precision: 2
default_val:
default_val: '0'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **Markdown**: 将引导模型使用Markdown格式输出回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'Markdown'
-
value: '2'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings
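The `test_model` entry above adds slider parameters with `min`, `max`, and `precision` bounds, plus a `response_format` radio group whose integer values map to labels. A minimal sketch of how a frontend might validate a slider value and decode the format option, assuming these field names (the `clamp_and_round` helper is illustrative, not from the repository):

```python
# Validate a user-supplied slider value against a parameter spec using the
# min/max/precision fields from the config above. Note min/max/precision are
# stored as strings or small ints in the YAML, hence the casts.
def clamp_and_round(value: float, spec: dict) -> float:
    """Clamp `value` into [min, max] and round to the declared precision."""
    lo, hi = float(spec["min"]), float(spec["max"])
    clamped = min(max(value, lo), hi)
    return round(clamped, int(spec["precision"]))

top_p_spec = {"min": "0", "max": "1", "precision": 2}
print(clamp_and_round(0.12345, top_p_spec))  # 0.12
print(clamp_and_round(1.7, top_p_spec))      # 1.0 (clamped to max)

# The response_format options map integer values to display labels:
RESPONSE_FORMATS = {0: "Text", 1: "Markdown", 2: "JSON"}
print(RESPONSE_FORMATS[2])  # JSON
```

The same spec shape works for `temperature`, `frequency_penalty`, and `presence_penalty`; only the bounds and precision differ.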


@@ -0,0 +1,47 @@
id: 2006 # model entity id; data with the same id will not be overwritten
name: Claude-3.5-Sonnet
description: test claude description
meta:
id: 106
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings


@@ -0,0 +1,72 @@
id: 2004 # model entity id; data with the same id will not be overwritten
name: DeepSeek-V3
description: test deepseek description
meta:
id: 104
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'JSON Object'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

View File

@@ -0,0 +1,90 @@
id: 2007 # model entity id; entries with the same id are not overwritten
name: Gemini-2.5-Flash
description: test gemini description
meta:
id: 107
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '2'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

View File

@@ -0,0 +1,47 @@
id: 2003 # model entity id; entries with the same id are not overwritten
name: Gemma-3
description: test gemma-3 description
meta:
id: 103
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings

View File

@@ -0,0 +1,131 @@
id: 2001 # model entity id; entries with the same id are not overwritten
name: GPT-4o
description: test gpt-4o description
meta:
id: 101
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.7'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: frequency_penalty
label:
zh: 重复语句惩罚
en: Frequency penalty
desc:
zh: '- **frequency penalty**: 当该值为正时,会阻止模型频繁使用相同的词汇和短语,从而增加输出内容的多样性。'
en: '**Frequency Penalty**: When positive, it discourages the model from repeating the same words and phrases, thereby increasing the diversity of the output.'
type: float
min: '-2'
max: '2'
precision: 2
default_val:
default_val: '0'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: presence_penalty
label:
zh: 重复主题惩罚
en: Presence penalty
desc:
zh: '- **presence penalty**: 当该值为正时,会阻止模型频繁讨论相同的主题,从而增加输出内容的多样性'
en: '**Presence Penalty**: When positive, it prevents the model from discussing the same topics repeatedly, thereby increasing the diversity of the output.'
type: float
min: '-2'
max: '2'
precision: 2
default_val:
default_val: '0'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: response_format
label:
zh: 输出格式
en: Response format
desc:
zh: '- **文本**: 使用普通文本格式回复\n- **Markdown**: 将引导模型使用Markdown格式输出回复\n- **JSON**: 将引导模型使用JSON格式输出'
en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
type: int
precision: 0
default_val:
default_val: '0'
options:
-
value: '0'
label: 'Text'
-
value: '1'
label: 'Markdown'
-
value: '2'
label: 'JSON'
style:
widget: radio_buttons
label:
zh: 输入及输出设置
en: Input and output settings

View File

@@ -0,0 +1,66 @@
id: 2005 # model entity id; entries with the same id are not overwritten
name: Qwen3-32B
description: test qwen description
meta:
id: 105
default_parameters:
-
name: temperature
label:
zh: 生成随机性
en: Temperature
desc:
zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性反之降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
type: float
min: '0'
max: '1'
precision: 1
default_val:
default_val: '1.0'
creative: '1'
balance: '0.8'
precise: '0.3'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
-
name: max_tokens
label:
zh: 最大回复长度
en: Response max length
desc:
zh: '控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。'
en: 'You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.'
type: int
min: '1'
max: '4096'
precision: 0
default_val:
default_val: '4096'
style:
widget: slider
label:
zh: 输入及输出设置
en: Input and output settings
-
name: top_p
label:
zh: Top P
en: Top P
desc:
zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
type: float
min: '0'
max: '1'
precision: 2
default_val:
default_val: '0.95'
style:
widget: slider
label:
zh: 生成多样性
en: Generation diversity
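Each `default_parameters` entry above declares a `type`, string-encoded `min`/`max` bounds, a `precision`, and a `default_val`. A minimal sketch of how a loader might coerce and clamp a user-supplied value against such a schema is shown below; the function name `clamp_param` and the plain-dict representation are assumptions for illustration, not part of the repository's actual loader, though the field names mirror the YAML files in this commit.

```python
# Hypothetical helper: coerce a raw string value against one of the
# default_parameters schemas above, clamp it to [min, max], and round
# floats to the declared precision. Field names (type/min/max/precision/
# default_val) follow the YAML in this commit; the function itself is a sketch.

def clamp_param(schema: dict, raw: str):
    """Coerce `raw` to the declared type, clamp to bounds, apply precision."""
    caster = {"float": float, "int": int}[schema["type"]]
    value = caster(raw)
    lo, hi = caster(schema["min"]), caster(schema["max"])
    value = max(lo, min(hi, value))  # clamp into the declared range
    if schema["type"] == "float":
        value = round(value, schema["precision"])
    return value

# Mirrors the `temperature` entry (type: float, min: '0', max: '1', precision: 1).
temperature_schema = {
    "type": "float", "min": "0", "max": "1", "precision": 1,
    "default_val": {"default_val": "1.0"},
}

print(clamp_param(temperature_schema, "1.7"))  # clamped to 1.0
```

Note that the bounds are stored as strings in the YAML, so the coercion step applies to them as well as to the incoming value.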