id: 64020
name: Doubao-Seed-1.6
icon_uri: default_icon/doubao_v2.png
icon_url: ""
description:
  zh: 全新多模态深度思考模型,同时支持 thinking、non-thinking、auto三种思考模式。其中 non-thinking 模型对比 doubao-1.5-pro-32k-250115 模型大幅提升。
  en: 'A brand-new multimodal deep-thinking model that supports three thinking modes: thinking, non-thinking, and auto. In non-thinking mode, it significantly outperforms the doubao-1.5-pro-32k-250115 model.'
default_parameters:
  - name: temperature
    label:
      zh: 生成随机性
      en: Temperature
    desc:
      zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性,反之,降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
      en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value and \"Top p\" at the same time.'
    type: float
    min: "0"
    max: "1"
    default_val:
      balance: "0.8"
      creative: "1"
      default_val: "1.0"
      precise: "0.3"
    precision: 1
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: max_tokens
    label:
      zh: 最大回复长度
      en: Response max length
    desc:
      zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
      en: Specifies the maximum number of tokens the model can output. Typically, 100 tokens are approximately equal to 150 Chinese characters.
    type: int
    min: "1"
    max: "4096"
    default_val:
      default_val: "4096"
    options: []
    style:
      widget: slider
      label:
        zh: 输入及输出设置
        en: Input and output settings
  - name: top_p
    label:
      zh: Top P
      en: Top P
    desc:
      zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择,直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
      en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
    type: float
    min: "0"
    max: "1"
    default_val:
      default_val: "0.7"
    precision: 2
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: response_format
    label:
      zh: 输出格式
      en: Response format
    desc:
      zh: '- **JSON**: 将引导模型使用JSON格式输出'
      en: '**Response Format**:\n\n- **JSON**: Guides the model to reply in JSON format'
    type: int
    min: ""
    max: ""
    default_val:
      default_val: "0"
    options:
      - label: Text
        value: "0"
      - label: JSON
        value: "1"
    style:
      widget: radio_buttons
      label:
        zh: 输入及输出设置
        en: Input and output settings
meta:
  protocol: ark
  capability:
    function_call: true
    input_modal:
      - text
      - image
      - video
    input_tokens: 224000
    json_mode: true
    max_tokens: 256000
    output_modal:
      - text
    output_tokens: 32000
    prefix_caching: true
    reasoning: true
    prefill_response: false
  conn_config:
    # You can change the base_url to point at your own endpoint
    base_url: "https://ark.ap-southeast.bytepluses.com/api/v3"
    api_key: ""
    timeout: 0s
    model: "" # model_id / endpoint_id
    temperature: 0.1
    frequency_penalty: 0
    presence_penalty: 0
    max_tokens: 4096
    top_p: 0.7
    top_k: 0
    stop: []
    ark:
      region: ""
      access_key: ""
      secret_key: ""
      retry_times: null
      custom_header: {}
    custom: {}
status: 0
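# A minimal sketch (as YAML comments, so the file stays valid) of the
# meta.conn_config fields you would typically fill in to use this entry with
# your own Ark endpoint. All values below are hypothetical placeholders, not
# real credentials or endpoint IDs:
#
#   conn_config:
#     base_url: "https://ark.ap-southeast.bytepluses.com/api/v3"
#     api_key: "YOUR_ARK_API_KEY"   # placeholder; keep secrets out of version control
#     model: "YOUR_MODEL_OR_ENDPOINT_ID"  # model_id / endpoint_id from the Ark console
#
# Alternatively, leave api_key empty and set region/access_key/secret_key under
# the ark block, depending on which authentication method your deployment uses.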