fix: remove the stream option of zhipu and gemini (#9319)

Author: 非法操作 (committed by GitHub), 2024-10-15 19:13:43 +08:00
Commit: da25b91980 (parent: bc0dad6c1c)
GPG Key ID: B5690EEEBB952194 (no known key found for this signature in database)
26 changed files with 0 additions and 234 deletions

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -32,15 +32,6 @@ parameter_rules:
     max: 8192
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -27,15 +27,6 @@ parameter_rules:
     default: 4096
     min: 1
     max: 4096
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -31,15 +31,6 @@ parameter_rules:
     max: 2048
   - name: response_format
     use_template: response_format
-  - name: stream
-    label:
-      zh_Hans: 流式输出
-      en_US: Stream
-    type: boolean
-    help:
-      zh_Hans: 流式输出允许模型在生成文本的过程中逐步返回结果,而不是一次性生成全部结果后再返回。
-      en_US: Streaming output allows the model to return results incrementally as it generates text, rather than generating all the results at once.
-    default: false
 pricing:
   input: '0.00'
   output: '0.00'

@@ -28,15 +28,6 @@ parameter_rules:
       zh_Hans: do_sample 为 true 时启用采样策略do_sample 为 false 时采样策略 temperature、top_p 将不生效。默认值为 true。
       en_US: When `do_sample` is set to true, the sampling strategy is enabled. When `do_sample` is set to false, the sampling strategies such as `temperature` and `top_p` will not take effect. The default value is true.
     default: true
-  - name: stream
-    label:
-      zh_Hans: 流处理
-      en_US: Event Stream
-    type: boolean
-    help:
-      zh_Hans: 使用同步调用时,此参数应当设置为 fasle 或者省略。表示模型生成完所有内容后一次性返回所有内容。默认值为 false。如果设置为 true模型将通过标准 Event Stream 逐块返回模型生成内容。Event Stream 结束时会返回一条data[DONE]消息。注意在模型流式输出生成内容的过程中我们会分批对模型生成内容进行检测当检测到违法及不良信息时API会返回错误码1301。开发者识别到错误码1301应及时采取清屏、重启对话等措施删除生成内容并确保不将含有违法及不良信息的内容传递给模型继续生成避免其造成负面影响。
-      en_US: When using synchronous invocation, this parameter should be set to false or omitted. It indicates that the model will return all the generated content at once after the generation is complete. The default value is false. If set to true, the model will return the generated content in chunks via the standard Event Stream. A data[DONE] message will be sent at the end of the Event Stream.NoteDuring the model's streaming output process, we will batch check the generated content. If illegal or harmful information is detected, the API will return an error code (1301). Developers who identify error code (1301) should promptly take actions such as clearing the screen or restarting the conversation to delete the generated content. They should also ensure that no illegal or harmful content is passed back to the model for continued generation to avoid negative impacts.
-    default: false
   - name: return_type
     label:
       zh_Hans: 回复类型

@@ -32,15 +32,6 @@ parameter_rules:
       zh_Hans: do_sample 为 true 时启用采样策略do_sample 为 false 时采样策略 temperature、top_p 将不生效。默认值为 true。
       en_US: When `do_sample` is set to true, the sampling strategy is enabled. When `do_sample` is set to false, the sampling strategies such as `temperature` and `top_p` will not take effect. The default value is true.
     default: true
-  - name: stream
-    label:
-      zh_Hans: 流处理
-      en_US: Event Stream
-    type: boolean
-    help:
-      zh_Hans: 使用同步调用时,此参数应当设置为 fasle 或者省略。表示模型生成完所有内容后一次性返回所有内容。默认值为 false。如果设置为 true模型将通过标准 Event Stream 逐块返回模型生成内容。Event Stream 结束时会返回一条data[DONE]消息。注意在模型流式输出生成内容的过程中我们会分批对模型生成内容进行检测当检测到违法及不良信息时API会返回错误码1301。开发者识别到错误码1301应及时采取清屏、重启对话等措施删除生成内容并确保不将含有违法及不良信息的内容传递给模型继续生成避免其造成负面影响。
-      en_US: When using synchronous invocation, this parameter should be set to false or omitted. It indicates that the model will return all the generated content at once after the generation is complete. The default value is false. If set to true, the model will return the generated content in chunks via the standard Event Stream. A data[DONE] message will be sent at the end of the Event Stream.NoteDuring the model's streaming output process, we will batch check the generated content. If illegal or harmful information is detected, the API will return an error code (1301). Developers who identify error code (1301) should promptly take actions such as clearing the screen or restarting the conversation to delete the generated content. They should also ensure that no illegal or harmful content is passed back to the model for continued generation to avoid negative impacts.
-    default: false
   - name: max_tokens
     use_template: max_tokens
     default: 1024

@@ -32,15 +32,6 @@ parameter_rules:
       zh_Hans: do_sample 为 true 时启用采样策略do_sample 为 false 时采样策略 temperature、top_p 将不生效。默认值为 true。
       en_US: When `do_sample` is set to true, the sampling strategy is enabled. When `do_sample` is set to false, the sampling strategies such as `temperature` and `top_p` will not take effect. The default value is true.
     default: true
-  - name: stream
-    label:
-      zh_Hans: 流处理
-      en_US: Event Stream
-    type: boolean
-    help:
-      zh_Hans: 使用同步调用时,此参数应当设置为 fasle 或者省略。表示模型生成完所有内容后一次性返回所有内容。默认值为 false。如果设置为 true模型将通过标准 Event Stream 逐块返回模型生成内容。Event Stream 结束时会返回一条data[DONE]消息。注意在模型流式输出生成内容的过程中我们会分批对模型生成内容进行检测当检测到违法及不良信息时API会返回错误码1301。开发者识别到错误码1301应及时采取清屏、重启对话等措施删除生成内容并确保不将含有违法及不良信息的内容传递给模型继续生成避免其造成负面影响。
-      en_US: When using synchronous invocation, this parameter should be set to false or omitted. It indicates that the model will return all the generated content at once after the generation is complete. The default value is false. If set to true, the model will return the generated content in chunks via the standard Event Stream. A data[DONE] message will be sent at the end of the Event Stream.NoteDuring the model's streaming output process, we will batch check the generated content. If illegal or harmful information is detected, the API will return an error code (1301). Developers who identify error code (1301) should promptly take actions such as clearing the screen or restarting the conversation to delete the generated content. They should also ensure that no illegal or harmful content is passed back to the model for continued generation to avoid negative impacts.
-    default: false
   - name: max_tokens
     use_template: max_tokens
     default: 1024

@@ -32,15 +32,6 @@ parameter_rules:
       zh_Hans: do_sample 为 true 时启用采样策略do_sample 为 false 时采样策略 temperature、top_p 将不生效。默认值为 true。
       en_US: When `do_sample` is set to true, the sampling strategy is enabled. When `do_sample` is set to false, the sampling strategies such as `temperature` and `top_p` will not take effect. The default value is true.
     default: true
-  - name: stream
-    label:
-      zh_Hans: 流处理
-      en_US: Event Stream
-    type: boolean
-    help:
-      zh_Hans: 使用同步调用时,此参数应当设置为 fasle 或者省略。表示模型生成完所有内容后一次性返回所有内容。默认值为 false。如果设置为 true模型将通过标准 Event Stream 逐块返回模型生成内容。Event Stream 结束时会返回一条data[DONE]消息。注意在模型流式输出生成内容的过程中我们会分批对模型生成内容进行检测当检测到违法及不良信息时API会返回错误码1301。开发者识别到错误码1301应及时采取清屏、重启对话等措施删除生成内容并确保不将含有违法及不良信息的内容传递给模型继续生成避免其造成负面影响。
-      en_US: When using synchronous invocation, this parameter should be set to false or omitted. It indicates that the model will return all the generated content at once after the generation is complete. The default value is false. If set to true, the model will return the generated content in chunks via the standard Event Stream. A data[DONE] message will be sent at the end of the Event Stream.NoteDuring the model's streaming output process, we will batch check the generated content. If illegal or harmful information is detected, the API will return an error code (1301). Developers who identify error code (1301) should promptly take actions such as clearing the screen or restarting the conversation to delete the generated content. They should also ensure that no illegal or harmful content is passed back to the model for continued generation to avoid negative impacts.
-    default: false
   - name: max_tokens
     use_template: max_tokens
     default: 1024

@@ -32,15 +32,6 @@ parameter_rules:
       zh_Hans: do_sample 为 true 时启用采样策略do_sample 为 false 时采样策略 temperature、top_p 将不生效。默认值为 true。
       en_US: When `do_sample` is set to true, the sampling strategy is enabled. When `do_sample` is set to false, the sampling strategies such as `temperature` and `top_p` will not take effect. The default value is true.
     default: true
-  - name: stream
-    label:
-      zh_Hans: 流处理
-      en_US: Event Stream
-    type: boolean
-    help:
-      zh_Hans: 使用同步调用时,此参数应当设置为 fasle 或者省略。表示模型生成完所有内容后一次性返回所有内容。默认值为 false。如果设置为 true模型将通过标准 Event Stream 逐块返回模型生成内容。Event Stream 结束时会返回一条data[DONE]消息。注意在模型流式输出生成内容的过程中我们会分批对模型生成内容进行检测当检测到违法及不良信息时API会返回错误码1301。开发者识别到错误码1301应及时采取清屏、重启对话等措施删除生成内容并确保不将含有违法及不良信息的内容传递给模型继续生成避免其造成负面影响。
-      en_US: When using synchronous invocation, this parameter should be set to false or omitted. It indicates that the model will return all the generated content at once after the generation is complete. The default value is false. If set to true, the model will return the generated content in chunks via the standard Event Stream. A data[DONE] message will be sent at the end of the Event Stream.NoteDuring the model's streaming output process, we will batch check the generated content. If illegal or harmful information is detected, the API will return an error code (1301). Developers who identify error code (1301) should promptly take actions such as clearing the screen or restarting the conversation to delete the generated content. They should also ensure that no illegal or harmful content is passed back to the model for continued generation to avoid negative impacts.
-    default: false
   - name: max_tokens
     use_template: max_tokens
     default: 1024

@@ -32,15 +32,6 @@ parameter_rules:
       zh_Hans: do_sample 为 true 时启用采样策略do_sample 为 false 时采样策略 temperature、top_p 将不生效。默认值为 true。
       en_US: When `do_sample` is set to true, the sampling strategy is enabled. When `do_sample` is set to false, the sampling strategies such as `temperature` and `top_p` will not take effect. The default value is true.
     default: true
-  - name: stream
-    label:
-      zh_Hans: 流处理
-      en_US: Event Stream
-    type: boolean
-    help:
-      zh_Hans: 使用同步调用时,此参数应当设置为 fasle 或者省略。表示模型生成完所有内容后一次性返回所有内容。默认值为 false。如果设置为 true模型将通过标准 Event Stream 逐块返回模型生成内容。Event Stream 结束时会返回一条data[DONE]消息。注意在模型流式输出生成内容的过程中我们会分批对模型生成内容进行检测当检测到违法及不良信息时API会返回错误码1301。开发者识别到错误码1301应及时采取清屏、重启对话等措施删除生成内容并确保不将含有违法及不良信息的内容传递给模型继续生成避免其造成负面影响。
-      en_US: When using synchronous invocation, this parameter should be set to false or omitted. It indicates that the model will return all the generated content at once after the generation is complete. The default value is false. If set to true, the model will return the generated content in chunks via the standard Event Stream. A data[DONE] message will be sent at the end of the Event Stream.NoteDuring the model's streaming output process, we will batch check the generated content. If illegal or harmful information is detected, the API will return an error code (1301). Developers who identify error code (1301) should promptly take actions such as clearing the screen or restarting the conversation to delete the generated content. They should also ensure that no illegal or harmful content is passed back to the model for continued generation to avoid negative impacts.
-    default: false
   - name: max_tokens
     use_template: max_tokens
     default: 1024

@@ -32,15 +32,6 @@ parameter_rules:
       zh_Hans: do_sample 为 true 时启用采样策略do_sample 为 false 时采样策略 temperature、top_p 将不生效。默认值为 true。
       en_US: When `do_sample` is set to true, the sampling strategy is enabled. When `do_sample` is set to false, the sampling strategies such as `temperature` and `top_p` will not take effect. The default value is true.
     default: true
-  - name: stream
-    label:
-      zh_Hans: 流处理
-      en_US: Event Stream
-    type: boolean
-    help:
-      zh_Hans: 使用同步调用时,此参数应当设置为 fasle 或者省略。表示模型生成完所有内容后一次性返回所有内容。默认值为 false。如果设置为 true模型将通过标准 Event Stream 逐块返回模型生成内容。Event Stream 结束时会返回一条data[DONE]消息。注意在模型流式输出生成内容的过程中我们会分批对模型生成内容进行检测当检测到违法及不良信息时API会返回错误码1301。开发者识别到错误码1301应及时采取清屏、重启对话等措施删除生成内容并确保不将含有违法及不良信息的内容传递给模型继续生成避免其造成负面影响。
-      en_US: When using synchronous invocation, this parameter should be set to false or omitted. It indicates that the model will return all the generated content at once after the generation is complete. The default value is false. If set to true, the model will return the generated content in chunks via the standard Event Stream. A data[DONE] message will be sent at the end of the Event Stream.NoteDuring the model's streaming output process, we will batch check the generated content. If illegal or harmful information is detected, the API will return an error code (1301). Developers who identify error code (1301) should promptly take actions such as clearing the screen or restarting the conversation to delete the generated content. They should also ensure that no illegal or harmful content is passed back to the model for continued generation to avoid negative impacts.
-    default: false
   - name: max_tokens
     use_template: max_tokens
     default: 1024

@@ -35,15 +35,6 @@ parameter_rules:
       zh_Hans: do_sample 为 true 时启用采样策略do_sample 为 false 时采样策略 temperature、top_p 将不生效。默认值为 true。
       en_US: When `do_sample` is set to true, the sampling strategy is enabled. When `do_sample` is set to false, the sampling strategies such as `temperature` and `top_p` will not take effect. The default value is true.
     default: true
-  - name: stream
-    label:
-      zh_Hans: 流处理
-      en_US: Event Stream
-    type: boolean
-    help:
-      zh_Hans: 使用同步调用时,此参数应当设置为 fasle 或者省略。表示模型生成完所有内容后一次性返回所有内容。默认值为 false。如果设置为 true模型将通过标准 Event Stream 逐块返回模型生成内容。Event Stream 结束时会返回一条data[DONE]消息。注意在模型流式输出生成内容的过程中我们会分批对模型生成内容进行检测当检测到违法及不良信息时API会返回错误码1301。开发者识别到错误码1301应及时采取清屏、重启对话等措施删除生成内容并确保不将含有违法及不良信息的内容传递给模型继续生成避免其造成负面影响。
-      en_US: When using synchronous invocation, this parameter should be set to false or omitted. It indicates that the model will return all the generated content at once after the generation is complete. The default value is false. If set to true, the model will return the generated content in chunks via the standard Event Stream. A data[DONE] message will be sent at the end of the Event Stream.NoteDuring the model's streaming output process, we will batch check the generated content. If illegal or harmful information is detected, the API will return an error code (1301). Developers who identify error code (1301) should promptly take actions such as clearing the screen or restarting the conversation to delete the generated content. They should also ensure that no illegal or harmful content is passed back to the model for continued generation to avoid negative impacts.
-    default: false
   - name: max_tokens
     use_template: max_tokens
     default: 1024

@@ -32,15 +32,6 @@ parameter_rules:
       zh_Hans: do_sample 为 true 时启用采样策略do_sample 为 false 时采样策略 temperature、top_p 将不生效。默认值为 true。
       en_US: When `do_sample` is set to true, the sampling strategy is enabled. When `do_sample` is set to false, the sampling strategies such as `temperature` and `top_p` will not take effect. The default value is true.
     default: true
-  - name: stream
-    label:
-      zh_Hans: 流处理
-      en_US: Event Stream
-    type: boolean
-    help:
-      zh_Hans: 使用同步调用时,此参数应当设置为 fasle 或者省略。表示模型生成完所有内容后一次性返回所有内容。默认值为 false。如果设置为 true模型将通过标准 Event Stream 逐块返回模型生成内容。Event Stream 结束时会返回一条data[DONE]消息。注意在模型流式输出生成内容的过程中我们会分批对模型生成内容进行检测当检测到违法及不良信息时API会返回错误码1301。开发者识别到错误码1301应及时采取清屏、重启对话等措施删除生成内容并确保不将含有违法及不良信息的内容传递给模型继续生成避免其造成负面影响。
-      en_US: When using synchronous invocation, this parameter should be set to false or omitted. It indicates that the model will return all the generated content at once after the generation is complete. The default value is false. If set to true, the model will return the generated content in chunks via the standard Event Stream. A data[DONE] message will be sent at the end of the Event Stream.NoteDuring the model's streaming output process, we will batch check the generated content. If illegal or harmful information is detected, the API will return an error code (1301). Developers who identify error code (1301) should promptly take actions such as clearing the screen or restarting the conversation to delete the generated content. They should also ensure that no illegal or harmful content is passed back to the model for continued generation to avoid negative impacts.
-    default: false
   - name: max_tokens
     use_template: max_tokens
     default: 1024

@@ -30,15 +30,6 @@ parameter_rules:
       zh_Hans: do_sample 为 true 时启用采样策略do_sample 为 false 时采样策略 temperature、top_p 将不生效。默认值为 true。
       en_US: When `do_sample` is set to true, the sampling strategy is enabled. When `do_sample` is set to false, the sampling strategies such as `temperature` and `top_p` will not take effect. The default value is true.
     default: true
-  - name: stream
-    label:
-      zh_Hans: 流处理
-      en_US: Event Stream
-    type: boolean
-    help:
-      zh_Hans: 使用同步调用时,此参数应当设置为 fasle 或者省略。表示模型生成完所有内容后一次性返回所有内容。默认值为 false。如果设置为 true模型将通过标准 Event Stream 逐块返回模型生成内容。Event Stream 结束时会返回一条data[DONE]消息。注意在模型流式输出生成内容的过程中我们会分批对模型生成内容进行检测当检测到违法及不良信息时API会返回错误码1301。开发者识别到错误码1301应及时采取清屏、重启对话等措施删除生成内容并确保不将含有违法及不良信息的内容传递给模型继续生成避免其造成负面影响。
-      en_US: When using synchronous invocation, this parameter should be set to false or omitted. It indicates that the model will return all the generated content at once after the generation is complete. The default value is false. If set to true, the model will return the generated content in chunks via the standard Event Stream. A data[DONE] message will be sent at the end of the Event Stream.NoteDuring the model's streaming output process, we will batch check the generated content. If illegal or harmful information is detected, the API will return an error code (1301). Developers who identify error code (1301) should promptly take actions such as clearing the screen or restarting the conversation to delete the generated content. They should also ensure that no illegal or harmful content is passed back to the model for continued generation to avoid negative impacts.
-    default: false
   - name: max_tokens
     use_template: max_tokens
     default: 1024

@@ -30,15 +30,6 @@ parameter_rules:
       zh_Hans: do_sample 为 true 时启用采样策略do_sample 为 false 时采样策略 temperature、top_p 将不生效。默认值为 true。
       en_US: When `do_sample` is set to true, the sampling strategy is enabled. When `do_sample` is set to false, the sampling strategies such as `temperature` and `top_p` will not take effect. The default value is true.
     default: true
-  - name: stream
-    label:
-      zh_Hans: 流处理
-      en_US: Event Stream
-    type: boolean
-    help:
-      zh_Hans: 使用同步调用时,此参数应当设置为 fasle 或者省略。表示模型生成完所有内容后一次性返回所有内容。默认值为 false。如果设置为 true模型将通过标准 Event Stream 逐块返回模型生成内容。Event Stream 结束时会返回一条data[DONE]消息。注意在模型流式输出生成内容的过程中我们会分批对模型生成内容进行检测当检测到违法及不良信息时API会返回错误码1301。开发者识别到错误码1301应及时采取清屏、重启对话等措施删除生成内容并确保不将含有违法及不良信息的内容传递给模型继续生成避免其造成负面影响。
-      en_US: When using synchronous invocation, this parameter should be set to false or omitted. It indicates that the model will return all the generated content at once after the generation is complete. The default value is false. If set to true, the model will return the generated content in chunks via the standard Event Stream. A data[DONE] message will be sent at the end of the Event Stream.NoteDuring the model's streaming output process, we will batch check the generated content. If illegal or harmful information is detected, the API will return an error code (1301). Developers who identify error code (1301) should promptly take actions such as clearing the screen or restarting the conversation to delete the generated content. They should also ensure that no illegal or harmful content is passed back to the model for continued generation to avoid negative impacts.
-    default: false
   - name: max_tokens
     use_template: max_tokens
     default: 1024
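The help text removed above describes the standard Event Stream contract: generated content arrives chunk by chunk as `data:` lines, and the stream ends with a `[DONE]` sentinel. As a minimal sketch of the client-side behavior that text describes (the event lines here are hypothetical, not tied to any provider's SDK):

```python
def collect_stream_chunks(raw_lines):
    """Gather payloads from Server-Sent-Events style lines until [DONE]."""
    chunks = []
    for line in raw_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel described in the help text
        chunks.append(payload)
    return chunks

# Hypothetical event lines:
events = ["data: Hel", "", ": keep-alive", "data: lo", "data: [DONE]"]
print("".join(collect_stream_chunks(events)))  # Hello
```

Removing `stream` from `parameter_rules` only takes it out of the user-facing model settings; it does not change this wire format.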