Azure OpenAI Chat Model node

Use the Azure OpenAI Chat Model node to use OpenAI's chat models with conversational agents.

On this page, you'll find the node parameters for the Azure OpenAI Chat Model node, and links to more resources.

Credentials

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.
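
To make the difference concrete, here is a minimal TypeScript sketch. It isn't n8n code, just an illustration of the two resolution behaviours described above, assuming five input items that each carry a name field:

```typescript
// Five incoming items, shaped the way n8n passes data between nodes.
const items = [
  { json: { name: "Alice" } },
  { json: { name: "Bob" } },
  { json: { name: "Carol" } },
  { json: { name: "Dave" } },
  { json: { name: "Eve" } },
];

// Root nodes: the expression {{ $json.name }} resolves once per item.
const rootNodeResolution = items.map((item) => item.json.name);
// -> ["Alice", "Bob", "Carol", "Dave", "Eve"]

// Sub-nodes: the expression always resolves against the first item.
const subNodeResolution = items.map(() => items[0].json.name);
// -> ["Alice", "Alice", "Alice", "Alice", "Alice"]
```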

Node parameters

  • Model: Select the model to use to generate the completion.

Node options

  • Frequency Penalty: Use this option to control the chance of the model repeating itself. Higher values reduce repetition.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length.

  • Response Format: Choose Text or JSON. JSON ensures the model returns valid JSON.

  • Presence Penalty: Use this option to control the chance of the model talking about new topics. Higher values make it more likely to introduce new topics.

  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

  • Timeout: Enter the maximum request time in milliseconds.

  • Max Retries: Enter the maximum number of times to retry a request.

  • Top P: Use this option to set the cumulative probability threshold for token sampling (nucleus sampling). Use a lower value to ignore less probable options.
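
For orientation, the sketch below shows roughly how these options could map onto LangChain's JavaScript Azure OpenAI chat client (the related resources below link to LangChain's documentation). The `@langchain/openai` import and field names follow LangChain's JS docs but may differ between versions, and the deployment name, API version, and option values are placeholder assumptions, not the node's exact implementation:

```typescript
import { AzureChatOpenAI } from "@langchain/openai";

// Rough mapping of the node options above onto LangChain's Azure OpenAI
// chat model client. Field names are assumptions based on LangChain's JS docs.
const model = new AzureChatOpenAI({
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY, // from credentials
  azureOpenAIApiInstanceName: "my-resource",            // hypothetical resource name
  azureOpenAIApiDeploymentName: "gpt-4o",               // Model parameter (hypothetical deployment)
  azureOpenAIApiVersion: "2024-02-01",                  // hypothetical API version
  frequencyPenalty: 0,   // Frequency Penalty
  presencePenalty: 0,    // Presence Penalty
  maxTokens: 1024,       // Maximum Number of Tokens
  temperature: 0.7,      // Sampling Temperature
  topP: 1,               // Top P
  timeout: 60_000,       // Timeout, in milliseconds
  maxRetries: 2,         // Max Retries
  // Response Format: ask for JSON output instead of plain text.
  modelKwargs: { response_format: { type: "json_object" } },
});
```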

Proxy limitations

This node doesn't support the NO_PROXY environment variable.

Templates and examples

Template widget placeholder.

Related resources

Refer to LangChain's Azure OpenAI documentation for more information about the service.

View n8n's Advanced AI documentation.