OpenAI Chat Model node#

Use the OpenAI Chat Model node to use OpenAI's chat models with conversational agents.

On this page, you'll find the node parameters for the OpenAI Chat Model node and links to more resources.

Credentials#

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.
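
To make this concrete, here is an illustrative sketch; the item values are invented, and the json wrapper mirrors how n8n structures workflow items:

```typescript
// Hypothetical input: five items, each with a "name" field (wrapped in "json",
// as n8n items are).
const items = [
  { json: { name: "Ada" } },
  { json: { name: "Boris" } },
  { json: { name: "Cleo" } },
  { json: { name: "Dev" } },
  { json: { name: "Eli" } },
];

// In most nodes, {{ $json.name }} resolves once per item:
// "Ada", "Boris", "Cleo", "Dev", "Eli".
// In a sub-node such as OpenAI Chat Model, it resolves to the first item only:
// "Ada" for every item.
console.log(items.map((item) => item.json.name)); // per-item resolution
console.log(items[0].json.name);                  // sub-node resolution
```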

Node parameters#

Model#

Select the model to use to generate the completion.

n8n dynamically loads models from OpenAI, and you'll only see the models available to your account.

Use Responses API#

OpenAI provides two endpoints for generating output from a model:

  • Chat Completions: The Chat Completions API endpoint generates a model response from a list of messages comprising a conversation. This API requires you to manage conversation state yourself, for example by adding a Simple Memory sub-node. For new projects, OpenAI recommends the Responses API.
  • Responses: The Responses API is an agentic loop that allows the model to call multiple built-in tools within a single API request. It also supports persistent conversations by passing a conversation_id.

Toggle Use Responses API on if you want the model to generate output using the Responses API. Otherwise, the OpenAI Chat Model node defaults to using the Chat Completions API.

Refer to the OpenAI documentation for a comparison of the Chat Completions and Responses APIs.
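
For a concrete sense of the difference, here is a minimal sketch using the official openai Node.js SDK. This is not how the node calls the API internally; the model name and prompts are placeholders:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Chat Completions: you send the full message history yourself on every request.
const chat = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize this week's release notes." },
  ],
});
console.log(chat.choices[0].message.content);

// Responses: a single request in which the model can also call built-in tools,
// and conversation state can be carried by the API instead of by your workflow.
const response = await client.responses.create({
  model: "gpt-4o-mini",
  input: "Summarize this week's release notes.",
});
console.log(response.output_text);
```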

Built-in Tools#

The OpenAI Responses API provides a range of built-in tools to enrich the model's response. Toggle Use Responses API on if you want the model to have access to the following built-in tools:

  • Web Search: allows the model to search the web for the latest information before generating a response.
  • File Search: allows the model to search your knowledge base of previously uploaded files for relevant information before generating a response. Refer to the OpenAI documentation for more information.
  • Code Interpreter: allows the model to write and run Python code in a sandboxed environment.

Use with the AI Agent node

Built-in tools are only supported when using the OpenAI Chat Model node with the AI Agent node. For example, built-in tools aren't available when using the OpenAI Chat Model node with the Basic LLM Chain node.
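
At the API level, enabling a built-in tool roughly corresponds to declaring it in the request, as in this sketch; the tool type strings follow OpenAI's Responses API documentation and may vary by API version, and the model and prompt are placeholders:

```typescript
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-4o-mini",
  input: "What changed in the latest n8n release?",
  tools: [
    { type: "web_search_preview" }, // Web Search
    // File Search and Code Interpreter are declared the same way, but need
    // extra configuration (for example, vector store IDs for File Search).
  ],
});

console.log(response.output_text);
```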

Node options#

Use these options to further refine the node's behavior. The following options are available whether or not you use the Responses API to generate model output.

Frequency Penalty#

Use this option to control how likely the model is to repeat itself. Higher values reduce the chance of repetition.

Maximum Number of Tokens#

Enter the maximum number of tokens the completion may use, which caps the completion length.

Presence Penalty#

Use this option to control how likely the model is to talk about new topics. Higher values increase that likelihood.

Sampling Temperature#

Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

Timeout#

Enter the maximum request time in milliseconds.

Max Retries#

Enter the maximum number of times to retry a request.

Top P#

Use this option to set the probability mass the completion samples from (nucleus sampling). Use a lower value to ignore less probable options.
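
Because the node is built on LangChain's OpenAI integration (see the resources below), the options above roughly correspond to the following ChatOpenAI settings. This is a sketch with placeholder values, not the node's exact internals:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  frequencyPenalty: 0.5, // Frequency Penalty: discourage repetition
  presencePenalty: 0.3,  // Presence Penalty: encourage new topics
  maxTokens: 512,        // Maximum Number of Tokens: cap the completion length
  temperature: 0.7,      // Sampling Temperature: higher = more diverse sampling
  topP: 0.9,             // Top P: ignore less probable options
  timeout: 60_000,       // Timeout: maximum request time in milliseconds
  maxRetries: 2,         // Max Retries: how often to retry a failed request
});

const reply = await model.invoke("Write a one-line status update.");
console.log(reply.content);
```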

Additional node options (Responses API only)#

The following additional options are available when Use Responses API is toggled on.

Conversation ID#

The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation after this response completes.

Prompt Cache Key#

Use this key for caching similar requests to optimize cache hit rates.

Safety Identifier#

Apply an identifier to track users who may violate usage policies.

Service Tier#

Select the service tier that fits your needs: Auto, Flex, Default, or Priority.

Metadata#

A set of key-value pairs for storing structured information. You can attach up to 16 pairs to an object, which is useful for adding custom data that can be searched through the API or in the dashboard.

Top Logprobs#

Define an integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

Output Format#

Choose a response format: Text, JSON Schema, or JSON Object. JSON Schema is recommended if you want to receive data in JSON format.
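
As an illustration of what the JSON Schema format corresponds to at the API level, here is a hedged sketch against the Responses API; the field shape follows OpenAI's Structured Outputs documentation, and the schema itself is invented:

```typescript
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-4o-mini",
  input: "Extract the product and sentiment from: 'The new editor is fantastic.'",
  text: {
    format: {
      type: "json_schema",
      name: "review_extraction",
      strict: true,
      schema: {
        type: "object",
        properties: {
          product: { type: "string" },
          sentiment: { type: "string", enum: ["positive", "neutral", "negative"] },
        },
        required: ["product", "sentiment"],
        additionalProperties: false,
      },
    },
  },
});

console.log(JSON.parse(response.output_text));
```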

Prompt#

Configure a prompt by its unique ID and version, filling in any substitutable variables. Prompts are configured through the OpenAI dashboard.
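
Taken together, the Responses-only options above roughly map onto the following request parameters. Parameter names follow OpenAI's Responses API reference; every ID and value here is a placeholder, and the availability of some fields depends on your API and SDK version:

```typescript
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-4o-mini",
  // Prompt: a reusable prompt created in the OpenAI dashboard, referenced by a
  // (placeholder) ID and version, with substitutable variables.
  prompt: {
    id: "pmpt_example_id",
    version: "2",
    variables: { customer_name: "Ada" },
  },
  conversation: "conv_example_id",         // Conversation ID (placeholder)
  prompt_cache_key: "weekly-digest",       // Prompt Cache Key
  safety_identifier: "end-user-1234",      // Safety Identifier
  service_tier: "auto",                    // Service Tier: auto, flex, default, or priority
  metadata: { team: "docs", run: "42" },   // Metadata: up to 16 key-value pairs
  top_logprobs: 5,                         // Top Logprobs: an integer between 0 and 20
});

console.log(response.output_text);
```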

Templates and examples#


Refer to LangChain's OpenAI documentation for more information about the service.

Refer to the OpenAI documentation for more information about the parameters.

View n8n's Advanced AI documentation.

Common issues#

For common questions or issues and suggested solutions, refer to Common issues.