OpenAI Text operations#
Use this operation to message a model or classify text for violations in OpenAI. Refer to OpenAI for more information on the OpenAI node itself.
Previous node versions
n8n version 1.117.0 introduces the OpenAI node V2, which supports the OpenAI Responses API. It renames the 'Message a Model' operation to 'Generate a Chat Completion' to clarify its association with the Chat Completions API, and introduces a separate 'Generate a Model Response' operation that uses the Responses API.
Generate a Chat Completion#
Use this operation to send a message or prompt to an OpenAI model using the Chat Completions API and receive a response.
Enter these parameters:
- Credential to connect with: Create or select an existing OpenAI credential.
- Resource: Select Text.
- Operation: Select Generate a Chat Completion.
- Model: Select the model you want to use. If you're not sure which model to use, try gpt-4o if you need high intelligence, or gpt-4o-mini if you need the fastest speed and lowest cost. Refer to Models overview | OpenAI Platform for more information.
- Messages: Enter a Text prompt and assign a Role that the model uses to generate responses. Refer to Prompt engineering | OpenAI for more information on writing better prompts using these roles. Choose one of these roles:
  - User: Sends a message as a user and gets a response from the model.
  - Assistant: Tells the model to adopt a specific tone or personality.
  - System: By default, there is no system message. You can define instructions in the user message, but instructions set in the system message are more effective. You can set more than one system message per conversation. Use this to set the model's behavior or context for the next user message.
- Simplify Output: Turn on to return a simplified version of the response instead of the raw data.
- Output Content as JSON: Turn on to attempt to return the response in JSON format. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.
Options#
- Frequency Penalty: Apply a penalty to reduce the model's tendency to repeat similar lines. The range is 0.0 to 2.0.
- Maximum Number of Tokens: Set the maximum number of tokens for the response. One token is roughly four characters of standard English text. Use this to limit the length of the output.
- Number of Completions: Defaults to 1. Set the number of completions to generate for each prompt. Use carefully, since setting a high number quickly consumes your tokens.
- Presence Penalty: Apply a penalty to encourage the model to discuss new topics. The range is 0.0 to 2.0.
- Output Randomness (Temperature): Adjust the randomness of the response. The range is 0.0 (deterministic) to 1.0 (maximum randomness). We recommend altering this or Output Randomness (Top P), but not both. Start with a medium temperature (around 0.7) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature; if they're too chaotic or off-track, decrease it. Defaults to 1.0.
- Output Randomness (Top P): Adjust the Top P setting to control the diversity of the assistant's responses. For example, 0.5 means half of all likelihood-weighted options are considered. We recommend altering this or Output Randomness (Temperature), but not both. Defaults to 1.0.
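The options above map directly onto fields in the Chat Completions request body. As a rough sketch (field names follow the OpenAI API; the model choice, values, and prompt are illustrative, not n8n's internal implementation):

```python
def build_chat_completion_request(prompt: str) -> dict:
    """Assemble a Chat Completions request body from the node's options."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            # System message sets behavior; user message carries the prompt.
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": prompt},
        ],
        "frequency_penalty": 0.0,  # Frequency Penalty (0.0 to 2.0)
        "max_tokens": 256,         # Maximum Number of Tokens
        "n": 1,                    # Number of Completions
        "presence_penalty": 0.0,   # Presence Penalty (0.0 to 2.0)
        "temperature": 0.7,        # Output Randomness (Temperature)
        "top_p": 1.0,              # Output Randomness (Top P)
    }

request = build_chat_completion_request("Summarize this text in one sentence.")
```

Sending this body to the `/v1/chat/completions` endpoint (with an API key) returns the model's reply; the node handles that request for you.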
Refer to Chat Completions | OpenAI documentation for more information.
Generate a Model Response#
Use this operation to send a message or prompt to an OpenAI model using the Responses API and receive a response.
Enter these parameters:
- Credential to connect with: Create or select an existing OpenAI credential.
- Resource: Select Text.
- Operation: Select Generate a Model Response.
- Model: Select the model you want to use. Refer to Models overview | OpenAI Platform for an overview.
- Messages: Choose one of these Message Types:
  - Text: Enter a Text prompt and assign a Role that the model uses to generate responses. Refer to Prompt engineering | OpenAI for more information on writing better prompts using these roles.
  - Image: Provide an Image through an Image URL, a File ID (using the OpenAI Files API), or by passing binary data from an earlier node in your workflow.
  - File: Provide a File in a supported format (currently PDF only) through a File URL, a File ID (using the OpenAI Files API), or by passing binary data from an earlier node in your workflow.

  For any message type, you can choose one of these roles:

  - User: Sends a message as a user and gets a response from the model.
  - Assistant: Tells the model to adopt a specific tone or personality.
  - System: By default, the system message is "You are a helpful assistant". You can define instructions in the user message, but instructions set in the system message are more effective. You can only set one system message per conversation. Use this to set the model's behavior or context for the next user message.
- Simplify Output: Turn on to return a simplified version of the response instead of the raw data.
Built-in Tools#
The OpenAI Responses API provides a range of built-in tools to enrich the model's response:
- Web Search: Allows the model to search the web for the latest information before generating a response.
- MCP Servers: Allows the model to connect to remote MCP servers. Find out more about using remote MCP servers as tools here.
- File Search: Allows the model to search your knowledge base of previously uploaded files for relevant information before generating a response. Refer to the OpenAI documentation for more information.
- Code Interpreter: Allows the model to write and run Python code in a sandboxed environment.
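In the raw API, these tools are declared in a `tools` array on the request. The sketch below shows one plausible shape for that array; the `type` identifiers follow OpenAI's Responses API documentation at the time of writing and may change, and the vector store ID and MCP server URL are placeholders:

```python
# Hedged sketch of a Responses API `tools` array covering the four
# built-in tools listed above. IDs and URLs are made-up placeholders.
tools = [
    {"type": "web_search"},                     # Web Search
    {
        "type": "file_search",                  # File Search over uploaded files
        "vector_store_ids": ["vs_example123"],  # placeholder vector store ID
    },
    {
        "type": "code_interpreter",             # sandboxed Python execution
        "container": {"type": "auto"},
    },
    {
        "type": "mcp",                          # remote MCP server as a tool
        "server_label": "example",
        "server_url": "https://example.com/mcp",  # placeholder URL
    },
]
```

In n8n you configure these tools through the node's UI rather than writing this array yourself.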
Options#
- Maximum Number of Tokens: Set the maximum number of tokens for the response. One token is roughly four characters of standard English text. Use this to limit the length of the output.
- Output Randomness (Temperature): Adjust the randomness of the response. The range is 0.0 (deterministic) to 1.0 (maximum randomness). We recommend altering this or Output Randomness (Top P), but not both. Start with a medium temperature (around 0.7) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature; if they're too chaotic or off-track, decrease it. Defaults to 1.0.
- Output Randomness (Top P): Adjust the Top P setting to control the diversity of the assistant's responses. For example, 0.5 means half of all likelihood-weighted options are considered. We recommend altering this or Output Randomness (Temperature), but not both. Defaults to 1.0.
- Conversation ID: The conversation this response belongs to. Input and output items from this response are automatically added to the conversation once the response completes.
- Previous Response ID: The ID of the previous response to continue from. Can't be used together with Conversation ID.
- Reasoning: The level of reasoning effort the model should spend to generate the response. Includes the option to return a Summary of the reasoning performed by the model (for example, for debugging purposes).
- Store: Whether to store the generated model response for later retrieval via the API. Defaults to true.
- Output Format: Whether to return the response as Text, in a specified JSON Schema, or as a JSON Object.
- Background: Whether to run the model in background mode. This makes executing long-running tasks more reliable.
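As with chat completions, these options correspond to fields on the Responses API request body. A minimal sketch, assuming the documented field names (`max_output_tokens`, `store`, `background`, `previous_response_id`); the model, values, and response ID are illustrative:

```python
def build_responses_request(prompt: str) -> dict:
    """Assemble a Responses API request body from the operation's options."""
    return {
        "model": "gpt-4o-mini",
        "input": [
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": prompt},
        ],
        "max_output_tokens": 256,  # Maximum Number of Tokens
        "temperature": 0.7,        # Output Randomness (Temperature)
        "top_p": 1.0,              # Output Randomness (Top P)
        "store": True,             # keep the response retrievable later
        "background": False,       # run synchronously, not in background mode
        # Continue from an earlier response (placeholder ID); mutually
        # exclusive with passing a conversation ID.
        "previous_response_id": "resp_example123",
    }

request = build_responses_request("What's new in the Responses API?")
```

Because `previous_response_id` and a conversation ID are mutually exclusive, a real request would include at most one of them.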
Refer to Responses | OpenAI documentation for more information.
Classify Text for Violations#
Use this operation to identify and flag content that might be harmful. The OpenAI model analyzes the text and returns a response containing:
- flagged: A boolean field indicating whether the content is potentially harmful.
- categories: A list of category-specific violation flags.
- category_scores: Scores for each category.
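The three fields above can be read straight off the moderation result. A small sketch with a made-up, abridged payload (real responses list many more categories):

```python
# Abridged, made-up moderation result with the three fields described above.
sample_result = {
    "flagged": True,
    "categories": {"harassment": True, "violence": False},
    "category_scores": {"harassment": 0.91, "violence": 0.02},
}

def flagged_categories(result: dict) -> list:
    """Return the names of the categories the model flagged."""
    return [name for name, hit in result["categories"].items() if hit]

print(flagged_categories(sample_result))  # → ['harassment']
```

A workflow would typically branch on `flagged`, then use `category_scores` to apply its own, stricter thresholds per category.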
Enter these parameters:
- Credential to connect with: Create or select an existing OpenAI credential.
- Resource: Select Text.
- Operation: Select Classify Text for Violations.
- Text Input: Enter the text to classify for violations of the moderation policy.
- Simplify Output: Turn on to return a simplified version of the response instead of the raw data.
Options#
- Use Stable Model: Turn on to use the stable version of the model instead of the latest version; accuracy may be slightly lower.
Refer to Moderations | OpenAI documentation for more information.
Common issues#
For common errors or issues and suggested resolution steps, refer to Common Issues.