
Groq Chat Model node#

Use the Groq Chat Model node to access Groq's large language models for conversational AI and text generation tasks.

On this page, you'll find the node parameters for the Groq Chat Model node, and links to more resources.

Credentials

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.
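To illustrate the difference, here is a minimal TypeScript sketch that simulates the two resolution behaviors. The item shape and helper code are illustrative only, not n8n internals.

```typescript
// Illustrative only: simulates how an expression like {{ $json.name }}
// resolves against a list of input items.
type Item = { json: { name: string } };

const items: Item[] = [
  { json: { name: "Alice" } },
  { json: { name: "Bob" } },
  { json: { name: "Carol" } },
  { json: { name: "Dan" } },
  { json: { name: "Eve" } },
];

// Regular (root) nodes: the expression is evaluated once per item.
const perItem = items.map((item) => item.json.name);
// ["Alice", "Bob", "Carol", "Dan", "Eve"]

// Sub-nodes: the expression always resolves against the first item.
const firstOnly = items.map(() => items[0].json.name);
// ["Alice", "Alice", "Alice", "Alice", "Alice"]

console.log(perItem, firstOnly);
```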

Node parameters#

  • Model: Select the model which will generate the completion. n8n dynamically loads available models from the Groq API. Learn more in the Groq model documentation.
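As a rough sketch of what "dynamically loads available models" means in practice, the snippet below queries Groq's OpenAI-compatible model listing endpoint directly. The endpoint path, response shape, and the GROQ_API_KEY environment variable are assumptions for illustration; the node performs this lookup for you.

```typescript
// Sketch: list the models available to a Groq API key.
// Assumes Groq's OpenAI-compatible GET /openai/v1/models endpoint.
const apiKey = process.env.GROQ_API_KEY; // placeholder credential

async function listGroqModels(): Promise<string[]> {
  const res = await fetch("https://api.groq.com/openai/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Groq API error: ${res.status}`);
  const body = (await res.json()) as { data: { id: string }[] };
  // These IDs correspond to the options shown in the node's Model dropdown.
  return body.data.map((m) => m.id);
}

listGroqModels().then((ids) => console.log(ids));
```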

Node options#

  • Maximum Number of Tokens: Enter the maximum number of tokens to use, which sets the length of the completion.

  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.
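To show how these two options map onto a raw request, here is a hedged sketch of a direct call to Groq's OpenAI-compatible chat completions endpoint. The model ID, parameter names (max_tokens, temperature), and GROQ_API_KEY are assumptions for illustration; in a workflow, the node builds this request from the options above.

```typescript
// Sketch: the node's options roughly correspond to these request fields
// on Groq's OpenAI-compatible chat completions endpoint (assumed API shape).
const apiKey = process.env.GROQ_API_KEY; // placeholder credential

async function complete(prompt: string): Promise<string> {
  const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "llama-3.1-8b-instant", // example model ID; pick one from the model list
      messages: [{ role: "user", content: prompt }],
      max_tokens: 256,   // "Maximum Number of Tokens": caps completion length
      temperature: 0.7,  // "Sampling Temperature": higher = more diverse output
    }),
  });
  if (!res.ok) throw new Error(`Groq API error: ${res.status}`);
  const body = (await res.json()) as {
    choices: { message: { content: string } }[];
  };
  return body.choices[0].message.content;
}

complete("Say hello in one sentence.").then(console.log);
```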

Templates and examples#

Template widget placeholder.

Related resources#

Refer to Groq's API documentation for more information about the service.

View n8n's Advanced AI documentation.