
Hugging Face Inference Model node#

Use the Hugging Face Inference Model node to use Hugging Face's models.

On this page, you'll find the node parameters for the Hugging Face Inference Model node, and links to more resources.

This node lacks tools support, so it won't work with the AI Agent node. Instead, connect it with the Basic LLM Chain node.

Credentials

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.
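As a rough illustration of the difference, here's a small TypeScript sketch. The item data and variable names below are made up for this example and are not part of the n8n API:

```typescript
// Hypothetical input: five items, each with a `name` field, shaped the way n8n passes items.
const items = [
  { json: { name: "Ada" } },
  { json: { name: "Grace" } },
  { json: { name: "Alan" } },
  { json: { name: "Edsger" } },
  { json: { name: "Barbara" } },
];

// A regular node resolves {{ $json.name }} once per item:
const perItem = items.map((item) => item.json.name);
// -> ["Ada", "Grace", "Alan", "Edsger", "Barbara"]

// A sub-node resolves {{ $json.name }} against the first item only:
const firstOnly = items[0].json.name;
// -> "Ada"
```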

Node parameters#

  • Model: Select the model to use to generate the completion.

Node options#

  • Custom Inference Endpoint: Enter a custom inference endpoint URL.

  • Frequency Penalty: Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length.

  • Presence Penalty: Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics.

  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

  • Top K: Enter the number of token choices the model uses to generate the next token.

  • Top P: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.
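To see how options like these typically surface in code, here is a minimal sketch using LangChain's HuggingFaceInference LLM in TypeScript. The import path, constructor field names, and model ID are assumptions based on a common @langchain/community setup, not taken from this page; check the LangChain documentation linked under Related resources before relying on them.

```typescript
// Sketch only: the fields below are assumed to mirror the node's options
// (Model, Maximum Number of Tokens, Sampling Temperature, Top K, Top P,
// Frequency Penalty, Custom Inference Endpoint) and may differ in practice.
import { HuggingFaceInference } from "@langchain/community/llms/hf";

const llm = new HuggingFaceInference({
  model: "mistralai/Mistral-7B-Instruct-v0.2", // Model (hypothetical choice)
  apiKey: process.env.HUGGINGFACEHUB_API_KEY,  // supplied by the node's credentials
  maxTokens: 256,         // Maximum Number of Tokens
  temperature: 0.7,       // Sampling Temperature
  topK: 40,               // Top K
  topP: 0.9,              // Top P
  frequencyPenalty: 1.1,  // Frequency Penalty
  // endpointUrl: "https://<your-endpoint>",   // Custom Inference Endpoint, if set
});

const completion = await llm.invoke("Summarize what an n8n workflow is in one sentence.");
console.log(completion);
```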

Templates and examples#


Related resources#

Refer to LangChain's Hugging Face Inference Model documentation for more information about the service.

View n8n's Advanced AI documentation.