Memory-related errors#
n8n doesn't restrict the amount of data each node can fetch and process. While this gives you freedom, it can lead to errors when a workflow execution requires more memory than is available. This page explains how to identify and avoid these errors.
Identifying out of memory situations#
n8n provides error messages that warn you in some out of memory situations. For example, messages such as Execution stopped at this node (n8n may have run out of memory while executing it).
Error messages such as Problem running workflow, Connection Lost, or 503 Service Temporarily Unavailable suggest that the n8n instance has become unavailable.
When self-hosting n8n, you may also see error messages such as Allocation failed - JavaScript heap out of memory in your server logs.
On n8n Cloud, or when using n8n's Docker image, n8n restarts automatically when it encounters such an issue. When running n8n with npm, however, you might need to restart it manually.
Typical causes#
Such problems occur when a workflow execution requires more memory than is available to the n8n instance. Factors that increase the memory usage of a workflow execution include:
- The amount of JSON data.
- The size of binary data.
- The number of nodes in the workflow.
- Memory-intensive nodes: the Code node and the older Function node noticeably increase memory consumption.
- Manual or production workflow executions: manual executions increase memory consumption because n8n creates a copy of the data for the frontend.
- Other workflows running at the same time.
Avoiding out of memory situations#
When you encounter an out of memory situation, there are two options: either increase the amount of memory available to n8n, or reduce its memory consumption.
Increase available memory#
When self-hosting n8n, increasing the amount of memory available to n8n means provisioning your n8n instance with more memory. This may incur additional costs with your hosting provider.
On n8n Cloud, you need to upgrade to a larger plan.
Reduce memory consumption#
This approach is more complex and means re-building the workflows that cause the issue. This section provides some guidelines on how to reduce memory consumption. Not all suggestions apply to every workflow.
- Split the data processed into smaller chunks. For example, instead of fetching 10,000 rows with each execution, process 200 rows with each execution.
- Avoid using the Code node where possible.
- Avoid manual executions when processing larger amounts of data.
- Split the workflow up into sub-workflows and ensure each sub-workflow returns a limited amount of data to its parent workflow.
Splitting the workflow might seem counter-intuitive at first as it usually requires adding at least two more nodes: the Loop Over Items node to split up the items into smaller batches and the Execute Workflow node to start the sub-workflow.
However, as long as your sub-workflow does the heavy lifting for each batch and then returns only a small result set to the main workflow, this reduces memory consumption. This is because the sub-workflow only holds the data for the current batch in memory, after which the memory is free again.
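The batching idea behind the Loop Over Items node can be sketched in plain JavaScript (an illustrative sketch only, not n8n code; `toBatches` is a hypothetical helper name):

```javascript
// Split a large item list into fixed-size batches, so each batch can be
// handed to a sub-workflow and its memory released once the batch is done.
function toBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// Example: 10,000 rows processed 200 at a time yields 50 batches.
const rows = Array.from({ length: 10000 }, (_, i) => ({ id: i }));
const batches = toBatches(rows, 200);
console.log(batches.length); // 50
```

In n8n itself, the Loop Over Items node performs this split for you; the sketch only shows why peak memory drops: at any moment, only one batch's worth of data needs to be held for processing.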
Increase old memory#
This applies when self-hosting n8n. When you encounter JavaScript heap out of memory errors, it often helps to allocate additional memory to the old memory section of the V8 JavaScript engine. To do this, set the V8 option --max-old-space-size=SIZE through the CLI or through the NODE_OPTIONS environment variable.
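For example, to raise the old-space limit to 4 GB (4096 MiB is an example value; choose a size that fits your host's available RAM):

```shell
# Via the CLI, when starting n8n directly with Node.js installed:
NODE_OPTIONS="--max-old-space-size=4096" n8n start

# Or as an environment variable for a Docker container
# (adjust ports, volumes, and the image reference to your setup):
docker run -it --rm -p 5678:5678 \
  -e NODE_OPTIONS="--max-old-space-size=4096" \
  docker.n8n.io/n8nio/n8n
```

NODE_OPTIONS is a standard Node.js environment variable, so the same setting works in systemd units, docker-compose files, or any other process manager you use to run n8n.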