Tips and common issues#
Combining multiple triggers#
If you have another trigger in the workflow already, you have two potential starting points: that trigger and the evaluation trigger. To make sure your workflow works as expected no matter which trigger executes, you will need to merge these branches together.
To do so:

1. Get the data format of the other trigger:
    - Execute the other trigger.
    - Open it and navigate to the JSON view of its output pane.
    - Click the copy button on the right.
2. Re-shape the evaluation trigger data to match:
    - Insert an Edit Fields (Set) node after the evaluation trigger and connect them together.
    - Change its mode to JSON.
    - Paste your data into the 'JSON' field, removing the `[` and `]` on the first and last lines.
    - Switch the field type to Expression.
    - Map in the data from the trigger by dragging it from the input pane.
    - For strings, make sure to replace the entire value (including the quotes) and add `.toJsonString()` to the end of the expression.
3. Merge the branches using a 'No-op' node: insert a No-op node and wire both the other trigger and the Set node up to it. The 'No-op' node just outputs whatever input it receives.
4. Reference the 'No-op' node outputs in the rest of the workflow: since both paths flow through this node with the same data format, you can be sure that your input data will always be there.
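As a sketch of what step 2 can produce: if the other trigger outputs `sessionId` and `chatInput` fields, the Set node's 'JSON' field might end up looking like the fragment below. The dataset column names `sessionId` and `query` are hypothetical examples; use whatever columns your evaluation trigger actually outputs.

```
{
  "sessionId": {{ $json.sessionId.toJsonString() }},
  "chatInput": {{ $json.query.toJsonString() }}
}
```

Because both string values are whole expressions ending in `.toJsonString()`, the quotes are produced by the expression itself, keeping the resulting JSON valid.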
Avoiding evaluation breaking the chat#
n8n's internal chat reads the output data of the last executed node in the workflow. After adding an evaluation node with the 'set outputs' operation, this data may not be in the expected format, or may not even contain the chat response.
The solution is to add an extra branch coming out of your agent. Lower branches execute later in n8n, which means any node you attach to this branch will execute last. You can use a no-op node here, since it only needs to pass the agent output through.
Accessing tool data when calculating metrics#
Sometimes you need to know what happened in the executed sub-nodes of an agent, for example to check whether it executed a tool. You can't reference these nodes directly with expressions, but you can enable the Return intermediate steps option in the agent. This adds an extra output field called `intermediateSteps` which you can use in later nodes.
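For illustration, checking tool usage from `intermediateSteps` in a later node could look like the sketch below. The exact item shape is an assumption modeled on LangChain-style agent steps and may differ; `weather_tool` is a hypothetical tool name.

```javascript
// Hypothetical agent output item; the shape of intermediateSteps is an
// assumption (modeled on LangChain-style agent steps) and may differ.
const item = {
  output: "It is sunny in Berlin.",
  intermediateSteps: [
    {
      action: { tool: "weather_tool", toolInput: { city: "Berlin" } },
      observation: "sunny",
    },
  ],
};

// Did the agent call the (hypothetical) weather_tool at any step?
const usedWeatherTool = item.intermediateSteps.some(
  (step) => step.action.tool === "weather_tool"
);
console.log(usedWeatherTool); // true
```

The same `.some(...)` check can be written inline as an n8n expression when setting a metric value.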
Multiple evaluations in the same workflow#
You can only have one evaluation set up per workflow. In other words, each workflow can contain only one evaluation trigger.
Even so, you can still test different parts of your workflow with different evaluations by moving those parts into sub-workflows and evaluating each sub-workflow separately.
Dealing with inconsistent results#
Metrics can be noisy: they may differ across evaluation runs of the exact same workflow. This is because the workflow itself may return different results from run to run, and any LLM-based metrics have natural variation.
You can compensate for this by duplicating the rows of your dataset so that each row appears more than once. Since each input then effectively runs multiple times, this smooths out the variation.
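The duplication step can be sketched as follows; this is a minimal illustration, and the row fields shown are hypothetical.

```javascript
// Duplicate each dataset row `repeats` times so every input effectively runs
// multiple times, smoothing out run-to-run variation in the metrics.
const dataset = [
  { input: "What is 2 + 2?", expected: "4" },
  { input: "Capital of France?", expected: "Paris" },
];
const repeats = 3;
const expanded = dataset.flatMap((row) => Array(repeats).fill(row));
console.log(expanded.length); // 6
```

Averaging a metric over the repeated rows gives a more stable estimate than a single run per input.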

