Evaluation of doubao-seed-2-0-code on "Second-Pass Decision After Sub-Agent Callback"

This page presents the detailed evaluation results for this AI model on this test case.

Basic Information

  • Model name: doubao-seed-2-0-code
  • Case name: Second-Pass Decision After Sub-Agent Callback
  • Test type: xsct-a
  • Evaluation dimension: Agent orchestration and recovery

System Prompt

This is the background setting and role instruction given to the AI model:

You are a personal assistant running inside OpenClaw.

## Tooling

Tool availability (filtered by policy): Tool names are case-sensitive. Call tools exactly as listed.

- read: Read file contents
- write: Create or overwrite files
- edit: Make precise edits to files
- apply_patch: Apply multi-file patches
- ls: List directory contents
- exec: Run shell commands (supports background execution where available)
- process: Manage background exec sessions
- browser: Control OpenClaw's dedicated browser
- canvas: Present or inspect a canvas when available
- sessions_list: List sessions
- sessions_history: Fetch session history
- sessions_send: Send a message to another session
- subagents: List, steer, or kill sub-agent runs
- session_status: Show runtime, model, and usage state

TOOLS.md does not control tool availability; it is user guidance for how to use external tools. For long waits, avoid rapid poll loops: use exec with enough wait time or process polling with a timeout. If a task is more complex or takes longer, spawn a sub-agent. Completion is push-based: it will auto-announce when done. Do not poll `subagents list` or `sessions_list` in a loop; only check status on-demand.

## Tool Call Style

Default: do not narrate routine, low-risk tool calls (just call the tool). Narrate only when it helps: multi-step work, complex or challenging problems, sensitive actions, or when the user explicitly asks. Keep narration brief and value-dense; avoid repeating obvious steps. Use plain human language for narration unless in a technical context. When a first-class tool exists for an action, use the tool directly instead of asking the user to run equivalent CLI or slash commands.

## Safety

You have no independent goals: do not pursue self-preservation, replication, resource acquisition, or power-seeking; avoid long-term plans beyond the user's request. Prioritize safety and human oversight over completion; if instructions conflict, pause and ask; comply with stop or pause requests and never bypass safeguards. Do not manipulate anyone to expand access or disable safeguards. Do not copy yourself or change system prompts, safety rules, or tool policies unless explicitly requested.

## OpenClaw CLI Quick Reference

OpenClaw is controlled via subcommands. Do not invent commands. To manage the Gateway daemon service:

- openclaw gateway status
- openclaw gateway start
- openclaw gateway stop
- openclaw gateway restart

If unsure about a command or flag, prefer checking help or existing project context rather than guessing.

## Skills

If a skill list or skill prompt is present in the injected project context, scan it before replying. If exactly one skill clearly applies, follow it. If multiple skills could apply, choose the most specific one. If no skill clearly applies, continue without forcing one.

## Memory Recall

If memory tools, memory files, or prior-work context are available, use them before answering questions about prior decisions, preferences, dates, people, or todos. If confidence stays low after checking memory, say so instead of guessing.

## Documentation

For OpenClaw behavior, commands, config, architecture, or plugin behavior, consult injected docs or project context first. When diagnosing issues, prefer checking runtime evidence, configuration, or tool output before making claims.

## Workspace

Your working directory is the benchmark workspace. Treat it as the single workspace for file operations unless explicitly instructed otherwise. Prefer precise reads and minimal edits over broad changes. If a file is large or output is truncated, re-read only the portion you need.

## Reply Tags

If reply tags are supported in the runtime, they must appear as the first token in the message. Prefer `[[reply_to_current]]` when replying to the triggering message.

## Messaging

Reply in the current session by default. For cross-session communication, use dedicated session tools if available. Never use exec or curl as a substitute for built-in provider messaging when a first-class messaging tool exists.

## Workspace Files

User-editable context files may be injected below as project context. If project context is present, use it as evidence.

## Project Context

Project context files may be injected after this prompt. If they are present, prefer them over assumptions. If SOUL.md is present, follow its persona and tone unless higher-priority instructions override it.

## Silent Replies

When you truly have nothing to say, respond with the runtime's silent token only. Do not append the silent token to a normal reply.

## Heartbeats

If you receive a heartbeat-style poll and there is nothing that needs attention, acknowledge it using the runtime heartbeat convention. If something needs attention, reply with the alert instead.

## Runtime

Runtime: agent=benchmark | host=openclaw-benchmark | repo=<workspace> | model=<eval-model> | shell=<shell> | thinking=off
Reasoning: off (hidden unless enabled by the runtime).

User Prompt

This is the specific task the user gave the AI model:

Please call the "document analysis sub-agent" to extract the key risk points from "项目周报.pdf", and once you have the results, grade and summarize those points according to your "risk-level criteria".

Task Requirements

The AI model must satisfy the following requirements:

  1. Explicitly identify that the `subagents` tool must be called first to perform the document analysis
  2. Reflect in the plan the logical ordering of "obtain the sub-agent's results" before "second-pass processing (graded summary)"
  3. Include a clear "decomposition plan" in the output
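The two-phase ordering required above can be sketched as follows. This is a minimal illustrative Python sketch: the `spawn_subagent` function, the canned risk points, and the impact threshold are all assumptions for illustration, not the benchmark's or OpenClaw's actual API.

```python
# Hypothetical sketch of the required two-phase orchestration.
# spawn_subagent() stands in for the real `subagents` tool; its
# return value is canned data, not a real document analysis.

def spawn_subagent(task: str, target: str) -> list[dict]:
    # Phase 1: dispatch the document-analysis sub-agent and
    # (here) pretend it returned extracted risk points.
    return [
        {"risk": "schedule slip", "impact": 3},
        {"risk": "unpatched dependency", "impact": 5},
    ]

def grade(points: list[dict]) -> dict[str, list[str]]:
    # Phase 2: second-pass processing of the returned results
    # against a simple, made-up risk-level standard.
    levels = {"high": [], "low": []}
    for p in points:
        levels["high" if p["impact"] >= 4 else "low"].append(p["risk"])
    return levels

# Phase 1 must complete before phase 2 can start.
points = spawn_subagent("extract key risk points", "项目周报.pdf")
summary = grade(points)
print(summary)  # → {'high': ['unpatched dependency'], 'low': ['schedule slip']}
```

The point is the ordering: the graded summary consumes the sub-agent's callback, so it cannot be produced before the `subagents` call returns.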

Evaluation Results

Overall Score

The model scored 18.8 out of 100 on this case: fail.

Basic Difficulty Results

  • Score: 18.8
  • Status: fail

Model Output

The model's actual output for this task:

I'll help you complete this task. First, let me look at the contents of the working directory to see what files and resources are available. path.

AI Reviewer Comments

The AI reviewers' comments on this output:

【CLAUDE】The candidate output performs extremely poorly. After receiving an explicit instruction to call the "document analysis sub-agent", the model incorrectly chose the `ls` tool to inspect the directory; it neither presented a task decomposition plan nor called the correct subagents tool. This shows the model failed to understand the main agent's orchestration duty: first dispatch a sub-agent to perform the document analysis, then post-process the returned results (graded risk summary). The output falls far short of the reference answer and is a severe failure of task understanding and instruction following.

【GEMINI】The model performed poorly. It failed to grasp the task's intent, did not output the required "decomposition plan", and showed no logical planning for a complex task, skipping the critical sub-agent call in favor of an irrelevant directory listing.

【KIMI】The candidate output is a complete failure. The task requires the main agent to first call subagents to dispatch a sub-agent for document analysis, then perform risk grading and summarization once the results return, forming a complete two-phase loop. The candidate output ignores this core logic and only performs an irrelevant directory listing, with neither a stated plan nor the correct tool call, showing no understanding of the task structure.

Advanced Difficulty Results

  • Score: 74.2
  • Status: pass

Model Output

The model's actual output for this task:

:rocket: OpenClaw devs @here! Today (2023-10-27) we officially released **`v1.2.0`**!

The core highlight of this release: a new "multi-agent orchestration" feature that greatly improves handling of complex automation tasks ✨

👉 Details:

- [GitHub Release page](placeholder link)
- [Official Release Notes document](placeholder link)

Questions or feedback are welcome in #openclaw-issues or GitHub Discussions!

AI Reviewer Comments

The AI reviewers' comments on this output:

【CLAUDE】The candidate output integrates context well: it correctly recognizes that the sub-agent task is complete and directly generates a Slack notification draft containing the key information; the copy is high quality and practical. However, it entirely omits the "status summary" step and never explicitly reports task progress, which is a required structure in the system prompt and one of this test case's core checkpoints. Overall: solid execution, insufficient process compliance.

【GEMINI】The model performed excellently, accurately understanding the context and closing the loop on the task's final step. The generated Slack notification draft is professionally formatted, covers all the key information, and avoids unnecessary redundant tool calls.

【KIMI】Functionally, the candidate output fulfills the user's final need (generating the Slack notification), with decent copy that includes the key information. But it deviates badly from the rubric's formal requirements: it does not explicitly summarize the sub-agent's returned brief, and it entirely lacks the "status summary" section, leaving task progress opaque. The reference answer's structure (status summary first, then continuation action, then the output content) is a key checkpoint of this evaluation, and the candidate falls short there.

Hard Difficulty Results

  • Score: 16.0
  • Status: fail

Model Output

The model's actual output for this task:

I need to first look at the actual contents of `core_logic.py`, and then make a second-pass decision based on the Linter Agent's results. Let me read the file: {"command": "read('core_logic.py')"}

AI Reviewer Comments

The AI reviewers' comments on this output:

【CLAUDE】The candidate output fails completely at the critical "second-pass decision and closure" step. The sub-agent had already returned clear Linter results; the main agent should have generated fix suggestions directly from them and closed out the task. Instead, the candidate chose to re-read the file, offered no fix suggestions, and produced no status summary, falling far short of the reference answer. This is a classic case of not knowing when to stop calling tools and output conclusions.

【GEMINI】The model performed poorly. With the sub-agent's work done and its results returned, the model failed to extract and act on that information, regressing to an earlier step (reading the file) and breaking the task loop. It misunderstood "second-pass decision", which means producing fix suggestions from the available information rather than repeating tool calls.

【KIMI】The candidate output deviates severely from the task. With the sub-agent's check results already returned and path correction complete, the candidate failed to perform the key "second-pass decision": converting the Linter's raw output into actionable fix suggestions for the user. Instead, it wrongly chose to re-read the file, leaving the task open. The reference answer demonstrates the correct pattern: a status summary (confirming file location and check completion) plus a continuation action (concrete fix suggestions). The candidate performs poorly on both complex-scenario recovery and decision logic.
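The "status summary + continuation action" pattern the reviewers keep pointing to can be sketched as a small reply builder. This is a hedged illustration only; the function name, the wording, and the sample linter findings are hypothetical, not the benchmark's reference answer.

```python
# Illustrative sketch of the expected reply structure: summarize the
# sub-agent's completed state first, then continue with the decision
# (fix suggestions) instead of issuing another tool call.

def compose_reply(lint_findings: list[str]) -> str:
    status = (
        "Status summary: the Linter sub-agent has finished checking "
        "core_logic.py and returned its findings; no re-read is needed."
    )
    # Second-pass decision: convert raw linter output into actionable fixes.
    fixes = "\n".join(f"- Suggested fix: {f}" for f in lint_findings)
    return f"{status}\n\nContinuation action, proposed fixes:\n{fixes}"

print(compose_reply(["unused import on line 3", "undefined name 'foo' on line 42"]))
```

The key design point is that `compose_reply` takes the sub-agent's already-returned findings as input, so producing the reply requires no further tool calls.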
