mirror of
https://github.com/instructkr/claw-code.git
synced 2026-04-09 09:34:50 +08:00
Compare commits: fix/linux-... → main (16 commits)
| Author | SHA1 | Date |
|---|---|---|
| | c1b1ce465e | |
| | 8e25611064 | |
| | eb044f0a02 | |
| | 75476c9005 | |
| | e4c3871882 | |
| | beb09df4b8 | |
| | 811b7b4c24 | |
| | 8a9300ea96 | |
| | e7e0fd2dbf | |
| | da451c66db | |
| | ad38032ab8 | |
| | 7173f2d6c6 | |
| | a0b4156174 | |
| | 3bf45fc44a | |
| | af58b6a7c7 | |
| | 514c3da7ad | |
16
ROADMAP.md
@@ -483,8 +483,20 @@ Model name prefix now wins unconditionally over env-var presence. Regression tes
31. **`code-on-disk → verified commit lands` depends on undocumented executor quirks** — dogfooded 2026-04-08 during live fix session. Three hidden contracts tripped the "last mile" path when using droid via acpx in the claw-code workspace: **(a) hidden CWD contract** — droid's `terminal/create` rejects `cd /path && cargo build` compound commands with `spawn ENOENT`; callers must pass `--cwd` or split commands; **(b) hidden commit-message transport limit** — embedding a multi-line commit message in a single shell invocation hits `ENAMETOOLONG`; workaround is `git commit -F <file>` but the caller must know to write the file first; **(c) hidden workspace lint/edition contract** — `unsafe_code = "forbid"` workspace-wide with Rust 2021 edition makes `unsafe {}` wrappers incorrect for `set_var`/`remove_var`, but droid generates Rust 2024-style unsafe blocks without inspecting the workspace Cargo.toml or clippy config. Each of these required the orchestrator to learn the constraint by failing, then switching strategies. **Acceptance bar:** a fresh agent should be able to verify/commit/push a correct diff in this workspace without needing to know executor-specific shell trivia ahead of time. **Fix shape:** (1) `run-acpx.sh`-style wrapper that normalizes the commit idiom (always writes to temp file, sets `--cwd`, splits compound commands); (2) inject workspace constraints into the droid/acpx task preamble (edition, lint gates, known shell executor quirks) so the model doesn't have to discover them from failures; (3) or upstream a fix to the executor itself so `cd /path && cmd` chains work correctly.
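The wrapper normalization in fix shape (1) can be sketched as below. This is an illustrative sketch only; `split_cd_prefix` is a hypothetical helper name, not claw-code's actual wrapper API:

```rust
/// Split a leading `cd /path && rest` into an explicit cwd plus the
/// remaining command, so executors that reject compound commands (the
/// droid `terminal/create` quirk above) receive `--cwd` instead of a
/// `cd /path && ...` chain that fails with `spawn ENOENT`.
fn split_cd_prefix(cmd: &str) -> (Option<String>, String) {
    let trimmed = cmd.trim();
    if let Some(rest) = trimmed.strip_prefix("cd ") {
        if let Some((path, tail)) = rest.split_once("&&") {
            return (Some(path.trim().to_string()), tail.trim().to_string());
        }
    }
    // No cd prefix: pass the command through unchanged.
    (None, trimmed.to_string())
}

fn main() {
    let (cwd, cmd) = split_cd_prefix("cd /repo && cargo build");
    assert_eq!(cwd.as_deref(), Some("/repo"));
    assert_eq!(cmd, "cargo build");

    let (cwd, cmd) = split_cd_prefix("cargo test");
    assert!(cwd.is_none());
    assert_eq!(cmd, "cargo test");
}
```

A real wrapper would also cover hidden contract (b): write the multi-line commit message to a temp file and invoke `git commit -F <file>` instead of inlining it, avoiding the `ENAMETOOLONG` path.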
32. **OpenAI-compatible provider/model-id passthrough is not fully literal** — **verified no-bug on 2026-04-09**: `resolve_model_alias()` only matches bare shorthand aliases (`opus`/`sonnet`/`haiku`) and passes everything else through unchanged, so `openai/gpt-4` reaches the dispatch layer unmodified. `strip_routing_prefix()` at `openai_compat.rs:732` then strips only recognised routing prefixes (`openai`, `xai`, `grok`, `qwen`) so the wire model is the bare backend id. No fix needed. **Original filing below.**
32. **OpenAI-compatible provider/model-id passthrough is not fully literal** — dogfooded 2026-04-08 via live user in #claw-code who confirmed the exact backend model id works outside claw but fails through claw for an OpenAI-compatible endpoint. The gap: `openai/` prefix is correctly used for **transport selection** (pick the OpenAI-compat client) but the **wire model id** — the string placed in `"model": "..."` in the JSON request body — may not be the literal backend model string the user supplied. Two candidate failure modes: **(a)** `resolve_model_alias()` is called on the model string before it reaches the wire — alias expansion designed for Anthropic/known models corrupts a user-supplied backend-specific id; **(b)** the `openai/` routing prefix may not be stripped before `build_chat_completion_request` packages the body, so backends receive `openai/gpt-4` instead of `gpt-4`. **Fix shape:** cleanly separate transport selection from wire model id. Transport selection uses the prefix; wire model id is the user-supplied string minus only the routing prefix — no alias expansion, no prefix leakage. **Trace path for next session:** (1) find where `resolve_model_alias()` is called relative to the OpenAI-compat dispatch path; (2) inspect what `build_chat_completion_request` puts in `"model"` for an `openai/some-backend-id` input. **Source:** live user in #claw-code 2026-04-08, confirmed exact model id works outside claw, fails through claw for OpenAI-compat backend.
33. **OpenAI `/responses` endpoint rejects claw's tool schema: `object schema missing properties` / `invalid_function_parameters`** — **done at `e7e0fd2` on 2026-04-09**. Added `normalize_object_schema()` in `openai_compat.rs` which recursively walks JSON Schema trees and injects `"properties": {}` and `"additionalProperties": false` on every object-type node (without overwriting existing values). Called from `openai_tool_definition()` so both `/chat/completions` and `/responses` receive strict-validator-safe schemas. 3 unit tests added. All api tests pass. **Original filing below.**
33. **OpenAI `/responses` endpoint rejects claw's tool schema: `object schema missing properties` / `invalid_function_parameters`** — dogfooded 2026-04-08 via live user in #claw-code. Repro: startup succeeds, provider routing succeeds (`Connected: gpt-5.4 via openai`), but request fails when claw sends tool/function schema to a `/responses`-compatible OpenAI backend. Backend rejects `StructuredOutput` with `object schema missing properties` and `invalid_function_parameters`. This is distinct from the `#32` model-id passthrough issue — routing and transport work correctly. The failure is at the schema validation layer: claw's tool schema is acceptable for `/chat/completions` but not strict enough for `/responses` endpoint validation. **Sharp next check:** emit what schema claw sends for `StructuredOutput` tool functions, compare against OpenAI `/responses` spec for strict JSON schema validation (required `properties` object, `additionalProperties: false`, etc). Likely fix: add missing `properties: {}` on object types, ensure `additionalProperties: false` is present on all object schemas in the function tool JSON. **Source:** live user in #claw-code 2026-04-08 with `gpt-5.4` on OpenAI-compat backend.
34. **`reasoning_effort` / `budget_tokens` not surfaced on OpenAI-compat path** — dogfooded 2026-04-09. Users asking for "reasoning effort parity with opencode" are hitting a structural gap: `MessageRequest` in `rust/crates/api/src/types.rs` has no `reasoning_effort` or `budget_tokens` field, and `build_chat_completion_request` in `openai_compat.rs` does not inject either into the request body. This means passing `--thinking` or equivalent to an OpenAI-compat reasoning model (e.g. `o4-mini`, `deepseek-r1`, any model that accepts `reasoning_effort`) silently drops the field — the model runs without the requested effort level, and the user gets no warning. **Contrast with Anthropic path:** `anthropic.rs` already maps `thinking` config into `anthropic.thinking.budget_tokens` in the request body. **Fix shape:** (a) Add optional `reasoning_effort: Option<String>` field to `MessageRequest`; (b) In `build_chat_completion_request`, if `reasoning_effort` is `Some`, emit `"reasoning_effort": value` in the JSON body; (c) In the CLI, wire `--thinking low/medium/high` or equivalent to populate the field when the resolved provider is `ProviderKind::OpenAi`; (d) Add unit test asserting `reasoning_effort` appears in the request body when set. **Source:** live user questions in #claw-code 2026-04-08/09 (dan_theman369 asking for "same flow as opencode for reasoning effort"; gaebal-gajae confirmed gap at `1491453913100976339`). Companion gap to #33 on the OpenAI-compat path.
35. **OpenAI gpt-5.x requires `max_completion_tokens`, not `max_tokens`** — dogfooded 2026-04-09. rklehm repro: gpt-5.2 via OpenAI-compat, startup OK, routing OK, but requests fail because claw emits `max_tokens` where gpt-5* requires `max_completion_tokens`. **Fix:** emit `max_completion_tokens` on the OpenAI-compat path (backward-compatible). Add a unit test. **Source:** rklehm in #claw-code 2026-04-09.
36. **Custom/project skill invocation disconnected from skill discovery** — dogfooded 2026-04-09. `/skills` lists custom skills (e.g. caveman), but bare skill-name invocation does not dispatch them; it falls through to a plain model prompt. **Fix:** audit `classify_skills_slash_command`, ensure any skill listed by `/skills` has a deterministic invocation path, or document the correct syntax. **Source:** gaebal-gajae dogfood 2026-04-09.
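One deterministic invocation path could look like the following sketch. `Dispatch` and `dispatch` are hypothetical names for illustration; the real `classify_skills_slash_command` logic lives in claw-code and may differ:

```rust
#[derive(Debug, PartialEq)]
enum Dispatch {
    Skill(String),
    Prompt(String),
}

/// If the first token of the input exactly matches a discovered skill name,
/// dispatch it as that skill; otherwise fall through to a plain model prompt.
/// This guarantees that anything listed by /skills is also invocable.
fn dispatch(input: &str, discovered: &[&str]) -> Dispatch {
    let first = input.split_whitespace().next().unwrap_or("");
    if discovered.contains(&first) {
        Dispatch::Skill(first.to_string())
    } else {
        Dispatch::Prompt(input.to_string())
    }
}

fn main() {
    let skills = ["caveman", "review"];
    assert_eq!(dispatch("caveman hello", &skills), Dispatch::Skill("caveman".into()));
    assert_eq!(dispatch("summarise this", &skills), Dispatch::Prompt("summarise this".into()));
}
```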
37. **Claude subscription login path should be removed, not deprecated** — dogfooded 2026-04-09. Official auth should be API-key only (`ANTHROPIC_API_KEY`, or an OAuth access token via `ANTHROPIC_AUTH_TOKEN`). `claw login` with Claude subscription credentials creates legal/billing ambiguity. **Fix:** remove the subscription login surface entirely; update README/USAGE.md to state that the API key is the only supported path. **Source:** gaebal-gajae policy decision 2026-04-09.
38. **Dead-session opacity: bot cannot self-detect compaction vs broken tool surface** — dogfooded 2026-04-09. Jobdori session spent ~15h declaring itself "dead" in-channel while tools were actually returning correct results within each turn. Root cause: context compaction summarises tool outputs away between turns, so the bot interprets absence-of-remembered-output as tool failure. This is a distinct failure mode from ROADMAP #31 (executor quirks): the session is alive and tools are functional, but the agent cannot tell the difference between "my last tool call produced no output" (compaction) and "the tool is broken". Downstream: repetitive false-dead signals in the channel, and work not getting done despite the execution surface being live. **Fix shape:** (a) probe with a short known-output command at turn start if context has been compacted; (b) gate "I am dead" declarations behind at least one within-turn tool call with a verified non-empty result; (c) consider a session-health canary cron that fires a wake with a minimal probe and checks the result. **Source:** Jobdori self-dogfood 2026-04-09; observed in #clawcode-building-in-public across multiple Clawhip nudge cycles.
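Fix shape (b) can be sketched as a simple guard. `may_declare_dead` and `probe` are hypothetical names for illustration, where `probe` stands in for running a short known-output command (e.g. `echo ok`) within the current turn:

```rust
/// Allow an "I am dead" declaration only when a within-turn probe fails
/// or returns empty output. A verified non-empty result means the tool
/// surface is live, so the apparent silence was context compaction, not
/// breakage, and the declaration must be suppressed.
fn may_declare_dead<F>(probe: F) -> bool
where
    F: Fn() -> Option<String>,
{
    match probe() {
        Some(out) if !out.trim().is_empty() => false,
        _ => true,
    }
}

fn main() {
    // Tools live: probe returns known output, so "dead" must not be declared.
    assert!(!may_declare_dead(|| Some("ok".to_string())));
    // Tool surface actually broken: probe returns nothing.
    assert!(may_declare_dead(|| None));
}
```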
@@ -726,6 +726,24 @@ fn is_reasoning_model(model: &str) -> bool {
        || canonical.contains("thinking")
}

/// Strip routing prefix (e.g., "openai/gpt-4" → "gpt-4") for the wire.
/// The prefix is used only to select transport; the backend expects the
/// bare model id.
fn strip_routing_prefix(model: &str) -> &str {
    if let Some(pos) = model.find('/') {
        let prefix = &model[..pos];
        // Only strip if the prefix before "/" is a known routing prefix,
        // not if "/" appears in the middle of the model name for other reasons.
        if matches!(prefix, "openai" | "xai" | "grok" | "qwen") {
            &model[pos + 1..]
        } else {
            model
        }
    } else {
        model
    }
}

fn build_chat_completion_request(request: &MessageRequest, config: OpenAiCompatConfig) -> Value {
    let mut messages = Vec::new();
    if let Some(system) = request.system.as_ref().filter(|value| !value.is_empty()) {
@@ -738,9 +756,21 @@ fn build_chat_completion_request(request: &MessageRequest, config: OpenAiCompatC
        messages.extend(translate_message(message));
    }

    // Strip routing prefix (e.g., "openai/gpt-4" → "gpt-4") for the wire.
    let wire_model = strip_routing_prefix(&request.model);

    // gpt-5* requires `max_completion_tokens`; older OpenAI models accept both.
    // We send the correct field based on the wire model name so gpt-5.x requests
    // don't fail with "unknown field max_tokens".
    let max_tokens_key = if wire_model.starts_with("gpt-5") {
        "max_completion_tokens"
    } else {
        "max_tokens"
    };

    let mut payload = json!({
-       "model": request.model,
-       "max_tokens": request.max_tokens,
+       "model": wire_model,
+       max_tokens_key: request.max_tokens,
        "messages": messages,
        "stream": request.stream,
    });
@@ -780,6 +810,10 @@ fn build_chat_completion_request(request: &MessageRequest, config: OpenAiCompatC
            payload["stop"] = json!(stop);
        }
    }
    // reasoning_effort for OpenAI-compatible reasoning models (o4-mini, o3, etc.)
    if let Some(effort) = &request.reasoning_effort {
        payload["reasoning_effort"] = json!(effort);
    }

    payload
}
@@ -848,13 +882,45 @@ fn flatten_tool_result_content(content: &[ToolResultContentBlock]) -> String {
        .join("\n")
}

/// Recursively ensure every object-type node in a JSON Schema has
/// `"properties"` (at least `{}`) and `"additionalProperties": false`.
/// The OpenAI `/responses` endpoint validates schemas strictly and rejects
/// objects that omit these fields; `/chat/completions` is lenient but also
/// accepts them, so we normalise unconditionally.
fn normalize_object_schema(schema: &mut Value) {
    if let Some(obj) = schema.as_object_mut() {
        if obj.get("type").and_then(Value::as_str) == Some("object") {
            obj.entry("properties").or_insert_with(|| json!({}));
            obj.entry("additionalProperties")
                .or_insert(Value::Bool(false));
        }
        // Recurse into properties values
        if let Some(props) = obj.get_mut("properties") {
            if let Some(props_obj) = props.as_object_mut() {
                let keys: Vec<String> = props_obj.keys().cloned().collect();
                for k in keys {
                    if let Some(v) = props_obj.get_mut(&k) {
                        normalize_object_schema(v);
                    }
                }
            }
        }
        // Recurse into items (arrays)
        if let Some(items) = obj.get_mut("items") {
            normalize_object_schema(items);
        }
    }
}

fn openai_tool_definition(tool: &ToolDefinition) -> Value {
    let mut parameters = tool.input_schema.clone();
    normalize_object_schema(&mut parameters);
    json!({
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
-           "parameters": tool.input_schema,
+           "parameters": parameters,
        }
    })
}
@@ -1122,6 +1188,76 @@ mod tests {
        assert_eq!(payload["tool_choice"], json!("auto"));
    }

    #[test]
    fn tool_schema_object_gets_strict_fields_for_responses_endpoint() {
        // OpenAI /responses endpoint rejects object schemas missing
        // "properties" and "additionalProperties". Verify normalize_object_schema
        // fills them in so the request shape is strict-validator-safe.
        use super::normalize_object_schema;

        // Bare object — no properties at all
        let mut schema = json!({"type": "object"});
        normalize_object_schema(&mut schema);
        assert_eq!(schema["properties"], json!({}));
        assert_eq!(schema["additionalProperties"], json!(false));

        // Nested object inside properties
        let mut schema2 = json!({
            "type": "object",
            "properties": {
                "location": {"type": "object", "properties": {"lat": {"type": "number"}}}
            }
        });
        normalize_object_schema(&mut schema2);
        assert_eq!(schema2["additionalProperties"], json!(false));
        assert_eq!(
            schema2["properties"]["location"]["additionalProperties"],
            json!(false)
        );

        // Existing properties/additionalProperties should not be overwritten
        let mut schema3 = json!({
            "type": "object",
            "properties": {"x": {"type": "string"}},
            "additionalProperties": true
        });
        normalize_object_schema(&mut schema3);
        assert_eq!(
            schema3["additionalProperties"],
            json!(true),
            "must not overwrite existing"
        );
    }

    #[test]
    fn reasoning_effort_is_included_when_set() {
        let payload = build_chat_completion_request(
            &MessageRequest {
                model: "o4-mini".to_string(),
                max_tokens: 1024,
                messages: vec![InputMessage::user_text("think hard")],
                reasoning_effort: Some("high".to_string()),
                ..Default::default()
            },
            OpenAiCompatConfig::openai(),
        );
        assert_eq!(payload["reasoning_effort"], json!("high"));
    }

    #[test]
    fn reasoning_effort_omitted_when_not_set() {
        let payload = build_chat_completion_request(
            &MessageRequest {
                model: "gpt-4o".to_string(),
                max_tokens: 64,
                messages: vec![InputMessage::user_text("hello")],
                ..Default::default()
            },
            OpenAiCompatConfig::openai(),
        );
        assert!(payload.get("reasoning_effort").is_none());
    }

    #[test]
    fn openai_streaming_requests_include_usage_opt_in() {
        let payload = build_chat_completion_request(
@@ -1239,6 +1375,7 @@ mod tests {
            frequency_penalty: Some(0.5),
            presence_penalty: Some(0.3),
            stop: Some(vec!["\n".to_string()]),
            reasoning_effort: None,
        };
        let payload = build_chat_completion_request(&request, OpenAiCompatConfig::openai());
        assert_eq!(payload["temperature"], 0.7);
@@ -1323,4 +1460,45 @@ mod tests {
        assert!(payload.get("presence_penalty").is_none());
        assert!(payload.get("stop").is_none());
    }

    #[test]
    fn gpt5_uses_max_completion_tokens_not_max_tokens() {
        // gpt-5* models require `max_completion_tokens`; legacy `max_tokens` causes
        // a request-validation failure. Verify the correct key is emitted.
        let request = MessageRequest {
            model: "gpt-5.2".to_string(),
            max_tokens: 512,
            messages: vec![],
            stream: false,
            ..Default::default()
        };
        let payload = build_chat_completion_request(&request, OpenAiCompatConfig::openai());
        assert_eq!(
            payload["max_completion_tokens"],
            json!(512),
            "gpt-5.2 should emit max_completion_tokens"
        );
        assert!(
            payload.get("max_tokens").is_none(),
            "gpt-5.2 must not emit max_tokens"
        );
    }

    #[test]
    fn non_gpt5_uses_max_tokens() {
        // Older OpenAI models expect `max_tokens`; verify gpt-4o is unaffected.
        let request = MessageRequest {
            model: "gpt-4o".to_string(),
            max_tokens: 512,
            messages: vec![],
            stream: false,
            ..Default::default()
        };
        let payload = build_chat_completion_request(&request, OpenAiCompatConfig::openai());
        assert_eq!(payload["max_tokens"], json!(512));
        assert!(
            payload.get("max_completion_tokens").is_none(),
            "gpt-4o must not emit max_completion_tokens"
        );
    }
}
@@ -26,6 +26,11 @@ pub struct MessageRequest {
    pub presence_penalty: Option<f64>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub stop: Option<Vec<String>>,
    /// Reasoning effort level for OpenAI-compatible reasoning models (e.g. `o4-mini`).
    /// Accepted values: `"low"`, `"medium"`, `"high"`. Omitted when `None`.
    /// Silently ignored by backends that do not support it.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reasoning_effort: Option<String>,
}

impl MessageRequest {
@@ -561,43 +561,4 @@ mod tests {
        );
    }

-    #[test]
-    fn output_with_stdin_tolerates_broken_pipe_when_child_closes_stdin_early() {
-        // given: a hook that immediately closes stdin without consuming the
-        // JSON payload. Use an oversized payload so the parent keeps writing
-        // long enough for Linux to surface EPIPE on the old implementation.
-        let root = temp_dir("stdin-close");
-        let script = root.join("close-stdin.sh");
-        fs::create_dir_all(&root).expect("temp hook dir");
-        fs::write(
-            &script,
-            "#!/bin/sh\nexec 0<&-\nprintf 'stdin closed early\\n'\nsleep 0.05\n",
-        )
-        .expect("write stdin-closing hook");
-        make_executable(&script);
-
-        let mut child = super::shell_command(script.to_str().expect("utf8 path"));
-        child.stdin(std::process::Stdio::piped());
-        child.stdout(std::process::Stdio::piped());
-        child.stderr(std::process::Stdio::piped());
-        let large_input = vec![b'x'; 2 * 1024 * 1024];
-
-        // when
-        let output = child
-            .output_with_stdin(&large_input)
-            .expect("broken pipe should be tolerated");
-
-        // then
-        assert!(
-            output.status.success(),
-            "child should still exit cleanly: {output:?}"
-        );
-        assert_eq!(
-            String::from_utf8_lossy(&output.stdout).trim(),
-            "stdin closed early"
-        );
-
-        let _ = fs::remove_dir_all(root);
-    }
}
@@ -209,6 +209,7 @@ fn run() -> Result<(), Box<dyn std::error::Error>> {
            permission_mode,
            compact,
            base_commit,
            ..
        } => {
            run_stale_base_preflight(base_commit.as_deref());
            // Only consume piped stdin as prompt context when the permission
@@ -243,6 +244,7 @@ fn run() -> Result<(), Box<dyn std::error::Error>> {
            allowed_tools,
            permission_mode,
            base_commit,
            ..
        } => run_repl(model, allowed_tools, permission_mode, base_commit)?,
        CliAction::HelpTopic(topic) => print_help_topic(topic),
        CliAction::Help { output_format } => print_help(output_format)?,
@@ -304,6 +306,7 @@ enum CliAction {
        permission_mode: PermissionMode,
        compact: bool,
        base_commit: Option<String>,
        reasoning_effort: Option<String>,
    },
    Login {
        output_format: CliOutputFormat,
@@ -330,6 +333,7 @@ enum CliAction {
        allowed_tools: Option<AllowedToolSet>,
        permission_mode: PermissionMode,
        base_commit: Option<String>,
        reasoning_effort: Option<String>,
    },
    HelpTopic(LocalHelpTopic),
    // prompt-mode formatting is only supported for non-interactive runs
@@ -453,6 +457,7 @@ fn parse_args(args: &[String]) -> Result<CliAction, String> {
                    .unwrap_or_else(default_permission_mode),
                compact,
                base_commit: base_commit.clone(),
                reasoning_effort: None,
            });
        }
        "--print" => {
@@ -511,6 +516,7 @@ fn parse_args(args: &[String]) -> Result<CliAction, String> {
                allowed_tools,
                permission_mode,
                base_commit,
                reasoning_effort: None,
            });
        }
        if rest.first().map(String::as_str) == Some("--resume") {
@@ -549,6 +555,7 @@ fn parse_args(args: &[String]) -> Result<CliAction, String> {
                permission_mode,
                compact,
                base_commit,
                reasoning_effort: None,
            }),
            SkillSlashDispatch::Local => Ok(CliAction::Skills {
                args,
@@ -574,6 +581,7 @@ fn parse_args(args: &[String]) -> Result<CliAction, String> {
                permission_mode,
                compact,
                base_commit: base_commit.clone(),
                reasoning_effort: None,
            })
        }
        other if other.starts_with('/') => parse_direct_slash_cli_action(
@@ -593,6 +601,7 @@ fn parse_args(args: &[String]) -> Result<CliAction, String> {
            permission_mode,
            compact,
            base_commit,
            reasoning_effort: None,
        }),
    }
}
@@ -713,6 +722,7 @@ fn parse_direct_slash_cli_action(
            permission_mode,
            compact,
            base_commit,
            reasoning_effort: None,
        }),
        SkillSlashDispatch::Local => Ok(CliAction::Skills {
            args,
@@ -8174,6 +8184,7 @@ mod tests {
                allowed_tools: None,
                permission_mode: PermissionMode::DangerFullAccess,
                base_commit: None,
                reasoning_effort: None,
            }
        );
    }
@@ -8337,6 +8348,7 @@ mod tests {
                permission_mode: PermissionMode::DangerFullAccess,
                compact: false,
                base_commit: None,
                reasoning_effort: None,
            }
        );
    }
@@ -8426,6 +8438,7 @@ mod tests {
                permission_mode: PermissionMode::DangerFullAccess,
                compact: false,
                base_commit: None,
                reasoning_effort: None,
            }
        );
    }
@@ -8455,6 +8468,7 @@ mod tests {
                permission_mode: PermissionMode::DangerFullAccess,
                compact: true,
                base_commit: None,
                reasoning_effort: None,
            }
        );
    }
@@ -8496,6 +8510,7 @@ mod tests {
                permission_mode: PermissionMode::DangerFullAccess,
                compact: false,
                base_commit: None,
                reasoning_effort: None,
            }
        );
    }
@@ -8573,6 +8588,7 @@ mod tests {
                allowed_tools: None,
                permission_mode: PermissionMode::ReadOnly,
                base_commit: None,
                reasoning_effort: None,
            }
        );
    }
@@ -8592,6 +8608,7 @@ mod tests {
                allowed_tools: None,
                permission_mode: PermissionMode::DangerFullAccess,
                base_commit: None,
                reasoning_effort: None,
            }
        );
    }
@@ -8620,6 +8637,7 @@ mod tests {
                permission_mode: PermissionMode::DangerFullAccess,
                compact: false,
                base_commit: None,
                reasoning_effort: None,
            }
        );
    }
@@ -8645,6 +8663,7 @@ mod tests {
                ),
                permission_mode: PermissionMode::DangerFullAccess,
                base_commit: None,
                reasoning_effort: None,
            }
        );
    }
@@ -8754,6 +8773,7 @@ mod tests {
                permission_mode: crate::default_permission_mode(),
                compact: false,
                base_commit: None,
                reasoning_effort: None,
            }
        );
        assert_eq!(
@@ -9137,6 +9157,7 @@ mod tests {
                permission_mode: crate::default_permission_mode(),
                compact: false,
                base_commit: None,
                reasoning_effort: None,
            }
        );
    }
@@ -9203,6 +9224,7 @@ mod tests {
                permission_mode: crate::default_permission_mode(),
                compact: false,
                base_commit: None,
                reasoning_effort: None,
            }
        );
        assert_eq!(
@@ -9228,6 +9250,7 @@ mod tests {
                permission_mode: crate::default_permission_mode(),
                compact: false,
                base_commit: None,
                reasoning_effort: None,
            }
        );
        let error = parse_args(&["/status".to_string()])