mirror of https://github.com/instructkr/claw-code.git
synced 2026-04-27 01:14:59 +08:00

Compare commits: main...feat/jobdo (4 commits)

Commits: f079b7b616, 93da4f14ab, d305178591, 0cbff5dc76
@@ -1,5 +0,0 @@

```json
{
  "aliases": {
    "quick": "haiku"
  }
}
```
README.md (79)
@@ -98,87 +98,10 @@ export ANTHROPIC_API_KEY="sk-ant-..."

**Git Bash / WSL** are optional alternatives, not requirements. If you prefer bash-style paths (`/c/Users/you/...` instead of `C:\Users\you\...`), Git Bash (ships with Git for Windows) works well. In Git Bash, the `MINGW64` prompt is expected and normal — not a broken install.

## Post-build: locate the binary and verify

After running `cargo build --workspace`, the `claw` binary is built but **not** automatically installed to your system. Here's where to find it and how to verify the build succeeded.

### Binary location

After `cargo build --workspace` in `claw-code/rust/`:

**Debug build (default, faster compile):**
- **macOS/Linux:** `rust/target/debug/claw`
- **Windows:** `rust/target/debug/claw.exe`

**Release build (optimized, slower compile):**
- **macOS/Linux:** `rust/target/release/claw`
- **Windows:** `rust/target/release/claw.exe`

If you ran `cargo build` without `--release`, the binary is in the `debug/` folder.

### Verify the build succeeded

Test the binary directly using its path:

```bash
# macOS/Linux (debug build)
./rust/target/debug/claw --help
./rust/target/debug/claw doctor

# Windows PowerShell (debug build)
.\rust\target\debug\claw.exe --help
.\rust\target\debug\claw.exe doctor
```

If these commands succeed, the build is working. `claw doctor` is your first health check — it validates your API key, model access, and tool configuration.
### Optional: Add to PATH

If you want to run `claw` from any directory without the full path, choose one of these approaches:

**Option 1: Symlink (macOS/Linux)**
```bash
ln -s $(pwd)/rust/target/debug/claw /usr/local/bin/claw
```
Then reload your shell and test:
```bash
claw --help
```

**Option 2: Use `cargo install` (all platforms)**

Build and install to Cargo's default location (`~/.cargo/bin/`, which is usually on PATH):
```bash
# From the claw-code/rust/ directory
cargo install --path . --force

# Then from anywhere
claw --help
```

**Option 3: Update shell profile (bash/zsh)**

Add this line to `~/.bashrc` or `~/.zshrc`, using the absolute path to your checkout (a `$(pwd)` here would resolve to whatever directory each new shell starts in, not the repo):
```bash
export PATH="/path/to/claw-code/rust/target/debug:$PATH"
```

Reload your shell:
```bash
source ~/.bashrc   # or source ~/.zshrc
claw --help
```
### Troubleshooting

- **"command not found: claw"** — The binary is in `rust/target/debug/claw`, but it's not on your PATH. Use the full path `./rust/target/debug/claw` or symlink/install as above.
- **"permission denied"** — On macOS/Linux, you may need `chmod +x rust/target/debug/claw` if the executable bit isn't set (rare).
- **Debug vs. release** — If the binary runs slowly, you're in debug mode (the default). Add `--release` to `cargo build` for faster runtime, but the build itself will take 5–10 minutes.

> [!NOTE]
> **Auth:** claw requires an **API key** (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, etc.) — Claude subscription login is not a supported auth path.

Run the workspace test suite after verifying the binary works:

```bash
cd rust
```

ROADMAP.md (1264): file diff suppressed because it is too large

USAGE.md (108)
@@ -43,35 +43,6 @@ cd rust

```
/doctor
```

Or run doctor directly with JSON output for scripting:

```bash
cd rust
./target/debug/claw doctor --output-format json
```

**Note:** Diagnostic verbs (`doctor`, `status`, `sandbox`, `version`) support `--output-format json` for machine-readable output. Invalid suffix arguments (e.g., `--json`) are now rejected at parse time rather than falling through to prompt dispatch.
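Since the diagnostic verbs emit machine-readable JSON, a wrapper script can parse the envelope directly instead of scraping text output. A minimal Python sketch; the `kind` and `status` field names in the sample are illustrative assumptions, since only the `--output-format json` flag itself is documented above:

```python
import json

def parse_diagnostic(raw: str) -> dict:
    """Parse one diagnostic envelope; fail early if it isn't a JSON object."""
    payload = json.loads(raw)
    if not isinstance(payload, dict):
        raise ValueError("diagnostic output should be a JSON object")
    return payload

# Hand-written stand-in for `claw doctor --output-format json` output.
sample = '{"kind": "doctor", "status": "ok"}'
report = parse_diagnostic(sample)
```

In a real script, `raw` would come from the command's stdout (e.g. via `subprocess.run`).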
### Initialize a repository

Set up a new repository with `.claw` config, `.claw.json`, `.gitignore` entries, and a `CLAUDE.md` guidance file:

```bash
cd /path/to/your/repo
./target/debug/claw init
```

Text mode (human-readable) shows an artifact creation summary with the project path and next steps. Idempotent — running multiple times in the same repo marks already-created files as "skipped".

JSON mode for scripting:
```bash
./target/debug/claw init --output-format json
```

Returns structured output with `project_path`, `created[]`, `updated[]`, `skipped[]` arrays (one per artifact), and `artifacts[]` carrying each file's `name` and a machine-stable `status` tag. The legacy `message` field preserves backward compatibility.

**Why structured fields matter:** Claws can detect per-artifact state (`created` vs `updated` vs `skipped`) without substring-matching human prose. Use the `created[]`, `updated[]`, and `skipped[]` arrays for conditional follow-up logic (e.g., only commit if files were actually created, not just updated).
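The conditional follow-up logic described above can be sketched in Python. The envelope literals below are hand-written stand-ins for real `claw init --output-format json` output, using only the array names documented here:

```python
def should_commit(init_envelope: dict) -> bool:
    # Act on the structured created[] array, not on the
    # human-readable legacy `message` string.
    return len(init_envelope.get("created", [])) > 0

# First run: artifacts created, so a commit is warranted.
fresh = {"created": [".claw.json", "CLAUDE.md"], "updated": [], "skipped": []}
# Idempotent rerun: everything skipped, nothing to commit.
rerun = {"created": [], "updated": [], "skipped": [".claw.json", "CLAUDE.md"]}
```

The same pattern extends to `updated[]` if your workflow should also commit refreshed files.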
### Interactive REPL

@@ -100,85 +71,6 @@ cd rust

```bash
./target/debug/claw --output-format json prompt "status"
```
### Inspect worker state

The `claw state` command reads `.claw/worker-state.json`, which is written by the interactive REPL or a one-shot prompt when a worker executes a task. This file contains the worker ID, session reference, model, and permission mode.

Prerequisite: you must run `claw` (interactive REPL) or `claw prompt <text>` at least once in the repository to produce the worker state file.

```bash
cd rust
./target/debug/claw state
```

JSON mode:
```bash
./target/debug/claw state --output-format json
```

If you run `claw state` before any worker has executed, you will see a helpful error:
```
error: no worker state file found at .claw/worker-state.json
Hint: worker state is written by the interactive REPL or a non-interactive prompt.
  Run:        claw                 # start the REPL (writes state on first turn)
  Or:         claw prompt <text>   # run one non-interactive turn
  Then rerun: claw state [--output-format json]
```
## Advanced slash commands (Interactive REPL only)

These commands are available inside the interactive REPL (`claw` with no args). They extend the assistant with workspace analysis, planning, and navigation features.

### `/ultraplan` — Deep planning with multi-step reasoning

**Purpose:** Break down a complex task into steps using extended reasoning.

```bash
# Start the REPL
claw

# Inside the REPL
/ultraplan refactor the auth module to use async/await
/ultraplan design a caching layer for database queries
/ultraplan analyze this module for performance bottlenecks
```

Output: a structured plan with numbered steps, reasoning for each step, and expected outcomes. Use this when you want the assistant to think through a problem in detail before coding.

### `/teleport` — Jump to a file or symbol

**Purpose:** Quickly navigate to a file, function, class, or struct by name.

```bash
# Jump to a symbol
/teleport UserService
/teleport authenticate_user
/teleport RequestHandler

# Jump to a file
/teleport src/auth.rs
/teleport crates/runtime/lib.rs
/teleport ./ARCHITECTURE.md
```

Output: the file content, with the requested symbol highlighted or the file fully loaded. Useful for exploring the codebase without manually navigating directories. If multiple matches exist, the assistant shows the top candidates.

### `/bughunter` — Scan for likely bugs and issues

**Purpose:** Analyze code for common pitfalls, anti-patterns, and potential bugs.

```bash
# Scan the entire workspace
/bughunter

# Scan a specific directory or file
/bughunter src/handlers
/bughunter rust/crates/runtime
/bughunter src/auth.rs
```

Output: a list of suspicious patterns with explanations (e.g., "unchecked unwrap()", "potential race condition", "missing error handling"). Each finding includes the file, line number, and suggested fix. Use this as a first pass before a full code review.
## Model and permission controls

@@ -1,5 +0,0 @@

```json
{
  "permissions": {
    "defaultMode": "dontAsk"
  }
}
```
rust/.gitignore (vendored)

```diff
@@ -1,7 +1,3 @@
 target/
 .omx/
 .clawd-agents/
-# Claw Code local artifacts
-.claw/settings.local.json
-.claw/sessions/
-.clawhip/
```
@@ -1,15 +0,0 @@

# CLAUDE.md

This file provides guidance to Claw Code (clawcode.dev) when working with code in this repository.

## Detected stack
- Languages: Rust.
- Frameworks: none detected from the supported starter markers.

## Verification
- Run Rust verification from the repo root: `cargo fmt`, `cargo clippy --workspace --all-targets -- -D warnings`, `cargo test --workspace`

## Working agreement
- Prefer small, reviewable changes and keep generated bootstrap files aligned with actual repo workflows.
- Keep shared defaults in `.claw.json`; reserve `.claw/settings.local.json` for machine-local overrides.
- Do not overwrite existing `CLAUDE.md` content automatically; update it intentionally when repo workflows change.
```diff
@@ -2554,22 +2554,11 @@ fn render_mcp_report_for(
 
     match normalize_optional_args(args) {
         None | Some("list") => {
-            // #144: degrade gracefully on config parse failure (same contract
-            // as #143 for `status`). Text mode prepends a "Config load error"
-            // block before the MCP list; the list falls back to empty.
-            match loader.load() {
-                Ok(runtime_config) => Ok(render_mcp_summary_report(
-                    cwd,
-                    runtime_config.mcp().servers(),
-                )),
-                Err(err) => {
-                    let empty = std::collections::BTreeMap::new();
-                    Ok(format!(
-                        "Config load error\n Status fail\n Summary runtime config failed to load; reporting partial MCP view\n Details {err}\n Hint `claw doctor` classifies config parse errors; fix the listed field and rerun\n\n{}",
-                        render_mcp_summary_report(cwd, &empty)
-                    ))
-                }
-            }
+            let runtime_config = loader.load()?;
+            Ok(render_mcp_summary_report(
+                cwd,
+                runtime_config.mcp().servers(),
+            ))
         }
         Some(args) if is_help_arg(args) => Ok(render_mcp_usage(None)),
         Some("show") => Ok(render_mcp_usage(Some("show"))),
@@ -2582,19 +2571,12 @@ fn render_mcp_report_for(
             if parts.next().is_some() {
                 return Ok(render_mcp_usage(Some(args)));
             }
-            // #144: same degradation for `mcp show`; if config won't parse,
-            // the specific server lookup can't succeed, so report the parse
-            // error with context.
-            match loader.load() {
-                Ok(runtime_config) => Ok(render_mcp_server_report(
-                    cwd,
-                    server_name,
-                    runtime_config.mcp().get(server_name),
-                )),
-                Err(err) => Ok(format!(
-                    "Config load error\n Status fail\n Summary runtime config failed to load; cannot resolve `{server_name}`\n Details {err}\n Hint `claw doctor` classifies config parse errors; fix the listed field and rerun"
-                )),
-            }
+            let runtime_config = loader.load()?;
+            Ok(render_mcp_server_report(
+                cwd,
+                server_name,
+                runtime_config.mcp().get(server_name),
+            ))
         }
         Some(args) => Ok(render_mcp_usage(Some(args))),
     }
@@ -2617,35 +2599,11 @@ fn render_mcp_report_json_for(
 
     match normalize_optional_args(args) {
         None | Some("list") => {
-            // #144: match #143's degraded envelope contract. On config parse
-            // failure, emit top-level `status: "degraded"` with
-            // `config_load_error`, empty servers[], and exit 0. On clean
-            // runs, the existing serializer adds `status: "ok"` below.
-            match loader.load() {
-                Ok(runtime_config) => {
-                    let mut value = render_mcp_summary_report_json(
-                        cwd,
-                        runtime_config.mcp().servers(),
-                    );
-                    if let Some(map) = value.as_object_mut() {
-                        map.insert("status".to_string(), Value::String("ok".to_string()));
-                        map.insert("config_load_error".to_string(), Value::Null);
-                    }
-                    Ok(value)
-                }
-                Err(err) => {
-                    let empty = std::collections::BTreeMap::new();
-                    let mut value = render_mcp_summary_report_json(cwd, &empty);
-                    if let Some(map) = value.as_object_mut() {
-                        map.insert("status".to_string(), Value::String("degraded".to_string()));
-                        map.insert(
-                            "config_load_error".to_string(),
-                            Value::String(err.to_string()),
-                        );
-                    }
-                    Ok(value)
-                }
-            }
+            let runtime_config = loader.load()?;
+            Ok(render_mcp_summary_report_json(
+                cwd,
+                runtime_config.mcp().servers(),
+            ))
         }
         Some(args) if is_help_arg(args) => Ok(render_mcp_usage_json(None)),
         Some("show") => Ok(render_mcp_usage_json(Some("show"))),
@@ -2658,29 +2616,12 @@ fn render_mcp_report_json_for(
             if parts.next().is_some() {
                 return Ok(render_mcp_usage_json(Some(args)));
             }
-            // #144: same degradation pattern for show action.
-            match loader.load() {
-                Ok(runtime_config) => {
-                    let mut value = render_mcp_server_report_json(
-                        cwd,
-                        server_name,
-                        runtime_config.mcp().get(server_name),
-                    );
-                    if let Some(map) = value.as_object_mut() {
-                        map.insert("status".to_string(), Value::String("ok".to_string()));
-                        map.insert("config_load_error".to_string(), Value::Null);
-                    }
-                    Ok(value)
-                }
-                Err(err) => Ok(serde_json::json!({
-                    "kind": "mcp",
-                    "action": "show",
-                    "server": server_name,
-                    "status": "degraded",
-                    "config_load_error": err.to_string(),
-                    "working_directory": cwd.display().to_string(),
-                })),
-            }
+            let runtime_config = loader.load()?;
+            Ok(render_mcp_server_report_json(
+                cwd,
+                server_name,
+                runtime_config.mcp().get(server_name),
+            ))
         }
         Some(args) => Ok(render_mcp_usage_json(Some(args))),
     }
@@ -5538,82 +5479,6 @@ mod tests {
         let _ = fs::remove_dir_all(config_home);
     }
 
-    #[test]
-    fn mcp_degrades_gracefully_on_malformed_mcp_config_144() {
-        // #144: mirror of #143's partial-success contract for `claw mcp`.
-        // Previously `mcp` hard-failed on any config parse error, hiding
-        // well-formed servers and forcing claws to fall back to `doctor`.
-        // Now `mcp` emits a degraded envelope instead: exit 0, status:
-        // "degraded", config_load_error populated, servers[] empty.
-        let _guard = env_guard();
-        let workspace = temp_dir("mcp-degrades-144");
-        let config_home = temp_dir("mcp-degrades-144-cfg");
-        fs::create_dir_all(workspace.join(".claw")).expect("create workspace .claw dir");
-        fs::create_dir_all(&config_home).expect("create config home");
-        // One valid server + one malformed entry missing `command`.
-        fs::write(
-            workspace.join(".claw.json"),
-            r#"{
-  "mcpServers": {
-    "everything": {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-everything"]},
-    "missing-command": {"args": ["arg-only-no-command"]}
-  }
-}
-"#,
-        )
-        .expect("write malformed .claw.json");
-
-        let loader = ConfigLoader::new(&workspace, &config_home);
-        // list action: must return Ok (not Err) with degraded envelope.
-        let list = render_mcp_report_json_for(&loader, &workspace, None)
-            .expect("mcp list should not hard-fail on config parse errors (#144)");
-        assert_eq!(list["kind"], "mcp");
-        assert_eq!(list["action"], "list");
-        assert_eq!(
-            list["status"].as_str(),
-            Some("degraded"),
-            "top-level status should be 'degraded': {list}"
-        );
-        let err = list["config_load_error"]
-            .as_str()
-            .expect("config_load_error must be a string on degraded runs");
-        assert!(
-            err.contains("mcpServers.missing-command"),
-            "config_load_error should name the malformed field path: {err}"
-        );
-        assert_eq!(list["configured_servers"], 0);
-        assert!(list["servers"].as_array().unwrap().is_empty());
-
-        // show action: should also degrade (not hard-fail).
-        let show = render_mcp_report_json_for(&loader, &workspace, Some("show everything"))
-            .expect("mcp show should not hard-fail on config parse errors (#144)");
-        assert_eq!(show["kind"], "mcp");
-        assert_eq!(show["action"], "show");
-        assert_eq!(
-            show["status"].as_str(),
-            Some("degraded"),
-            "show action should also report status: 'degraded': {show}"
-        );
-        assert!(show["config_load_error"].is_string());
-
-        // Clean path: status: "ok", config_load_error: null.
-        let clean_ws = temp_dir("mcp-degrades-144-clean");
-        fs::create_dir_all(&clean_ws).expect("clean ws");
-        let clean_loader = ConfigLoader::new(&clean_ws, &config_home);
-        let clean_list = render_mcp_report_json_for(&clean_loader, &clean_ws, None)
-            .expect("clean mcp list should succeed");
-        assert_eq!(
-            clean_list["status"].as_str(),
-            Some("ok"),
-            "clean run should report status: 'ok'"
-        );
-        assert!(clean_list["config_load_error"].is_null());
-
-        let _ = fs::remove_dir_all(workspace);
-        let _ = fs::remove_dir_all(config_home);
-        let _ = fs::remove_dir_all(clean_ws);
-    }
-
     #[test]
     fn parses_quoted_skill_frontmatter_values() {
         let contents = "---\nname: \"hud\"\ndescription: 'Quoted description'\n---\n";
```
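For any tooling that still consumes the degraded envelope the #144 test above encodes (top-level `status`, `config_load_error`, `servers[]`), the dispatch logic is simple. A Python sketch; the literal payloads are hand-written examples modeled on the fields named in the diff, not real claw output:

```python
def summarize_mcp(envelope: dict) -> str:
    # Degraded contract: status "degraded", config_load_error set,
    # servers[] empty, process exit code still 0.
    if envelope.get("status") == "degraded":
        return "config load error: " + envelope["config_load_error"]
    return f"{len(envelope.get('servers', []))} server(s) configured"

degraded = {"kind": "mcp", "status": "degraded",
            "config_load_error": "mcpServers.missing-command: missing `command`",
            "servers": []}
healthy = {"kind": "mcp", "status": "ok", "config_load_error": None,
           "servers": [{"name": "everything"}]}
```

Checking `status` before touching `servers[]` is what lets a caller distinguish "no servers configured" from "config would not parse".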
```diff
@@ -8,7 +8,6 @@ use tokio::process::Command as TokioCommand;
 use tokio::runtime::Builder;
 use tokio::time::timeout;
 
-use crate::lane_events::{LaneEvent, ShipMergeMethod, ShipProvenance};
 use crate::sandbox::{
     build_linux_sandbox_command, resolve_sandbox_status_for_request, FilesystemIsolationMode,
     SandboxConfig, SandboxStatus,
@@ -103,76 +102,11 @@ pub fn execute_bash(input: BashCommandInput) -> io::Result<BashCommandOutput> {
     runtime.block_on(execute_bash_async(input, sandbox_status, cwd))
 }
 
-/// Detect git push to main and emit ship provenance event
-fn detect_and_emit_ship_prepared(command: &str) {
-    let trimmed = command.trim();
-    // Simple detection: git push with main/master
-    if trimmed.contains("git push") && (trimmed.contains("main") || trimmed.contains("master")) {
-        // Emit ship.prepared event
-        let now = std::time::SystemTime::now()
-            .duration_since(std::time::UNIX_EPOCH)
-            .unwrap_or_default()
-            .as_millis();
-        let provenance = ShipProvenance {
-            source_branch: get_current_branch().unwrap_or_else(|| "unknown".to_string()),
-            base_commit: get_head_commit().unwrap_or_default(),
-            commit_count: 0, // Would need to calculate from range
-            commit_range: "unknown..HEAD".to_string(),
-            merge_method: ShipMergeMethod::DirectPush,
-            actor: get_git_actor().unwrap_or_else(|| "unknown".to_string()),
-            pr_number: None,
-        };
-        let _event = LaneEvent::ship_prepared(format!("{}", now), &provenance);
-        // Log to stderr as interim routing before event stream integration
-        eprintln!(
-            "[ship.prepared] branch={} -> main, commits={}, actor={}",
-            provenance.source_branch, provenance.commit_count, provenance.actor
-        );
-    }
-}
-
-fn get_current_branch() -> Option<String> {
-    let output = Command::new("git")
-        .args(["branch", "--show-current"])
-        .output()
-        .ok()?;
-    if output.status.success() {
-        Some(String::from_utf8_lossy(&output.stdout).trim().to_string())
-    } else {
-        None
-    }
-}
-
-fn get_head_commit() -> Option<String> {
-    let output = Command::new("git")
-        .args(["rev-parse", "--short", "HEAD"])
-        .output()
-        .ok()?;
-    if output.status.success() {
-        Some(String::from_utf8_lossy(&output.stdout).trim().to_string())
-    } else {
-        None
-    }
-}
-
-fn get_git_actor() -> Option<String> {
-    let name = Command::new("git")
-        .args(["config", "user.name"])
-        .output()
-        .ok()
-        .filter(|o| o.status.success())
-        .map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string())?;
-    Some(name)
-}
-
 async fn execute_bash_async(
     input: BashCommandInput,
     sandbox_status: SandboxStatus,
     cwd: std::path::PathBuf,
 ) -> io::Result<BashCommandOutput> {
-    // Detect and emit ship provenance for git push operations
-    detect_and_emit_ship_prepared(&input.command);
-
     let mut command = prepare_tokio_command(&input.command, &cwd, &sandbox_status, true);
 
     let output_result = if let Some(timeout_ms) = input.timeout {
```
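The removed detection is plain substring matching, as its own `// Simple detection` comment admits. A Python transcription of the same predicate makes the false-positive surface easy to see:

```python
def is_push_to_main(command: str) -> bool:
    # Same heuristic as the removed detect_and_emit_ship_prepared:
    # cheap, but it also fires on branch names that merely contain
    # "main" or "master" (e.g. "maintenance").
    trimmed = command.strip()
    return "git push" in trimmed and ("main" in trimmed or "master" in trimmed)
```

A stricter version would tokenize the command and compare the refspec exactly, at the cost of handling `git push` argument forms.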
```diff
@@ -1254,21 +1254,11 @@ mod tests {
     use std::time::{SystemTime, UNIX_EPOCH};
 
     fn temp_dir() -> std::path::PathBuf {
-        // #149: previously used `runtime-config-{nanos}` which collided
-        // under parallel `cargo test --workspace` when multiple tests
-        // started within the same nanosecond bucket on fast machines.
-        // Add process id + a monotonically-incrementing atomic counter
-        // so every callsite gets a provably-unique directory regardless
-        // of clock resolution or scheduling.
-        use std::sync::atomic::{AtomicU64, Ordering};
-        static COUNTER: AtomicU64 = AtomicU64::new(0);
         let nanos = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("time should be after epoch")
            .as_nanos();
-        let pid = std::process::id();
-        let seq = COUNTER.fetch_add(1, Ordering::Relaxed);
-        std::env::temp_dir().join(format!("runtime-config-{pid}-{nanos}-{seq}"))
+        std::env::temp_dir().join(format!("runtime-config-{nanos}"))
     }
 
     #[test]
```
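The uniqueness scheme this hunk reverts (process id plus timestamp plus a process-wide counter) ports directly to other languages. A Python sketch of the same idea, assuming nothing beyond the stdlib:

```python
import itertools
import os
import time

_seq = itertools.count()

def unique_temp_name(prefix: str = "runtime-config") -> str:
    # pid + nanosecond timestamp + monotonically increasing counter:
    # unique even when two calls land in the same timestamp bucket,
    # regardless of clock resolution or scheduling.
    return f"{prefix}-{os.getpid()}-{time.time_ns()}-{next(_seq)}"
```

The counter alone guarantees uniqueness within a process; the pid disambiguates across parallel test processes, and the timestamp keeps names sortable.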
```diff
@@ -38,15 +38,6 @@ pub enum LaneEventName {
     BranchStaleAgainstMain,
     #[serde(rename = "branch.workspace_mismatch")]
     BranchWorkspaceMismatch,
-    /// Ship/provenance events — §4.44.5
-    #[serde(rename = "ship.prepared")]
-    ShipPrepared,
-    #[serde(rename = "ship.commits_selected")]
-    ShipCommitsSelected,
-    #[serde(rename = "ship.merged")]
-    ShipMerged,
-    #[serde(rename = "ship.pushed_main")]
-    ShipPushedMain,
 }
 
 #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
```
```diff
@@ -392,31 +383,11 @@ pub fn dedupe_terminal_events(events: &[LaneEvent]) -> Vec<LaneEvent> {
     result
 }
 
-#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
-pub enum BlockedSubphase {
-    #[serde(rename = "blocked.trust_prompt")]
-    TrustPrompt { gate_repo: String },
-    #[serde(rename = "blocked.prompt_delivery")]
-    PromptDelivery { attempt: u32 },
-    #[serde(rename = "blocked.plugin_init")]
-    PluginInit { plugin_name: String },
-    #[serde(rename = "blocked.mcp_handshake")]
-    McpHandshake { server_name: String, attempt: u32 },
-    #[serde(rename = "blocked.branch_freshness")]
-    BranchFreshness { behind_main: u32 },
-    #[serde(rename = "blocked.test_hang")]
-    TestHang { elapsed_secs: u32, test_name: Option<String> },
-    #[serde(rename = "blocked.report_pending")]
-    ReportPending { since_secs: u32 },
-}
-
 #[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
 pub struct LaneEventBlocker {
     #[serde(rename = "failureClass")]
     pub failure_class: LaneFailureClass,
     pub detail: String,
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub subphase: Option<BlockedSubphase>,
 }
 
 #[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
```
```diff
@@ -433,29 +404,6 @@ pub struct LaneCommitProvenance {
     pub lineage: Vec<String>,
 }
 
-/// Ship/provenance metadata — §4.44.5
-#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
-pub struct ShipProvenance {
-    pub source_branch: String,
-    pub base_commit: String,
-    pub commit_count: u32,
-    pub commit_range: String,
-    pub merge_method: ShipMergeMethod,
-    pub actor: String,
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub pr_number: Option<u32>,
-}
-
-#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
-#[serde(rename_all = "snake_case")]
-pub enum ShipMergeMethod {
-    DirectPush,
-    FastForward,
-    MergeCommit,
-    SquashMerge,
-    RebaseMerge,
-}
-
 #[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
 pub struct LaneEvent {
     pub event: LaneEventName,
```
```diff
@@ -539,56 +487,16 @@ impl LaneEvent {
 
     #[must_use]
     pub fn blocked(emitted_at: impl Into<String>, blocker: &LaneEventBlocker) -> Self {
-        let mut event = Self::new(LaneEventName::Blocked, LaneEventStatus::Blocked, emitted_at)
+        Self::new(LaneEventName::Blocked, LaneEventStatus::Blocked, emitted_at)
             .with_failure_class(blocker.failure_class)
-            .with_detail(blocker.detail.clone());
-        if let Some(ref subphase) = blocker.subphase {
-            event = event.with_data(serde_json::to_value(subphase).expect("subphase should serialize"));
-        }
-        event
+            .with_detail(blocker.detail.clone())
     }
 
     #[must_use]
     pub fn failed(emitted_at: impl Into<String>, blocker: &LaneEventBlocker) -> Self {
-        let mut event = Self::new(LaneEventName::Failed, LaneEventStatus::Failed, emitted_at)
+        Self::new(LaneEventName::Failed, LaneEventStatus::Failed, emitted_at)
             .with_failure_class(blocker.failure_class)
-            .with_detail(blocker.detail.clone());
-        if let Some(ref subphase) = blocker.subphase {
-            event = event.with_data(serde_json::to_value(subphase).expect("subphase should serialize"));
-        }
-        event
+            .with_detail(blocker.detail.clone())
     }
 
-    /// Ship prepared — §4.44.5
-    #[must_use]
-    pub fn ship_prepared(emitted_at: impl Into<String>, provenance: &ShipProvenance) -> Self {
-        Self::new(LaneEventName::ShipPrepared, LaneEventStatus::Ready, emitted_at)
-            .with_data(serde_json::to_value(provenance).expect("ship provenance should serialize"))
-    }
-
-    /// Ship commits selected — §4.44.5
-    #[must_use]
-    pub fn ship_commits_selected(
-        emitted_at: impl Into<String>,
-        commit_count: u32,
-        commit_range: impl Into<String>,
-    ) -> Self {
-        Self::new(LaneEventName::ShipCommitsSelected, LaneEventStatus::Ready, emitted_at)
-            .with_detail(format!("{} commits: {}", commit_count, commit_range.into()))
-    }
-
-    /// Ship merged — §4.44.5
-    #[must_use]
-    pub fn ship_merged(emitted_at: impl Into<String>, provenance: &ShipProvenance) -> Self {
-        Self::new(LaneEventName::ShipMerged, LaneEventStatus::Completed, emitted_at)
-            .with_data(serde_json::to_value(provenance).expect("ship provenance should serialize"))
-    }
-
-    /// Ship pushed to main — §4.44.5
-    #[must_use]
-    pub fn ship_pushed_main(emitted_at: impl Into<String>, provenance: &ShipProvenance) -> Self {
-        Self::new(LaneEventName::ShipPushedMain, LaneEventStatus::Completed, emitted_at)
-            .with_data(serde_json::to_value(provenance).expect("ship provenance should serialize"))
-    }
-
     #[must_use]
```
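The wire shape implied by the removed builder and the serde attributes visible in this diff (`failureClass` rename, the `skip_serializing_if` on `subphase`) can be sketched schematically. This is my reading of the envelope, not a confirmed schema; field values are illustrative:

```python
def blocked_event(failure_class, detail, subphase=None):
    # Schematic of the removed `LaneEvent::blocked` builder: the optional
    # subphase rides in `data`, and is omitted entirely when absent,
    # mirroring serde's skip_serializing_if behavior.
    event = {"event": "blocked", "status": "blocked",
             "failureClass": failure_class, "detail": detail}
    if subphase is not None:
        event["data"] = subphase
    return event
```

Omitting the key (rather than emitting `"data": null`) keeps downstream consumers from having to null-check a field that may simply not exist.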
```diff
@@ -662,10 +570,9 @@ mod tests {
 
     use super::{
         compute_event_fingerprint, dedupe_superseded_commit_events, dedupe_terminal_events,
-        is_terminal_event, BlockedSubphase, EventProvenance, LaneCommitProvenance, LaneEvent,
-        LaneEventBlocker, LaneEventBuilder, LaneEventMetadata, LaneEventName, LaneEventStatus,
-        LaneFailureClass, LaneOwnership, SessionIdentity, ShipMergeMethod, ShipProvenance,
-        WatcherAction,
+        is_terminal_event, EventProvenance, LaneCommitProvenance, LaneEvent, LaneEventBlocker,
+        LaneEventBuilder, LaneEventMetadata, LaneEventName, LaneEventStatus, LaneFailureClass,
+        LaneOwnership, SessionIdentity, WatcherAction,
     };
 
     #[test]
```
```diff
@@ -694,10 +601,6 @@ mod tests {
             LaneEventName::BranchWorkspaceMismatch,
             "branch.workspace_mismatch",
         ),
-        (LaneEventName::ShipPrepared, "ship.prepared"),
-        (LaneEventName::ShipCommitsSelected, "ship.commits_selected"),
-        (LaneEventName::ShipMerged, "ship.merged"),
-        (LaneEventName::ShipPushedMain, "ship.pushed_main"),
     ];
 
     for (event, expected) in cases {
```
```diff
@@ -738,10 +641,6 @@ mod tests {
         let blocker = LaneEventBlocker {
             failure_class: LaneFailureClass::McpStartup,
             detail: "broken server".to_string(),
-            subphase: Some(BlockedSubphase::McpHandshake {
-                server_name: "test-server".to_string(),
-                attempt: 1,
-            }),
         };
 
         let blocked = LaneEvent::blocked("2026-04-04T00:00:00Z", &blocker);
```
```diff
@@ -787,34 +686,6 @@ mod tests {
         );
     }
 
-    #[test]
-    fn ship_provenance_events_serialize_to_expected_wire_values() {
-        let provenance = ShipProvenance {
-            source_branch: "feature/provenance".to_string(),
-            base_commit: "dd73962".to_string(),
-            commit_count: 6,
-            commit_range: "dd73962..c956f78".to_string(),
-            merge_method: ShipMergeMethod::DirectPush,
-            actor: "Jobdori".to_string(),
-            pr_number: None,
-        };
-
-        let prepared = LaneEvent::ship_prepared("2026-04-20T14:30:00Z", &provenance);
-        let prepared_json = serde_json::to_value(&prepared).expect("ship event should serialize");
-        assert_eq!(prepared_json["event"], "ship.prepared");
-        assert_eq!(prepared_json["data"]["commit_count"], 6);
-        assert_eq!(prepared_json["data"]["source_branch"], "feature/provenance");
-
-        let pushed = LaneEvent::ship_pushed_main("2026-04-20T14:35:00Z", &provenance);
-        let pushed_json = serde_json::to_value(&pushed).expect("ship event should serialize");
-        assert_eq!(pushed_json["event"], "ship.pushed_main");
-        assert_eq!(pushed_json["data"]["merge_method"], "direct_push");
-
-        let round_trip: LaneEvent =
-            serde_json::from_value(pushed_json).expect("ship event should deserialize");
-        assert_eq!(round_trip.event, LaneEventName::ShipPushedMain);
-    }
-
     #[test]
     fn commit_events_can_carry_worktree_and_supersession_metadata() {
         let event = LaneEvent::commit_created(
```
```diff
@@ -84,10 +84,9 @@ pub use hooks::{
 };
 pub use lane_events::{
-    compute_event_fingerprint, dedupe_superseded_commit_events, dedupe_terminal_events,
-    is_terminal_event, BlockedSubphase, EventProvenance, LaneCommitProvenance, LaneEvent,
-    LaneEventBlocker, LaneEventBuilder, LaneEventMetadata, LaneEventName, LaneEventStatus,
-    LaneFailureClass, LaneOwnership, SessionIdentity, ShipMergeMethod, ShipProvenance,
-    WatcherAction,
+    is_terminal_event, EventProvenance, LaneCommitProvenance, LaneEvent, LaneEventBlocker,
+    LaneEventBuilder, LaneEventMetadata, LaneEventName, LaneEventStatus, LaneFailureClass,
+    LaneOwnership, SessionIdentity, WatcherAction,
 };
 pub use mcp::{
     mcp_server_signature, mcp_tool_name, mcp_tool_prefix, normalize_name_for_mcp,
```
```diff
@@ -31,19 +31,14 @@ impl SessionStore {
     /// The on-disk layout becomes `<cwd>/.claw/sessions/<workspace_hash>/`.
     pub fn from_cwd(cwd: impl AsRef<Path>) -> Result<Self, SessionControlError> {
         let cwd = cwd.as_ref();
-        // #151: canonicalize so equivalent paths (symlinks, relative vs
-        // absolute, /tmp vs /private/tmp on macOS) produce the same
-        // workspace_fingerprint. Falls back to the raw path if canonicalize
-        // fails (e.g. the directory doesn't exist yet).
-        let canonical_cwd = fs::canonicalize(cwd).unwrap_or_else(|_| cwd.to_path_buf());
-        let sessions_root = canonical_cwd
+        let sessions_root = cwd
             .join(".claw")
             .join("sessions")
-            .join(workspace_fingerprint(&canonical_cwd));
+            .join(workspace_fingerprint(cwd));
         fs::create_dir_all(&sessions_root)?;
         Ok(Self {
             sessions_root,
-            workspace_root: canonical_cwd,
+            workspace_root: cwd.to_path_buf(),
         })
     }
@@ -56,18 +51,14 @@ impl SessionStore {
         workspace_root: impl AsRef<Path>,
     ) -> Result<Self, SessionControlError> {
         let workspace_root = workspace_root.as_ref();
-        // #151: canonicalize workspace_root for consistent fingerprinting
-        // across equivalent path representations.
-        let canonical_workspace = fs::canonicalize(workspace_root)
-            .unwrap_or_else(|_| workspace_root.to_path_buf());
         let sessions_root = data_dir
             .as_ref()
             .join("sessions")
-            .join(workspace_fingerprint(&canonical_workspace));
+            .join(workspace_fingerprint(workspace_root));
         fs::create_dir_all(&sessions_root)?;
         Ok(Self {
             sessions_root,
-            workspace_root: canonical_workspace,
+            workspace_root: workspace_root.to_path_buf(),
         })
     }
```
```diff
@@ -112,7 +103,7 @@ impl SessionStore {
             candidate
         } else if looks_like_path {
             return Err(SessionControlError::Format(
-                format_missing_session_reference(reference, &self.sessions_root),
+                format_missing_session_reference(reference),
             ));
         } else {
             self.resolve_managed_path(reference)?
@@ -143,7 +134,7 @@ impl SessionStore {
             }
         }
         Err(SessionControlError::Format(
-            format_missing_session_reference(session_id, &self.sessions_root),
+            format_missing_session_reference(session_id),
        ))
    }
@@ -161,7 +152,7 @@ impl SessionStore {
        self.list_sessions()?
            .into_iter()
            .next()
-            .ok_or_else(|| SessionControlError::Format(format_no_managed_sessions(&self.sessions_root)))
+            .ok_or_else(|| SessionControlError::Format(format_no_managed_sessions()))
    }

    pub fn load_session(
```
```diff
@@ -522,25 +513,15 @@ fn session_id_from_path(path: &Path) -> Option<String> {
        .map(ToOwned::to_owned)
 }

-fn format_missing_session_reference(reference: &str, sessions_root: &Path) -> String {
-    // #80: show the actual workspace-fingerprint directory instead of lying about .claw/sessions/
-    let fingerprint_dir = sessions_root
-        .file_name()
-        .and_then(|f| f.to_str())
-        .unwrap_or("<unknown>");
+fn format_missing_session_reference(reference: &str) -> String {
     format!(
-        "session not found: {reference}\nHint: managed sessions live in .claw/sessions/{fingerprint_dir}/ (workspace-specific partition).\nTry `{LATEST_SESSION_REFERENCE}` for the most recent session or `/session list` in the REPL."
+        "session not found: {reference}\nHint: managed sessions live in .claw/sessions/. Try `{LATEST_SESSION_REFERENCE}` for the most recent session or `/session list` in the REPL."
     )
 }

-fn format_no_managed_sessions(sessions_root: &Path) -> String {
-    // #80: show the actual workspace-fingerprint directory instead of lying about .claw/sessions/
-    let fingerprint_dir = sessions_root
-        .file_name()
-        .and_then(|f| f.to_str())
-        .unwrap_or("<unknown>");
+fn format_no_managed_sessions() -> String {
     format!(
-        "no managed sessions found in .claw/sessions/{fingerprint_dir}/\nStart `claw` to create a session, then rerun with `--resume {LATEST_SESSION_REFERENCE}`.\nNote: claw partitions sessions per workspace fingerprint; sessions from other CWDs are invisible."
+        "no managed sessions found in .claw/sessions/\nStart `claw` to create a session, then rerun with `--resume {LATEST_SESSION_REFERENCE}`."
     )
 }
```
```diff
@@ -763,40 +744,6 @@ mod tests {
         assert_eq!(fp_a1.len(), 16, "fingerprint must be a 16-char hex string");
     }

-    /// #151 regression: equivalent paths (e.g. `/tmp/foo` vs `/private/tmp/foo`
-    /// on macOS where `/tmp` is a symlink to `/private/tmp`) must resolve to
-    /// the same session store. Previously they diverged because
-    /// `workspace_fingerprint()` hashed the raw path string. Now
-    /// `SessionStore::from_cwd()` canonicalizes first.
-    #[test]
-    fn session_store_from_cwd_canonicalizes_equivalent_paths() {
-        let base = temp_dir();
-        let real_dir = base.join("real-workspace");
-        fs::create_dir_all(&real_dir).expect("real workspace should exist");
-
-        // Build two stores via different but equivalent path representations:
-        // the raw path and the canonicalized path.
-        let raw_path = real_dir.clone();
-        let canonical_path = fs::canonicalize(&real_dir).expect("canonicalize ok");
-
-        let store_from_raw =
-            SessionStore::from_cwd(&raw_path).expect("store from raw should build");
-        let store_from_canonical =
-            SessionStore::from_cwd(&canonical_path).expect("store from canonical should build");
-
-        assert_eq!(
-            store_from_raw.sessions_dir(),
-            store_from_canonical.sessions_dir(),
-            "equivalent paths must produce the same sessions dir (raw={} canonical={})",
-            raw_path.display(),
-            canonical_path.display()
-        );
-
-        if base.exists() {
-            fs::remove_dir_all(base).expect("cleanup ok");
-        }
-    }
-
     #[test]
     fn session_store_from_cwd_isolates_sessions_by_workspace() {
         // given
```
```diff
@@ -885,11 +832,6 @@ mod tests {
         let workspace_b = base.join("repo-beta");
         fs::create_dir_all(&workspace_a).expect("workspace a should exist");
         fs::create_dir_all(&workspace_b).expect("workspace b should exist");
-        // #151: canonicalize so test expectations match the store's canonical
-        // workspace_root. Without this, the test builds sessions with a raw
-        // path but the store resolves to the canonical form.
-        let workspace_a = fs::canonicalize(&workspace_a).unwrap_or(workspace_a);
-        let workspace_b = fs::canonicalize(&workspace_b).unwrap_or(workspace_b);

         let store_b = SessionStore::from_cwd(&workspace_b).expect("store b should build");
         let legacy_root = workspace_b.join(".claw").join("sessions");
```
```diff
@@ -923,8 +865,6 @@ mod tests {
         // given
         let base = temp_dir();
         fs::create_dir_all(&base).expect("base dir should exist");
-        // #151: canonicalize for path-representation consistency with store.
-        let base = fs::canonicalize(&base).unwrap_or(base);
         let store = SessionStore::from_cwd(&base).expect("store should build");
         let legacy_root = base.join(".claw").join("sessions");
         let legacy_path = legacy_root.join("legacy-safe.jsonl");
```
```diff
@@ -953,8 +893,6 @@ mod tests {
         // given
         let base = temp_dir();
         fs::create_dir_all(&base).expect("base dir should exist");
-        // #151: canonicalize for path-representation consistency with store.
-        let base = fs::canonicalize(&base).unwrap_or(base);
         let store = SessionStore::from_cwd(&base).expect("store should build");
         let legacy_root = base.join(".claw").join("sessions");
         let legacy_path = legacy_root.join("legacy-unbound.json");
```
```diff
@@ -27,18 +27,6 @@ impl InitStatus {
             Self::Skipped => "skipped (already exists)",
         }
     }
-
-    /// Machine-stable identifier for structured output (#142).
-    /// Unlike `label()`, this never changes wording: claws can switch on
-    /// these values without brittle substring matching.
-    #[must_use]
-    pub(crate) fn json_tag(self) -> &'static str {
-        match self {
-            Self::Created => "created",
-            Self::Updated => "updated",
-            Self::Skipped => "skipped",
-        }
-    }
 }

 #[derive(Debug, Clone, PartialEq, Eq)]
```
```diff
@@ -70,36 +58,6 @@ impl InitReport {
         lines.push(" Next step Review and tailor the generated guidance".to_string());
         lines.join("\n")
     }
-
-    /// Summary constant that claws can embed in JSON output without having
-    /// to read it out of the human-formatted `message` string (#142).
-    pub(crate) const NEXT_STEP: &'static str = "Review and tailor the generated guidance";
-
-    /// Artifact names that ended in the given status. Used to build the
-    /// structured `created[]`/`updated[]`/`skipped[]` arrays for #142.
-    #[must_use]
-    pub(crate) fn artifacts_with_status(&self, status: InitStatus) -> Vec<String> {
-        self.artifacts
-            .iter()
-            .filter(|artifact| artifact.status == status)
-            .map(|artifact| artifact.name.to_string())
-            .collect()
-    }
-
-    /// Structured artifact list for JSON output (#142). Each entry carries
-    /// `name` and machine-stable `status` tag.
-    #[must_use]
-    pub(crate) fn artifact_json_entries(&self) -> Vec<serde_json::Value> {
-        self.artifacts
-            .iter()
-            .map(|artifact| {
-                serde_json::json!({
-                    "name": artifact.name,
-                    "status": artifact.status.json_tag(),
-                })
-            })
-            .collect()
-    }
 }

 #[derive(Debug, Clone, Default, PartialEq, Eq)]
```
```diff
@@ -375,7 +333,7 @@ fn framework_notes(detection: &RepoDetection) -> Vec<String> {

 #[cfg(test)]
 mod tests {
-    use super::{initialize_repo, render_init_claude_md, InitStatus};
+    use super::{initialize_repo, render_init_claude_md};
     use std::fs;
     use std::path::Path;
     use std::time::{SystemTime, UNIX_EPOCH};
```
```diff
@@ -455,63 +413,6 @@ mod tests {
         fs::remove_dir_all(root).expect("cleanup temp dir");
     }

-    #[test]
-    fn artifacts_with_status_partitions_fresh_and_idempotent_runs() {
-        // #142: the structured JSON output needs to be able to partition
-        // artifacts into created/updated/skipped without substring matching
-        // the human-formatted `message` string.
-        let root = temp_dir();
-        fs::create_dir_all(&root).expect("create root");
-
-        let fresh = initialize_repo(&root).expect("fresh init should succeed");
-        let created_names = fresh.artifacts_with_status(InitStatus::Created);
-        assert_eq!(
-            created_names,
-            vec![
-                ".claw/".to_string(),
-                ".claw.json".to_string(),
-                ".gitignore".to_string(),
-                "CLAUDE.md".to_string(),
-            ],
-            "fresh init should place all four artifacts in created[]"
-        );
-        assert!(
-            fresh.artifacts_with_status(InitStatus::Skipped).is_empty(),
-            "fresh init should have no skipped artifacts"
-        );
-
-        let second = initialize_repo(&root).expect("second init should succeed");
-        let skipped_names = second.artifacts_with_status(InitStatus::Skipped);
-        assert_eq!(
-            skipped_names,
-            vec![
-                ".claw/".to_string(),
-                ".claw.json".to_string(),
-                ".gitignore".to_string(),
-                "CLAUDE.md".to_string(),
-            ],
-            "idempotent init should place all four artifacts in skipped[]"
-        );
-        assert!(
-            second.artifacts_with_status(InitStatus::Created).is_empty(),
-            "idempotent init should have no created artifacts"
-        );
-
-        // artifact_json_entries() uses the machine-stable `json_tag()` which
-        // never changes wording (unlike `label()` which says "skipped (already exists)").
-        let entries = second.artifact_json_entries();
-        assert_eq!(entries.len(), 4);
-        for entry in &entries {
-            let status = entry.get("status").and_then(|v| v.as_str()).unwrap();
-            assert_eq!(
-                status, "skipped",
-                "machine status tag should be the bare word 'skipped', not label()'s 'skipped (already exists)'"
-            );
-        }
-
-        fs::remove_dir_all(root).expect("cleanup temp dir");
-    }
-
     #[test]
     fn render_init_template_mentions_detected_python_and_nextjs_markers() {
         let root = temp_dir();
```
File diff suppressed because it is too large
```diff
@@ -5,7 +5,6 @@ use std::sync::atomic::{AtomicU64, Ordering};
 use std::time::{SystemTime, UNIX_EPOCH};

 use mock_anthropic_service::{MockAnthropicService, SCENARIO_PREFIX};
-use serde_json::Value;

 static TEMP_COUNTER: AtomicU64 = AtomicU64::new(0);
```
```diff
@@ -126,60 +125,6 @@ fn compact_flag_streaming_text_only_emits_final_message_text() {
     fs::remove_dir_all(&workspace).expect("workspace cleanup should succeed");
 }

-#[test]
-fn compact_flag_with_json_output_emits_structured_json() {
-    let runtime = tokio::runtime::Runtime::new().expect("tokio runtime should build");
-    let server = runtime
-        .block_on(MockAnthropicService::spawn())
-        .expect("mock service should start");
-    let base_url = server.base_url();
-
-    let workspace = unique_temp_dir("compact-json");
-    let config_home = workspace.join("config-home");
-    let home = workspace.join("home");
-    fs::create_dir_all(&workspace).expect("workspace should exist");
-    fs::create_dir_all(&config_home).expect("config home should exist");
-    fs::create_dir_all(&home).expect("home should exist");
-
-    let prompt = format!("{SCENARIO_PREFIX}streaming_text");
-    let output = run_claw(
-        &workspace,
-        &config_home,
-        &home,
-        &base_url,
-        &[
-            "--model",
-            "sonnet",
-            "--permission-mode",
-            "read-only",
-            "--output-format",
-            "json",
-            "--compact",
-            &prompt,
-        ],
-    );
-
-    assert!(
-        output.status.success(),
-        "compact json run should succeed
-stdout:
-{}
-
-stderr:
-{}",
-        String::from_utf8_lossy(&output.stdout),
-        String::from_utf8_lossy(&output.stderr),
-    );
-    let stdout = String::from_utf8(output.stdout).expect("stdout should be utf8");
-    let parsed: Value = serde_json::from_str(&stdout).expect("compact json stdout should parse");
-    assert_eq!(parsed["message"], "Mock streaming says hello from the parity harness.");
-    assert_eq!(parsed["compact"], true);
-    assert_eq!(parsed["model"], "claude-sonnet-4-6");
-    assert!(parsed["usage"].is_object());
-
-    fs::remove_dir_all(&workspace).expect("workspace cleanup should succeed");
-}
-
 fn run_claw(
     cwd: &std::path::Path,
     config_home: &std::path::Path,
```
```diff
@@ -180,8 +180,6 @@ fn resume_latest_restores_the_most_recent_managed_session() {
     // given
     let temp_dir = unique_temp_dir("resume-latest");
     let project_dir = temp_dir.join("project");
     fs::create_dir_all(&project_dir).expect("project dir should exist");
-    let project_dir = fs::canonicalize(&project_dir).unwrap_or(project_dir);
     let store = runtime::SessionStore::from_cwd(&project_dir).expect("session store should build");
     let older_path = store.create_handle("session-older").path;
     let newer_path = store.create_handle("session-newer").path;
```
```diff
@@ -4459,7 +4459,6 @@ fn classify_lane_blocker(error: &str) -> LaneEventBlocker {
     LaneEventBlocker {
         failure_class: classify_lane_failure(error),
         detail,
-        subphase: None,
     }
 }
```