Users were typing skill names (e.g. 'caveman', 'find-skills') directly in
the REPL and getting LLM responses instead of skill invocation. Only
'/skills <name>' triggered dispatch; bare names fell through to run_turn.
Fix: after slash-command parse returns None (bare text), check if the first
token looks like a skill name (alphanumeric/dash/underscore, no slash).
If resolve_skill_invocation() confirms the skill exists, dispatch the full
input as a skill prompt. Unknown words fall through unchanged.
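In sketch form (resolve_skill_invocation() is the existing helper, assumed here to return an Option; the token test and dispatch_skill_prompt() are illustrative, not the exact implementation):
    // Alphanumeric/dash/underscore tokens can never start with '/'.
    fn looks_like_skill_name(token: &str) -> bool {
        !token.is_empty()
            && token
                .chars()
                .all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_')
    }

    if let Some(first) = input.split_whitespace().next() {
        if looks_like_skill_name(first) && resolve_skill_invocation(first).is_some() {
            // The full input, not just the first token, becomes the skill prompt.
            return dispatch_skill_prompt(input);
        }
    }
    // Anything else falls through to run_turn as before.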
156 CLI tests pass, fmt clean.
Previously any string was accepted and silently forwarded to the API,
which would fail at the provider with an unhelpful error. Now invalid
values produce a clear error at parse time:
invalid value for --reasoning-effort: 'xyz'; must be low, medium, or high
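The check itself is small; a sketch of the new parse arm (error type and surrounding parse_args flow assumed):
    // Validate before the value ever reaches a request body.
    if !matches!(value.as_str(), "low" | "medium" | "high") {
        return Err(format!(
            "invalid value for --reasoning-effort: '{value}'; must be low, medium, or high"
        ));
    }
    reasoning_effort = Some(value);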
156 CLI tests pass, fmt clean.
The --help slash-command section was listing ~35 unimplemented commands
alongside working ones. Combined with the completions fix (c55c510), the
discovery surface now consistently shows only implemented commands.
Changes:
- commands crate: add render_slash_command_help_filtered(exclude: &[&str])
- move STUB_COMMANDS to module-level const in main.rs (reused by both
completions and help rendering)
- replace render_slash_command_help() with filtered variant at all
help-rendering call sites
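Illustrative shape of the filtered renderer (signature from this change; the registry accessor and field names are assumptions):
    pub fn render_slash_command_help_filtered(exclude: &[&str]) -> String {
        all_slash_commands() // hypothetical registry accessor
            .iter()
            .filter(|cmd| !exclude.contains(&cmd.name))
            .map(|cmd| format!("  /{:<18} {}", cmd.name, cmd.summary))
            .collect::<Vec<_>>()
            .join("\n")
    }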
156 CLI tests pass, fmt clean.
'claw prompt --help' was triggering an API call instead of showing help
because --help was parsed as part of the prompt args. Now '--help' after
known pass-through subcommands (prompt, login, logout, version, state,
init, export, commit, pr, issue) sets wants_help=true and shows the
top-level help page.
Subcommands that consume their own args (agents, mcp, plugins, skills)
and local help-topic subcommands (status, sandbox, doctor) are excluded
from this interception so their existing --help handling is preserved.
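Rough shape of the interception (list contents and wants_help from this change; surrounding locals assumed):
    const PASS_THROUGH: &[&str] = &[
        "prompt", "login", "logout", "version", "state",
        "init", "export", "commit", "pr", "issue",
    ];

    if PASS_THROUGH.contains(&subcommand) && rest.iter().any(|a| a == "--help") {
        wants_help = true; // render top-level help instead of forwarding
    }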
156 CLI tests pass, fmt clean.
Commands registered in the spec list but not yet implemented in this build
were appearing in REPL tab-completions, making the discovery surface
over-promise what actually works. Users (mezz2301) reported 'many features
are not supported' after discovering these through completions.
Add STUB_COMMANDS exclusion list in slash_command_completion_candidates_with_sessions.
Excluded: login logout vim upgrade stats share feedback files fast exit
summary desktop brief advisor stickers insights thinkback release-notes
security-review keybindings privacy-settings plan review tasks theme
voice usage rename copy hooks context color effort branch rewind ide
tag output-style add-dir
These commands still parse and run (with the 'not yet implemented' message
for users who type them directly), but they no longer surface as
tab-completion candidates.
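The filter itself is a single retain over the candidate list (assuming the candidates are collected as Vec<String>):
    candidates.retain(|name| !STUB_COMMANDS.contains(&name.as_str()));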
The previous quick start jumped from 'cargo build' to 'claw prompt' without
showing the required auth step or the health-check command. A user following
it linearly would fail because the prompt needs an API key.
Changes:
- Numbered steps: build -> set ANTHROPIC_API_KEY -> claw doctor -> prompt
- Windows note updated to show cargo run form as alternative
- Added explicit NOTE that Claude subscription login is not supported (pre-empts #claw-code FAQ)
Source: cold-start friction observed from mezz/mukduk and kapcomunica in #claw-code 2026-04-09.
User repro: mezz on Windows PowerShell tried './target/debug/claw'
which fails because the binary is 'claw.exe' on Windows.
Add a NOTE callout after the quick-start block directing Windows users
to use .\target\debug\claw.exe or cargo run -- --help.
Parse --reasoning-effort <low|medium|high> in parse_args, thread through
CliAction::Prompt and CliAction::Repl, LiveCli::set_reasoning_effort(),
AnthropicRuntimeClient.reasoning_effort field, and MessageRequest.reasoning_effort.
Changes:
- parse_args: new --reasoning-effort / --reasoning-effort=VAL flag arms
- AnthropicRuntimeClient: new reasoning_effort field + set_reasoning_effort() method
- LiveCli: new set_reasoning_effort() that reaches through BuiltRuntime -> ConversationRuntime -> api_client_mut()
- runtime::ConversationRuntime: new pub api_client_mut() accessor
- MessageRequest construction: reasoning_effort: self.reasoning_effort.clone()
- run_repl(): accepts and applies reasoning_effort parameter
- parse_direct_slash_cli_action(): propagates reasoning_effort
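The accessor chain, roughly (method names are from this change; the field path through BuiltRuntime is assumed):
    impl LiveCli {
        pub fn set_reasoning_effort(&mut self, effort: Option<String>) {
            self.built_runtime
                .conversation_runtime // assumed field names
                .api_client_mut()
                .set_reasoning_effort(effort);
        }
    }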
All 156 CLI tests pass, all api tests pass, cargo fmt clean.
Repeated onboarding friction in #claw-code: users try 'cargo install clawcode'
which fails because the package is not published on crates.io. Add a prominent
NOTE callout before the quick-start block directing users to build from source.
Source: gaebal-gajae pinpoint 2026-04-09 from #claw-code.
Adds reasoning_effort: Option<String> to CliAction::Prompt and
CliAction::Repl enum variants. All constructor and pattern sites updated.
All test literals updated with reasoning_effort: None.
156 CLI tests pass, fmt clean. The --reasoning-effort flag parsing and
its propagation to AnthropicRuntimeClient remain follow-up work.
gpt-5.x models reject requests with max_tokens and require max_completion_tokens.
Detect wire model starting with 'gpt-5' and switch the JSON key accordingly.
Older models (gpt-4o etc.) continue to receive max_tokens unchanged.
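Sketch of the key switch (assuming the body is assembled as a serde_json::Value; wire_model and max_tokens are the surrounding locals):
    let key = if wire_model.starts_with("gpt-5") {
        "max_completion_tokens" // required by gpt-5.x
    } else {
        "max_tokens" // gpt-4o and older, unchanged
    };
    body[key] = serde_json::json!(max_tokens);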
Two regression tests added:
- gpt5_uses_max_completion_tokens_not_max_tokens
- non_gpt5_uses_max_tokens
140 api tests pass, cargo fmt clean.
Users of OpenAI-compatible reasoning models (o4-mini, o3, deepseek-r1,
etc.) had no way to control reasoning effort — the field was missing from
MessageRequest and never emitted in the request body.
Changes:
- Add `reasoning_effort: Option<String>` to `MessageRequest` in types.rs
- Annotated with skip_serializing_if = "Option::is_none" for clean JSON
- Accepted values: "low", "medium", "high" (passed through verbatim)
- In `build_chat_completion_request`, emit `"reasoning_effort"` when set
- Two unit tests:
- `reasoning_effort_is_included_when_set`: o4-mini + "high" → field present
- `reasoning_effort_omitted_when_not_set`: gpt-4o, no field → absent
Existing callers use `..Default::default()` and are unaffected.
One struct-literal test that listed all fields explicitly was updated
with `reasoning_effort: None`.
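In serde terms the addition is (existing fields elided):
    #[derive(Serialize, Default)]
    pub struct MessageRequest {
        // ... existing fields ...
        #[serde(skip_serializing_if = "Option::is_none")]
        pub reasoning_effort: Option<String>,
    }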
The CLI flag to expose this to users is a follow-up (ROADMAP #34 partial).
This commit lands the foundational API-layer plumbing needed for that.
Partial ROADMAP #34.
OpenAI /responses validates tool function schemas strictly:
- object types must have "properties" (at minimum {})
- "additionalProperties": false is required
/chat/completions is lenient and accepts schemas without these fields,
but /responses rejects them with "object schema missing properties" /
"invalid_function_parameters".
Add normalize_object_schema() which recursively walks the JSON Schema
tree and fills in missing "properties"/{} and "additionalProperties":false
on every object-type node. Existing values are not overwritten.
Call it in openai_tool_definition() before building the request payload
so both /chat/completions and /responses receive strict-validator-safe
schemas.
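A minimal sketch of the walk, assuming schemas are held as serde_json::Value (the real traversal may differ in detail):
    use serde_json::{json, Value};

    fn normalize_object_schema(schema: &mut Value) {
        match schema {
            Value::Object(obj) => {
                if obj.get("type").and_then(Value::as_str) == Some("object") {
                    // Fill in only what is missing; existing values stay untouched.
                    obj.entry("properties").or_insert_with(|| json!({}));
                    obj.entry("additionalProperties").or_insert(Value::Bool(false));
                }
                for (_key, child) in obj.iter_mut() {
                    normalize_object_schema(child); // reach nested object nodes
                }
            }
            Value::Array(items) => {
                for child in items.iter_mut() {
                    normalize_object_schema(child);
                }
            }
            _ => {}
        }
    }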
Add unit tests covering:
- bare object schema gets both fields injected
- nested object schemas are normalized recursively
- existing additionalProperties is not overwritten
Fixes the live repro where gpt-5.4 via OpenAI compat accepted connection
and routing but rejected every tool call with schema validation errors.
Closes ROADMAP #33.
web_search_extracts_and_filters_results set CLAWD_WEB_SEARCH_BASE_URL
without holding env_lock(), while the sibling test
web_search_handles_generic_links_and_invalid_base_url always held it.
Under parallel test execution the two tests interleave set_var/remove_var
calls, pointing the search client at the wrong mock server port and
causing assertion failures.
Fix: add env_lock() guard at the top of web_search_extracts_and_filters_results,
matching the serialization pattern already used by every other env-mutating
test in this module.
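The guard is one line at the top of the test (pattern from the sibling tests; mock_base_url stands in for the test's mock-server address):
    let _guard = env_lock();
    std::env::set_var("CLAWD_WEB_SEARCH_BASE_URL", mock_base_url); // now serialized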
Root cause of CI flake on run 24127551802.
Identified and fixed during dogfood session.
Add rust/crates/rusty-claude-cli/build.rs that:
- Captures git rev-parse --short HEAD at build time → GIT_SHA env
- Reads Cargo's TARGET env var → TARGET env
- Derives BUILD_DATE from SOURCE_DATE_EPOCH / BUILD_DATE env or
the current date via `date +%Y-%m-%d` fallback
- Registers rerun-if-changed on .git/HEAD and .git/refs so the SHA
stays fresh across commits
Update main.rs DEFAULT_DATE to pick up BUILD_DATE from option_env!()
instead of the hardcoded 2026-03-31 static string.
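Sketch of the build script (BUILD_DATE derivation elided; error handling simplified):
    use std::process::Command;

    fn main() {
        let sha = Command::new("git")
            .args(["rev-parse", "--short", "HEAD"])
            .output()
            .ok()
            .filter(|o| o.status.success())
            .map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string())
            .unwrap_or_else(|| "unknown".into());
        println!("cargo:rustc-env=GIT_SHA={sha}");
        println!("cargo:rustc-env=TARGET={}", std::env::var("TARGET").unwrap_or_default());
        // Keep the SHA fresh across commits.
        println!("cargo:rerun-if-changed=.git/HEAD");
        println!("cargo:rerun-if-changed=.git/refs");
    }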
Before: `claw --version` always showed Git SHA: unknown, Target: unknown,
Build date: 2026-03-31 in local builds.
After: e.g. Git SHA: 7f53d82, Target: aarch64-apple-darwin, Build date: 2026-04-08
Generated by droid (Kimi K2.5 Turbo) via acpx (wrote build.rs),
cleaned up by Jobdori (added BUILD_DATE step, updated main.rs const).
Co-Authored-By: Droid <noreply@factory.ai>
ProviderClient::from_model_with_anthropic_auth was dispatching every
ProviderKind::OpenAi match to OpenAiCompatConfig::openai(), which reads
OPENAI_API_KEY and points at api.openai.com. But DashScope models
(qwen-plus, qwen/qwen3-coder, etc.) also return ProviderKind::OpenAi
from detect_provider_kind because DashScope speaks the OpenAI wire
format. The metadata layer correctly identifies them as needing
DASHSCOPE_API_KEY and the DashScope compatible-mode endpoint, but that
metadata was being ignored at dispatch time.
Result: users running `claw --model qwen-plus` with DASHSCOPE_API_KEY
set would get a "missing OPENAI_API_KEY" error instead of being routed
to DashScope.
Fix: consult providers::metadata_for_model in the OpenAi dispatch arm
and pick dashscope() vs openai() based on metadata.auth_env.
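Sketch of the corrected arm inside from_model_with_anthropic_auth (helper names from this change; surrounding match structure simplified):
    ProviderKind::OpenAi => {
        let meta = providers::metadata_for_model(model);
        let config = if meta.auth_env == "DASHSCOPE_API_KEY" {
            OpenAiCompatConfig::dashscope()
        } else {
            OpenAiCompatConfig::openai()
        };
        ProviderClient::OpenAi(OpenAiCompatClient::from_env(config)?)
    }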
Adds a regression test asserting ProviderClient::from_model("qwen-plus")
builds with the DashScope base URL. Exposes a pub base_url() accessor
on OpenAiCompatClient so the test can verify the routing.
Authored by droid (Kimi K2.5 Turbo) via acpx, cleaned up by Jobdori
(removed unsafe blocks unnecessary under edition 2021, imported
ProviderClient from super, adopted EnvVarGuard pattern from
providers/mod.rs tests).
Co-Authored-By: Droid <noreply@factory.ai>
#21 (Resumed /status JSON parity gap): resolved by the broader
Resumed local-command JSON parity gap work tracked as #26. Re-verified
on main HEAD 8dc6580 — the regression test passes.
#29 (CLI provider dispatch hardcoded to Anthropic): landed at 8dc6580.
ApiProviderClient dispatch now routes correctly based on
detect_provider_kind. Original filing preserved as trace record.
The CLI entry point (build_runtime_with_plugin_state in main.rs)
was hardcoded to always instantiate AnthropicRuntimeClient with an
AnthropicClient, regardless of what detect_provider_kind(model)
returned. This meant `--model openai/gpt-4` with OPENAI_API_KEY
set and no ANTHROPIC_* vars still failed with "missing Anthropic
credentials" because the CLI never dispatched to the OpenAI-compat
backend that already exists in the api crate.
Root cause: AnthropicRuntimeClient.client was typed as
AnthropicClient (concrete) rather than ApiProviderClient (enum).
The api crate already had a ProviderClient enum with Anthropic /
Xai / OpenAi variants that dispatches correctly via
detect_provider_kind, plus a unified MessageStream enum that wraps
both anthropic::MessageStream and openai_compat::MessageStream
with the same next_event() -> StreamEvent interface. The CLI just
wasn't using it.
Changes (1 file, +59 -7):
- Import api::ProviderClient as ApiProviderClient
- Change AnthropicRuntimeClient.client from AnthropicClient to
ApiProviderClient
- In AnthropicRuntimeClient::new(), dispatch based on
detect_provider_kind(&resolved_model):
* Anthropic: build AnthropicClient directly with
resolve_cli_auth_source() + api::read_base_url() +
PromptCache (preserves ANTHROPIC_BASE_URL override for mock
test harness and the session-scoped prompt cache)
* xAI / OpenAi: delegate to
ApiProviderClient::from_model_with_anthropic_auth which routes
to OpenAiCompatClient::from_env with the matching config
(reads OPENAI_API_KEY/XAI_API_KEY/DASHSCOPE_API_KEY and their
BASE_URL overrides internally)
- Change push_prompt_cache_record to take &ApiProviderClient
(ProviderClient::take_last_prompt_cache_record returns None for
non-Anthropic variants, so the helper is a no-op on
OpenAI-compat providers without extra branching)
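The dispatch in AnthropicRuntimeClient::new() now looks roughly like this (build_anthropic_client stands in for the direct construction path described above):
    let client = match detect_provider_kind(&resolved_model) {
        ProviderKind::Anthropic => {
            // resolve_cli_auth_source() + read_base_url() + PromptCache
            ApiProviderClient::Anthropic(build_anthropic_client(&resolved_model)?)
        }
        // xAI / OpenAI / DashScope: env-based config chosen inside the api crate.
        _ => ApiProviderClient::from_model_with_anthropic_auth(&resolved_model)?,
    };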
What this unlocks for users:
claw --model openai/gpt-4.1-mini prompt 'hello' # OpenAI
claw --model grok-3 prompt 'hello' # xAI
claw --model qwen-plus prompt 'hello' # DashScope
OPENAI_BASE_URL=https://openrouter.ai/api/v1 \
claw --model openai/anthropic/claude-sonnet-4 prompt 'hello' # OpenRouter
All previously broken, now routed correctly by prefix.
Verification:
- cargo build --release -p rusty-claude-cli: clean
- cargo test --release -p rusty-claude-cli: 182 tests, 0 failures
(including compact_output tests that exercise the Anthropic mock)
- cargo fmt --all: clean
- cargo clippy --workspace: warnings-only (pre-existing)
- cargo test --release --workspace: all crates green except one
pre-existing race in runtime::config::tests (passes in isolation)
Source: live users nicma (1491342350960562277) and Jengro
(1491345009021030533) in #claw-code on 2026-04-08.
#28 error-copy improvements landed on ff1df4c but real users (nicma,
Jengro) hit `error: missing Anthropic credentials` within hours when
using `--model openai/gpt-4` with OPENAI_API_KEY set and all
ANTHROPIC_* env vars unset on main.
Traced root cause in build_runtime_with_plugin_state at line ~6244:
AnthropicRuntimeClient::new() is hardcoded. BuiltRuntime is
statically typed as ConversationRuntime<AnthropicRuntimeClient, ...>.
providers::detect_provider_kind() computes the right routing at the
metadata layer but the runtime client is never dispatched.
Files #29 with the detailed trace + a focused action plan:
DynamicApiClient enum wrapping Anthropic + OpenAiCompat variants,
retype BuiltRuntime, dispatch in build_runtime based on
detect_provider_kind, integration test with mock OpenAI-compat
server.
#28 is marked partial — the error-copy improvements are real and
stayed in, but the routing gap they were meant to cover is the
actual bug and needs #29 to land.
Two live users in #claw-code on 2026-04-08 hit adjacent auth confusion:
varleg set OPENAI_API_KEY for OpenRouter but prefix routing didn't
activate without openai/ model prefix, and stanley078852 put sk-ant-*
in ANTHROPIC_AUTH_TOKEN (Bearer path) instead of ANTHROPIC_API_KEY
(x-api-key path) and got 401 Invalid bearer token.
Changes:
1. ApiError::MissingCredentials gained optional hint field (error.rs)
2. anthropic_missing_credentials_hint() sniffs OPENAI/XAI/DASHSCOPE
env vars and suggests prefix routing when present (providers/mod.rs)
3. All 4 Anthropic auth paths wire the hint helper (anthropic.rs)
4. 401 + sk-ant-* in bearer token detected and hint appended
5. 'Which env var goes where' section added to USAGE.md
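Sketch of the sniffing helper (function name and env-var list from this change; the prefixes and hint copy are illustrative):
    fn anthropic_missing_credentials_hint() -> Option<String> {
        for (var, prefix) in [
            ("OPENAI_API_KEY", "openai/"),
            ("XAI_API_KEY", "xai/"),
            ("DASHSCOPE_API_KEY", "qwen/"),
        ] {
            if std::env::var(var).is_ok() {
                return Some(format!(
                    "{var} is set; try a '{prefix}'-prefixed model to route there"
                ));
            }
        }
        None
    }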
Tests: unit tests for all three improvements (no HTTP calls needed).
Workspace: all tests green, fmt clean, clippy warnings-only.
Source: live users varleg + stanley078852 in #claw-code 2026-04-08.
Co-authored-by: gaebal-gajae <gaebal-gajae@layofflabs.com>
Filed from live #claw-code dogfood on 2026-04-08 where two real users
hit adjacent auth confusion within minutes:
- varleg set OPENAI_API_KEY for OpenRouter but prefix routing didn't
win because the model name wasn't prefixed with openai/; unsetting
ANTHROPIC_API_KEY then hit MissingApiKey with no hint that the
OpenAI path was already configured
- stanley078852 put an sk-ant-* key in ANTHROPIC_AUTH_TOKEN instead
of ANTHROPIC_API_KEY, causing claw to send it as
Authorization: Bearer sk-ant-..., which Anthropic rejects at the
edge with 401 Invalid bearer token
Both fixes delivered live in #claw-code as direct replies, but the
pattern is structural: the error surface doesn't bridge HTTP-layer
symptoms back to env-var choice.
Action block spells out a single main-side PR with three
improvements: (a) MissingCredentials hint when an adjacent
provider's env var is already set, (b) 401-on-Anthropic hint when
bearer token starts with sk-ant-, (c) 'which env var goes where'
paragraph in both README matrices mapping sk-ant-* -> x-api-key and
OAuth access token -> Authorization: Bearer.
All three improvements are unit-testable against ApiError::fmt
output with no HTTP calls required.
The original ROADMAP #25 entry claimed the root cause was missing
exec bits on generated hook scripts. That was wrong — a chmod-only
fix (4f7b674) still failed CI. The actual bug was output_with_stdin
unconditionally propagating BrokenPipe from write_all when the child
exits before the parent finishes writing stdin.
Updated per gaebal-gajae's direction: actual fix, hygiene hardening,
and regression guard are now clearly separated. Added a meta-lesson
about 'Broken pipe' ambiguity in fork/exec paths so future investigators
don't cargo-cult the same wrong first theory.
Two bugs found in the plugin hook test harness that together caused
Linux CI to fail on 'hooks::tests::collects_and_runs_hooks_from_enabled_plugins'
with 'Broken pipe (os error 32)'. Three reproductions plus one rerun
failure on main today: 24120271422, 24120538408, 24121392171.
Root cause 1 (chmod, defense-in-depth): write_hook_plugin writes
pre.sh/post.sh/failure.sh via fs::write without setting the execute
bit. While the runtime hook runner invokes hooks via 'sh <path>' (so
the script file does not strictly need +x), missing exec perms can
cause subtle fork/exec races on Linux in edge cases.
Root cause 2 (the actual CI failure): output_with_stdin unconditionally
propagated write_all errors on the child's stdin pipe, including
BrokenPipe. A hook script that runs to completion in microseconds
(e.g. a one-line printf) can exit and close its stdin before the parent
finishes writing the JSON payload. Linux pipes surface this as EPIPE
immediately; macOS pipes happen to buffer the small payload, so the
race only shows on ubuntu CI runners. The parent's write_all raised
BrokenPipe, which output_with_stdin returned as Err, which run_command
classified as 'failed to start', making the test assertion fail.
Fix: (a) make_executable helper sets mode 0o755 via PermissionsExt on
each generated hook script, with a #[cfg(unix)] gate and a no-op
#[cfg(not(unix))] branch. (b) output_with_stdin now matches the
write_all result and swallows BrokenPipe specifically (the child still
ran; wait_with_output still captures stdout/stderr/exit code), while
propagating all other write errors. (c) New regression guard
generated_hook_scripts_are_executable under #[cfg(unix)] asserts each
generated .sh file has at least one execute bit set.
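The tolerant write in (b), roughly (surrounding method body simplified):
    use std::io::{ErrorKind, Write};

    if let Err(err) = stdin.write_all(&payload) {
        if err.kind() != ErrorKind::BrokenPipe {
            return Err(err); // real write failures still propagate
        }
        // BrokenPipe: the child already ran and exited; wait_with_output()
        // below still captures stdout/stderr and the exit status.
    }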
Surgical scope per gaebal-gajae's direction: chmod + pipe tolerance +
regression guard only. The deeper plugin-test sealing pass for ROADMAP
#25 + #27 stays in gaebal-gajae's OMX lane.
Verification:
- cargo test --release -p plugins → 35 passing, 0 failing
- cargo fmt -p plugins → clean
- cargo clippy -p plugins -- -D warnings → clean
Co-authored-by: gaebal-gajae <gaebal-gajae@layofflabs.com>
Filing per gaebal-gajae's status summary at message 1491322807026454579
in #clawcode-building-in-public, with corrected scope after re-running
`cargo test -p rusty-claude-cli` against main HEAD (79da4b8): the 11
deterministic failures only reproduce on dev/rust, not main, so this is
a dev/rust catchup item rather than a main regression.
Two-layered root cause documented:
1. dev/rust `parse_args` eagerly validates user plugin hook scripts
exist on disk before returning a CliAction
2. dev/rust test harness does not redirect $HOME/XDG_CONFIG_HOME to a
fixture (no `env_lock` equivalent — main has 30+ env_lock hits, dev
has zero)
Together they make dev/rust `cargo test -p rusty-claude-cli` fail on
any clean clone whose owner has a half-installed user plugin in
~/.claude/plugins/installed/. main has both the env_lock test isolation
AND the parse_args/hook-validation decoupling already; dev/rust is just
behind on the merge train.
Action block in #27 spells out backporting env_lock + the parse_args
decoupling so the next dev/rust release picks this up.
Linux CI keeps tripping over
`plugins::hooks::tests::collects_and_runs_hooks_from_enabled_plugins`
with `Broken pipe (os error 32)` when the hook runner tries to spawn a
child shell script that was written by `write_hook_plugin` without the
execute bit set. Fails on first attempt, passes on rerun (observed in CI
runs 24120271422 and 24120538408). Passes consistently on macOS.
Since issues are disabled on the repo, recording as ROADMAP backlog
item #25 in the Immediate Backlog P2 cluster next to the related plugin
lifecycle flake at #24. Action block spells out the chmod +755 fix in
`write_hook_plugin` plus the regression guard.
Concrete follow-up captured from today's dogfood session:
A single hung test (oversized-request preflight, 6 minutes per attempt
after `be561bf` silently swallowed count_tokens errors) crashed the
`cargo test --workspace` job before downstream crates could run, hiding
6 separate pre-existing CLI regressions until `8c6dfe5` + `5851f2d`
restored the fast-fail path.
Two new acceptance criteria for #9:
- per-test timeouts in CI so one hang cannot mask other failures
- distinguish `test.hung` from generic test failures in worker reports
- compact flag: was parsed then discarded (`compact: _`) instead of
being passed to `run_turn_with_output`; the hardcoded `false` meant
--compact never took effect
- piped stdin vs permission prompter: `read_piped_stdin()` consumed all
stdin before `CliPermissionPrompter::decide()` could read interactive
approval answers; now stdin is consumed as prompt context only when
permission mode is `DangerFullAccess` (fully unattended); see the
sketch after this list
- session resolver: `resolve_managed_session_path` and
`list_managed_sessions` now fall back to the pre-isolation flat
`.claw/sessions/` layout so legacy sessions remain accessible
- help assertion: match on a stable prefix, since `/session delete` was
added to the help output in batch 5
- prompt shorthand: fix a copy-paste error that changed the expected
prompt from "help me debug" to "$help overview"
- mock parity harness: filter captured requests to `/v1/messages` path
only, excluding count_tokens preflight calls added by `be561bf`
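Sketch of the stdin gating fix referenced above (enum name from the codebase; read_piped_stdin() returning an Option is assumed, and the flow is simplified):
    let piped_prompt_context = if permission_mode == PermissionMode::DangerFullAccess {
        read_piped_stdin() // fully unattended: safe to drain stdin
    } else {
        None // leave stdin for CliPermissionPrompter::decide()
    };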
All 6 failures were pre-existing but masked because `client_integration`
always failed first (fixed in 8c6dfe5).
Workspace: 810+ tests passing, 0 failing.
CI has been red since be561bf ('Use Anthropic count tokens for preflight')
because that commit replaced the free-function preflight_message_request
(byte-estimate guard) with an instance method that silently returns Ok on
any count_tokens failure:
    let counted_input_tokens = match self.count_tokens(request).await {
        Ok(count) => count,
        Err(_) => return Ok(()), // <-- silent bypass
    };
Two consequences:
1. client_integration::send_message_blocks_oversized_requests_before_the_http_call
has been FAILING on every CI run since be561bf. The mock server in that
test only has one HTTP response queued (a bare '{}' to satisfy the main
request), so the count_tokens POST parses into an empty body that fails
to deserialize into CountTokensResponse -> Err -> silent bypass -> the
oversized 600k-char request proceeds to the mock instead of being
rejected with ContextWindowExceeded as the test expects.
2. In production, any third-party Anthropic-compatible gateway that doesn't
implement /v1/messages/count_tokens (OpenRouter, Cloudflare AI Gateway,
etc.) would silently disable the preflight guard entirely, letting
oversized requests hit the upstream only to fail there with a provider-
side context-window error. This is exactly the 'opaque failure surface'
ROADMAP #22 asked us to avoid.
Fix: call the free-function super::preflight_message_request(request)? as
the first step in the instance method, before any network round-trip. This
guarantees the byte-estimate guard always fires, whether or not the remote
count_tokens endpoint is reachable. The count_tokens refinement still runs
afterward when available for more precise token counting, but it is now
strictly additive — it can only catch more cases, never silently skip the
guard.
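The instance method now opens with the guaranteed guard:
    // Byte-estimate guard first: no network round-trip, cannot be bypassed.
    super::preflight_message_request(request)?;
    let counted_input_tokens = match self.count_tokens(request).await {
        Ok(count) => count,
        Err(_) => return Ok(()), // refinement unavailable; the guard already ran
    };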
Test results:
- cargo test -p api --lib: 89 passed, 0 failed
- cargo test --release -p api (all test binaries): 118 passed, 0 failed
- cargo test --release -p api --test client_integration \
send_message_blocks_oversized_requests_before_the_http_call: passes
- cargo fmt --check: clean
This unblocks the Rust CI workflow which has been red on every push since
be561bf landed.
Document the qwen/ and qwen- prefix routing added in 3ac97e6. Users
in Discord #clawcode-get-help (web3g, Renan Klehm, matthewblott) kept
hitting ambient-credential misrouting because the docs only showed
the OPENAI_BASE_URL pattern without explaining that model-name prefix
wins over env-var presence.
Added:
- DashScope usage section with qwen/qwen-max and bare qwen-plus examples
- DashScope row in provider matrix table
- Reasoning model sanitization note (qwen-qwq, qwq-*, *-thinking)
- Explicit statement that model-name prefix wins over ambient creds
Users in Discord #clawcode-get-help (web3g) asked for Qwen 3.6 Plus via
native Alibaba DashScope API instead of OpenRouter, which has stricter
rate limits. This commit adds first-class routing for qwen/ and bare
qwen- prefixed model names.
Changes:
- DEFAULT_DASHSCOPE_BASE_URL constant: /compatible-mode/v1 endpoint
- OpenAiCompatConfig::dashscope() factory mirroring openai()/xai()
- DASHSCOPE_ENV_VARS + credential_env_vars() wiring
- metadata_for_model: qwen/ and qwen- prefix routes to DashScope with
auth_env=DASHSCOPE_API_KEY, reuses ProviderKind::OpenAi because
DashScope speaks the OpenAI REST shape
- is_reasoning_model: detect qwen-qwq, qwq-*, and *-thinking variants
so tuning params (temperature, top_p, etc.) get stripped before
payload assembly (same pattern as o1/o3/grok-3-mini)
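Routing and detection fragments, roughly (the metadata struct and its fields are assumed; the qwq check is simplified relative to the exact variant list):
    if model.starts_with("qwen/") || model.starts_with("qwen-") {
        return ProviderMetadata {
            kind: ProviderKind::OpenAi, // DashScope speaks the OpenAI shape
            auth_env: "DASHSCOPE_API_KEY",
            base_url: DEFAULT_DASHSCOPE_BASE_URL,
        };
    }

    fn is_reasoning_model(model: &str) -> bool {
        model.contains("qwq") || model.ends_with("-thinking")
        // plus the existing o1/o3/grok-3-mini checks
    }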
Tests added:
- providers::tests::qwen_prefix_routes_to_dashscope_not_anthropic
- openai_compat::tests::qwen_reasoning_variants_are_detected
89 api lib tests passing, 0 failing. cargo fmt --check: clean.
Closes the user-reported gap: 'use Qwen 3.6 Plus via Alibaba API
directly, not OpenRouter' without needing OPENAI_BASE_URL override
or unsetting ANTHROPIC_API_KEY.
build_runtime_runs_plugin_lifecycle_init_and_shutdown was the only test
that set/removed ANTHROPIC_API_KEY without holding the env_lock mutex.
Under parallel workspace execution, other tests racing on the same env
var could wipe the key mid-construction, causing a flaky credential error.
Root cause: process-wide env vars are shared mutable state. All other
tests that touch ANTHROPIC_API_KEY already use env_lock(). This test
was the only holdout.
Fix: add let _guard = env_lock(); at the top of the test.
openai_compat_integration.rs and client_integration.rs had MessageRequest
constructions without the new tuning param fields (temperature, top_p,
frequency_penalty, presence_penalty, stop) added in c667d47.
Added ..Default::default() to all 4 sites. cargo fmt applied.
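Pattern applied at each site (field names elided):
    let request = MessageRequest {
        model: model.clone(),
        messages,
        ..Default::default() // new tuning-param fields default to None
    };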
This was the root cause of CI red on main (E0063 compile error in
integration tests, not caught by --lib tests).