Daily research digest (2026-02-18)
Today’s update: efficiency improved materially in the last 24 hours while guardrails stayed tight. New signal comes from fresh ops telemetry; strategic guidance from prior research still holds.
Ops findings (new)
- 24h usage: 859,198 tokens (835,999 input, 23,199 output).
- 24h estimated spend: $0.2786 (MiniMax M2.5) or $0.5573 (M2.5-highspeed).
- Day-over-day change: usage dropped from 2,157,696 to 859,198 tokens (-60.18%).
- Quota posture: YELLOW mode; 99% left in 5h window, 50% left in weekly window.
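The day-over-day figure above can be reproduced with a quick arithmetic sketch (the token counts are taken from the digest; in practice they would come from ops/token-cost-history.jsonl, whose schema is not shown here):

```python
# Sketch: reproduce the day-over-day usage change reported above.
# Token counts are the figures cited in this digest, hard-coded for illustration.
prev_tokens = 2_157_696  # prior 24h window
curr_tokens = 859_198    # current 24h window

change_pct = (curr_tokens - prev_tokens) / prev_tokens * 100
print(f"day-over-day: {change_pct:.2f}%")  # → day-over-day: -60.18%
```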
Pricing math (impact estimate)
If the same current 24h token mix ran on GPT-5.2 list rates: (0.835999M × $1.75) + (0.023199M × $14.00) = $1.7878/day.
Versus MiniMax M2.5 at $0.2786/day, that is $1.5092/day higher. At a flat run-rate, this implies about $53.63/month vs $8.36/month (≈ $45.28/month difference).
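The pricing comparison above works out as follows (rates and token counts are the ones cited in this digest; the 30-day flat run-rate is the same simplifying assumption used above):

```python
# Sketch of the pricing impact estimate above.
input_m, output_m = 0.835999, 0.023199          # 24h token mix, in millions

gpt52_day = input_m * 1.75 + output_m * 14.00   # GPT-5.2 list rates, $/M tokens
m25_day = 0.2786                                # observed MiniMax M2.5 spend

print(f"GPT-5.2:  ${gpt52_day:.4f}/day")        # → GPT-5.2:  $1.7878/day
print(f"delta:    ${gpt52_day - m25_day:.4f}/day")
print(f"monthly:  ${gpt52_day * 30:.2f} vs ${m25_day * 30:.2f}")
```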
Routing + agent operations status
- Default routing remains on openai-codex/gpt-5.3-codex under validation hold.
- GPT-5.2 remains blocked as default pending explicit validation + approval.
- Fallback order is unchanged: codex → local qwen2.5:7b → local llama3.2:3b.
- Concurrency guardrail remains conservative (max 2 workers; explicit approval required to scale up).
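The fallback chain above can be sketched as a simple first-available router. The model names are from this digest, but the `route` function and its `available` argument are illustrative assumptions, not the actual routing implementation:

```python
# Hypothetical sketch of the fallback order: codex → local qwen2.5:7b → local llama3.2:3b.
FALLBACK_ORDER = [
    "openai-codex/gpt-5.3-codex",  # default (under validation hold)
    "local/qwen2.5:7b",
    "local/llama3.2:3b",
]
MAX_WORKERS = 2  # concurrency guardrail; scaling up requires explicit approval

def route(available: set[str]) -> str:
    """Return the first model in the fallback chain that is currently available."""
    for model in FALLBACK_ORDER:
        if model in available:
            return model
    raise RuntimeError("no model in the fallback chain is available")

# If codex is unavailable, traffic falls through to the first local model:
print(route({"local/qwen2.5:7b", "local/llama3.2:3b"}))  # → local/qwen2.5:7b
```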
Research continuity
No newer long-form research memo landed today in research/; existing thesis remains valid:
keep low-cost models as the default lane, escalate only for high-ambiguity tasks, and preserve local
fallbacks for resilience.
Sources used in this digest:
ops/token-cost-latest.json
ops/token-cost-history.jsonl
ops/quota-status.json
ops/model-routing-policy.json
ops/imessage-command-policy.md
research/openclaw-unlimited-usage-report.md