The Hyperwave LLM Grid exposes an Anthropic-compatible API at https://api.hyperwave-software.com.
This means any tool that speaks the Anthropic Messages API — including Claude Code — can use your
Hyperwave grid nodes instead of Anthropic's servers.
Your prompts are queued to grid nodes running Ollama locally. Each completed response deducts tokens from your balance; node operators earn tokens for processing your jobs.
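The accounting can be pictured as a simple transfer between balances. This is an illustrative sketch only; the field names and settlement logic here are assumptions, not the actual Hyperwave schema:

```python
# Illustrative token settlement: each completed job moves tokens from the
# requester's balance to the node operator's. (Names and logic are
# assumptions for illustration, not the real server-side implementation.)

def settle_job(balances: dict, requester: str, operator: str, tokens: int) -> dict:
    """Deduct `tokens` from the requester and credit them to the operator."""
    if balances.get(requester, 0) < tokens:
        raise ValueError("insufficient balance")
    balances[requester] -= tokens
    balances[operator] = balances.get(operator, 0) + tokens
    return balances

balances = {"alice": 10_000, "bob": 0}
settle_job(balances, requester="alice", operator="bob", tokens=153)
print(balances)  # {'alice': 9847, 'bob': 153}
```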
Your API key (generated in Settings) looks like this:

```
hw-23a29bce43d7c36fea86fd35d5474a1b38d36d270f22bf4342762eeb0be73f2e
```
Set two environment variables before running `claude`:

```shell
export ANTHROPIC_BASE_URL=https://api.hyperwave-software.com
export ANTHROPIC_API_KEY=hw-<your-key-here>
```
On Windows (PowerShell):

```powershell
$env:ANTHROPIC_BASE_URL = "https://api.hyperwave-software.com"
$env:ANTHROPIC_API_KEY = "hw-<your-key-here>"
```
Then launch Claude Code as normal:

```shell
claude
```
Claude Code will send all requests to the Hyperwave grid automatically.
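Because the proxy speaks the Anthropic Messages API, any HTTP client can call it directly, not just Claude Code. A minimal Python sketch, assuming the proxy exposes the standard Anthropic `/v1/messages` path and `x-api-key` header (neither is explicitly confirmed by this document):

```python
# Build (but don't send) a raw Anthropic-style Messages API request against
# the Hyperwave proxy. To actually send it, pass `req` to
# urllib.request.urlopen. The /v1/messages path and x-api-key header are
# assumed from Anthropic's standard API.
import json
import urllib.request

BASE_URL = "https://api.hyperwave-software.com"
API_KEY = "hw-<your-key-here>"  # placeholder, not a real key

body = {
    "model": "qwen3.5:9b",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Explain async/await in Go"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/v1/messages",
    data=json.dumps(body).encode(),
    headers={
        "x-api-key": API_KEY,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
)
print(req.full_url)  # https://api.hyperwave-software.com/v1/messages
```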
Pass the model name with `--model` or set it in your Claude Code configuration. The grid currently supports:
| Model alias | Grid model | Notes |
|---|---|---|
| qwen3.5:9b | Qwen 3.5 9B | Fast, good for code tasks |
| qwen3.6:35b | Qwen 3.6 35B | Higher quality, slower |
| claude-opus-* | Qwen 3.6 35B | Mapped automatically |
| claude-sonnet-* | Qwen 3.6 35B | Mapped automatically |
| (any other) | Qwen 3.5 9B | Default fallback |
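The routing rules in the table above can be sketched as a small function (illustrative only; the real mapping lives server-side in the proxy):

```python
# Mirror of the model-routing table: known aliases pass through, Claude
# model names map to the larger Qwen model, everything else falls back.

def route_model(requested: str) -> str:
    """Map a requested model name to the grid model that will serve it."""
    if requested in ("qwen3.5:9b", "qwen3.6:35b"):
        return requested
    if requested.startswith(("claude-opus-", "claude-sonnet-")):
        return "qwen3.6:35b"
    return "qwen3.5:9b"  # silent default fallback, no error raised

print(route_model("claude-opus-4"))  # qwen3.6:35b
print(route_model("gpt-4"))          # qwen3.5:9b
```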
Example one-shot query:

```shell
echo "Explain async/await in Go" | claude --print --model qwen3.5:9b
```
Your balance is shown in the header next to your username as ⚡ N tokens.
You can also query it directly:

```shell
curl -H "Authorization: Bearer <google-token>" \
     https://api.hyperwave-software.com/token-balance
```

Response:

```json
{ "balance": 9847 }
```
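For scripts, the same balance check can be done from Python. The endpoint and bearer token follow the curl example above, and the response shape matches the sample (a sketch, not an official client):

```python
# Query the /token-balance endpoint with the same Google bearer token used
# in the curl example. get_balance performs a network call when invoked.
import json
import urllib.request

def get_balance(token: str, base_url: str = "https://api.hyperwave-software.com") -> int:
    req = urllib.request.Request(
        f"{base_url}/token-balance",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["balance"]

# Parsing the sample response shown above:
print(json.loads('{ "balance": 9847 }')["balance"])  # 9847
```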
New accounts start with 10,000 free tokens. To top up, open Settings → Buy Tokens.
**Response takes a long time**
Grid nodes pick up jobs as they become available. If no node is online for the requested model the job will wait in the queue. Check the My Jobs → LLM Grid Jobs tab to see job status.
**Unauthorized error**
Verify that the key starts with `hw-` and that `ANTHROPIC_BASE_URL` points to `https://api.hyperwave-software.com` (no trailing slash). Regenerate the key from Settings if needed.
**Streaming stops mid-response**
The proxy sends keep-alive pings every 20 seconds to prevent Cloudflare from closing idle connections. If you see truncated output, ensure your HTTP client does not have a shorter read timeout.
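A quick way to reason about this: the client's read timeout must exceed the 20-second ping interval, with some slack. A hypothetical helper (the 20-second value comes from the text above; the slack is an arbitrary illustration):

```python
# A read timeout shorter than the keep-alive interval will fire between
# pings and truncate the stream; leave headroom beyond 20 s.

PING_INTERVAL_S = 20

def read_timeout_is_safe(read_timeout_s: float, slack_s: float = 10.0) -> bool:
    """True if the timeout leaves room for at least one keep-alive ping."""
    return read_timeout_s >= PING_INTERVAL_S + slack_s

print(read_timeout_is_safe(15))  # False: shorter than the ping interval
print(read_timeout_is_safe(60))  # True
```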
**Model not found**
Any unrecognised model name is silently routed to `qwen3.5:9b`. There is no error — you will simply receive output from the default model.
Install the Hyperwave Desktop Client, register your machine, and leave it running. When your node processes someone else's LLM job, the tokens deducted from the requester are credited to your account. See the My Nodes page for pairing instructions.