157 TOPS. 16GB. No compromises.
The Claw Orin NX puts serious inference muscle on your desk. The Jetson Orin NX 16GB delivers up to 157 TOPS in Super Mode — over 2x the compute of the standard Claw — with 16GB of unified LPDDR5 memory, 1024 CUDA cores, an 8-core ARM CPU, and 102 GB/s bandwidth. Run 7B–13B parameter models comfortably — Llama 3, Mistral, Gemma, Phi-4 — with headroom for longer contexts and concurrent requests. For teams, power users, and workloads that push past what small models can handle, the NX is the machine that doesn't ask you to compromise.
Everything else stays the same: OpenClaw agents, 28 MCP tools, ThreadWeaver chat, LED feedback, encrypted storage, always-on at under 20W idle. The same plug-and-play experience, the same $2/month operating cost, the same guarantee that your data never leaves your desk.
The math still works.
Even at the NX price point, the payback math holds. If your team burns $200–400/month on cloud AI, the NX breaks even in 6–10 months and saves you money every month after. Add the models you couldn't justify running in the cloud — the experimental ones, the fine-tuned ones, the ones you'd run more often if tokens were free — and the value compounds beyond simple cost replacement. When inference is free, you experiment more. When you experiment more, you build better.
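The break-even arithmetic above can be sketched in a few lines. This is a minimal illustration only: the `device_price` below is a placeholder, not the actual NX price, and the $2/month figure is the operating cost quoted above.

```python
def break_even_months(device_price, monthly_cloud_spend, monthly_operating_cost=2):
    """Months until the one-time device cost is recovered by dropping cloud spend.

    Net monthly savings = what you stop paying the cloud provider,
    minus the device's own operating cost ($2/month electricity).
    """
    monthly_savings = monthly_cloud_spend - monthly_operating_cost
    return device_price / monthly_savings

# Placeholder price for illustration -- substitute the real NX price.
for spend in (200, 400):
    months = break_even_months(device_price=2000, monthly_cloud_spend=spend)
    print(f"${spend}/mo cloud spend -> break-even in {months:.1f} months")
```

With a $2,000 placeholder price this lands between roughly 5 and 10 months across the $200–400/month range; every month after break-even, the net savings accrue directly.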