My Thoughts on OpenClaw: A Glimpse into the Agentic Future
I spent three days testing OpenClaw, a powerful new autonomous AI assistant. From updating servers to writing blog posts, here is why it genuinely feels like magic, and why its staggering token usage is a major red flag.
I’ve spent the last three days immersed in OpenClaw, and honestly, it has been an eye-opening experience. If you’ve been wondering whether these new AI assistants are just hype or genuinely useful, I can confidently say it’s the latter. OpenClaw operates as an always-on assistant; you assign it a task, and it just goes off into the background and gets things done.
To test its capabilities, I handed it an SSH key with a straightforward brief: update the host, pull the latest Docker images, and configure fail2ban. It went away, did exactly what I asked, and came back with a comprehensive explanation of its actions. Its tenacity in problem-solving is impressive, too. If something is broken, OpenClaw will relentlessly chip away at the issue until it eventually fixes it, even if it takes a little time.
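For reference, the fail2ban piece of that brief boils down to dropping a small jail file on the host, along these lines (the values here are illustrative, not what OpenClaw actually wrote):

```ini
# /etc/fail2ban/jail.local — minimal sshd jail
[sshd]
enabled  = true
port     = ssh
maxretry = 5
bantime  = 3600
```

A restart of the fail2ban service picks the jail up, after which repeated failed SSH logins get the offending IP banned for an hour.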
The system’s memory is fantastic. It genuinely feels as though it retains knowledge: it remembers specific details about my setup, cross-references them, and makes highly relevant suggestions later in the conversation. The cron job scheduling worked flawlessly, and I loved how easy it was to swap between different language models.
At times, using it feels like absolute magic. I fed it my Ghost blog API keys, and it automatically connected, taught itself how to query the API, and followed my instructions to draft a post and audit the site. Watching that unfold was a truly incredible moment.
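For the curious, reading from a Ghost site is just a keyed GET request against the Content API; a minimal sketch is below. The site URL and key are placeholders, and this only covers the read-only Content API — drafting a post, as OpenClaw did, goes through the JWT-authenticated Admin API instead.

```python
import urllib.parse

def ghost_posts_url(site: str, api_key: str, limit: int = 5) -> str:
    """Build a Ghost Content API URL for the most recent posts.

    The Content API authenticates with a key passed as a query
    parameter; `site` and `api_key` here are placeholders.
    """
    query = urllib.parse.urlencode({"key": api_key, "limit": limit})
    return f"{site}/ghost/api/content/posts/?{query}"

url = ghost_posts_url("https://example.com", "YOUR_CONTENT_API_KEY")
```

Fetching that URL (with any HTTP client) returns a JSON object whose `posts` array holds the titles, slugs, and HTML of the latest posts.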
However, it is not without its flaws, and there are some significant red flags. First and foremost: the token usage is staggering. I was running it through Anthropic’s OAuth sign-in on my Pro account, and OpenClaw burned through tokens at an astonishing rate. I hit my usage limits incredibly quickly; even the simplest tasks seemed to chew through a ridiculous amount of compute. What makes this more frustrating is the lack of transparency: you never quite know what it is doing in the background, and even when idling, it slowly eats its way through your token allowance.
The setup process is also undeniably fiddly. I ended up running OpenClaw inside a Docker container on a virtual machine. Initially, I wrestled with some Telegram connection issues, though these were eventually resolved, and the system proved stable under Docker. The gateway, however, was unpredictable at times, leading to a few frustrating issues with pairing keys.
Because of the sheer cost of running the API, I decided to test OpenClaw with a local model via LM Studio. Surprisingly, this was incredibly easy to set up; in fact, OpenClaw practically configured it by itself with minimal instruction. I managed to get OpenAI’s GPT-OSS 20B running locally. Unsurprisingly, it was painfully slow, and the tool-calling integration didn’t work nearly as well as it could have. To be fair, that was likely a bottleneck in my hardware rather than a fault of the software.
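Part of why the swap was so painless: LM Studio exposes an OpenAI-compatible endpoint (by default on localhost port 1234), so pointing an agent at it mostly means changing the base URL. A minimal sketch of the request shape follows — the port and the model identifier are assumptions for illustration, not pulled from my actual config.

```python
import json

# LM Studio's local server speaks the OpenAI chat-completions format;
# the default address and the model name below are assumptions.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "openai/gpt-oss-20b") -> str:
    """Serialise a chat-completions payload for the local endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return json.dumps(payload)

body = build_chat_request("Summarise the server maintenance you ran today.")
```

POSTing that body to `LMSTUDIO_URL` with a `Content-Type: application/json` header gets you a completion back from whatever model LM Studio has loaded, no API key required.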
There is also the elephant in the room: security. The system is incredibly powerful, which inherently makes it a security risk. I knew this going into the experiment, but having seen what it can do, I certainly wouldn't trust it with access to my main system.
Overall, it’s been a fascinating three days that gave me a brilliant glimpse into how useful agentic AI can be. I suspect this is just the beginning, and it won't be long before we see products like OpenClaw baked directly into the offerings of the large frontier AI labs.
If I could afford to keep it running, I absolutely would.