Clawdbot is an open-source, self-hosted AI assistant built to work where you already talk: messaging apps. Instead of another tab to babysit, you message it like a teammate.
At a high level, people use it for two things: continuity and leverage. Continuity means persistent memory and long-lived context, so you don’t re-explain the same stuff every day. Leverage means proactive tasks and automation – up to browser-based actions like filling forms, if you enable that.
Where it fits best:
Self-hosting is the default if you want full control. You decide where it runs, what it can access, and what gets stored. Your data, your tokens, your logs, your settings. And if something goes wrong, you can troubleshoot and fix it immediately.
Running Clawdbot at home works right up until you need it to work every day. A VPS is the fully predictable option: it stays on, it has a public IP, and it doesn’t share power with your coffee machine.
With a home box, you’ll eventually deal with:
With a VPS, you get the boring benefits that make tools usable:
And yes, cost matters. A Mac mini is a $700+ purchase before you count your time, backups, and maintenance. A VPS that matches Clawdbot’s recommended resources is $21/month on is*hosting, so the hardware alone takes roughly 33 months to pay for itself before electricity and upkeep. If you want a tool that runs 24/7, renting the right amount of compute is usually the smarter deal.
Clawdbot can run on 2 GB RAM for basic chat. If your goal is “respond in chats and keep some memory,” that can be enough.
But if you’re planning to use automation seriously – especially browser automation and skills – 4 GB+ is the sane baseline. The moment you add background tasks, logs, and anything headless-browser-shaped, memory becomes the first bottleneck.
A simple way to think about it:
And the practical baseline we see working well:
That’s why our Medium plan is the default starting point: 3 vCPU / 4 GB RAM / NVMe / 1 IPv4. It’s a balanced config that stays responsive and leaves room to grow without overpaying.
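Not sure where a box you already have lands? A quick sanity check on any Ubuntu-style server is to look at the vCPU count and memory headroom while the assistant is busy. This is a generic sketch, nothing Clawdbot-specific, and the 2 GB swap size is only an example of a safety buffer, not a substitute for the 4 GB baseline:

# How many vCPUs and how much RAM the server actually has
nproc
free -h

# Watch memory while the assistant is working; sort processes by RAM use
top -o %MEM

# Optional: add 2 GB of swap as a buffer on a 4 GB box
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile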
If you want speed, use the ready image and the onboarding flow.
Here’s the fast path:
Once you see a successful reply inside your chat app, you’re operational. Everything after that is optional tuning.
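If that first reply doesn’t arrive, a quick server-side check usually tells you why. The exact process and service names depend on how the image packages Clawdbot, so treat the names below as placeholders and check the image documentation:

# Is the assistant process actually running? (process name is an assumption)
pgrep -af clawdbot

# If the image runs it as a systemd service, check its recent logs
# ("clawdbot" as the unit name is a guess)
sudo journalctl -u clawdbot -n 50 --no-pager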
The ready image exists for a simple reason: you shouldn’t spend your evening rebuilding the same base stack from scratch.
What changes is not the end result, but the path:
A reliable workflow looks like this:
That “one messenger first” rule saves time. It keeps debugging linear. If something fails, you know exactly which step introduced it.
If you’re deploying on a generic Ubuntu VPS (or you prefer upstream setup), use the official installer and wizard:
# Download and run the official installer
curl -fsSL https://clawd.bot/install.sh | bash
# Reload the shell so the new command is on your PATH
exec bash
# Launch the interactive setup wizard
clawdbot setup --wizard
Use this route when:
The trade-off is simple: the installer saves you from building the stack by hand, but you still spend a bit more time on setup than with the ready image. If your goal is “fastest time to first message,” the ready image is still the clean win.
The biggest rule: don’t expose admin/control interfaces directly to the public internet. If you need remote access, use an SSH tunnel, or put a reverse proxy in front with real authentication. A random port is not a lock.
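For example, instead of opening the admin port to the world, keep it bound to localhost on the VPS and reach it through an SSH tunnel from your laptop. The port and host below are placeholders; use whatever port the interface actually listens on:

# Forward local port 8080 to the same port on the VPS, without exposing it publicly
ssh -N -L 8080:127.0.0.1:8080 user@your-vps-ip
# Then open http://localhost:8080 locally; close the tunnel with Ctrl+C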
Also:
Clawdbot is built for people who want an assistant that’s always on, living in chat, with memory and optional automation. A VPS makes it stable and predictable, and the ready image makes it fast to get running.
If you want the quick path: start with is*hosting’s Medium plan, deploy Ubuntu 22 + Clawdbot (OpenClaw), SSH in, and run clawdbot onboard. You’ll have a self-hosted assistant responding in minutes.
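In practice, the whole quick path fits on a couple of lines once the image is deployed. The IP below is a placeholder for your VPS address, and the login user depends on the image:

# Connect to the freshly deployed VPS (replace with your server’s IP)
ssh root@203.0.113.10

# Start the onboarding flow and connect your first messenger
clawdbot onboard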