Moltbot runs locally, stays always-on, and plugs directly into Telegram or WhatsApp to actually do things — not just chat.
What’s going on:
- Moltbot operates autonomously, keeps long-term context, and messages you when tasks are completed
- Creator Peter Steinberger originally launched it as Clawdbot in December; it was renamed after Anthropic raised trademark concerns
- Viral demos show it negotiating and purchasing a car, and even calling a restaurant via ElevenLabs after OpenTable failed
- Unlike most “agent” demos, this one runs 24/7 and takes real actions across systems
The catch (and it’s a big one):
- Full device and account access means massive blast radius
- Risks include prompt injection, credential exposure, message hijacking, and lateral movement if misconfigured
- One exploit could compromise everything it touches
Why it matters:
Moltbot feels like a genuine step forward in agentic AI — autonomous, stateful, and operational. But it also highlights the uncomfortable truth: the more useful agents become, the more they resemble privileged infrastructure.
Power without guardrails isn’t innovation — it’s an incident waiting to happen.
If you’re experimenting with tools like this, think zero trust, scoped permissions, isolation, and auditability — not convenience-first setups.
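To make "scoped permissions and auditability" concrete, here is a minimal sketch of a deny-by-default gate an agent's tool calls could pass through. All names here (ToolGate, allow, call) are hypothetical illustrations, not Moltbot's actual API:

```python
# Hypothetical sketch: a deny-by-default permission gate for agent tool calls.
# Names and structure are illustrative only, not taken from Moltbot.
import fnmatch
from datetime import datetime, timezone

class ToolGate:
    """Every tool call must match an explicit allow rule,
    and every decision is appended to an audit log."""

    def __init__(self):
        self.rules = []      # (tool_name, argument_glob) allow rules
        self.audit_log = []  # append-only record of every decision

    def allow(self, tool, arg_pattern="*"):
        self.rules.append((tool, arg_pattern))

    def call(self, tool, arg, fn):
        permitted = any(
            tool == t and fnmatch.fnmatch(arg, pat)
            for t, pat in self.rules
        )
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "arg": arg,
            "decision": "allow" if permitted else "deny",
        })
        if not permitted:
            raise PermissionError(f"{tool}({arg!r}) is out of scope")
        return fn(arg)

gate = ToolGate()
# Grant a narrow scope (one directory), not the whole filesystem.
gate.allow("read_file", "/home/agent/notes/*")

print(gate.call("read_file", "/home/agent/notes/todo.txt", lambda p: f"read {p}"))
try:
    gate.call("read_file", "/etc/passwd", lambda p: f"read {p}")
except PermissionError as e:
    print("blocked:", e)
```

The design choice worth copying is that denial is the default: anything a prompt-injected instruction invents simply fails to match a rule, and the audit log records the attempt.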
🚨 Agentic AI is no longer theoretical. Now the real work begins: making it safe.
