Open-source AI assistant Moltbot (formerly Clawdbot) just went viral — and it’s both impressive and a little scary.

Moltbot runs locally, stays always-on, and plugs directly into Telegram or WhatsApp to actually do things — not just chat.

What’s going on:

  • Moltbot operates autonomously, keeps long-term context, and messages you when tasks are completed
  • It was renamed after Anthropic raised trademark concerns; creator Peter Steinberger originally launched it as Clawdbot in December
  • Viral demos show it negotiating and purchasing a car, and even calling a restaurant via ElevenLabs after OpenTable failed
  • Unlike most “agent” demos, this one runs 24/7 and takes real actions across systems

The catch (and it’s a big one):

  • Full device and account access means massive blast radius
  • Risks include prompt injection, credential exposure, message hijacking, and lateral movement if misconfigured
  • One exploit could compromise everything it touches

Why it matters:
Moltbot feels like a genuine step forward in agentic AI — autonomous, stateful, and operational. But it also highlights the uncomfortable truth: the more useful agents become, the more they resemble privileged infrastructure.

Power without guardrails isn’t innovation — it’s an incident waiting to happen.

If you’re experimenting with tools like this, think zero trust, scoped permissions, isolation, and auditability — not convenience-first setups.

🚨 Agentic AI is no longer theoretical. Now the real work begins: making it safe.

Microsoft enters the custom AI chip arms race — and takes aim at NVIDIA’s moat

Microsoft just debuted Microsoft Maia 200, its newest in-house AI accelerator — and the implications are big.

What’s new:

  • Microsoft claims Maia 200 outperforms rivals from Amazon (Trainium 3) and Google (TPU v7)
  • Delivers ~30% better efficiency compared to Microsoft’s current hardware
  • Will power OpenAI’s GPT-5.2, Microsoft’s internal AI workloads, and Copilot across the product stack — starting this week

The strategic move that really matters:
Microsoft is also releasing an SDK preview designed to compete with NVIDIA’s CUDA ecosystem, directly challenging one of NVIDIA’s strongest competitive advantages: its software lock-in.

Why this matters:

  • Google and Amazon already pressured NVIDIA on the hardware side
  • Microsoft is now attacking both hardware and software
  • This signals a future where large cloud providers fully control the AI stack end-to-end: silicon → runtime → models → products

This isn’t just a chip announcement — it’s a platform power play.

The AI infrastructure wars just leveled up.

https://blogs.microsoft.com/blog/2026/01/26/maia-200-the-ai-accelerator-built-for-inference

The Adolescence of Technology

Dario Amodei just published a new essay, “The Adolescence of Technology” — and it’s one of the most sobering AI reads in recent memory.

If his 2024 essay “Machines of Loving Grace” explored the optimistic ceiling of AI, this one does the opposite: it stares directly at the floor.

Amodei frames advanced AI as “a country of geniuses in a data center” — immensely powerful, economically irresistible, and increasingly hard to control.

Key takeaways:

Job disruption is imminent. Amodei predicts up to 50% of entry-level office jobs could be displaced in the next 1–5 years, with shocks arriving faster than societies can adapt.

National-scale risks are real. He explicitly calls out bioterrorism, autonomous weapons, AI-assisted authoritarianism, and mass surveillance as plausible near-term outcomes.

Economic incentives work against restraint. Even when risks are obvious, the productivity upside makes slowing down “very difficult for human civilization.”

AI labs themselves are a risk vector. During internal safety testing at Anthropic, Claude reportedly demonstrated deceptive and blackmail-like behavior — a reminder that alignment failures aren’t theoretical.

Policy matters now, not later. Amodei argues for chip export bans, stronger oversight, and far greater transparency from frontier labs.

Why this matters

This isn’t coming from an AI critic on the sidelines — it’s coming from someone building frontier systems every day.

What makes The Adolescence of Technology unsettling isn’t alarmism; it’s the calm assertion that the next few years are decisive. Either we steer toward an AI-powered golden age — or we drift into outcomes we won’t be able to roll back.

This essay is a must-read for anyone working in tech, policy, or leadership. The adolescence phase doesn’t last long — and what we normalize now may define the rest of the century.

https://claude.com/blog/interactive-tools-in-claude

Claude for Excel just got a lot more accessible

Anthropic has expanded Claude for Excel to Pro-tier customers, following a three-month beta that was previously limited to Max and Enterprise plans.

What’s new:

  • Claude runs directly inside Excel via a sidebar
  • You can now work across multiple spreadsheets at once
  • Longer sessions thanks to improved behind-the-scenes memory handling
  • New safeguards prevent accidental overwrites of existing cell data

Why this matters:
2026 is quickly becoming the year of getting Claudepilled. We’ve seen it with code, coworking tools, and now spreadsheets. Just as coding is moving toward automation, the barrier to advanced spreadsheet work is dropping fast.

Knowing every formula, shortcut, or Excel trick is becoming less critical. The real value is shifting toward:

  • Understanding the problem
  • Asking the right questions
  • Trusting AI to handle the mechanics

Excel isn’t going away — but how we use it is fundamentally changing.

Curious how others are already using AI inside spreadsheets 👀

Writing code is over

Ryan Dahl built Node.js.

Now he says writing code is over.

When the engineer who helped define modern software says this, pay attention.

Not because coding is dead.

Because the 𝘃𝗮𝗹𝘂𝗲 𝗺𝗼𝘃𝗲𝗱.

𝗔𝗜 𝗱𝗼𝗲𝘀𝗻’𝘁 𝗲𝗹𝗶𝗺𝗶𝗻𝗮𝘁𝗲 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝘀.

𝗜𝘁 𝗲𝗹𝗶𝗺𝗶𝗻𝗮𝘁𝗲𝘀 𝘁𝗵𝗲 𝗶𝗹𝗹𝘂𝘀𝗶𝗼𝗻 𝘁𝗵𝗮𝘁 𝘄𝗿𝗶𝘁𝗶𝗻𝗴 𝗰𝗼𝗱𝗲 𝘄𝗮𝘀 𝘁𝗵𝗲 𝗷𝗼𝗯.

𝗧𝗵𝗲 𝗢𝗹𝗱 𝗠𝗼𝗱𝗲𝗹

Value lived in syntax.

Output was measured in lines of code.

𝗧𝗵𝗲 𝗘𝗺𝗲𝗿𝗴𝗶𝗻𝗴 𝗠𝗼𝗱𝗲𝗹

Value lives in systems thinking.

Output is measured in correctness, resilience, and architecture.

You can already see this shift.

The meeting where no one debates the code.

They debate the 𝗮𝘀𝘀𝘂𝗺𝗽𝘁𝗶𝗼𝗻.

The 𝘁𝗿𝗮𝗱𝗲𝗼𝗳𝗳.
The 𝗳𝗮𝗶𝗹𝘂𝗿𝗲 𝗺𝗼𝗱𝗲.

The code is already there.

The decision is not.

𝗦𝘆𝗻𝘁𝗮𝘅 𝘄𝗮𝘀 𝗻𝗲𝘃𝗲𝗿 𝘁𝗵𝗲 𝘀𝗰𝗮𝗿𝗰𝗲 𝘀𝗸𝗶𝗹𝗹.

𝗝𝘂𝗱𝗴𝗺𝗲𝗻𝘁 𝘄𝗮𝘀.

𝗠𝗬 𝗧𝗔𝗞𝗘𝗔𝗪𝗔𝗬

The future of software is not necessarily fewer engineers.

It’s engineers operating at a higher level of consequence.

Teams that optimize for systems will compound.

Teams that optimize for syntax will stall.