Perplexity Introduces “Personal Computer” — A Local AI Agent System Powered by Mac mini

Artificial intelligence agents are rapidly evolving from cloud tools into persistent assistants that can operate directly on personal hardware. In a notable step toward this future, Perplexity AI has unveiled Personal Computer, a dedicated local version of its AI agent system designed to run on an Apple Mac mini.

The move positions Perplexity as a security-focused alternative to experimental agent systems such as OpenClaw, which recently gained attention for enabling autonomous computer control.

The concept signals a broader shift: instead of AI agents living purely in the cloud, they may soon operate continuously on small, always-on devices sitting on our desks.


A Local AI Agent With Persistent Access

Perplexity’s Personal Computer is built around the company’s Comet assistant, an AI agent capable of interacting directly with a machine’s files, applications, and active sessions.

By running the agent on a dedicated Mac mini, the system maintains persistent local access to the environment. Users can manage and interact with the agent remotely while it operates on the device.

This setup enables the AI to perform tasks such as:

  • Managing files and folders
  • Interacting with applications
  • Monitoring and executing workflows
  • Maintaining long-running sessions
  • Acting as an always-available automation assistant

Essentially, the Mac mini becomes a permanent AI workstation, continuously running the agent in the background.


Designed as a Safer Alternative to Autonomous Agents

While agentic AI systems promise powerful automation, they also raise concerns around security and control.

Perplexity is positioning its Personal Computer as a safer alternative to more experimental frameworks like OpenClaw by incorporating several safeguards.

These include:

  • Tracked activity logs for visibility into agent behavior
  • User sign-off requirements for sensitive actions
  • A kill switch to immediately stop the system
  • Controlled access to applications and files

This emphasis on control is critical as AI agents gain the ability to manipulate real systems rather than simply generate text or code.
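The safeguard pattern described above — tracked activity logs, user sign-off for sensitive actions, and a kill switch — can be sketched as a thin wrapper around agent actions. This is a minimal illustration of the general pattern; all names here are hypothetical and not Perplexity's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class GuardedAgent:
    """Hypothetical sketch of an agent runtime with the safeguards
    described above: an activity log, sign-off for sensitive actions,
    and a kill switch. Purely illustrative."""
    approve: Callable[[str], bool]            # user sign-off callback
    log: list = field(default_factory=list)   # tracked activity log
    killed: bool = False                      # kill-switch flag

    def kill(self) -> None:
        """Immediately stop the agent from taking further actions."""
        self.killed = True

    def run(self, name: str, action: Callable[[], object],
            sensitive: bool = False) -> Optional[object]:
        if self.killed:
            self.log.append((name, "blocked: kill switch"))
            return None
        if sensitive and not self.approve(name):
            self.log.append((name, "denied by user"))
            return None
        result = action()
        self.log.append((name, "executed"))
        return result

# Example: the user approves nothing, then trips the kill switch.
agent = GuardedAgent(approve=lambda name: False)
agent.run("list_files", lambda: ["a.txt"])               # runs, is logged
agent.run("delete_files", lambda: None, sensitive=True)  # denied
agent.kill()
agent.run("list_files", lambda: ["a.txt"])               # blocked
```

The key design point is that every action, allowed or not, leaves an entry in the log, giving the user the visibility into agent behavior that the article describes.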


Built on Perplexity’s Multi-Model Agent Platform

Personal Computer builds on the broader Perplexity Computer system launched in late February.

That platform orchestrates multiple AI models simultaneously, allowing the agent to select the best model for a given task.
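Per-task model selection of this kind is often implemented as a simple router over a preference table. The sketch below shows the general idea; the model names and task categories are invented for illustration and are not Perplexity's:

```python
# Hypothetical sketch of per-task model routing. The preference table
# and model names are illustrative placeholders, not Perplexity's.
TASK_PREFERENCES = {
    "code":      ["model-coder", "model-general"],
    "research":  ["model-long-context", "model-general"],
    "summarize": ["model-fast", "model-general"],
}

def select_model(task_type: str, available: set[str]) -> str:
    """Pick the highest-preference model that is currently available,
    falling back to a general-purpose default."""
    for candidate in TASK_PREFERENCES.get(task_type, []):
        if candidate in available:
            return candidate
    return "model-general"

# Example: the coding specialist is offline, so the router falls back.
print(select_model("code", available={"model-general", "model-fast"}))
# model-general
```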

The enterprise version includes:

  • Access to 20 AI models
  • Integration with 400+ applications
  • Workflow automation via Slack integrations
  • Cross-application task orchestration

This architecture reflects a growing industry trend: AI agents acting as coordinators across multiple tools, services, and models.


Early Access for Power Users and Enterprises

Access to Personal Computer is currently limited.

Early availability is being offered to Perplexity Max subscribers, who can join a waitlist to participate in the first rollout.

Perplexity says it plans to provide dedicated support and resources to this initial group of users as the system evolves.

At the same time, enterprise deployments are already underway; organizations can use the system to automate internal workflows and connect AI agents directly into operational tools.


The Mac mini Is Becoming the Default AI Agent Hardware

Ironically, the hardware enabling this new generation of agents is not an AI-specific device but a compact desktop computer.

The Mac mini is emerging as the preferred host for local AI agents because it offers:

  • Always-on reliability
  • Strong CPU and GPU performance
  • Quiet operation and low power usage
  • A small physical footprint

Between OpenClaw experiments, Perplexity’s Personal Computer, and a wave of similar projects, the Mac mini is quietly becoming the default hardware platform for local AI agents.


Why This Matters

AI assistants are evolving into autonomous digital operators capable of interacting directly with computers and software environments.

Running these agents locally — instead of entirely in the cloud — offers several advantages:

  • Greater privacy and control
  • Persistent always-on automation
  • Reduced reliance on external infrastructure
  • Better integration with personal and enterprise systems

If the current trajectory continues, it is likely that within a few years many professionals will have a dedicated AI agent running continuously on a small local device, managing workflows, coordinating tasks, and acting as a personal digital operator.

What began as experimental automation may soon become a standard computing model.

https://www.perplexity.ai/personal-computer-waitlist

Yann LeCun’s $1B Bet on “World Models”: A Different Future for AI

The artificial intelligence landscape has been dominated by large language models (LLMs) over the past few years. Systems like OpenAI’s GPT models and research from Google DeepMind have shaped how the public and enterprises think about AI. But one of the field’s most influential researchers is taking a very different path.

Yann LeCun—the Turing Award-winning scientist and former chief scientist at Meta’s AI research division—has launched a new startup called Advanced Machine Intelligence (AMI). The company has emerged with an eye-catching $1.03 billion seed round, immediately valuing it at $3.5 billion.

This is not just another AI startup chasing the LLM wave. Instead, AMI represents LeCun’s long-held belief that the future of AI lies in “world models.”


A Break from Meta and the LLM Wave

After more than a decade leading AI research at Meta's Facebook AI Research (FAIR), LeCun stepped away in November, reportedly telling Mark Zuckerberg that he could build more capable AI systems faster and more cheaply outside the company.

LeCun has been one of the most vocal critics of the industry’s heavy focus on large language models. While he acknowledges their usefulness, he argues that LLMs alone cannot achieve human-level intelligence because they primarily learn patterns from text rather than understanding how the real world works.

His new company is designed to pursue a different research direction—one that attempts to give AI systems a deeper understanding of the physical and causal structure of reality.


The World Model Approach

AMI’s central focus is building AI systems that learn “world models.”

A world model is an internal representation that allows an AI system to:

  • Understand how the physical world behaves
  • Predict what might happen next
  • Maintain persistent memory over time
  • Plan actions based on simulated outcomes

Instead of simply predicting the next word in a sentence (the core task of LLMs), world models attempt to simulate environments and reason about them.
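The contrast between next-word prediction and planning over a world model can be shown with a toy example. The sketch below uses a trivially simple hand-written 1-D transition model (in a real system the model would be learned); it illustrates only the general idea of choosing actions by simulating outcomes, and has nothing to do with AMI's actual architecture:

```python
# Toy illustration of the world-model idea: instead of predicting the
# next token, the system simulates outcomes and plans over them.

def world_model(state: int, action: int) -> int:
    """Stands in for a learned transition model. Here the 'world' is a
    position on a line, and an action moves it by -1, 0, or +1."""
    return state + action

def plan(state: int, goal: int, horizon: int = 5) -> list[int]:
    """Greedy planning: at each step, simulate every action with the
    world model and pick the one whose predicted outcome lands closest
    to the goal."""
    actions = []
    for _ in range(horizon):
        if state == goal:
            break
        best = min((-1, 0, 1),
                   key=lambda a: abs(world_model(state, a) - goal))
        actions.append(best)
        state = world_model(state, best)
    return actions

print(plan(state=0, goal=3))  # [1, 1, 1]
```

Even in this toy form, the loop captures the ingredients listed above: a transition model that predicts what happens next, and a planner that reasons over simulated outcomes rather than over text.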

If successful, this approach could dramatically expand AI’s usefulness in real-world applications such as:

  • Manufacturing and industrial automation
  • Robotics and autonomous systems
  • Wearables and smart devices
  • Healthcare diagnostics and monitoring

These are domains where physical reasoning and long-term memory matter more than language fluency.


A Massive Seed Round and Powerful Backers

The scale of AMI’s funding signals strong investor confidence in LeCun’s vision. The $1.03B seed round includes backing from several major players in technology and venture capital, including:

  • Nvidia
  • Samsung
  • Bezos Expeditions
  • Eric Schmidt
  • Mark Cuban

With a $3.5 billion valuation at inception, AMI immediately enters the ranks of the most well-funded AI startups in the world.

Such a large early investment reflects a broader realization among investors: the next wave of AI breakthroughs may not come from scaling language models alone.


Why Paris?

Interestingly, LeCun chose Paris as AMI’s headquarters instead of Silicon Valley.

He has joked that Silicon Valley has become “LLM-pilled,” suggesting that too much of the AI ecosystem is currently focused on language models.

AMI will operate as a global research network, with additional hubs in:

  • New York City
  • Montreal
  • Singapore

This reflects LeCun’s long history with international AI research communities, particularly in Europe and Canada.


Why This Matters

LeCun’s move signals a potential shift in the AI research narrative.

For the past several years, the industry has largely converged around scaling LLMs—bigger models, more data, and more compute. But the limitations of this approach are becoming clearer, especially when it comes to reasoning, memory, and interaction with the physical world.

If AMI’s world-model strategy succeeds, it could represent a new architecture for AI systems, potentially enabling machines that:

  • Understand cause and effect
  • Plan actions over long time horizons
  • Interact more naturally with real environments

In other words, the future of AI may not be just better chatbots—but systems that understand and simulate reality itself.


Final Thoughts

With over $1 billion in funding, a Turing Award-winning founder, and a bold technical vision, Advanced Machine Intelligence is one of the most ambitious AI bets in recent years.

The question now is whether LeCun’s long-standing argument is correct: that true machine intelligence requires world models, not just language models.

If he’s right, this startup could shape the next era of artificial intelligence.

https://amilabs.xyz

Anthropic Challenges Federal Blacklist: A Defining Moment for AI Policy in the U.S.

Anthropic has taken the unusual step of suing the U.S. government in two separate federal courts, pushing back against a Pentagon designation that labeled the company a “supply chain risk” and a White House directive instructing federal agencies to drop the use of Claude AI systems.

The company argues the actions amount to retaliation for its public positions on AI safety — particularly around the use of artificial intelligence in military and surveillance contexts.

What Happened

Anthropic filed lawsuits seeking to overturn the Pentagon’s “supply chain risk” designation and block enforcement of the federal directive requiring agencies to cut ties with Claude.

According to the filings, the company believes the designation was misused. The supply chain risk framework was originally created to protect government systems from foreign adversaries, not to penalize U.S. companies for policy disagreements.

Anthropic claims the government’s actions violate constitutional protections by retaliating against the company for speaking publicly about limits on the military use of artificial intelligence.

Support from the AI Community

The case has already drawn attention from across the AI industry.

More than 30 employees from OpenAI and Google reportedly signed a legal brief supporting Anthropic’s challenge. Their filing warns that blacklisting domestic AI companies over policy positions could undermine the United States’ leadership in artificial intelligence.

For many researchers, the concern goes beyond a single company. The broader issue is whether AI labs can safely speak about risks and ethical guardrails without fear of government retaliation.

The Legal Argument

Anthropic’s lawsuits make two central claims:

  1. Misuse of the Supply Chain Risk Label
    The company argues that the label was intended to address national security threats from foreign entities, not domestic firms engaged in policy debate.
  2. Violation of Free Speech Protections
    The filings contend that government agencies retaliated against the company for advocating restrictions on AI use in weapons systems and surveillance.

If proven, these claims could raise serious constitutional questions about how federal agencies regulate emerging technologies.

Why This Case Matters

Regardless of where one stands on AI’s role in warfare or surveillance, the dispute touches on a fundamental issue: Can the U.S. government punish a domestic technology company for publicly advocating safety policies?

The outcome could shape the relationship between AI companies and federal regulators for years to come.

A ruling in Anthropic’s favor might reinforce the ability of companies and researchers to advocate for safety standards without political consequences. A ruling against the company could give the government broader authority to restrict vendors it sees as misaligned with national security priorities.

Either way, the case will likely set a precedent that every major AI lab — from startups to Big Tech — will be watching closely.

The Bigger Picture

Artificial intelligence is quickly becoming a strategic technology at the center of economic competition, national security, and global influence.

As governments and AI companies navigate this rapidly evolving landscape, conflicts like this highlight a growing tension: balancing national security interests with open debate about the risks and governance of powerful technologies.

The Anthropic lawsuit may ultimately become one of the first major legal tests of how those boundaries are defined.

https://www.courtlistener.com/docket/72379655/1/anthropic-pbc-v-us-department-of-war

OpenAI Releases GPT-5.4 — A Major Leap in Reasoning, Coding, and Desktop AI

OpenAI has released GPT-5.4, its newest flagship AI model, bringing major improvements across reasoning, coding, scientific tasks, mathematics, and real-world desktop interactions. According to OpenAI VP of Science Kevin Weil, the new release represents “our best model ever.”

The launch comes just two days after GPT-5.3 Instant was introduced as the default chat model. GPT-5.4 is currently available as GPT-5.4 Thinking for Plus, Team, and Pro users.

Strong Performance on Real-World Tasks

One of the most notable benchmarks for GPT-5.4 is its performance on OSWorld-V, a test designed to evaluate how effectively AI agents can navigate and complete tasks on a desktop environment.

GPT-5.4 scored 75%, outperforming the human baseline of 72.4% and delivering double the performance of GPT-5.2 on the same benchmark.

This improvement signals a major step forward in AI systems capable of interacting with real software environments rather than just generating text.

Larger Context and Deeper Reasoning

The new model introduces several technical upgrades designed for more complex workflows:

  • Up to 1 million tokens of context
  • A new “x-high reasoning effort” mode
  • Improved planning and long-running task execution

These capabilities allow GPT-5.4-based agents to plan and execute multi-step tasks that may run for hours, opening the door for more sophisticated automation across research, software development, and knowledge work.

Knowledge-Work Benchmark Gains

GPT-5.4 also demonstrated strong results on GDPval, a benchmark designed to measure AI performance across 44 real-world knowledge-worker roles.

The model matched or outperformed professionals 83% of the time, a significant improvement from the 71% score achieved by GPT-5.2.

This jump highlights continued progress toward AI systems capable of assisting — and in some cases competing with — human expertise across a wide range of professional tasks.

Why This Release Matters

The release comes at an important moment for OpenAI following a week of mixed sentiment around the AI industry. GPT-5.4 appears to represent a strong response, delivering meaningful gains across reasoning, automation, and real-world task execution.

Perhaps the most striking signal of confidence came from OpenAI researcher Noam Brown, who stated:

“We see no wall.”

If that assessment holds true, GPT-5.4 may mark another step toward increasingly capable agentic AI systems — models that do more than generate answers and instead actively plan, navigate software, and execute complex workflows.

As AI systems continue expanding into real desktop environments, the line between tool and autonomous digital worker may become increasingly thin.

https://openai.com/index/introducing-gpt-5-4

Anthropic vs OpenAI: Pentagon AI Deal Sparks Public Rift

The AI rivalry between Dario Amodei and Sam Altman just escalated publicly — and the dispute centers on the growing role of artificial intelligence in U.S. defense.

According to a 1,600-word internal memo obtained by The Information, Amodei sharply criticized OpenAI and its recent Pentagon partnership, describing the situation as “maybe 20% real and 80% safety theater.” The memo pulls back the curtain on tensions between the two leading AI labs and highlights how competition, politics, and national security are becoming increasingly intertwined in the AI race.


What Triggered the Dispute

The controversy began when the United States Department of Defense reportedly labeled Anthropic a potential “supply chain risk.”

Shortly after, OpenAI moved forward with its own defense-related agreement with the Pentagon under similar terms.

Amodei’s memo suggests Anthropic believes the process was inconsistent and politically influenced. He also pushed back against the narrative that Anthropic failed to cooperate with defense officials.


Direct Criticism of OpenAI Leadership

Amodei didn’t stop at policy disagreements. The memo included unusually direct criticism of OpenAI leadership.

He accused Sam Altman of “gaslighting” competitors and pointed to a reported $25 million political donation by Greg Brockman connected to former president Donald Trump, contrasting it with Anthropic’s refusal to offer what he described as “dictator-style praise.”

According to Amodei, OpenAI has repeatedly tried to portray Anthropic as:

  • Uncooperative
  • Difficult to work with
  • Less flexible in negotiations

He characterized this messaging as part of a broader pattern he says he has observed from Altman over time.


A Softer Tone Toward the Pentagon

Interestingly, just days after the memo, Amodei publicly softened his stance toward the Pentagon.

He acknowledged that Anthropic and the Department of Defense “have much more in common than we have differences.”

This suggests the dispute may be less about whether AI should support national security — and more about how those partnerships are structured and communicated.


Why This Matters

The conflict reveals several deeper trends shaping the AI industry:

1. Defense AI is becoming a strategic battleground
Major AI labs increasingly see government and defense contracts as critical to scale and influence.

2. Competition between frontier labs is intensifying
What was once a technical rivalry is now becoming personal and political.

3. Trust and governance remain unresolved issues
As governments integrate AI into national security, questions about safety, transparency, and corporate influence will only grow.


The Bigger Picture

The public tone of Amodei’s memo suggests long-standing tensions dating back to his departure from OpenAI in 2020, when he went on to co-found Anthropic.

Between this memo, earlier disagreements over AI safety frameworks, and high-profile marketing moves by major labs, the frontier AI rivalry is no longer just technical — it’s becoming geopolitical.

And with the Pentagon now entering the picture, the stakes for AI leadership have never been higher.

https://www.theinformation.com/articles/read-anthropic-ceos-memo-attacking-openais-mendacious-pentagon-announcement