Artificial intelligence agents are coming, and OpenClaw may be the clearest signal yet that they have arrived. Early this year, the open-source project rose from relative obscurity to surpass Linux on GitHub’s all-time star leaderboard within months.
OpenClaw is now the most-starred non-aggregator software project—that is, a tool that generates original value and content rather than filtering and displaying existing information—on GitHub, the platform where developers create, store, manage, and share code.
OpenClaw’s unprecedented productivity is what made it go viral. Where earlier AI agents responded to prompts and then stopped, OpenClaw agents, once granted broader permissions by the user, run persistently, act proactively, and produce real-world consequences. That architecture has forced technologists, venture capital investors, and lawyers to rethink what “agent” means in the AI age.
Trusted Action Engines
For most of 2025, even advanced “agentic AI” systems remained confined inside browser sandboxes. They functioned primarily as sophisticated content generators and reactive tool callers—producing text, summaries, or suggestions on demand but rarely executing sustained, autonomous real-world actions that users felt safe delegating.
OpenClaw changes that equation. For example, users may connect OpenClaw to messaging apps, email, calendars, and application programming interfaces via standardized protocols, and simply say “manage my inbox for the week” or “negotiate this vendor contract.”
The system then operates persistently and proactively. The OpenClaw agents wake themselves on a timer—a “heartbeat” scheduler—to check their objectives and act, often while the human user sleeps.
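The mechanics of such a “heartbeat” scheduler can be illustrated with a minimal sketch. This is not OpenClaw’s actual implementation; the `Objective` class, the `act` callback, and the loop structure are assumptions made for illustration only:

```python
import time
from dataclasses import dataclass


@dataclass
class Objective:
    """A hypothetical standing goal the agent checks on each wake-up."""
    description: str
    done: bool = False


def heartbeat_loop(objectives, act, interval_seconds=3600, max_ticks=None):
    """Wake on a fixed interval, review open objectives, and act on each.

    `act` is a callback that attempts progress on one objective and
    returns True once that objective is complete. The loop runs until
    all objectives are done or `max_ticks` wake-ups have elapsed.
    """
    tick = 0
    while max_ticks is None or tick < max_ticks:
        for obj in objectives:
            if not obj.done:
                obj.done = act(obj)  # agent decides how to make progress
        tick += 1
        if all(o.done for o in objectives):
            break
        time.sleep(interval_seconds)  # sleep until the next heartbeat
```

The key design point is that the trigger is the timer itself, not a user prompt: the agent initiates its own work cycles, which is what lets it act while the human sleeps.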
Legal Agency Applies
To understand what OpenClaw has actually built, it helps to reach back into a body of law that predates software by centuries. In the traditional civil-law concept of agency, a principal authorizes an agent to pursue a defined goal on their behalf, using the agent’s own means and judgment, within an authorized scope. The agent’s acts create legal consequences that bind the principal. Three elements are essential: delegation of a goal, autonomy over method, and real-world consequences that flow back to the authorizing party.
What is striking is how precisely OpenClaw satisfies all three. The user sets a goal—typically via WhatsApp, Telegram, or another familiar messaging interface. Unlike a chatbot, the agent does not wait for user prompting; it is designed for 24/7 autonomous operation, pursuing that goal using whatever combination of tools, memory, and sub-agents it judges appropriate. The consequences are binding: emails sent, files modified, transactions executed—all in the principal’s name.
Leapfrogging Into Trustee Territory
From an AI technology perspective, this isn’t a fundamental model breakthrough. OpenClaw used components that already existed, but its agents hit a new capability threshold by combining them to give users a seamless way to get tasks done autonomously. Most of this iterative improvement comes from giving AI agents more access, mostly in three high-risk categories: identity and credentials, transactional data, and the local system.
If OpenClaw’s architecture maps cleanly onto traditional agency, its most novel features map onto something more specific and legally demanding: the trustee. A trustee is a special class of agent, distinguished by standing authority (the duty runs continuously, not just when instructed), discretionary judgment, fiduciary loyalty to a beneficiary, and accountability to parties who may never have directly authorized the relationship. OpenClaw’s architecture exhibits trustee-like attributes in each of these dimensions.
Widening Accountability Gap
The agentic AI model’s draw is obvious. For individuals, it promises a “personal operating system” that quietly handles the administrative fatigue of modern life. For organizations, it offers a 24/7 digital proxy workforce that can monitor, triage, and act across functions—compliance, customer service, treasury—with minimal intervention.
But can you trust them as your trustees? Unlike a true trustee, the AI agent has no legal personhood and can’t be sued in its own name; it holds no legal title to assets; its “fiduciary duty” is ultimately a system prompt rather than a binding legal obligation enforceable by courts (for now).
Complex questions will arise as to which existing touch points—Know Your Customer, informed consent, contract signing—require a human in the loop, and which can be satisfied by a certified, auditable agent action log.
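One technique that could underpin such an auditable action log is a hash chain, where each entry embeds the hash of the previous one so later tampering is detectable. The sketch below is a generic illustration of that idea, not any system OpenClaw or a regulator has specified; the class name, entry fields, and API are invented for this example:

```python
import hashlib
import json
import time


class ActionLog:
    """Append-only, hash-chained log of agent actions (illustrative only).

    Each entry records the SHA-256 hash of the previous entry, so
    altering or deleting any past entry breaks verification, which is
    the property an auditor would rely on.
    """

    GENESIS = "0" * 64  # placeholder hash before the first entry

    def __init__(self):
        self.entries = []

    def record(self, action, detail):
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"action": action, "detail": detail,
                "ts": time.time(), "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        """Recompute every hash; return False on any break in the chain."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("action", "detail", "ts", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

Whether a log like this could legally substitute for a human signature at touch points such as contract signing is exactly the open question the article raises; the code shows only that the technical half, tamper-evident recording, is straightforward.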
The iPhone 1.0 Moment
The answers will shape whether the 2026 agents become a trusted pillar of the digital economy. When Apple Inc. launched the original iPhone in 2007, the device worked beautifully, but the legal and economic infrastructure around it was almost entirely absent. The questions it raised—about app-developer liability, platform responsibility, data privacy, and user consent—took multiple years and considerable regulatory improvisation to begin answering.
AI agent platforms now face a similar era.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Winston Ma is the executive director of the Global Public Investment Funds Forum and adjunct professor at NYU School of Law.