
Almost everyone has heard of OpenClaw, the late-2025 agentic AI release that takes artificial intelligence one step further. Rather than leaving the user to turn answers given in a dialogue box into real-life action, this latest iteration promises to take the final leap of executing tasks autonomously.
Users have raved about the productivity gains, likening the tool to employing a tireless secretary. Equally, there are horror stories of “lobsters” – the nickname for OpenClaw bots – running wild: sending disastrous emails, initiating bogus financial transactions and upending the lives of their users in ways previously imagined only in science fiction.
The future, therefore, is here. There is no arresting the relentless march of technology – it is for us humans to adapt. The law, as part of the broader societal construct, should do so as well.
Taking a somewhat language-centric philosophical approach, three questions immediately spring to a lawyer’s mind. One, why does society so readily anthropomorphise this type of AI by describing it as an “agent”? Two, does the law, as it currently stands, map to that description? Three, if it does not, should it?
The first question is easy. Humans inevitably reason by analogy to known concepts. Borrowing from Yuval Noah Harari, AI has, by its very nature, hacked the operating system of human civilisation – language – which invites us to map its role onto a human one. The human role that maps most accurately onto agentic AI is that of the “agent”, because it shares the two characteristics of “agency” as understood in common parlance.
First, its core role is that of a representative: it acts on behalf of another – the “principal”. Second, it is obliged to follow the principal’s instructions yet is not a complete puppet – it retains a degree of autonomy and discretion.