
Analysts Identify Critical Vulnerabilities in OpenClaw

A CertiK report has highlighted risks of full system control through AI agents.


Security specialists at CertiK have published an analysis of OpenClaw, an open-source framework for autonomous AI agents that has quickly gained over 300,000 stars on GitHub. The study covers the period up to mid-March 2026 and identifies a number of vulnerabilities affecting the system’s core mechanisms.

These issues allow unauthorized access, data extraction, and command execution through the agent on a user's behalf.

Breakdown of the Identified Vulnerabilities

Over several months, CertiK experts identified more than 280 security warnings and over 100 vulnerabilities in OpenClaw.

“The project’s architecture was originally designed to operate in a trusted environment, which led to problems when scaling and deploying it in more open infrastructures,” the report states.

In some cases, the system allowed attackers to bypass access controls and take control of agent functions, including executing commands and accessing sensitive data.

Additional risks are linked to extensions: researchers found malicious modules and counterfeit packages capable of manipulating the agent’s behavior.

Misconfigurations during deployment further increase risk. CertiK identified more than 135,000 deployed instances of OpenClaw, some of which operate with excessive privileges and disabled security mechanisms.
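The kind of misconfiguration audit described above can be sketched as a simple check over a deployment's settings. This is an illustrative sketch only: the configuration keys below are assumptions for the example, not OpenClaw's actual configuration schema.

```python
# Hypothetical audit for the misconfigurations the report describes:
# flag deployments running with excessive privileges or with security
# mechanisms disabled. All key names here are illustrative assumptions.

RISKY_SETTINGS = {
    "auth_enabled": (False, "authentication disabled"),
    "sandbox_enabled": (False, "command sandbox disabled"),
    "run_as_root": (True, "agent runs with root privileges"),
    "allow_remote_admin": (True, "remote admin interface exposed"),
}

def audit_deployment(config: dict) -> list[str]:
    """Return a warning for each risky value found in a deployment config."""
    warnings = []
    for key, (bad_value, message) in RISKY_SETTINGS.items():
        if config.get(key) == bad_value:
            warnings.append(f"{key}: {message}")
    return warnings

# Example: an instance with authentication off and root privileges
# produces two warnings.
example = {"auth_enabled": False, "sandbox_enabled": True, "run_as_root": True}
print(audit_deployment(example))
```

In practice such checks would run against each discovered instance; with over 135,000 deployments found, even a small fraction of misconfigured ones represents a large attack surface.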

Implications for the Market

The report highlights growing risks in the autonomous AI agent segment, which is already used to manage services and interact with external platforms. Without strict safeguards, such systems gain access to critical functions and can become a point of compromise.


This post is for informational purposes only and does not constitute advertising or investment advice. Please do your own research before making any decisions.
