
Moltbot and the Rise of the Shadow Web: From Chat Agents to Autonomous Execution

Investigation into the Moltbot reverse proxy vulnerabilities and how default trust settings lead to the leakage of sensitive corporate data and API tokens.


The open-source AI agent known as Moltbot has emerged as one of the most popular and talked-about tools in the AI space this year. The tool launched as Clawdbot before rebranding as Moltbot. Despite a string of Moltbot security issues, it continued to captivate users, and it has since rebranded once more, this time as OpenClaw.

The AI agent has seen rapid growth thanks to its advanced capabilities and heavy attention on social media. But the real driver of its rise is the growing interest in AI agents that can complete tasks, make decisions, and act autonomously on behalf of their users, without constant human intervention and guidance.

Still known primarily as Moltbot among its users, the tool started out as a developer assistant whose role was to provide conversational help. It quickly expanded, however, adding deeper system access and task execution. Soon after, it embraced open distribution and local deployment, allowing users to run it on their own devices.

The tool's main purpose shifted from chatting to doing: no longer just responding to prompts, but executing actions.

Architectural Anatomy


Moltbot was designed with a local-first approach, meaning it runs on the user's machine instead of relying on company-owned servers to process requests and tasks. If you use it, everything happens on your device, from executing code to accessing files and automating the tasks you give it.

Many users prefer things this way: they gain greater control over their own data, get faster responses, and no longer depend on someone else's infrastructure. For those who want independence, the local-first design delivers it. You control everything, from the environment and configuration to the runtime behavior.

Compare that to cloud-based AI systems like ChatGPT. While revolutionary when it first emerged, ChatGPT operates on centralized infrastructure: all of its processing takes place on servers managed by OpenAI. Someone else is in charge of security, monitoring, and access governance, and the user exchanges control for managed reliability and enterprise oversight.

Some may prefer that model, but the important point is that the difference between the two approaches is structural. Cloud AI is centralized, while local-first AI distributes both risk and oversight. Local deployment is better for privacy in theory, even in light of the Moltbot security concerns. In practice, all responsibility in the local-first model shifts entirely to the user, and there is no safety net in case of misconfiguration.

Difference between cloud-based AI assistants and Moltbot

Cloud-based AI assistants such as ChatGPT run on centralized servers, meaning the provider handles processing, storage, and security. Moltbot, by contrast, follows a local-first model: all of these processes - and the accompanying responsibilities - happen on the user's machine, and the agent interacts only with local systems and files.

Privacy Is Under Threat


As mentioned previously, Moltbot and its successor, OpenClaw, were created to be privacy-focused, only to become known for Moltbot security risks instead. The initial logic was to run everything locally so that users' data never leaves their machine, removing the risk of third parties collecting that data and using it without consent.

The catch with local-first systems, however, is that they typically require API integrations to reach full functionality. The required credentials are stored on the user's system, and without strict safeguards in place, they can leave the user and their data vulnerable. Instead of the provider holding the keys, they now sit on the endpoint device, where malware or even simple configuration mistakes can expose them.

As a result, a tool meant to avoid exposing data ends up doing just that: potential attackers no longer need to go up against a professionally secured provider, but can simply scan for misconfigured devices instead.

Cleartext Credential Risks

OpenClaw, the newest incarnation of Moltbot, stores API keys and service tokens in configuration files on the user's machine, typically located under ~/.clawdbot. The problem is that these credentials are not encrypted; they are saved in cleartext, making them extremely vulnerable if the machine is compromised.

As a result, the credentials can be extracted, and once an attacker has them, they can be reused until the user manually revokes them. The more services the AI agent manages, the more risk it carries for the user, and the more valuable the credentials become to a bad actor.
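As a rough illustration, a short script can flag the two problems described here: a credentials file that other local users can read, and key-like strings stored in cleartext. This is a minimal sketch, not a real audit tool; the patterns are illustrative assumptions, and the ~/.clawdbot location comes from the description above.

```python
import os
import re
import stat

# Illustrative patterns that often indicate cleartext credentials
# (assumptions for this sketch, not an exhaustive list)
SECRET_PATTERNS = [
    re.compile(r"api[_-]?key\s*[:=]", re.IGNORECASE),
    re.compile(r"token\s*[:=]", re.IGNORECASE),
    re.compile(r"sk-[A-Za-z0-9]{16,}"),  # common provider key prefix
]

def audit_config(path: str) -> list[str]:
    """Return warnings for a local agent config file, e.g. one under ~/.clawdbot."""
    warnings = []
    mode = os.stat(path).st_mode
    # Flag files readable by group or others; secrets should be mode 0600
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        warnings.append(f"{path}: readable by other users (chmod 600 recommended)")
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                warnings.append(f"{path}:{lineno}: possible cleartext credential")
    return warnings
```

Running such a check periodically at least surfaces the exposure; the real fix is storing keys in an encrypted secret store rather than plain files.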

The Backup File Loophole

Another risk comes from data that sticks around in .bak backup files even after the user tries to delete it. If that data includes sensitive configuration files, a backup can retain old API keys and other secrets long after the primary file is changed or removed from the device, giving the user a false sense of security.

To add to the problem, backup files are rarely audited, so such an error can go unnoticed, giving attackers an opportunity to obtain old credentials without ever needing to find active ones.
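A sweep for stale backups is easy to sketch. The version below assumes the .bak extension mentioned above and a simple keyword check; the marker strings are placeholders for whatever your own config format uses.

```python
from pathlib import Path

def find_stale_backups(directory: str,
                       markers: tuple[str, ...] = ("api_key", "token")) -> list[Path]:
    """Scan a directory tree for .bak files that may still hold old credentials."""
    hits = []
    for bak in Path(directory).rglob("*.bak"):
        try:
            text = bak.read_text(errors="ignore").lower()
        except OSError:
            continue  # unreadable file: skip it rather than crash the sweep
        if any(m in text for m in markers):
            hits.append(bak)
    return hits
```

Any hits should be securely deleted (or re-encrypted), and the corresponding live keys rotated, since the backup may have been readable for some time.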

The “Shadow Web” Protocols

Because of these Moltbot security issues, many users' interfaces ended up exposed to the public as local-first AI agents first grew popular. Reverse proxy setups published web dashboards, remote shells, and even admin panels, all because speed was given higher priority than security. In the end, over 1,200 publicly reachable admin interfaces connected to Moltbot and OpenClaw ended up online.

This, in turn, gave rise to the concept of the Shadow Web. These are not dark web marketplaces, but ordinary HTTP endpoints exposed by accident. Poorly protected and technically sitting outside formal infrastructure governance, they nevertheless control automation agents, many of which were granted high privileges on their local devices.

The most common cause was reverse proxy misconfiguration. Tunneling tools and services are designed to forward traffic from public URLs to local agent ports, but if authentication is set up improperly or missing altogether, the proxy forwards every request without verifying identity. The result was interfaces that were publicly exposed yet fully functional.

Simply put, these misconfigurations amounted to backdoors: anyone who discovered them could give instructions to the agent, which would obey without question, unaware that they came from an unauthorized source.
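One way to check your own setup is to probe the published endpoint with no credentials attached and see whether it answers. A minimal sketch, assuming the URL is whatever your tunnel or proxy publishes; a 401/403 response means the proxy is at least demanding identity, which is what you want to see.

```python
import urllib.error
import urllib.request

def responds_without_auth(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint serves content to a fully anonymous request."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200  # answered with no credentials attached
    except urllib.error.HTTPError:
        # 401, 403, etc.: the endpoint at least challenges for identity
        return False
    except (urllib.error.URLError, OSError):
        # unreachable: nothing exposed at this URL
        return False
```

If this returns True for a dashboard URL reachable from outside your network, the interface is effectively public and should be taken down or put behind authentication immediately.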

Supply Chain & Vibe-Coding


OpenClaw's popularity grew quickly, bringing new developers to the tool, but not discipline in how its development proceeds. It now has over 300 unvetted contributors driving its growth and evolution, which has sharply increased code velocity. That speed, however, comes at the cost of review depth and quality.

This is a common problem in large, loosely coordinated open projects, where not all contributions receive formal security audits. Known as vibe-coding, this approach adds further risk, especially when contributors value speed and experimentation over security.

In such an environment, developers are in a rush to add features because they work, not because they have been properly tested and secured.

Modern Mitigation

In short, if local AI agents are to run with system access, they must be properly isolated so they do not open the system to security risks. The first line of defense is the virtual machine: running Moltbot or OpenClaw inside a VM limits the exposure.

This means that, even if the agent is compromised, the main operating system will be separated, and the bad actor will not be able to access it through the agent.

Beyond that, firewalling is another recommended practice, as AI agent interfaces should never be publicly exposed. Local ports should be restricted to internal traffic only, and outbound API calls should be limited to known endpoints. A properly configured firewall ensures the agents can only communicate within the system, leaving no loose ends that could act as a backdoor for attackers.
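Independent of the firewall, the same effect can be enforced in code by binding the agent's listener to the loopback address rather than all interfaces. This is a generic socket sketch of the pattern, not Moltbot's actual API:

```python
import socket

def open_control_socket(port: int = 0) -> socket.socket:
    """Bind a control socket to loopback only: even with no firewall rule,
    other machines on the network can never reach it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))  # deliberately not "0.0.0.0"
    s.listen()
    return s
```

Many of the exposed dashboards described earlier existed only because a service bound to 0.0.0.0 by default and a tunnel then published it; loopback-only binding closes that path at the source.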

If remote access is necessary, IP allow-listing adds another layer of control. Simply put, access is restricted to a handful of trusted IP addresses rather than leaving the dashboard open to the entire internet. This matters because authentication on its own is not enough; network-level restrictions are far more effective at reducing the risk of an attack.
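The allow-list check itself is only a few lines. A minimal sketch using the standard library; the example networks are placeholders standing in for your own trusted ranges:

```python
import ipaddress

# Placeholder trusted ranges; replace with your own office or VPN networks
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),        # internal network
    ipaddress.ip_network("203.0.113.0/24"),    # documentation range, stands in for a fixed office block
]

def is_allowed(client_ip: str) -> bool:
    """Reject any client whose address falls outside the trusted networks."""
    try:
        addr = ipaddress.ip_address(client_ip)
    except ValueError:
        return False  # malformed address: refuse
    return any(addr in net for net in ALLOWED_NETWORKS)
```

In practice this check belongs at the proxy or firewall layer, before the request ever reaches the agent, so a bug in the dashboard cannot bypass it.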

Is Moltbot safe for enterprise use in 2026?

Moltbot can be used by enterprises in 2026, but it is not ready for adoption out of the box. Making it truly safe requires strict isolation in VMs and a network lockdown that prevents its exposure. Credentials should be encrypted, and its code should be reviewed by professional auditors before deployment.

Conclusion

Moltbot and OpenClaw rose to popularity during the broader rise of AI agents by offering benefits that cloud-based AI does not, such as the ability to run on your own machine. This grants users greater control over their data, makes processing faster, and, in theory, improves privacy.

With that said, this level of freedom comes with risks, and the user is responsible for safety and security. That is why it is important to be aware of the downsides of Moltbot and OpenClaw, and to know how to eliminate the loose ends that bad actors could use against you. Unprotected API keys, leftover backup data, exposed admin panels, and the other risk factors described above all need to be addressed before the tool can be safely deployed.

By taking these precautions, you can end up with a powerful local-first AI, even if setting it up requires some work.
