Discussion about this post

User's avatar
richardstevenhack's avatar

Do be careful with OpenClaw! As I've mentioned to you in emails, it is a massive security disaster waiting to happen (although there are security fixes to help minimize the problem.)

Just yesterday I read there are 135,000 OpenClaw installations open to the Internet - because it listens on all ports by default, and most users don't know how to change it.

Worse, there are still tons of "AI influencers" on YouTube recommending this thing WITHOUT telling people how to set it up securely (only a few influencers are.)

I will not touch this thing until I have a dedicated, locked-down separate machine (or VPS) to work on it. I have a separate mini-PC but I'm dedicating that to running a cybersecurity lab.

Here's a recent discussion on the topic with Shoshana Cox, who has a Substack that you should read:

The AISec Intelligence Brief

substack.com/@disesdi

From OpenAI Ads to Rogue Agents: AI’s Trust Collapse

https://www.youtube.com/watch?v=khqApWKMaAk

From the description:

Are AI agents getting dangerous before they get useful?

From OpenAI ads to rogue agents and Moltbot-style exploits, this Singularity Chat digs into AI’s emerging trust crisis and what comes next.

In this episode of ‪@overthehorizon‬, I am joined by Shoshana Cox, Brian Wang, and Kian Konrad Tajbaksh for a wide-ranging discussion on where today’s AI trajectory is really heading.

The conversation spans three critical fault lines now emerging in the AI ecosystem.

šŸ“Œ First, the rise of AI agents and Moltbot-style behaviour. What looks like hype on the surface masks a deeper issue around prompt injection, agent autonomy, and systems that can act in the world without reliable safeguards.

šŸ“Œ Second, the growing concern around ads, bias, and incentives inside AI systems. As ads and monetisation enter the assistant layer, what happens to trust, neutrality, and user intent? Can AI still function as a reliable interface to knowledge?

šŸ“Œ And finally, Elon Musk’s idea of ā€œemulated humansā€. Agents that do not rely on APIs but learn to operate software the way humans do. Is this a safer path to automation, or the next major leap in AI capability?

This is not a debate about sentience or science fiction. It is a grounded discussion about power, control, incentives, and the systems we are building right now.

If AI is becoming the new operating layer for society, this episode asks a simple question:

Who do we trust, and why?

richardstevenhack's avatar

Watched your latest livestream - great that you covered the security issues with OpenClaw!

I sent you an email suggesting that a topic for a future livestream might be an interview with a cybersecurity expert like Jason Haddix or John Hammond - or an AI cybersecurity expert like Shoshana Cox - that covers the intersection of agentic AI and cybersecurity might be very valuable for many people building this stuff.

We don't want them to be like the guy who created OpenClaw - who shipped AI generated code that he didn't even read!

No posts

Ready for more?