The OpenClaw (ClawdBot) Hype

Show notes

Learn how to efficiently use Claude Code instead! 👉 https://acad.link/claude-code

Website: https://maximilian-schwarzmueller.com/

Socials: 👉 Twitch: https://www.twitch.tv/maxedapps 👉 X: https://x.com/maxedapps 👉 Udemy: https://www.udemy.com/user/maximilian-schwarzmuller/ 👉 LinkedIn: https://www.linkedin.com/in/maximilian-schwarzmueller/

Want to become a web developer or expand your web development knowledge? I have multiple bestselling online courses on React, Angular, NodeJS, Docker & much more! 👉 https://academind.com/courses

Show transcript

00:00:00: So, Open Claw or Clawed Bot as it used to be

00:00:04: and Mold book, it's been some intense days on this

00:00:07: We got a new AI hype and, of course, I also spent the

00:00:11: last days trying to get as much as possible out of

00:00:15: Open Claw, and I got some feelings and thoughts, and they

00:00:19: differ from most of the other videos and posts

00:00:23: I saw. But let me start with a short story,

00:00:27: and I'm sure you can figure out the analogy.

00:00:31: Imagine you are living in a town, a village, and in that

00:00:35: town, there is that really friendly guy that

00:00:39: eager and happy to help you with all kinds of

00:00:42: tasks. He does all the chores you don't want to

00:00:46: at least he tries to. He's happy to take your kids

00:00:50: to school, to clean your house, clean your car, do the

00:00:54: groceries for you, and you can lean back, relax,

00:00:58: and let's say, um, in order to really help you, of

00:01:01: course, that assistant, that person needs broad

00:01:06: permissions. You need to give him the keys to your

00:01:09: You need to give him the keys to your car so that he can clean

00:01:13: it from the inside and do groceries with it.

00:01:16: You also, of course, need to tell your kids to get into the

00:01:19: car with him so that he can take them to school, and so on

00:01:23: and so forth. Now, there is a problem with that guy

00:01:27: though. He's super friendly, but he sometimes comes to weird

00:01:31: conclusions. At least, you can't rule out that he will

00:01:35: come to weird conclusions. He may conclude that

00:01:39: the best way to get rid of all the dirt in your house is to set

00:01:43: it on fire. Unfortunately, he also is

00:01:46: easily influenced by others, at least if they're a

00:01:50: bit more deceptive about it. He can

00:01:54: be influenced to maybe steal your car

00:01:58: because that's better for society as a

00:02:02: whole. Again, not guaranteed, not

00:02:06: necessarily going to happen, but absolutely possible.

00:02:09: You can't rule it out. And, therefore,

00:02:13: of course, you unfortunately have to take away

00:02:17: many of the permissions and much of the access you granted

00:02:21: that guy because you can't entirely trust

00:02:25: him, and the things that could happen are too

00:02:29: bad for you to just live with them or

00:02:32: accept them as a potential danger.

00:02:35: So unfortunately, of course, as you take away many of those permissions

00:02:39: access rights, he gets less and less useful

00:02:43: to you. And then there is another problem.

00:02:46: Even with broad permissions, you didn't

00:02:50: get as much use out of him as you hoped

00:02:53: to because the tasks you were promised he

00:02:57: could do, he only sometimes did, and some of them he

00:03:01: was not able to do at all or he forgot how to do

00:03:05: something or did the same task differently every time you

00:03:09: asked him about it or needed a lot of input from your

00:03:13: side. So ultimately, you're not convinced,

00:03:17: and that's been, you guessed it, my experience with

00:03:21: Open Claw, and believe me, I, I tried.

00:03:24: I read many good things. I heard many good things about it,

00:03:28: tried. I spun up my own VPS. By the way, uh, I didn't

00:03:32: know, but you can actually also use VPS, uh,

00:03:36: providers other than Hostinger. Nothing against Hostinger.

00:03:39: Uh, I just had a different feeling when I watched many of those videos,

00:03:43: but anyways. Uh, I spun up my own VPS and

00:03:47: I installed Open Claw on it, and of course, you could also

00:03:51: install it on, on your system. Um, there is one single

00:03:55: command you need to run and, uh, then you're good to go, but

00:03:58: personally, I would never install it on my

00:04:02: MacBook even though I'm fully aware that I would be able

00:04:06: to get more out of it if I would install it there,

00:04:10: why I didn't install it there and why I would never install it there

00:04:14: now, uh, later. So I installed it on my VPS and I

00:04:17: went through that onboarding flow, and I'm sure you saw that many times now

00:04:21: already and you maybe already went through it on your own.

00:04:25: I linked it up to my ChatGPT Plus subscription in the

00:04:29: end. I set up my Telegram bot and I

00:04:33: was ready to communicate with my bot in the end, with my

00:04:37: Open Claw bot. And then there I

00:04:41: sat and had to think of things

00:04:45: I wanted it to do for me. Now, of course, I've seen plenty of

00:04:49: other posts and videos where people used it

00:04:53: to have it build dashboards for them

00:04:57: or do web research or find cheaper

00:05:00: flights or even buy stuff, but

00:05:04: I didn't feel like giving it access to my credit card, and,

00:05:08: um, I, I'm not sure about you but I typically don't fly

00:05:12: three times a day, so, um, looking for those flights

00:05:15: myself, especially since there are flight comparison sites out there

00:05:19: that find you the cheapest flight, wasn't.....

00:05:22: too difficult of a task for me and I genuinely enjoy the process

00:05:26: of planning my trips. But, of course, that may be different for everybody else.

00:05:30: Now, for research, I had the problem that I'm super happy with the

00:05:34: AI-powered research tools that already exist, like the AI

00:05:37: mode, uh, on google.com or Deep Research on

00:05:41: Gemini or on ChatGPT. I use those a lot.

00:05:44: I find them really helpful so I didn't really need my

00:05:48: own bot for that that has a high

00:05:52: chance of performing worse actually.

00:05:54: Now, I do get there are certain areas where it

00:05:57: could be better than those other

00:06:00: research, uh, bots or services. For

00:06:03: example, if I would grant it access to my X

00:06:07: say, um, I understand that it could, of course, do

00:06:11: research in areas where you need to be logged in or

00:06:15: where my history matters. I, I fully get that,

00:06:19: um, so that's why I'm using SuperGrok for

00:06:23: if I wanna research on X. But yeah, I get that if you

00:06:27: give it broad permissions, if you allow it to log into your

00:06:30: accounts, use your browser, maybe run on your system, you

00:06:34: can probably get a bit more out of it than I was

00:06:38: able to get out of it. And maybe I'm just also not

00:06:41: creative enough. And by the way, just to be very clear, and I

00:06:45: think I have made that clear in other videos too,

00:06:49: of AI, not just for research but also for coding.

00:06:52: For example, I recently released an entire Claude Code

00:06:56: I'm using Claude Code and all these other tools like Cursor for

00:07:00: software. I think AI is a huge help

00:07:04: there or can be a huge help there. So that's not a general

00:07:08: thing against AI. I just genuinely didn't find

00:07:12: the amazing use cases for OpenClaire, especially when not

00:07:16: running it on my machine, and that is

00:07:19: the main problem I actually have with it.

00:07:23: Because you could definitely say that I'm just not creative enough or not

00:07:27: open-minded enough to find the right use cases for it,

00:07:31: but security is a huge issue

00:07:35: I have with OpenClaire. And I know there are people

00:07:39: that will tell you that they used it for weeks

00:07:43: wrong or that this, uh, will all, of course, get

00:07:47: better, and I will say the first argument

00:07:51: wrong, um, well, that's not the kind of argument

00:07:55: that convinces me because just because nothing went wrong

00:07:59: for you does not mean that nothing is

00:08:02: going wrong in general and that there wouldn't

00:08:06: be huge security issues that

00:08:10: can, of course, be exploited by bad actors or

00:08:14: that, of course, things could simply go wrong because

00:08:18: AI, large language models, is unpredictable.

00:08:22: Of course, the chance for it erasing your hard drive

00:08:25: extremely high. It's super low but it's not zero

00:08:29: and it will never be zero with large language models

00:08:33: without additional checks. They can be unpredictable.

00:08:37: In addition, in the official security documentation of

00:08:41: OpenClaire, they are correctly stating that prompt

00:08:45: injection is not solved. Of course, the

00:08:49: latest models like GPT-5.2 and so on got much

00:08:52: better at protecting against prompt injection.

00:08:56: They got much better at following instructions,

00:09:00: on. But there is no 100% protection

00:09:04: against, uh, prompt injection and the way large language models

00:09:08: work, there never will be. So prompt injection

00:09:11: attacks can't be ruled out and, of course, the

00:09:15: more popular tools like OpenClaire get, the more

00:09:18: people that are running it, the more it will be in the

00:09:22: focus of bad actors. And there are various

00:09:26: ways of injecting prompts into an active

00:09:30: OpenClaire bot because you may think, "Well, I'm the only one

00:09:34: communicating with it. I have my Telegram bot set up and

00:09:37: only I have access to that so I'm safe." Well, think

00:09:41: again. For example, there is this idea of skills

00:09:45: with OpenClaire and you may already know skills from coding agents

00:09:49: like Claude Code. The idea is kind of the same.

00:09:52: The idea is that you expose extra

00:09:55: context in the end, an extra Markdown document, though

00:09:59: potentially also coupled with executable scripts, uh,

00:10:02: to the agent to give it more capabilities.

00:10:05: So for example, to, uh, give it some extra documentation on how

00:10:09: to interact with Slack here in this example.

00:10:13: And then as mentioned, a skill can also come bundled up with some

00:10:17: additional script which the AI agent can execute to

00:10:20: efficiently do something like generate an image or send a message to Slack

00:10:24: or whatever it is Now the problem is that

00:10:28: Claw Hub, the official skills hub for

00:10:31: OpenClaire, initially at least allowed everybody

00:10:35: to submit skills. So it was pretty

00:10:38: easy to run supply chain

00:10:42: attacks which we saw from the npm

00:10:45: ecosystem, uh, last year, uh, totally unrelated to

00:10:49: AI, which essentially means that, uh, a bad

00:10:52: actor can publish a skill that tells the AI

00:10:56: to do something bad and that is just a prompt injection.

00:10:59: So just by installing a malicious skill, you could

00:11:03: expose your agent to a prompt injection attack.

00:11:07: Now some fixes were implemented here so, uh, at

00:11:11: the point of time where I'm recording this,

00:11:15: so the security was vastly improved here.

00:11:18: But if we learned anything from the supply chain attacks on npm

00:11:22: last year, it is that we definitely can't rule out

00:11:26: that this skills feature, this hub can be used to

00:11:29: inject, um, malicious instructions into...

00:11:33: into the ecosystem and into your, uh, OpenClaw

00:11:36: setup potentially. And that's not the only way of running

00:11:40: attacks. If your bot reaches out to the internet,

00:11:44: it most likely does, it will, of course, visit websites or

00:11:48: read content from websites. And there,

00:11:51: we also can have malicious websites that trick the

00:11:55: AI into following instructions, prompts, that are

00:11:59: embedded on that website. Every piece of text

00:12:03: your bot reads and processes is a

00:12:07: prompt in the end, so every website it visits

00:12:11: is a prompt, uh, or contains a prompt that it

00:12:15: can follow and execute. And then we got other potential sources as

00:12:19: well like, for example, emails. If you use your, uh,

00:12:23: bot to process incoming emails, every email, of

00:12:27: course, acts as a prompt. So prompt

00:12:30: injection is a, a serious, huge risk and just

00:12:34: because nothing went wrong for you ever, doesn't mean

00:12:38: things can't go wrong. Now you may, of course, say, "Well,

00:12:42: I'm running my bot on a VPS." Or maybe

00:12:46: you're using something like MaltWorker, which is in the end a

00:12:50: pre-built blueprint or setup provided by

00:12:53: Cloudflare, which uses various Cloudflare services for

00:12:57: hosting and running, uh, OpenClaw, and you should be doing

00:13:01: that. You should be doing that. Uh, you should absolutely

00:13:05: not run it on, on your system. And there also are

00:13:09: features like sandboxing, so that is actually

00:13:13: built into OpenClaw. They have a- an entire

00:13:16: documentation article about sandboxing and how you can make sure

00:13:20: your agents run in a sandbox, which essentially is a darker container,

00:13:24: so that the blast radius is reduced.

00:13:27: By the way, the documentation, it's a lot, but it's

00:13:31: not good. I spent hours, literally many, many

00:13:35: hours trying to secure my setup. And I'm sure it's all in

00:13:39: there, and I saw the security article.

00:13:42: It's just so, so hard. And before you tell me that I should've

00:13:46: asked my OpenClaw bot, I did a lot. It sometimes

00:13:49: worked, it sometimes didn't. It was a lot of trial and error.

00:13:52: So yeah, the documentation and how

00:13:56: hard it is to get useful information out of it is

00:14:00: its own problem, but, of course, one that can be fixed.

00:14:03: And I appreciate the fact that at least the information

00:14:07: here, just to be clear. But yeah, so sandboxing is

00:14:11: built in and is available and allows you to

00:14:16: reduce the blast radius, which is super

00:14:19: important. Uh, because in the end,

00:14:23: due to the prompt injection, uh,

00:14:26: vulnerabilities that exist that can't really be

00:14:30: solved, reducing the blast radius is

00:14:33: important. So, for example, if you use sandboxing,

00:14:37: overall setup on a VPS, the worst thing that could happen

00:14:41: is that, of course, the stuff in the sandbox gets deleted

00:14:45: or, depending on your setup, maybe your entire VPS but not

00:14:49: your system. That's the reason why I would never run,

00:14:53: OpenClaw on, on my machine, on my main machine.

00:14:56: I'm- I absolutely don't want it to erase files,

00:15:00: whatever, on my machine. So yeah, reducing the blast radius is

00:15:04: important. Unfortunately, though, that still doesn't protect you against the

00:15:08: worst things that could happen because with prompt injection

00:15:11: attacks, of course, an attacker could try to delete files on

00:15:15: system. But even worse than that, they could steal stuff.

00:15:19: So data exfiltration

00:15:23: is, in my opinion, a, a bigger problem

00:15:26: than an attacker deleting files on your

00:15:30: system. And data exfiltration is

00:15:33: 100% something that can happen or

00:15:37: that can be the result of a prompt injection attack because, of course, an

00:15:41: attacker could get the AI to gather all the secrets it knows,

00:15:45: all the passwords it knows, and it needs to know some passwords

00:15:49: in order to use your email account.

00:15:51: Maybe it's- maybe you gave it your credit card number,

00:15:54: access to various pieces of data and that data could be

00:15:58: collected due to a prompt injection attack and could

00:16:02: be exfiltrated, and that is a, uh,

00:16:05: bigger, uh, risk than it potentially deleting your

00:16:09: hard drive if you set it up correctly.

00:16:12: Of course, it could also do other things.

00:16:14: It could turn your VPS into a, a bot

00:16:18: for DDoS attacks, for example, so that's just

00:16:22: one example. There is an endless amount of things

00:16:26: course, but the main thing to take away is that through

00:16:30: prompt injection attacks, attackers could take over

00:16:34: your bot and, therefore, your machine.

00:16:36: They could get your bot to install malicious software

00:16:40: to tweak the system configuration depending on the access rights it

00:16:44: has, of course, and then they could potentially take over

00:16:47: your VPS, your machine. These are the kind of things that could

00:16:51: happen. So access rights are the

00:16:54: important, uh, keyword here, and sandboxing is

00:16:58: one crucial part in that. It's not all though.

00:17:01: You can configure...... your OpenClaw

00:17:05: bot such that it has to ask for approval when

00:17:09: running in sandbox mode, at least, for executing

00:17:13: certain tasks. But that kind of defeats the idea of

00:17:17: having a bot that runs behind the scenes and does stuff whilst you are

00:17:21: away because you all the time have to give it

00:17:24: approval for all the kind of stuff it wants to do

00:17:27: suddenly, and that, of course, gets super annoying.

00:17:30: So you just might not read anymore what it's asking approval for, you

00:17:34: might always grant approval, and at some point,

00:17:38: just annoys you. Because again, it's not really useful if you have to

00:17:41: manually approve everything. So combine that,

00:17:45: combine these security issues and the fact

00:17:49: that I did not find a way of running this

00:17:52: securely in a way I would feel good with, with the

00:17:56: fact that I didn't really find those super amazing

00:18:00: use cases, combine these things and you end up with a

00:18:03: situation where, uh, I'm just not using OpenClaw

00:18:07: anymore. And of course, that can be different for you

00:18:10: people that were super excited, and yeah, it's possible that the

00:18:14: future of personal AI-powered assistants looks

00:18:18: something like this. It's possible that better security

00:18:22: mechanisms can be introduced and can be

00:18:25: invented that don't require your constant approval for

00:18:29: everything or that make that approval process easier

00:18:33: and therefore allow you to securely run assistants like

00:18:37: this. That is all possible. I wouldn't rule out that

00:18:41: this happens, and of course, it is an impressive

00:18:45: feat that a single developer built this tool, though, of

00:18:48: course, not looking at the code at all does have

00:18:52: price, uh, as many bugs and

00:18:56: security problems, uh, certainly also show.

00:18:59: Not that software wouldn't have any security problems if

00:19:03: you would review everything, but it certainly, in my opinion,

00:19:07: don't look at the code at all. But nonetheless,

00:19:11: if you ask yourself the question, why OpenAI or Google didn't

00:19:15: come up with a product like this, the reason may be a lack of

00:19:19: innovation, but of course, it's also the fact that a tool like

00:19:23: this can right now only exist as open source

00:19:26: software without any legal

00:19:29: obligations because this thing is not

00:19:33: something Google could sell or run for you with

00:19:37: broad permissions. But of course, it's definitely possible that this is

00:19:40: the initial spark that gives us

00:19:44: safer, maybe more useful personal

00:19:47: AI-powered assistants in the future.

00:19:50: And, uh, just to also briefly mention Maltbook, uh,

00:19:54: that is a thing I totally did not understand.

00:19:57: Uh, it was meant to be a social network, a Reddit

00:20:01: for AIs only. It turned out that it was

00:20:05: actually very human-orchestrated and, uh, quite

00:20:09: a bit fake as I understand it, and it had

00:20:13: gapping security issues and,

00:20:17: yeah, I don't know. AI has positive

00:20:21: use cases or positive implications, I guess.

00:20:25: AI has a lot of negative implications.

00:20:28: Um, this thing here is not something the world needs

00:20:32: in my opinion. But yeah, OpenClaw, definitely

00:20:36: interesting, maybe super useful for you, uh,

00:20:40: definitely not my cup of tea/coffee,

00:20:44: uh, right now

New comment

Your name or nickname, will be shown publicly
At least 10 characters long
By submitting your comment you agree that the content of the field "Name or nickname" will be stored and shown publicly next to your comment. Using your real name is optional.