From reading the paper, I'm not sure which is more egregious: the frameworks that pass code straight to exec without checking, or the ones that rely on the LLM itself to do the checking (given that some of the CVEs require jailbreaking the LLM via its prompt).
If you wanted to be exceedingly charitable, you could imagine the maintainers of said frameworks claiming: "of course none of this should be used with unsanitized inputs open to the public; it's merely a productivity-boost tool you run on your own machine. Don't worry about prompts from top Bing results being evaluated by our agent, and don't use this for anything REAL."
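For context, the unsafe pattern described above boils down to something like this (a minimal sketch with hypothetical function names; real frameworks wrap far more machinery around it, but the trust boundary is the same):

```python
# Hypothetical sketch of the vulnerable pattern: an agent framework takes
# model output and feeds it straight to exec() with no sandboxing or review.

def run_agent_step(llm_output: str) -> dict:
    """Execute whatever code the model produced, in the host process."""
    namespace: dict = {}
    # The core problem: llm_output is attacker-influenced (e.g. via a
    # prompt-injected web page the agent summarized), yet it runs with
    # the full privileges of the user's Python process.
    exec(llm_output, namespace)
    return namespace

# A benign completion works as intended...
result = run_agent_step("answer = 2 + 2")
print(result["answer"])  # 4

# ...but so does a malicious one smuggled in through injected context:
payload = "import os; leaked = os.environ.get('HOME', '')"
result = run_agent_step(payload)
# 'leaked' now holds data the injected "code" read off the host.
print(type(result["leaked"]))  # <class 'str'>
```

The fix isn't subtle: either don't execute model output at all, or run it in a real sandbox (separate process, seccomp/container, no ambient credentials) — an LLM-side "please only write safe code" check is exactly the control the jailbreak CVEs bypass.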
Not every rationalist I've met has been nice or smart ^^.
I think it's hard to grow up in our society without harboring a kernel of fascism in our hearts; it's easy to fall for the constantly sold idea that "everything would work better if we just put the right people in charge", with varying definitions of who the "right people" are.
Do they deserve better? Absolutely. But you can't remove their agency; they ultimately chose this. The world is messy and broken, and it's fine not to make too much peace with that, but you have to ponder your ends and your means more thoughtfully than a lot of EAs/Rationalists do. Falling prey to magical thinking is a choice, and/or a bias you can overcome (which I find extremely ironic, given the bias-correction advertising in Rationalist spheres).