  • You’ll be the 4753rd guy with the “oops, my LLM trashed my setup and disobeyed my explicit rules for keeping it in check” story.

    Read what I wrote.

    It’s not a matter of “rules” it “obeys”.

    It’s a matter of it literally not even having access to do such things.

    This is what I’m talking about. People are complaining about issues that were solved a long time ago.

    People are running into issues that were solved long ago because they are too lazy to use the solutions to those issues.

    We now live in a world with plenty of PPE in construction, and people are out here raw-dogging tools without any modern protection and being ShockedPikachuFace when it fails.

    The approach of “I’m gonna tell the LLM not to do stuff in a markdown file” is tech from like 2 years ago.

    People still do that. Stupid people who deserve to have it blow up in their face.

    Use proper tools. Use MCP. Use a sandbox environment. Use whitelist, opt-in tooling.

    Agents shouldn’t even have the ability to do damaging actions in the first place.


  • The only people who have these issues are people who are using the tools wrong or poorly.

    Using these models in a modern tooling context is perfectly reasonable: go beyond just guard rails and instead outright give them access only to explicitly approved operations, inside a proper sandbox.

    Unfortunately, that takes effort, know-how, skill, and an understanding of how these tools work.

    And unfortunately a lot of people are lazy and stupid, and take the “easy” way out and then (deservedly) get burned for it.

    But I would say, yes, there are safe ways to grant an LLM “access” to data in a way where it does not even have the ability to muck it up.

    My typical approach is keeping it sandboxed inside a Docker environment, where even if it goes off the rails and deletes something important, the worst it can do is crash its own Docker instance.
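    To make that concrete, here’s a minimal sketch of what a locked-down sandbox launch might look like. The image name and workspace path are hypothetical placeholders; the point is the flags: no network, read-only root filesystem, project mounted read-only.

    ```python
    # Sketch: build a locked-down `docker run` invocation for an agent sandbox.
    # The image name ("my-agent-image") and workspace path are hypothetical.

    def sandbox_cmd(image: str, workspace: str) -> list[str]:
        """Return argv for running an agent container with minimal blast radius."""
        return [
            "docker", "run", "--rm",
            "--network=none",               # no network access from inside the sandbox
            "--read-only",                  # root filesystem is read-only
            "--tmpfs", "/tmp",              # scratch space that dies with the container
            "-v", f"{workspace}:/work:ro",  # project mounted read-only
            image,
        ]

    cmd = sandbox_cmd("my-agent-image", "/home/me/project")
    ```

    Pass `cmd` to `subprocess.run()` and the agent lives inside that container; worst case, the container crashes and you start a fresh one.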

    And then setting things up via MCP tooling so that the commands and actions it can perform are an explicit opt-in whitelist. It can only run commands I give it access to.

    Example: I grant my LLMs access to git commit and status, but not rebase or checkout.

    Thus it can only commit stuff forward, but it can’t change branches, rebase, or push.
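    That opt-in allowlist idea can be sketched in a few lines, independent of any particular MCP framework. The gatekeeper below is an illustration, not a specific MCP API; the allowed set mirrors the git example above.

    ```python
    import shlex

    # Explicit opt-in: only these git subcommands exist as far as the agent knows.
    ALLOWED_GIT = {"commit", "status"}

    def check_git(command: str) -> list[str]:
        """Validate an agent-requested git command against the whitelist.

        Returns the parsed argv if allowed; raises PermissionError otherwise.
        The caller would hand the result to subprocess.run().
        """
        args = shlex.split(command)
        if len(args) < 2 or args[0] != "git":
            raise PermissionError(f"not a git command: {command!r}")
        if args[1] not in ALLOWED_GIT:
            raise PermissionError(f"'git {args[1]}' is not whitelisted")
        return args
    ```

    `check_git("git status")` passes through; `check_git("git rebase main")` raises. There is simply no code path that executes rebase, checkout, or push.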

    This isn’t hard, imo, but too many people just YOLO it and raw-dog an LLM on their machine like a fuckin’ idiot.

    These people are playing with fire imo.