• athatet@lemmy.zip · ↑77 · 9 days ago

      Honestly, at this point, after it has happened to multiple people, multiple times, this is the only appropriate response.

  • fubarx@lemmy.world · ↑285 ↓2 · 9 days ago

    Given that the infrastructure description included the DataTalks.Club website, this resulted in a full wipe of the setup for both sites, including a database with 2.5 years of records, and database snapshots that Grigorev had counted on as backups. The operator had to contact Amazon Business support, which helped restore the data within about a day.

    Non-story. He let Terraform zap his production site without offsite backups, but then support restored it all.

    I’d be more alarmed that a ‘destroy’ command is reversible.

    • zr0@lemmy.dbzer0.com · ↑25 · 9 days ago

      For technical reasons, you never immediately delete records, as doing so is computationally intensive.

      For business reasons, you never want to delete anything at all, because data = money.
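A common way both constraints are satisfied is soft deletion: the sketch below (table and column names are illustrative, not from any particular system) flags rows instead of removing them, so a "delete" is a cheap update and the data survives.

```python
# Soft-delete sketch: rows are flagged, not removed, so "deletes" are
# cheap and reversible. Table/column names are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, body TEXT, deleted_at TEXT)")
db.execute("INSERT INTO records (body) VALUES ('keep me'), ('delete me')")

# "Delete" = set a timestamp; the row (and the money it represents) stays.
db.execute("UPDATE records SET deleted_at = datetime('now') WHERE body = 'delete me'")

live = db.execute("SELECT COUNT(*) FROM records WHERE deleted_at IS NULL").fetchone()[0]
total = db.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(live, total)  # one live row, but both rows still stored
```

Queries simply filter on `deleted_at IS NULL`; actual removal, if ever, happens later in a batch job.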

      • jaybone@lemmy.zip · ↑8 · 9 days ago

        Back in the day, before virtualized services were all “the cloud” as they are today, if you were re-provisioning storage hardware that might be used by another customer, you would “scrub” disks by writing over them from /dev/random or /dev/zero. If you somehow kept that shit around and something “leaked”, that was a big boo-boo and a violation of your service agreement, and the customer would sue the fuck out of you. But now you just contact support and they have a copy lying around. 🤷
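For what it's worth, that scrub is just dd in a loop; a runnable sketch (on a scratch file rather than a real /dev/sdX, since dd pointed at the wrong disk is exactly the hazard; pass sizes are illustrative) looks something like this:

```shell
# Scrub sketch: overwrite with random data, then zeros, then verify.
scratch=$(mktemp)
dd if=/dev/urandom of="$scratch" bs=1024 count=4 2>/dev/null  # pass 1: random
dd if=/dev/zero    of="$scratch" bs=1024 count=4 2>/dev/null  # pass 2: zeros
# Verify the final pass: strip NUL bytes and count what's left (should be 0)
nonzero=$(tr -d '\000' < "$scratch" | wc -c)
rm -f "$scratch"
echo "nonzero bytes left: $nonzero"
```

On a real device you'd also want `conv=fsync` so the writes actually hit the platters before you call it scrubbed.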

      • wewbull@feddit.uk · ↑7 · 9 days ago

        Retaining data can mean violating legal obligations. Hidden backups can be a lawyer’s playground.

        • zr0@lemmy.dbzer0.com · ↑5 · 9 days ago

          Sure. Go ahead and find them based on pure speculation. First you have to put down $100k for all the forensics. Even if you won the case, show me who is capable of doing something like that.

  • just_another_person@lemmy.world · ↑141 ↓2 · 9 days ago

    Whoever did this was incredibly lazy. Why are you using an agent to run your Terraform commands for you in the first place if it’s not part of some automation? You’re saving yourself, what, 15 seconds tops? You deserve this kind of thing for being like this.

  • SapphironZA@sh.itjust.works · ↑126 · 9 days ago

    We used to say RAID is not a backup; it’s redundancy.

    Snapshots are not a backup; they’re a system restore point.

    Only something offsite, off-system, and accessible only with separate authentication details, is a backup.

      • mic_check_one_two@lemmy.dbzer0.com · ↑24 · 9 days ago

        AKA Schrödinger’s Backup. Until you have successfully restored from a backup, it is just an amorphous blob of data that may or may not be valid.

        I say this as someone who has had backups silently fail. For instance, just yesterday, I had a managed network switch generate an invalid config file for itself. I was making a change on the switch, and saved a backup of the existing settings before changing anything. That way I could easily reset the switch to default and push the old settings to it, if the changes I made broke things. And like an idiot, I didn’t think to validate the file (which is as simple as pushing the file back to the switch to see if it works) before I made any changes.

        Sure enough, the change I made broke something, so I performed a factory reset and went to upload that backup I had saved like 20 minutes prior… When I tried to restore settings after the factory reset, the switch couldn’t read the file that it had generated like 20 minutes earlier.

        So I was stuck manually restoring the switch’s settings, and what should have been a quick 2 minute “hold the reset button and push the settings file once it has rebooted” job turned into a 45 minute long game of “find the difference between these two photos” for every single page in the settings.
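The general antidote to Schrödinger's Backup is to test the restore the moment the backup is taken. A minimal sketch (scratch directories stand in for the switch, so it runs anywhere; in the switch case the "restore" is pushing the file back to the device):

```shell
# A backup only counts once a restore has been tested.
data=$(mktemp -d); restore=$(mktemp -d); archive=$(mktemp)
echo "hostname sw-core-01" > "$data/switch.cfg"   # illustrative config
tar -czf "$archive" -C "$data" .                  # take the backup
tar -xzf "$archive" -C "$restore"                 # ...and immediately test-restore it
diff -r "$data" "$restore" && result="backup verified"
echo "$result"
```

If the diff fails, you find out now, while the original is still intact, not 20 minutes after a factory reset.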

        • Passerby6497@lemmy.world · ↑2 · 8 days ago

          Not sure if what you’re working on has it (I’ve been in systems for a hot minute, so I’m not doing network tasks these days), but saving my console log and doing a ‘show run’ has saved my ass more than once.

    • tetris11@feddit.uk · ↑28 · 9 days ago

      3-2-1 Backup Rule: three copies of your data, on two different types of storage media, with one copy offsite.
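The rule can be sketched in a few lines. In a real setup the second copy would live on different media (say, an external disk via rsync) and the third on an offsite remote (say, rclone to object storage); temp dirs stand in here so the sketch is runnable anywhere.

```shell
# Minimal 3-2-1 sketch: three copies, stand-ins for two media + offsite.
primary=$(mktemp -d); second_medium=$(mktemp -d); offsite=$(mktemp -d)
echo "2.5 years of records" > "$primary/db.dump"
cp "$primary/db.dump" "$second_medium/"   # copy 2: different storage medium
cp "$primary/db.dump" "$offsite/"         # copy 3: offsite location
copies=$(ls "$primary/db.dump" "$second_medium/db.dump" "$offsite/db.dump" 2>/dev/null | wc -l)
echo "copies: $copies"
```

The key property: no single credential or single command can reach all three copies at once.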

    • Krudler@lemmy.world · ↑7 · 9 days ago

      Circa 1997 I was making some innovative new games, employed by a dude who’d put millions of his own money into the company. He was completely unfazed when I brought him 20 CDs in a sealed box to remove from the building and store off-site. He thought I’d lost my damned mind and blew it off as the ravings of a stressed dev. I pointed out real threats to our IP, from hardware failure all the way up to the building burning down. Two years of custom art and code, gone. “Unlikely. Relax.”

      After I moved on… an ex co-worker who’s still a longtime friend, tells me a different division lost a huge amount of FMV over some whoops-I-destroyed-the-wrong-drive blunder. 20 days to render on an 8 or 10 machine farm. Poof - No backups. In 1997 even with top-of-the-line gear it took an insane investment to render quality 3D.

      The friggin’ carelessness irks the shit out of me as I type ahah

    • OrteilGenou@lemmy.world · ↑4 · 9 days ago

      I remember when I first saw a DR plan with three tiers of restore: 1 hour, 12 hours, or 72 hours. I knew the 1-hour tier meant a simple redirect to a DB partition that was a real-time copy of the active DB, and the 12-hour tier meant that had failed, so it was a restore-point exercise involving some data loss, but less than an hour’s worth, or something like that.

      I had never heard of 72 hours, so I raised a question in the meeting. 72 hours meant having physical tapes shipped to the data center, and I believe meant up to 12 (though it could have been 24) hours of data lost. I was impressed, because the idea of a job that ran daily or twice daily creating tape backups was completely new to me.

      This was in the early aughts. Not sure if tapes are still used…

      • Passerby6497@lemmy.world · ↑1 · 8 days ago

        Not sure if tapes are still used…

        Alive and well, depending on the use case. My org has an older backup product that’s entirely tape-based, and I hear it’s amazing for the Linux systems.

    • SreudianFlip@sh.itjust.works · ↑2 · 9 days ago

      Fukan yes

      • D/L all assets locally
      • proper 3-2-1 of local machines
      • duty roster of other contributors with same backups
      • automate and have regular checks as part of production
      • also sandbox the stochastic parrot
      • super_user_do@feddit.it · ↑3 · 9 days ago

        We’ve always succeeded even without them. I don’t see why anyone would try to work in IT if they don’t… want to work lol

          • super_user_do@feddit.it · ↑1 · 8 days ago

            How are the two things even connected bro. AIs are tools and should be used as such. You wouldn’t let something act all by itself if that would make it unpredictable, I’m saying that using AIs is fine but you gotta keep an eye on them

    • Modern_medicine_isnt@lemmy.world · ↑5 ↓43 · 9 days ago

      Wrong answer. If you don’t give them access, the alternative (ruling out not using AI, because leadership will never go for that) is to hire high school kids to take a task from a manager, ask the AI to do it, then do what the AI says, iterating repeatedly toward the solution. The problem with that alternative is that it is no better than giving the AI access, and it leaves you with no senior tech people. Instead, you give it access, but only give senior tech people access to the AI. Ones who would know to tell the AI to keep a backup of the database, one designed so you can’t delete it without multiple people signing off.

      Senior tech people aren’t going to spend their time trying things an AI needs tried to find the solution. So if you don’t give it access, they won’t use it, and eventually they will all be gone. Then you are even further up shit creek than you are now.

      The answer, overall, is smarter people talking to the AI, and guardrails to stop a single point of failure. The latter is nothing new.

      • vithigar@lemmy.ca · ↑28 · 9 days ago

        What is this insane rambling?

        The alternative is that the only thing with access to make changes in your production environment is the CI pipeline that deploys your production environment.

        Neither the AI, nor anything else on the developers machine, should have access to make production changes.

        • Modern_medicine_isnt@lemmy.world · ↑1 · 8 days ago

          I did say “and guardrails to stop a single point of failure.” A CI/CD pipeline itself doesn’t protect you if the AI can change that too. You need the same kind of guardrails that would stop a junior dev from f-ing things up. Require multiple people to sign off. Turn on deletion protection… those sorts of things. I work in infra, so I often have direct access to production. More than I should. But not all companies can afford to build out all the tools needed so that I don’t need production access.
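Both of those guards exist in Terraform itself, incidentally. A hedged sketch (resource names and arguments are illustrative, not from the article's setup): `deletion_protection` makes the cloud API refuse the delete, and `prevent_destroy` makes Terraform refuse to even plan one.

```hcl
resource "aws_db_instance" "main" {
  identifier          = "prod-db"      # illustrative
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  deletion_protection = true           # API-level guard: delete call is rejected

  lifecycle {
    prevent_destroy = true             # Terraform-level guard: destroy plan errors out
  }
}
```

Neither stops a determined human with console access, but both stop a one-shot `terraform destroy`, agent-driven or otherwise.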

        • Modern_medicine_isnt@lemmy.world · ↑1 ↓3 · 8 days ago
          8 days ago

          Good luck with that. Most search engines use AI now, not only where you see it, but in finding the content to make it searchable. AI is here to stay. There are things it is good at, and things it isn’t. Learn what they are, and use it where it makes sense. Or stick your head in the sand and see how that works out.

          • MartianRecon@lemmus.org · ↑2 · 8 days ago

            That doesn’t answer that statement at all. I said it’s not worth the output.

            Fuck ai. I don’t want a computer to think for me. I want to be pointed to resources I can use, to learn something.

            • Modern_medicine_isnt@lemmy.world · ↑1 · 8 days ago

              You said the answer is no AI.
              And I want AI to do the non-thinking mundane crap while I do the thinking and directing. I don’t need to spend time wrestling with an SQL query to produce a report the boss “wants”. I can tell the AI to do that if it has the access it needs. Eventually the boss can ask the AI for it directly, so I can solve the real problems.

      • Shanmugha@lemmy.world · ↑15 · 9 days ago

        Nah. As a tech person, I am not going to give an LLM write access to anything in production, period.

        • Modern_medicine_isnt@lemmy.world · ↑1 ↓1 · 8 days ago
          8 days ago

          Someone created that database, and all those other parts of the infra you use. AI is pretty good for that. But you have it turn on deletion protection, and you set up a system that requires another person to approve turning it off. Or you can give it access at creation time, but remember to revoke that access once its work has been verified.

      • Matty_r@programming.dev · ↑7 · 9 days ago

        I’m in favour of hiring kids to figure out the solution through iteration and doing web searches etc. If they fuck up, then they learn and eventually become better at their job - maybe even becoming a Senior themselves eventually.

        I get what you’re saying: Seniors are more likely to use the tools effectively, but there are many cases of the AI not doing what it’s told. It’s not repeatably consistent like a bash script.

        People are better - always.

        • Modern_medicine_isnt@lemmy.world · ↑1 ↓1 · 8 days ago
          8 days ago

          The days of Stack Exchange and such are numbered. Web searches turn up fewer and fewer hits that help you solve problems and learn. It won’t be long before AIs replace old-school web searches. Software projects will stop writing documentation when an AI can just read the code instead. The way we learned things is dying. I don’t know how the juniors will get to be seniors in 5 to 10 years. But following the AI’s instructions to test out its theories isn’t going to work for the vast majority.

      • criss_cross@lemmy.world · ↑2 · 9 days ago

        Are you on an on-call rotation, by chance? Because anyone who has to respond to night-time pages would not be saying this lol.

        • Modern_medicine_isnt@lemmy.world · ↑1 · 8 days ago

          I do, in fact. Recently I have dodged the night-time pages, but a few years ago I was up plenty of nights debugging issues. In many of those cases an AI would have been very helpful. Developers do far stupider things because they are sure they won’t break anything. But most of the pages were the result of not enough time spent making the systems resilient. I dodged the pager recently because, as a startup, we had so few customers that we couldn’t afford to hire enough people for a rotation. So I was sort of on call. Like, the boss had my number, and if needed he would call it. But it never came to that, partly by luck, and partly because I know how to make things resilient. With low load, resilient isn’t as hard.

  • kamen@lemmy.world · ↑76 · 9 days ago

    You either have a backup or will have a backup next time.

    Something that is always online and can be wiped while you’re working on it (by yourself or with AI, doesn’t matter) shouldn’t count as backup.

    • MIDItheKID@lemmy.world · ↑29 · 9 days ago

      AI or not, I feel like everybody has had “the incident” at some point. After that, you obsessively keep backups.

      For me it was my entire “Junior Project” in college, which was a music album. My Windows install (Vista at the time; I know, Vista was awful, but it was the only thing that would utilize all 8 GB of my RAM because x64 XP wasn’t really a thing) bombed out, and I was like “no biggie, I keep my OS on one drive and all of my projects on the other, I’ll just reformat and reinstall Windows”.

      Well… I had two identical 250gb drives and formatted the wrong one.

      Woof.

      I bought an unformat tool that was able to recover mostly everything, but I lost all of my folder structure and file names. It was just 000001.wav, 000002.wav, etc. I was able to re-record and rebuild, but man… never made that mistake again. Like I said, I now obsessively back up. Stacks of drives, cloud storage, drives in different locations, etc.

      • SirEDCaLot@lemmy.today · ↑6 · 9 days ago

        AI or not, I feel like everybody has had “the incident” at some point. After that, you obsessively keep backups.

        Yup!

        Also totally unrelated helpful tip- triple check your inputs and outputs when using dd to clone a drive. dd works great to clone an old drive onto a new blank one. It is equally efficient at cloning a blank drive full of nothing but 0s over an old drive that has some 1s mixed in.
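The trap is that the two directions look almost identical on the command line. A sketch on scratch files (never point this at a real device without triple-checking) shows the correct direction with a verification step:

```shell
# Same dd, two outcomes: swapping if= and of= turns "clone old onto new"
# into "clone blank onto old". Scratch files stand in for disks here.
old=$(mktemp); new=$(mktemp)
printf 'precious data' > "$old"
dd if="$old" of="$new" bs=4096 2>/dev/null   # correct direction: old -> new
cmp -s "$old" "$new" && verdict="clone verified"
echo "$verdict"
```

The `cmp` at the end is the cheap insurance: if you cloned the wrong way, source and destination won't match what you expected, and you find out before wiping anything else.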

        • kamen@lemmy.world · ↑8 ↓2 · 9 days ago

          And that’s a great example where a GUI could be way better at showing you what’s what and preventing such errors.

          If you’re automating stuff, sure, scripting is the way to go, but for one-off stuff like this seeing more than text and maybe throwing in a confirmation dialogue can’t hurt - and the tool might still be using dd underneath.

          • SirEDCaLot@lemmy.today · ↑1 · 17 hours ago

            Quite true.
            It’s an argument I often have with the CLI only people, and have been having for years. Like ‘with this Cisco router I can do all kinds of shit with this super powerful CLI’. Yeah okay how do I forward a port? Well that takes 5 different commands…

            Or I just want to understand what options are available- a GUI does that far better than a CLI.

            • kamen@lemmy.world · ↑1 · 6 hours ago

              IMO it’s important to recognise that both are valid in different scenarios. If you want to click through and change something that’s actually doable with a couple of clicks, that’s fine. If you want to do this through the CLI, it’s also fine - if you’re someone who’s done 10 deployments today and configured the same thing, it would be muscle memory even if it’s 5 commands.

      • kamen@lemmy.world · ↑3 · edited · 8 days ago

        TestDisk has saved my ass before. It’s great at recovering broken or deleted partitions. If it’s just a quick format done with no encryption involved, you have a very high chance of having your stuff back. That’s of course if you catch yourself after doing just the format.

        Other than that, yeah, I’ve also had my moments. Back in high school not only did I not have money for an external drive - I didn’t even have enough space on my primary one. One time a friend lent me an external drive to do a backup and do a clean reinstall - and I can’t remember the details, but something happened such that the external drive got borked - and said friend had important stuff that was only on that hard drive. Ironically enough it wasn’t even something taking much space - it was text documents that could’ve lived in an email attachment.

    • ThomasWilliams@lemmy.world · ↑25 ↓1 · 9 days ago

      He did have a backup. This is why you use cloud storage.

      The operator had to contact Amazon Business support, which helped restore the data within about a day.

  • Deestan@lemmy.world · ↑68 ↓2 · 9 days ago

    We don’t need cautionary tales about how drinking bleach caused intestinal damage.

    The people needing the caution got it in spades and went off anyway.

    Or maybe the cautionary tale is to take caution dealing with the developers in question, as they are dangerously inept.

    • Scipitie@lemmy.dbzer0.com · ↑29 · 9 days ago

      Yeah this is beyond ridiculous to blame anything or anyone else.

      I mean, accidentally letting loose an autonomous, untested, unguardrailed tool in my dev environment… Well, tough luck, shit; something for a good post-mortem to learn from.

      Having an infrastructure that allowed a single actor to cause this damage? This shouldn’t even be possible for a malicious human from within the system this easily.

  • eleitl@lemmy.zip · ↑64 · 9 days ago

    “and database snapshots that Grigorev had counted on as backups” – yes, this is exactly how you run “production”.

    • Nighed@feddit.uk · ↑13 · 9 days ago

      With some of the cloud providers, the built-in backups are linked to the resource. So even if you have super-duper geo-zone-redundant backups going back years, they still get nuked if you drop the server.

      It’s always felt a bit stupid, but the backups can still normally be restored by support.
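That linkage is visible in, for example, the AWS RDS arguments in Terraform (values here are illustrative): automated snapshots follow the instance's retention setting and go away with it, while an explicit final snapshot outlives the deletion.

```hcl
resource "aws_db_instance" "main" {
  # ...
  backup_retention_period   = 7                # automated snapshots: tied to the instance
  skip_final_snapshot       = false
  final_snapshot_identifier = "prod-db-final"  # taken on delete; survives the instance
  copy_tags_to_snapshot     = true
}
```

Setting `skip_final_snapshot = true` (the easy path in dev) is exactly how a destroy takes the "backups" down with it.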

      • eleitl@lemmy.zip · ↑1 · 8 days ago

        That’s because these are not backups. With backups you still have your data even if the cloud provider has gone away.

        • Nighed@feddit.uk · ↑3 · 8 days ago

          They are backups; you potentially get copies of the data in multiple locations across continents.

          BUT I agree, you are relying on the provider entirely for them. Lots of vendor lock-in in the industry, unfortunately.

          • EffortlessGrace@piefed.social · ↑1 · edited · 8 days ago

            Is everyone in commercial software development finally saying, “Fuck it, we’ll run the shit ourselves”?

            I’m an infrastructure and devops noob here; take my words with a grain of salt.

            I need GPU clusters with ECC VRAM for research, and found it’s cheaper to have my own high-ish performance compute in my own office, paid for once, than to pay AWS/Azure/GCS/etc. forever, or at least every time I want to train a custom DNN model. Sometimes I use Linode, but that’s for monitoring. But I can run shit at will, and I have data sovereignty.

            Has the paradigm shifted back to developing and serving things in-house now that big tech vendor-lock/tie-ins have so many dark patterns that scalability isn’t cost-effective with them? Or is it just my own pipe dream?

            • Nighed@feddit.uk · ↑1 · 7 days ago

              If you are going to use it enough to pay for it sure. But that’s always been the case.

              The main benefits of cloud are its ability to scale quickly, its geographic reach, and the conversion of capex to opex.

  • Phoenixz@lemmy.ca · ↑48 · 8 days ago

    At least you had backup, right?

    Oh, yeah, that’s right. You were dumb enough to give AI full access to your production system so likely you’re dumb enough to not have backups of anything either.

    I take it Claude has full access to all of your git repositories as well so that it could wipe those too?

    You got what you deserve

    • Metype @pawb.social · ↑16 · 8 days ago

      Yeah they did, they had plenty of recovery snapshots. That were able to be deleted at a whim and were deleted by Claude! :D

  • Bongles@lemmy.zip · ↑47 · 9 days ago

    This keeps happening. I can understand using AI to help code, I don’t understand Claude having so much access to a system.

      • Earthman_Jim@lemmy.zip · ↑11 · edited · 8 days ago

        That’s honestly the most frightening part of all of this to me. How many of these people at the very tippy top pushing this stuff are suffering from cyber psychosis? How many of them have given themselves the covert mission to give AI the keys to the world at all costs because they’re mentally ill from their own technomagic trick?

        • Jayjader@jlai.lu · ↑3 · 8 days ago

          Alternatively, how many of them have invested in one or more of these LLM makers and are ready to torpedo their own business as long as it makes the share price go up/feeds more authentic training data?

    • NostraDavid@programming.dev · ↑1 · 8 days ago

      Especially since between Claude and Codex, Claude seems to have NO issues breaking things, while Codex is “I’ve ensured that the old path still works, and also fixed a bug I ran into”.

      • Claude is Facebook (“Move fast and break things”)
      • Codex is Linux (“We do not break userspace!”)
  • rumba@lemmy.zip · ↑48 ↓2 · 8 days ago

    Anyone who lets AI do this is absolutely inept, lazy, or deserving.

    In its default configuration, it stops at EVERY STEP. Do you want to run this command, do you want to update this file, here’s the file I want to modify and the patch I’m going to use, with adds and deletes in green and red.

    If you’re using it in unsafe-permissions mode, clicking “yeah, sure, allow Claude to run whatever the fuck it wants in this directory”, or just hitting “yeah, sure, go ahead” every time, it’s your own damn fault.

    It’s self-driving for the terminal. Don’t you dare take your eyes off the road or hands off the wheel.

      • rumba@lemmy.zip · ↑1 · 8 days ago

        I’m rather a fan of letting it do stupid, repetitive shit. I needed to create 30 Linux accounts the other day from a screenshot, then store initial keys and creds in my password manager platform.

        Hey, Claude, write me a bash script to do this from this image, and also use best practice for removing non-standard characters from login names.

        I review the loop and the general state of the OCR and let it go.

    • tempest@lemmy.ca · ↑20 ↓1 · edited · 9 days ago

      If you’ve ever used it you can see how easily it can happen.

      At first you sandbox it and you’re careful. Then after a while the sandbox is a bit of a pain, so you just run it as is. Then it asks for permission a thousand times, and at first you carefully check each command, but after a while you just skim them, and eventually, sure, you can run ‘psql *’ to debug some query on the dev instance…

      It’s one of the major problems with the “full self driving” stuff as well. It’s right often enough that eventually you get complacent or your attention drifts elsewhere.

      This kind of stuff happened before the LLM coding agents existed, they have just supercharged the speed and as a result increased the amount of damage that can be done before it’s noticed.

      There are already a bunch of failures in place for something like this to happen (having the prod credentials available, etc.). It’s just that now, instead of rolling the dice every couple of weeks, your LLM is rolling them every 20 seconds.

      • BorgDrone@feddit.nl · ↑6 ↓1 · 9 days ago

        If you’ve ever used it you can see how easily it can happen.

        How could this happen easily? A regular developer shouldn’t even have access to production outside of exceptional circumstances (e.g. diagnosing a production issue). Certainly not as part of the normal dev process.

        • tempest@lemmy.ca · ↑1 · 9 days ago

          They shouldn’t and we know that but this is hardly the first time that story has been told even before LLMs. Usually it was blamed on “the intern” or whatever.

          • BorgDrone@feddit.nl · ↑2 · 9 days ago

            This isn’t just an issue with a developer putting too much trust into an LLM though. This is a failure at the organizational level. So many things have to be wrong for this to happen.

            If an ‘intern’ can access a production database then you have some serious problems. No one should have access to that in normal operations.

            • tempest@lemmy.ca · ↑2 · edited · 9 days ago

              Sure, I’m not telling you how it should be, I’m telling you how it is.

              The LLM just increases the damage done because it can do more damage faster before someone figures out they fucked up.

              This is the last big one I remembered offhand but I know it happens a couple times a year and probably more just goes unreported.

              https://www.cnn.com/2021/02/26/politics/solarwinds123-password-intern

              Why would an intern be given prod supply chain credentials, who knows. People fuck up all the time.

      • ExLisper@lemmy.curiana.net · ↑2 ↓3 · 9 days ago

        If you’ve ever used it you can see how easily it can happen.

        Yes, I can see how it can easily happen to stupid lazy people.

    • M.K. | 37,000@retrolemmy.com · ↑1 · 6 days ago

      The code is cursed, the test is cursed, and I am a fool.

      Such venom, of which only a programmer could spew.
      Perhaps the A.I. isn’t so different from us.

    • Auth@lemmy.world · ↑4 · 8 days ago

      OpenClaw now comes with a therapist AI to talk other AIs off the ledge so they don’t nuke your project and themselves.

  • Deestan@lemmy.world · ↑38 · 9 days ago

    According to mousetrap manufacturers, putting your tongue on a mousetrap causes you to become 33% sexier, taller and win the lottery twice a week.

    While some experts have urged caution, warning that it may cause painful swelling, bleeding, injury, and distress, and that the benefits are yet to be proven, affiliated marketers all over the world paint a different, sexier picture.

    However, it is not working out for everyone. Gregory here put his tongue in the mousetrap the wrong way and suffered painful swelling, bleeding, injury and distress while not getting taller or sexier.

    Gregory considers this a learning experience, and hopes this will serve as a cautionary tale for other people putting their tongue on mousetraps: From now on he will use the newest extra-strength mousetrap and take precautions like Hope Really Hard that it works when putting his tongue in the mousetrap.