AskLemmy temporarily closed - eviltoast

We have temporarily locked posting on AskLemmy until the CSAM posting stops.

    • maegul (he/they)@lemmy.ml · 73 points · 9 months ago

      I feel like this is an underrated idea. Resonates with the whole thing of making a subset of the internet simpler and just like documents, as with the simpler protocols like Gemini etc.

      • Davel23@kbin.social · 25 points · 9 months ago

        That would still allow links to be posted. Better than allowing image posts, but not a complete solution.

        • maegul (he/they)@lemmy.ml · 42 points · 9 months ago

          It prevents concerns about hosting CSAM posted by someone else. A categorical improvement I’d say. But yes, nothing’s perfect.

        • rar@discuss.online · 18 points · 9 months ago

          Still better than nothing. Easier for mods of text-only communities to only have text-only posts submitted.

          • Iron Lynx@lemmy.world · 10 points · 9 months ago

            If we then add a few conditions: “no links in the root message” and “OP may not be the first to comment within some unspecified amount of time,” that could make it even easier to limit CSAM.

        • testeronious@lemmy.world · 4 up / 9 down · 9 months ago

          You’re right. I thought of another idea: use karma to decide who can post links, images and/or videos. 50 general karma for links, 100 for pictures, you get the idea.

          • DaleGribble88@programming.dev · 14 points · 9 months ago

            Not an effective solution for a federated service. Just spin up a new instance and give yourself karma. Shoot, there is no centralized service for validating accounts, so just set up 50 alts across 50 instances.

  • whaleross@lemmy.world · 105 up / 6 down · 9 months ago

    CSAM? What is CSAM? Is it a rewrite of “scam”?

    Googles…

    Oh no. Oh no no no. Why are people so fucking shit?

      • Chozo@kbin.social · 30 points · 9 months ago

        Also important to note: this feature will only really work against real CSAM. The images that were posted to this community weren’t real CSAM but were pictures/gifs of adult models, with titles/captions that would imply they were CSAM. I don’t think Cloudflare can do much about those.

        At least, the handful of posts that I saw were like this. I’m doubtful that the guy doing this is uploading actual CSAM to the clearnet.
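
        For context: scanning tools of this kind (Cloudflare’s included, as far as it’s publicly described) work by matching uploads against hash lists of already-identified material, so images that aren’t in those lists never trigger anything. A toy sketch of that matching step, with plain SHA-256 standing in for the fuzzy hashing real tools use:

```python
# Toy illustration of hash-list matching; real scanners use fuzzy/perceptual
# hashes against clearinghouse-provided lists, not plain SHA-256.
import hashlib

KNOWN_BAD_HASHES: set[str] = set()   # would be populated from a vetted hash list

def matches_known_material(image_bytes: bytes) -> bool:
    """Only images already present in the hash database can ever match."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

# A never-before-seen image, or a legal image with an abusive caption,
# produces no match -- which is the limitation described above.
```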

        • Catoblepas@lemmy.blahaj.zone · 8 points · 9 months ago

          I hope you’re right, because as someone that sometimes browses by new I keep seeing it and it’s upsetting as fuck to think it could be real.

          It’s weird that they’re targeting the asklemmy communities in particular. I don’t think the .ml and .world communities are even related, are they?

          • Chozo@kbin.social · 16 points · 9 months ago

            Nah, they’re completely separate communities, so no real link that I can see there.

            I dug around a bit, and one of the sites he was using to host the images was some weird 4chan-like image board. But it seems like he may have been trolling them, too, because even though it’s a degenerate board full of racist garbage, it’s not otherwise full of CSAM, and his posts were eventually deleted from that board as well. So I don’t think they were willingly hosting those images, either. I mention this because I saw some people calling to ban links to that domain, which probably should still be done because it’s a trash website, but not because it’s a CSAM haven of any sort.

            It makes me think that this isn’t targeted at any one community, just some random weirdo trying to make the internet a worse place wherever he can.

  • u/lukmly013 💾 (lemmy.sdf.org)@lemmy.sdf.org · 46 points · 9 months ago

    Well… it seems there’s some issue with post removal federation. There are still 2 posts visible from my home instance.

    And now it’s definitely cached on our instance, and on every other instance with pict-rs enabled.
    This is what makes me scared of self-hosting an instance. I would basically be hosting it, and I would be responsible for such content.

  • Corroded@leminal.space · 27 up / 3 down · 9 months ago · edited

    Is there a way AskLemmy and other major communities could prevent new users from making posts in the future?

    Like an account has to be over a month old to post, for example. Maybe that could help prevent these kinds of disgusting attacks.

    I don’t know if Lemmy has a moderator tool available that could do something like that though.

    • u/lukmly013 💾 (lemmy.sdf.org)@lemmy.sdf.org · 49 up / 7 down · 9 months ago

      I don’t quite like that idea. It’s something I really hated on Reddit. It just discourages new people from joining. Besides, you could self-host an instance with accounts claiming to be made in 1970.

      • Scrubbles@poptalk.scrubbles.tech · 21 points · 9 months ago

        Unfortunately there aren’t many great options right now. No one likes it, but the people posting CSAM are the ones to blame there. They quite literally ruin it for everyone because they’re butthurt about something that happened that they didn’t like.

        • BigMoe@lemmy.zip · 10 points · 9 months ago

          Do we know what they are butthurt about? There is never an excuse for what they are doing, but I’m curious what happened to set it off, if the reason is even known.

          • Scrubbles@poptalk.scrubbles.tech · 11 up / 1 down · 9 months ago

            Nope, they’re too cowardly to use their actual accounts and are making them anonymously. All we know is that rather than being mature about a mod action and simply leaving and creating an account elsewhere they decided to do this.

      • Corroded@leminal.space · 14 points · 9 months ago

        Good point. I didn’t think about how easy that would be to fake.

        That said I would still prefer it to some subreddit’s cryptic karma requirements. If it worked I mean.

        • FaceDeer@kbin.social · 13 up / 6 down · 9 months ago

          And here’s the spot where I point out that using a blockchain for recording accounts would be a good technological fit for a decentralized system like the Fediverse, and then get pilloried for being a “cryptobro” or whatever.

          Seriously, all that you’d need to use the blockchain for would be a basic record of “this account holder has this name on that instance” and you get all sorts of unspoofable benefits from that. No tokens, no fancy authentication if you don’t want it, just a distributed database that you can trust.
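
          A rough sketch of the kind of record being described, with made-up names and a plain append-only list standing in for an actual chain (nothing here is from Lemmy’s or kbin’s code):

```python
# Illustrative only: an append-only list standing in for whatever chain an
# implementation would actually use.
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountRecord:
    username: str         # e.g. "alice"
    instance: str         # e.g. "lemmy.example"
    registered_at: float  # timestamp fixed when the record is appended

class ToyRegistry:
    """Append-only registry; a real blockchain gets its immutability from consensus."""

    def __init__(self) -> None:
        self._records: list[AccountRecord] = []

    def register(self, username: str, instance: str) -> AccountRecord:
        record = AccountRecord(username, instance, time.time())
        self._records.append(record)
        return record

    def lookup(self, username: str, instance: str) -> AccountRecord | None:
        for record in self._records:
            if record.username == username and record.instance == instance:
                return record
        return None
```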

          • infinitepcg@lemmy.world · 12 points · 9 months ago

            “this account holder has this name on that instance”

            How would that help? A spam bot could just make lots of blockchain wallets.

            “you get all sorts of unspoofable benefits from that”

            What are the benefits? I struggle to come up with any.

            • FaceDeer@kbin.social · 6 up / 2 down · 9 months ago

              The issue that was being discussed was blocking accounts from posting if they were younger than a certain age. The blockchain has an unspoofable timestamp on its records.

              • infinitepcg@lemmy.world · 7 points · 9 months ago

                I see. I’m not convinced that proving the account creation date makes much of a difference here. Obviously the instance records when you sign up, so you would only need this to protect against malicious instances. But if a spammer is manipulating their instance to allow them to spam more, you have a much bigger problem than reliably knowing their account creation date.

                • FaceDeer@kbin.social · 4 up / 1 down · 9 months ago

                  It’s a matter of trust. A random instance can always lie and you can only determine “that was a malicious instance that was lying to me” in hindsight after it’s broken that trust. Since a malicious instance-runner can spin up new instances almost as easily as creating new fake accounts you end up with a game of whack-a-mole where the malicious party can always get a few bad actions through before getting whacked. Whereas if user account creation was recorded on a blockchain you don’t need to ever trust the instance in the first place. You can always know for sure that an account is X days old.

                  A malicious instance-runner could still spin up fresh instances and fake accounts ahead of time, but it forces them to do it X days in advance, and if they want to keep attacking they now have a longer delay on it. A community that’s under attack could set the limit to 30 days, for example, and now the attacker is out of action for a full month until their next crop of fake instances is “ripe.” As always with these sorts of decentralized systems, there are tradeoffs and balances to be struck. The idea is to make things as hard for malicious users as possible without making it harder for the non-malicious ones in the process. Right now the cycle time for the whack-a-mole is “as fast as the attacker wants it to be,” whereas with a trustworthy account age authentication layer the cycle time becomes “as slow as the target wants it to be.”
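
                  A minimal sketch of that age gate, assuming the registration timestamp comes from a record the community can verify rather than from the account’s home instance (the 30-day figure is the example above):

```python
# Minimal sketch of the age gate; `registered_at` is assumed to come from a
# record the community itself can verify, not from the account's home instance.
import time

MIN_ACCOUNT_AGE_DAYS = 30  # the example figure used above

def old_enough(registered_at: float, now: float | None = None) -> bool:
    """True once the account has existed for at least the configured window."""
    now = time.time() if now is None else now
    return (now - registered_at) >= MIN_ACCOUNT_AGE_DAYS * 86400

# A fresh batch of fake accounts created today only clears this check a full
# month from now, which is the delay described above.
```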

          • ALostInquirer@lemm.ee · 12 points · 9 months ago

            Instead of preempting criticism/downvotes, perhaps it would help to more clearly describe what kind of implementation of blockchain you mean?

            If it would still involve some questionable consensus mechanism that either consumes a large amount of energy (Proof-of-Work) or may benefit larger stakeholders (Proof-of-Stake), then even setting aside the cryptocurrency associations, I’m not sure it’s necessarily worth it. However, if I’m not mistaken, there are implementations that may not require those, but may still provide the sort of benefit you’re suggesting, aren’t there?

            • FaceDeer@kbin.social · 2 up / 1 down · 9 months ago

              I’ve elaborated in some of the subsequent comments. I guess I wanted to “test the waters” a bit, if I got a strong negative reaction for simply mentioning a blockchain-based solution I would have sighed and moved on.

              Proof-of-stake doesn’t benefit larger stakeholders any more than it benefits smaller stakeholders, the common “rich-get-richer” objection is based on a misunderstanding of how the economics of staking actually operates. Since every staker gets rewarded in exact proportion to the size of their stake the large stakers and small stakers grow at the same relative rates. It’s actually proof-of-work that has an inherent centralization pressure due to the economies of scale that come from running large mining farms.
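
              A toy calculation of that proportionality claim; the 4% rate is an arbitrary example, not a real network parameter:

```python
# If every staker earns the same rate, relative shares never change.
# The 4% figure is an arbitrary example, not a real network parameter.
big_stake, small_stake = 1_000_000.0, 1_000.0
rate = 0.04  # identical annual reward rate for both stakers

for _year in range(3):
    big_stake *= 1 + rate
    small_stake *= 1 + rate

# The ratio is still 1000:1, so the large staker gained no ground on the small one.
print(round(big_stake / small_stake, 6))  # -> 1000.0
```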

              • ALostInquirer@lemm.ee · 2 points · 9 months ago

                Proof-of-stake doesn’t benefit larger stakeholders any more than it benefits smaller stakeholders, the common “rich-get-richer” objection is based on a misunderstanding of how the economics of staking actually operates.

                That wasn’t what I was referring to, but I should have phrased that part of my comment better. When I wrote that it may benefit larger stakeholders more, what I meant was that, by my rough understanding, larger stakeholders have more influence or sway over the consensus mechanism. It’s been a while since I looked into it last, so I can’t remember the details exactly, but that’s what I recall of what I read.

                It wasn’t the rich-get-richer problem, so much as the rich-hold-outsized-influence problem. Similar but distinct.

                • FaceDeer@kbin.social · 2 up / 1 down · 9 months ago

                  It may be counterintuitive, but stakers don’t actually have influence over the consensus mechanism. It’s actually the other way around. Consider it this way; the stake that a staker puts up is a hostage that the staker is providing to the blockchain. If I stake a million dollars worth of Ether, I’m basically telling the blockchain’s users “you can trust me to process blocks correctly because if I fail to do so you can destroy my million dollar stake.” I have a million dollars riding on me following the blockchain’s rules. That’s literally why it’s called a “stake.”

                  The people who are actually “in charge” of which consensus rules are in use are the userbase as a whole, the ones who pay transaction fees and give Ether value by purchasing it from the validators. If some validators were to go rogue and create a fork that was to their liking but not to the liking of the userbase, the rogue validators would be holding worthless tokens on a blockchain that nobody is using. You can see the effects of this by the way the blockchain is continuing to update in ways that are good for the general userbase but not necessarily for the validators - MEV-burn, for example, is a proposal that would reduce the amount of money that validators could make but there’s no concern that I’ve seen about the validators somehow “rejecting” it. If the userbase wants it the validators can’t reject it without losing much more than they could hope to gain.

                  Ironically, proof-of-work is more vulnerable to this kind of thing. If a proof-of-work chain were to fork and a substantial majority of the validators didn’t agree with the fork then they could attack it with 51% attacks. The forked chain would need to change its PoW algorithm to stop the attacks, and that would destroy all the “friendly” miners along with the attackers.

                  Validators in a PoS blockchain could also launch attacks at a contentious fork, but they’d burn their stake in the process whereas the validators that did what the userbase wanted would keep theirs. So there’s a powerful incentive to just go along with the userbase’s desires.

          • ShittyBeatlesFCPres@lemmy.world · 4 points · 9 months ago

            I’m not saying you’re wrong, but why would this be the first time blockchain stopped illegal activity instead of facilitating it? It’s like 15-year-old tech and hasn’t made a significant impact outside of niche projects like cryptocurrencies.

            • FaceDeer@kbin.social · 2 up / 2 down · 9 months ago

              To the first, there are a vast number of legal applications for blockchains.

              To the second, it’s not the same tech as it was 14 years ago. There have been a lot of advancements over that period.

              If you trace ActivityPub’s lineage back to its origin, it’s 14 years old too - it started as OpenMicroBlogging in 2009. It then became OStatus, which became standardized as ActivityPub. It’s barely the same thing any more. The same thing has happened with blockchains: the version of Bitcoin that launched in 2009 is nothing like the cutting-edge stuff, like Ethereum, these days.

          • Phil K@lemmy.world · 5 up / 1 down · 9 months ago

            Putting aside that this use case doesn’t meet the five requisites for blockchain use, the fediverse in general, and Lemmy in particular, are already struggling with too much data being stored and moved.

            • FaceDeer@kbin.social · 6 up / 2 down · 9 months ago

              Searching for “the five requisites for blockchain use” isn’t finding anything relevant, what requisites do you mean?

              This wouldn’t be storing more data, it would be storing existing data. It would just be putting it somewhere that can be globally read and verified.

              • my_hat_stinks@programming.dev · 1 point · 9 months ago

                How do you store data in a decentralised way without having many redundant copies? The decentralisation of a blockchain comes from many machines maintaining their own copy of the entire history; the entire concept inherently stores more data. Your suggestion is literally to store more data, and claiming it won’t store more data only suggests you don’t know how blockchain works.

                And that’s not even including the overhead of implementing a blockchain in the first place, or the fact that you’ll be storing data on literally every user even if they never interact with your instance, or even if their instance is entirely blocked from yours. And there’s no way around that: if you do manage to selectively store only some subset of users, then when you do need to include that data you’re trusting the subset of maintainers who do have that user’s data, which, initially, is only the user’s home instance, so we’re back to square one.

                • FaceDeer@kbin.social · 1 up / 2 down · 9 months ago

                  Yes, my point is that that sort of thing is exactly what blockchains are for. They handle all of that already. So there’s no need for Fediverse servers to reinvent all of that, they can just use existing blockchains for it.

          • maegul (he/they)@lemmy.ml · 3 up / 1 down · 9 months ago

            As someone (who’s not a fan of the fediverse) put it to me:

            “The Fediverse is web2.5: the worst of both web2.0 and web3.0.”

            I think there’s something to that. Web2.0 is so instilled in the fediverse’s makers that I’m not entirely sure their solutions can be trusted in the long term.

            It makes sense that down the line, when bitcoin and crypto hype finally settles into knowing what’s actually useful, some sort of cryptographic mechanisms will become normal in decentralised tech. BlueSky may make this mainstream.

            • FaceDeer@kbin.social · 6 up / 2 down · 9 months ago

              That’d be nice. Personally, I think the tech is just about ready - Ethereum has solved its environmental issues with proof-of-stake, and has solved its transaction cost issues with rollup-based “layer 2” blockchains. At this point I think the main obstacle is the knee-jerk popular reaction to anything blockchain-related as being some kind of crypto scam. I’m actually quite pleasantly surprised that I haven’t been downvoted through the floor for talking about this here so perhaps there’s a light at the end of the tunnel.

              • maegul (he/they)@lemmy.ml · 4 points · 9 months ago

                I personally have the knee-jerk reaction. I don’t understand anything you’re saying about blockchain. I’ve heard of farcaster (if you haven’t you might be interested) and nostr (ditto) but don’t know how they work.

                The lack of mega downvotes, I’d guess, comes from the fact that people here appreciate the value of decentralisation and also can imagine from experience that a better system is possible than the relatively clumsy “let’s just send copies and requests everywhere”.

                In the end I don’t know. But I can see the decentralised social web being where cryptographic technology finds its mainstream landing (BlueSky, like I said, being an interesting space to watch as it’s the middle ground on that front).

                • FaceDeer@kbin.social · 4 up / 1 down · 9 months ago

                  I could try explaining in more accessible terms, if you like. I actually enjoy discussing this stuff but I don’t want to derail the thread or sound like I’m evangelizing.

                  I think solutions like this are best handled entirely on the back end, the general user wouldn’t even need to know a blockchain was involved. The blockchain would just be a data provider that the instance software is using behind the scenes to track stuff. Just like how a general user has no need to understand how the HTTPS protocol actually operates, they just point their web browser at an address and the technical details are handled behind the scenes.

      • logicbomb@lemmy.world · 5 points · 9 months ago

        Are new instances automatically federated? If not, then it seems like making an instance, then hosting content enough to be federated, would be an awful waste of time and money, as I’d expect an instance like that would be quickly defederated.

        • u/lukmly013 💾 (lemmy.sdf.org)@lemmy.sdf.org · 11 points · 9 months ago

          Somewhat. All the communities have to be looked up manually by users, and followed to continue federating the content into that instance.

          But for this purpose the answer is yes. At least as far as I know, you can immediately start posting to other instances. Otherwise private instances would be of no use.

      • Lvxferre@mander.xyz · 4 points · 9 months ago

        I don’t like it either. Age/karma requirements work under an inherently flawed idea: that you’re guilty (i.e. a shitposter) unless proven otherwise (by using an old enough or karma-rich enough account). And they’re damn easy to avoid if you’re determined to shit on a community.

        IMO better ideas revolve around:

        • Decreasing the attack surface. In this case: only text posts allowed; there’s barely any legitimate reason to allow image posts here anyway (see the sketch below).
        • Proper tools so mods can escalate rule violations to the admins. I’m almost certain that admins can see posters’ IPs; they should use that info to ban the posters as well. Perhaps in some situations the mods could even be granted temporary rights to see posters’ IPs? (Just an idea.)
        • Proper tools so mods have an easier time spotting potentially problematic content.

        Sadly they all depend on the software, and Lemmy isn’t exactly known for having good mod tools.
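
        The text-only check from the first bullet could be as small as the sketch below; the field names are hypothetical, not Lemmy’s actual schema:

```python
# Hypothetical field names, not Lemmy's actual API schema.
import re

URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def allowed_in_text_only_community(body: str, url: str | None) -> bool:
    """Reject posts carrying a link/image attachment; the stricter variant
    also rejects inline links in the body text."""
    if url:                        # post has an attached URL or image
        return False
    if URL_PATTERN.search(body):   # stricter: no inline links either
        return False
    return True
```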

          • Lvxferre@mander.xyz · 4 points · 9 months ago

            I’m aware that IP bans inconvenience users who did nothing wrong. But I feel like this can be alleviated:

            • make the IP ban temporary. The idea is to force the spammer to get another IP or give up, not to use the IP itself as the enforcement.
            • IP-ban only account creation, not activity. So even if you’re using the same IP, as long as you already have an account, you should be unaffected.

            But… well, we’re back into “lemmy needs better built-in mod tools” territory.
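
            A small sketch of those two points together, purely illustrative rather than actual Lemmy code:

```python
# Illustrative only: the ban is (a) temporary and (b) checked only at signup.
import time

BAN_DURATION_SECONDS = 7 * 24 * 3600   # example: one week
_signup_bans: dict[str, float] = {}    # ip -> ban expiry timestamp

def ban_signups_from(ip: str) -> None:
    _signup_bans[ip] = time.time() + BAN_DURATION_SECONDS

def may_create_account(ip: str) -> bool:
    expiry = _signup_bans.get(ip)
    if expiry is None or time.time() >= expiry:
        _signup_bans.pop(ip, None)     # expired (or never banned)
        return True
    return False

# Posting with an existing account never consults _signup_bans at all,
# which is what keeps already-registered users on that IP unaffected.
```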

          • pinkdrunkenelephants@lemmy.cafe · 2 up / 1 down · 9 months ago

            Then hosts need to ban VPNs.

            They need to use cookies that attach a unique identifier to each machine to enforce bans per machine. Hash the cookie so it can’t be edited. If a user clears their cookies, they need to put in a special private key to get back into their account.

            Or just make users scan in ID or pay with a credit card to gain membership.

            None of those ideas are perfect but they are needed for better ban enforcement overall anyway.
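
            One hedged sketch of the identifier-cookie idea: a plain hash can be recomputed by anyone, so the usual approach is to sign the value with a server-side secret (an HMAC) rather than just hash it. All names here are made up:

```python
# Illustrative only. A bare hash can be recomputed by anyone, so the value is
# authenticated with an HMAC keyed by a server-side secret instead.
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)   # kept server-side, never sent to clients

def issue_identifier() -> str:
    device_id = secrets.token_hex(16)
    tag = hmac.new(SERVER_SECRET, device_id.encode(), hashlib.sha256).hexdigest()
    return f"{device_id}.{tag}"           # value stored in the cookie

def verify_identifier(cookie_value: str) -> bool:
    try:
        device_id, tag = cookie_value.split(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, device_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```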

      • Corroded@leminal.space · 4 points · 9 months ago

        Understood. Is that an option for moderators though?

        Like I said, I don’t know if Lemmy gives you that option or if you’d need to set up some kind of bot or an instance-level option.

        • Thekingoflorda@lemmy.world · 17 points · 9 months ago · edited

          That would need to be a bot. The problem is that the spammer would just move on to the next community (which they have just done by moving to asklemmy@lemmy.ml). I just put a tool up that automatically notifies a bunch of admins, mods and community team members when a post gets reported more than 3 times, so please report the posts if you see them.
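
          The tool itself isn’t shown here, but the core of such a notifier is just a report counter with a threshold. A rough sketch, with the report fetching and the actual notifications left as placeholders:

```python
# Rough sketch only; fetching open reports and actually pinging people
# (Lemmy API calls, Matrix, email, ...) are left as placeholders.
from collections import Counter

REPORT_THRESHOLD = 3
_already_notified: set[int] = set()

def posts_to_escalate(open_reports: list[dict]) -> list[int]:
    """open_reports: one dict per report, each assumed to carry a 'post_id' key."""
    counts = Counter(report["post_id"] for report in open_reports)
    new_hits = [post_id for post_id, count in counts.items()
                if count > REPORT_THRESHOLD and post_id not in _already_notified]
    _already_notified.update(new_hits)
    return new_hits   # the caller notifies admins/mods about these posts
```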

    • Preventing any posting in general might be a bit too restrictive IMO. However, I think new users, or users on VPNs, probably shouldn’t be allowed to post images so freely.

      I believe lemm.ee has a minimum account age limit before users can upload directly to the instance, and dbzer0 scans all user uploaded images for anything that could be questionable.

      Perhaps there should be additional restrictions on links to images hosted outside of Lemmy? I blocked the domain within moments of it appearing on my feed; absolutely disgusting.

    • Asidonhopo@lemmy.world · 29 points · 9 months ago

      The payoff could be related to the coming Reddit IPO: making alternatives look unappealing or unsustainable.

  • Sybil@lemmy.world · 15 up / 4 down · 9 months ago

    I don’t think that’s going to be very effective. I haven’t seen any of this, but it sounds like a Sybil attack. AskLemmy isn’t the only vector; lemmy.world is going to need to do something, possibly drastic.

  • Rentlar@lemmy.ca · 13 up / 5 down · 9 months ago · edited

    I suggest preventing new accounts from uploading photos for 3 days, to curb abuse.

    3 days should be enough to make most people think twice before doing something so stupid, harmful and illegal. Most users don’t upload photos right as they sign up anyway, so the effect on legitimate use should be negligible.

    • example@reddthat.com · 20 points · 9 months ago

      That doesn’t do anything: they’ll just register accounts in advance and wait a few days.

      We’ve even had spam recently from accounts that had been dormant for months, although it was a different kind of spam.

      • Rentlar@lemmy.ca · 6 up / 2 down · 9 months ago

        I’m not saying it will prevent everything, including attacks from those with longstanding grudges. But especially if the period isn’t publicly announced and varies from server to server, it will stop the impulsive trolls who won’t bother making a bunch of accounts in advance.

        Similar to mandatory waiting periods for gun purchases, or the ID-creation waiting period for the community-run Wiimfi online service.

        • example@reddthat.com · 8 points · 9 months ago

          At that point you’ll just discourage new users if they have to gamble on whether or not their content is actually seen by anyone. Account age really isn’t a good indicator of anything other than someone being dedicated enough to spam. Considering this isn’t the first wave of CSAM attacks, I can assure you that whoever is targeting Lemmy with this is determined enough that account age won’t deter them for long; they’ll just have to slightly adjust their playbook.