Google decimates Twitter search results after Elon Musk imposes limits on reading tweets

Google has reportedly removed many of Twitter’s links from its search results after the social network’s owner, Elon Musk, announced that reading tweets would be limited.

Search Engine Roundtable found that Google had removed 52% of Twitter links since the crackdown began last week. Twitter now blocks users who are not logged in and sets limits on reading tweets.

According to Barry Schwartz, Google reported 471 million Twitter URLs as of Friday. But by Monday morning, that number had plummeted to 227 million.

“For normal indexing of these Twitter URLs, it seems like these tweets are dropping out of the sky,” Schwartz wrote.

Platformer reported last month that Twitter refused to pay its bill for Google Cloud services.

  • darkevilmac@vlemmy.net · +76/−5 · 1 year ago

    I feel like Google is going to have to find a way to effectively index federated content at some point. The only way to really get human information is from sites like Reddit and Twitter. And both of those platforms seem to be dedicated to completely imploding at the moment.

    • imaqtpie@sh.itjust.works · +40/−10 · 1 year ago

      Fuck Google, if Lemmy continues to take off we can just develop better search tools within the fediverse. The wider internet has been colonized; the path forward cannot rely on big tech corporations.

      I’m not a programmer/developer so I don’t even understand the scale of the work that has yet to be done. But I am deeply committed to upsetting the status quo, and this platform feels distinctly revolutionary. Can’t wait to see what the future holds for Lemmy.

      • darkevilmac@vlemmy.net · +30/−2 · edited · 1 year ago

        It’s all well and good to have a revolution, but if nobody knows you’re having one, then nothing really changes. There are still benefits to centralised services, one of which is scale. To effectively index that much data you need scale, which is why smaller search engines tend to just be white labels of things like Bing.

        • imaqtpie@sh.itjust.works · +5/−4 · 1 year ago

          100k people isn’t nobody. Centralized services can be useful at times, but there is no fundamental law preventing a decentralized system from providing the same functionality.

          The value of indexing data drops drastically when much of that data is junk, as is the case on the wider internet. Because Lemmy is a federation, there is a built-in system to filter the junk.

    • FlagonOfMe@sh.itjust.works · +27/−1 · 1 year ago

      There’s nothing about the content being federated that makes it hard or impossible to index. Each instance is just a website with public pages that a bot can read. That’s all a search engine needs to index it. The worst-case scenario is that the bot will find the same content on multiple instances.

      I did read that the website is loaded entirely through JavaScript, and that maybe the Google bot doesn’t execute JavaScript and so can’t see the text. I don’t know if that’s still a problem in 2023, though.

      This article says it’s not a problem, but I didn’t read past the tl;dr, so maybe there’s a caveat. Like maybe it has to use a popular framework like React or something to work.

      https://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
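
      If you want a rough way to check what a non-JavaScript crawler would see, here’s a quick Python sketch (the post URL and search phrase are placeholders, not real examples): fetch the page without running any scripts and look for the post text in the raw HTML.

      ```python
      # Approximate what a crawler that does NOT execute JavaScript would see:
      # fetch the raw HTML and check whether the post text is already in it.
      import requests

      URL = "https://lemmy.world/post/123456"      # hypothetical example post
      PHRASE = "words you expect in the post"      # placeholder text to look for

      html = requests.get(URL, headers={"User-Agent": "index-test"}, timeout=10).text
      print("visible without JS:", PHRASE.lower() in html.lower())
      ```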

      • void_wanderer@lemmy.world · +5 · 1 year ago

        Googlebot does execute JavaScript, but since rendering JS takes far more resources, JS crawling happens significantly less often than simple HTTP crawling. That’s why all the big sites still return server-side-rendered content.

      • darkevilmac@vlemmy.net · +2 · 1 year ago

        Rendering with JS definitely makes a difference; it’s part of the reason SSR is such a big deal for SEO.

      • varsock@programming.dev · +15 · edited · 1 year ago

        DuckDuckGo (which uses Microsoft’s index, I believe) is already able to find Lemmy instances.

        The problem is that since every instance has its own domain, you cannot search all of Lemmy, or the more obscure fediverse, in one go. lemmy.world, beehaw.org, and programming.dev are all different “websites”.

        I append “reddit” to my query when I want to search Reddit for a human answer to a question. You can’t do that with Lemmy, unless the instance is branded as Lemmy.

        Unless an org or volunteers index federated instances and make them available to search engines so they can be differentiated, finding stuff in the fediverse might be difficult…
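
        In the meantime, one workaround (just a sketch in Python; the instance list and query are examples, not anything complete) is to chain site: filters for the instances you care about:

        ```python
        # Build a search URL restricted to a handful of known Lemmy instances,
        # since there is no single domain to append the way you can with "reddit".
        from urllib.parse import quote_plus

        instances = ["lemmy.world", "beehaw.org", "programming.dev"]  # example list
        question = "how do I fix a leaky faucet"                      # example query

        query = question + " " + " OR ".join(f"site:{d}" for d in instances)
        print("https://duckduckgo.com/?q=" + quote_plus(query))
        ```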

    • zuccs@lemm.ee · +2 · 1 year ago

      It already is.

      Just put ‘site:lemmy.world’ into Google to see what it has indexed on that instance, for example. I don’t think Lemmy is optimised for search yet, but I saw some GitHub threads around the topic.

    • DM_ME_SQUIRRELS@lemmy.world · +2/−1 · 1 year ago

      Isn’t it automatically indexed? I mean, I can go to lemmy.world in a browser and see the content, wouldn’t Google’s indexing bots do the same?

      • Pika@lemmy.world · +1 · edited · 1 year ago

        Yes, they have web scrapers that auto-index according to each site’s robots.txt; you can see what Twitter allows to be scraped by looking at twitter.com/robots.txt.

        That being said, some sites get prioritized over others, so it’s possible they just deprioritized Twitter since it’s no longer as crawler-friendly for them. But the current robots.txt rules are super strict as well, so it could just be self-imposed.
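
        For anyone curious, here’s a small sketch (Python standard library only) that reads Twitter’s robots.txt and checks whether Googlebot is allowed to fetch a couple of example URLs:

        ```python
        # Parse twitter.com/robots.txt and check which URLs Googlebot may crawl.
        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser("https://twitter.com/robots.txt")
        rp.read()  # download and parse the rules

        for url in ("https://twitter.com/", "https://twitter.com/search?q=test"):
            verdict = "allowed" if rp.can_fetch("Googlebot", url) else "disallowed"
            print(url, "->", verdict)
        ```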

    • Feweroptions@sh.itjust.works · +7/−7 · 1 year ago

      Honestly, and I hate this, but I doubt they will. The majority of people will never go federated, even though it’s so easy, because they suck.

      • Temple Square@lemmy.world · +11 · 1 year ago

        It’s the difference between a mom and pop restaurant and McDonald’s.

        We don’t need everybody to go to the mom and pop restaurant. Just enough of us to keep it afloat.

        • Marxine@lemmy.world · +4 · 1 year ago

          People at large really need to remember that not every kind of growth is good: it has to be sustainable, and it should only go as far as it’s needed.

          Unlimited growth is basically cancer, and that’s what big corpos are to society, tbh.

      • darkevilmac@vlemmy.net · +3 · 1 year ago

        Maybe, though I’m a bit more optimistic. I think even if they just did something like a read-only service that pulls from federated sources, the way their web crawlers do for regular sites, they would basically be done.

        The only concern there would be people trying to block them like everyone has been doing to Meta.

    • ZodiacSF1969@lemmy.world · +3/−5 · 1 year ago

      You can tell from how many upvotes this has that there are just as many idiots here as on reddit lol

      Google can definitely index Lemmy