Google won’t comment on a potentially massive leak of its search algorithm documentation

A purported leak of 2,500 pages of internal documentation from Google sheds light on how Search, the most powerful arbiter of the internet, operates.

The leaked documents touch on topics like what kind of data Google collects and uses, which sites Google elevates for sensitive topics like elections, how Google handles small websites, and more. Some information in the documents appears to be in conflict with public statements by Google representatives, according to Fishkin and King.

  • zutto@lemmy.fedi.zutto.fi

    Hi!

    Great question! I don’t crawl reddit, but this applies to other large sites as well. Reddit has, at this very moment, banned the IP range where I host my Yacy instance (Hetzner). I just looked it up in my index: I have 257k pages indexed from reddit through a Teddit instance I used to run, from before the reddit API enshittification; I’m going to delete those right now.

    As for how the crawling is done: you define a crawling depth, which limits how much content is crawled from a site.

    • 0 crawling depth = only the page you send Yacy to, nothing more.
    • 1 crawling depth = all the links on the page you send Yacy to
    • 2 crawling depth = all the links on the page you send Yacy to, and all the links on the pages crawled from it…
    • 3 …
    • n …

    … etc.
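    To make the depth setting concrete, here is a minimal sketch of depth-limited link traversal in JavaScript. This is not Yacy’s actual crawler code, just an illustration of the idea; the function name and the naive link extraction are made up for the example.

    ```javascript
    // Minimal illustration of depth-limited crawling (not Yacy's real code).
    // maxDepth 0 fetches only the start page; 1 also fetches its links; and so on.
    async function crawl(startUrl, maxDepth) {
      const seen = new Set();
      let frontier = [startUrl];

      for (let depth = 0; depth <= maxDepth && frontier.length > 0; depth++) {
        const next = [];
        for (const url of frontier) {
          if (seen.has(url)) continue;
          seen.add(url);
          const html = await fetch(url).then(r => r.text()).catch(() => "");
          console.log(`depth ${depth}: indexed ${url}`);
          // Very naive link extraction, good enough for the illustration.
          for (const m of html.matchAll(/href="(https?:\/\/[^"]+)"/g)) {
            next.push(m[1]);
          }
        }
        frontier = next;
      }
      return seen;
    }

    // Example: depth 1 = the start page plus everything it links to.
    // crawl("https://example.com/", 1).then(pages => console.log(pages.size, "pages"));
    ```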

    I have my tampermonkey scripts set to a crawling depth of only 1 at the moment (just set them to 2, actually; kinda curious how much more I will be crawling), and I manually crawled some local news sites as a curiosity at the beginning. My database is currently relatively small, only around 86.38 gigabytes according to Yacy, which works out to approximately 2.6 million documents in Yacy’s Solr index.

    [Images: Yacy memory & disk usage; Yacy Solr index size]

    Yacy has tons of options for crawling, so you can customize how much it crawls and even cap overly large sites by setting a maximum number of documents when you send Yacy there.

    [Image: Yacy’s interface for starting a crawl]
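    If you would rather script the crawl start than click through that form, the same settings can be sent over HTTP. This is only a hedged sketch: the endpoint path and parameter names (Crawler_p.html, crawlingMode, crawlingURL, crawlingDepth, crawlingDomMaxPages) are assumptions from memory, so check your own instance’s crawl-start page for the exact field names.

    ```javascript
    // Hedged sketch: starting a Yacy crawl over HTTP instead of the web form.
    // Endpoint and parameter names are assumptions -- verify them against your
    // own instance before relying on this.
    const YACY = "http://localhost:8090"; // default Yacy port
    const params = new URLSearchParams({
      crawlingMode: "url",                 // assumed: crawl from a single start URL
      crawlingURL: "https://example.com/", // the page you send Yacy to
      crawlingDepth: "2",                  // same depth setting as described above
      crawlingDomMaxPages: "10000",        // assumed: cap on documents per site
    });

    fetch(`${YACY}/Crawler_p.html?${params}`, {
      // Admin pages normally sit behind HTTP auth; these credentials are placeholders.
      headers: { Authorization: "Basic " + btoa("admin:yourpassword") },
    })
      .then(r => console.log("crawl start returned", r.status))
      .catch(err => console.error(err));
    ```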

    The tampermonkey script I’ve been talking about in these posts is a very simple one: https://github.com/JeremyRand/YaCyIndexerGreasemonkey
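    For anyone who has not written one, the general shape of such a userscript is roughly the following. This is not the linked script, just a hedged sketch; the Yacy endpoint and parameters are the same assumptions as in the snippet above.

    ```javascript
    // ==UserScript==
    // @name         Send visited pages to Yacy (illustrative sketch)
    // @match        *://*/*
    // @grant        GM_xmlhttpRequest
    // ==/UserScript==

    // Hedged sketch only -- see the linked YaCyIndexerGreasemonkey repo for the
    // real thing. The endpoint and parameter names are assumptions.
    (function () {
      const YACY = "http://localhost:8090";
      const target = `${YACY}/Crawler_p.html?crawlingMode=url` +
                     `&crawlingURL=${encodeURIComponent(location.href)}` +
                     `&crawlingDepth=1`;

      // GM_xmlhttpRequest may cross origins, unlike plain fetch, which is why
      // userscripts typically use it to reach a Yacy instance on another host.
      GM_xmlhttpRequest({
        method: "GET",
        url: target,
        onload: (res) => console.log("Yacy crawl request:", res.status),
      });
    })();
    ```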

    Hit me up if you guys have more questions! I’m by no means an expert on Yacy, but I will do my best to answer.