How hard can generating 1024-bit primes really be?
  • solrize@lemmy.world · 7 months ago

    This is a pretty lame article. The idea is just to use a bignum library, or a language with native bignums. While a few optimizations help, you basically just generate random 1024-bit numbers until you get one that passes a pseudoprime test, and call it a day. The rest of the article converts the above into a beginner Rust exercise, but I think it’s preferable not to mix up the two.

    From the prime number theorem, roughly 1 in 700 numbers of that size is prime (the density is about 1/ln(2^1024)). After filtering out candidates with small divisors, you may end up doing 100 or so pseudoprime tests, say Fermat tests (3**n mod n == 3). A reasonable library on today’s machines can do one of those tests in around 1 ms, so you are good.
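
    In other words, something like this (a rough Python sketch of that approach, not the article’s code; standard library only, and the function names and small-prime list are just for illustration; a single base-3 Fermat test only gives a probable prime):

        import random

        SMALL_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

        def fermat_test(n, base=3):
            # base**n mod n == base holds for every prime n (and for some pseudoprimes)
            return pow(base, n, n) == base

        def random_probable_prime(bits=1024):
            while True:
                # force the top bit (so the number really is `bits` long) and the low bit (odd)
                n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
                if any(n % p == 0 for p in SMALL_PRIMES):
                    continue  # cheap small-divisor filter before the expensive modexp
                if fermat_test(n):
                    return n

        print(random_probable_prime())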

    RSA is deprecated in favor of elliptic curve cryptography these days anyway.

    • farcaster@lemmy.world (OP) · 7 months ago

      The author pointed out they also could’ve just called openssl prime -generate -bits 1024 if they weren’t trying to learn anything. Rebuilding something from scratch and sharing the experience is valuable.

      • solrize@lemmy.world · 7 months ago

        There are two things going on in the exercise: 1) some introductory Rust programming; 2) some introductory math and crypto.

        Maybe it’s just me, but I think it’s better to separate the two. If you’re going to do a prime number generation exercise, it will be easier in (e.g.) Python, since bignum arithmetic is built in, you don’t have the memory management headaches, etc. If you’re going to do a Rust exercise, imho it’s better to focus on Rust stuff.

        • farcaster@lemmy.world (OP) · 7 months ago

          There isn’t even any memory management in their code. And arguably the most interesting part of the article is implementing a bignum type from scratch.
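
          To give a flavor of what “from scratch” means there (a toy Python sketch, not the article’s actual representation): a bignum is typically a little-endian array of fixed-width limbs, with carries propagated between them.

              # Toy illustration only: a bignum as a little-endian list of 32-bit limbs,
              # with schoolbook addition carrying between limbs.
              LIMB_BITS = 32
              LIMB_MASK = (1 << LIMB_BITS) - 1

              def add(a, b):
                  out, carry = [], 0
                  for i in range(max(len(a), len(b))):
                      s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
                      out.append(s & LIMB_MASK)
                      carry = s >> LIMB_BITS
                  if carry:
                      out.append(carry)
                  return out

              # (2**32 + 5) + 7  ->  limbs [12, 1], i.e. 2**32 + 12
              print(add([5, 1], [7]))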

  • Bazebara@programming.dev · 6 months ago

    Nice article, I enjoyed it. Why was float sqrt used? Integer sqrt is way faster and easily supports integers of any length.
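
    (To illustrate the “any length” point, in Python rather than the article’s Rust: a double only carries 53 bits of mantissa, so a float sqrt of a big integer cannot give the exact root, while an integer sqrt can.)

        import math

        n = 3**500 + 1                 # an arbitrary large integer, far beyond 53 bits
        via_float = int(math.sqrt(n))  # float path: most of the low digits are garbage
        exact = math.isqrt(n)          # exact integer square root (Python 3.8+)

        print(via_float == exact)              # False
        print(exact**2 <= n < (exact + 1)**2)  # True: isqrt is exact by definition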

        • farcaster@lemmy.world (OP) · 6 months ago · edited

          Well, yeah, but you asked why they didn’t use integer sqrt. It’s something many programming languages just don’t have. Or if they do, it’s internally implemented as a sqrt(f64) anyway, like C++ does.

          Most CPUs AFAIK don’t have integer sqrt instructions so you either do it manually in some kind of loop, or you use floating point…

          • Bazebara@programming.dev · 6 months ago

            Integer sqrt is usually not a library function, but it’s very easy to implement, just a few lines of code; the algorithm is well described on Wikipedia. And yes, it doesn’t use the FPU at all, and it’s quite fast even on an i8086.
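
            For reference, a Python sketch of the usual Newton’s-method integer square root, the kind of loop being described (the function name is just for illustration):

                def isqrt(n):
                    # integer Newton iteration; converges to floor(sqrt(n)) for n >= 0
                    if n < 2:
                        return n
                    x = 1 << ((n.bit_length() + 1) // 2)  # initial guess >= sqrt(n)
                    while True:
                        y = (x + n // x) // 2
                        if y >= x:
                            return x
                        x = y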

            • farcaster@lemmy.world (OP) · 6 months ago

              I doubt doing it in software like that outperforms sqrtss/sqrtsd. Modern CPUs can do the conversions and the floating point sqrt in approximately 20-30 cycles total. That’s comparable to one integer division. But I wouldn’t mind being proven wrong.