Mind-bending new programming language for GPUs just dropped... - Code Report
  • sus@programming.dev
    6 months ago

    the “will linearly speed up anything [up to the amount of parallel computation available]” claim is so stupid that I think it’s more likely they meant “only has a linear slowdown compared to a basic manual parallel implementation of the same algorithm”

    • Superb@lemmy.blahaj.zone
      6 months ago

      Good thing they don’t claim that. Read the README; they make very nuanced and reasonable claims about their very impressive language

    • eveninghere@beehaw.org
      6 months ago

      Yeah, and still… the example code on GitHub is also bad. The arithmetic per task is so tiny that the parallel execution can end up slower than serial execution. It gives the impression that the language parallelizes everything it can, in which case execution could get stuck on parallel parts that aren’t worth parallelizing.
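      (Not the language under discussion, but a minimal Python sketch of the general point being made here: when the arithmetic per element is tiny, the overhead of distributing work to parallel workers can exceed the cost of just doing the work serially. The operation and the worker count are arbitrary choices for illustration.)

      ```python
      import time
      from multiprocessing import Pool

      def tiny_op(x):
          # A trivially small amount of arithmetic per element --
          # far cheaper than the cost of shipping it to another process.
          return x * 2 + 1

      if __name__ == "__main__":
          data = list(range(10_000))

          # Serial baseline: a plain loop over the data.
          t0 = time.perf_counter()
          serial = [tiny_op(x) for x in data]
          serial_time = time.perf_counter() - t0

          # Parallel version: process startup and inter-process
          # communication typically dominate for work this small.
          t0 = time.perf_counter()
          with Pool(4) as pool:
              parallel = pool.map(tiny_op, data)
          parallel_time = time.perf_counter() - t0

          assert serial == parallel  # same results either way
          print(f"serial:   {serial_time:.4f}s")
          print(f"parallel: {parallel_time:.4f}s")
      ```

      On typical hardware the parallel version loses badly here, which is why a compiler that parallelizes everything indiscriminately would need some cost model to skip cases like this.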

      There’s a huge chunk of technical information missing that an expert would need to figure out what’s actually going on. And too many comments here still praise the language without mentioning anything concrete. This makes me REALLY skeptical of this post.

      Edit: there are many posts that make up BS for job interviews. I sure hope this is not one of those.