Reasoning failures highlighted by Apple research on LLMs
  • Technus@lemmy.zip · 17 days ago

    Problem is, AI companies think they could solve all the current problems with LLMs if they just had more data, so they buy or scrape it from everywhere they can.

    That’s why you hear every day about yet another social media company penning a deal with OpenAI. That, and greed, is why Reddit started charging out the ass for API access and killed off third-party apps: those same APIs could also be used to easily scrape data for LLMs. Why give that data away for free when you can charge a premium for it? Forcing more users onto the official, ad-monetized apps was just a bonus.

    • rottingleaf@lemmy.world · edited · 17 days ago

      Yep. In cryptography there was a moment when cryptographers realized that the key must be secret and the message must be secret, but the rest of the system should not need to be secret, so it can be openly scrutinized and refined (Kerckhoffs’s principle). EDIT: And that these must be kept separate.

      These guys basically substitute sheer quantities of data for better algorithms. Like buying something with oil money instead of money earned from building things.

      I just want to see the moment when it all bursts. I’ll be so gleeful. I’ll go buy an IPA and laugh in every place on the Internet where I see this discussed.