ChatGPT is losing some of its hype, as traffic falls for the third month in a row

August marked the third month in a row that the number of monthly visits to ChatGPT’s website worldwide was down, per data from Similarweb.

  • ribboo@lemm.ee · 1 year ago

    So you can feed a weather model weather data, but you can’t feed a language model programming languages and get accurate predictions?

    Basically no one is saying “yeah, I just go off the output, it’s perfect.” People use it to get a ballpark and then work off that, much like a meteorologist would.

    It’s not 100% or 0%. With imperfect data, we get imperfect responses. But that’s no different from a weather model. We can still get results that are 50% or 80% accurate with less than 100% correct information, as long as a large enough share of the data is correct.

    • Prandom_returns@lemm.ee · 1 year ago

      Yeah, no difference between real-life physical measurements and calculations made from proven formulas, and random shit collected from random places on the internet (even, possibly, random “LLM”-generated sentences).

      People do “just go off the output”. There are people like that in this very thread.

      Statements like “no difference” are just idiotic.

      • ribboo@lemm.ee · 1 year ago

        Of course there is. But weather forecasting has also gotten ridiculously more accurate over time. Better data, better models. We’ll get there with language models as well.

        I’m not arguing that today’s language models are amazingly accurate; I’m arguing they can be. That they are statistical models is not the problem. That they are new statistical models is.

        • Prandom_returns@lemm.ee · 1 year ago

          I’m not arguing that today’s language models are amazingly accurate; I’m arguing they can be. That they are statistical models is not the problem. That they are new statistical models is.

          A broken clock is right twice a day.

          I’m arguing that they will never be accurate, because accuracy is not possible. I mean, look at Wikipedia. At least it’s written by people.

          Full self-driving next year, right?