ChatGPT In Trouble: OpenAI may go bankrupt by 2024, AI bot costs company $700,000 every day - eviltoast
  • SirGolan@lemmy.sdf.org · edited · 1 year ago

    Yeah, I generally agree there. And you’re right. Nobody knows if they’ll really be the starting point for AGI because nobody knows how to make AGI.

    In terms of usefulness, I do use it for knowledge retrieval and have a very good success rate with that. Yes, I have to double-check certain things to make sure it didn’t make them up, but on the whole, GPT4 is right a large percentage of the time. Just yesterday I’d been Googling to find a specific law or regulation on whether airlines were required to refund passengers. I spent half an hour with no luck. ChatGPT with GPT4 pointed me to the exact document, down to the right subsection, on the first try. If you try that with GPT3.5 or really anything else out there, there’s a much higher failure rate, and I suspect a lot of people making the “it gets stuff wrong” argument haven’t spent much time with GPT4. Not saying it’s perfect: it still confidently says incorrect things and will even double down if you press it, but 4 is really impressive.

    Edit: Also agree, anyone saying LLMs are AGI or sentient or whatever doesn’t understand how they work.

    • Aceticon@lemmy.world · edited · 1 year ago

      That’s a good point.

      I’ve been thinking about the possibility of LLMs revolutionizing search (basically search engines), which are far from authoritative sources of information, but which get you to those sources much faster.

      LLMs have most of the same information search engines do, plus the whole extra level of letting you query it in natural language. And thanks to their massive training sets, even if one’s question is slightly incorrect, the nearest cluster of textual tokens in token space to that incorrect question (an oversimplified description of how LLMs work, I know) may very well be where the correct questions and answers sit, so you still get the correct answer (and funnily enough, the more naturally one poses the question, the better).
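      The nearest-cluster idea above can be sketched in a few lines. This is a toy illustration, not how LLMs actually work internally: it stands in learned dense embeddings with simple character-trigram vectors, and the knowledge base and questions are made up, but it shows how a slightly “incorrect” query can still land nearest the right question/answer pair.

      ```python
      import math
      from collections import Counter

      def embed(text):
          # Toy "embedding": a bag of character trigrams. Real models use
          # learned dense vectors, but the nearest-neighbour idea is the same.
          t = text.lower()
          return Counter(t[i:i + 3] for i in range(len(t) - 2))

      def cosine(a, b):
          # Cosine similarity between two sparse count vectors.
          dot = sum(a[k] * b[k] for k in a if k in b)
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      # Hypothetical stored question/answer pairs.
      kb = {
          "are airlines required to refund cancelled flights": "See the DOT refund rule ...",
          "how do I renew my passport": "See the passport renewal page ...",
      }

      def nearest_answer(query):
          # Return the answer whose stored question is closest to the query.
          q = embed(query)
          best = max(kb, key=lambda k: cosine(q, embed(k)))
          return kb[best]

      # A sloppy, slightly wrong phrasing still lands nearest the right cluster.
      print(nearest_answer("do airline have to refund a canceled flight"))
      ```

      The point is that closeness in the vector space, not exact keyword match, picks the answer, which is why naturally phrased (even imperfect) questions often work better than keyword-speak.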

      However, as a direct provider of answers, certainly in a professional setting, it quickly becomes something that produces more work than it saves, because you always have to check the answers, since there are no cues about how certain or uncertain a result is.

      I suspect many if not most of us have also had human colleagues who were just like that: delivering even the most “this is a wild guess” answer to somebody’s question as an assured “this is the way things are”. And I suspect most of those who had such colleagues quickly learned not to go to them for answers, and to always double-check the answer when they did.

      This is why I doubt it will do things like revolutionize programming, or in fact replace humans in producing output in hard-knowledge domains that operate mainly on logic. It might very well replace humans whose work is to wrap things up in the appropriate language for the target audience, though (I suspect it’s going to revolutionize the production of highly segmented, even individually targeted, propaganda on social networks).