It doesn’t understand what it’s outputting; it’s just producing a long string that is gibberish to it.
Which is obviously nonsense, as I can ask it questions about its output. It can find mistakes in its own output and all that. It obviously understands what it is doing.
Then why do you keep talking such bullshit? You sound like you never even tried ChatGPT.
Yes, that’s understanding. What do you think your brain does differently? Please define whatever weird definition you have of “understand”.
Are you aware of Emergent World Representations? Or have a listen to what Ilya Sutskever, one of the people behind GPT-4 and AlexNet, has to say on the topic.