A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
Care to provide some proof of that? They did update the system prompt to include a few things, like the fact that it is now GPT-4 (it used to always say GPT-3). Other than that, I don’t think it knows anything. But in general, I was talking more about developments in AI since it was trained, which it certainly does not know about.
Edit: hmm, I just reviewed our discussion and I note you only provided one link, which was to the psychological definition of intelligence. You are otherwise providing no sources to back up your claims, while my responses are full of them. Please start backing up your assertions, or provide some evidence that you are an expert in the field.
Sure, here’s a link for you: https://old.reddit.com/r/ChatGPT/comments/16m6yc7/gpt4_training_cutoff_date_is_now_january_2022/
I’m aware of that date.
The OpenAI GPT-4 video literally states that GPT-4 finished training in August 2022.
Either way, to clarify / reiterate, you’re refuting a different point than the one I made. I said:
I’m not talking about whether it knows about its own training (I doubt that it does). I’m talking about it knowing about what’s happened in the broader AI landscape since.
I mean, your argument is still basically that it’s thinking in there; everything I’ve said is germane to that point, including what GPT-4 itself has said.
My argument?
I’m not saying it’s thinking or has thoughts. I’m saying I don’t know the answer to that, but if it is, it definitely isn’t anything like human thought.