> to fool into errors

> tricking a kid
I’ve never tried to fool or trick AI with excessively complex questions. When I tested it (a few different models over a period of time: ChatGPT, Bing AI, Gemini), I asked things as simple as “what’s the etymology of this word in that language” or “what is [some phenomenon]”. The models still produced responses ranging from shoddy to absolutely ridiculous.
> completely detached from how anyone actually uses
I’ve seen numerous people use it the same way I tested it: basically as a Google search you can talk to, with similarly shit results.
Have you actually checked yourself whether those sources exist? It’s been quite a while since I’ve used GPT, and I would be positively surprised if they’ve managed to stop it from generating nonexistent citations.