I don’t know what to think anymore. I guess I’ll have to ask ChatGPT what to do.
What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.
Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can't think; they only generate statistically plausible patterns.
The author of the article explains that this creates the same psychological hazards as astrology or tarot cards: psychological traps that psychics have exploited for centuries, and that even very intelligent people can fall prey to.
Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models help produce working software any faster. Given the multi-billion-dollar investments, and that there has been more than enough time to run controlled experiments, this should set off loud alarm bells.
I believe we can educate people about the truths of AI, but I am scared to trust corporations or governments with that task.
I can, however, recommend the book AI Snake Oil by Arvind Narayanan and Sayash Kapoor, which is written for laypeople and might help in understanding the limitations of AI.
I use AI to make GTK shell widgets for my Linux rice. It's definitely not as good as an experienced ricer, but it can produce good boilerplate. In the end I have to troubleshoot multiple logic errors, but it's still better than writing all that spaghetti myself.
Other than that, the only use case I find for AI in coding is cross-checking my code or generating tests for me, and even that is rare.
My justification: I use AI because I don't want to write 1,000–5,000 (combined) lines of code for a simple dock widget that performs a couple of custom actions I use. Also, the shell I use today (Ignis, AGS) is all but guaranteed to become the old thing very quickly, so I don't like spending much time on it.
Well, long walk for a short drink of water… Moreover, the point about the anecdotal Cloudflare developer's self-experiment seems to be "Don't trust hearsay" rather than "Don't self-experiment", since the major gripes with the experiment all relate to outsiders' lack of trust in the experimenter's scientific methods. I get the point about not overestimating your own ability to think critically in this regard, but I wouldn't discard self-experimentation per se, as it can always lead to further study.