Are LLMs capable of writing *good* code?

By “good” I mean code that is written professionally and concisely (and that obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legitimate reason anyone should learn advanced coding techniques, specifically from an engineering perspective?

If not, learning how to write code seems a tad trivial now.

  • netvor@lemmy.world · 3 months ago

    Also, in my experience LLMs often propose solutions that work but are way too complex.

    Story time: just yesterday, in VueJS I was trying to iterate over a list of items and render the .text of each item as HTML, but I needed to process it first. Note that in VueJS this is done by adding e.g. <span v-html="item.text"></span>, where the content of the attribute is the JavaScript expression needed to get the text.
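
    For readers not familiar with VueJS, here is a minimal sketch of such a loop (the list and field names are placeholders, not the actual component):

    ```html
    <!-- Render each item's .text as raw HTML inside a v-for loop.
         "items" and "item.text" are placeholder names used for illustration. -->
    <ul>
      <li v-for="item in items" :key="item.id">
        <span v-html="item.text"></span>
      </li>
    </ul>
    ```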

    First I asked ChatGPT to write the function for processing the text. That worked pretty well and even used a part of the JavaScript API that I was not aware of.
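
    (The actual helper and the API it used aren't shown here; purely as a hypothetical illustration, a processing function of roughly this shape, here using DOMParser, is the kind of thing such a request produces:)

    ```js
    // Hypothetical sketch only -- neither the real function nor the API it used
    // are given in the story. This version parses the HTML fragment, forces
    // every link to open in a new tab, and serializes it back to a string.
    function processHtml(rawText) {
      const doc = new DOMParser().parseFromString(rawText, "text/html");
      doc.querySelectorAll("a").forEach((a) => a.setAttribute("target", "_blank"));
      return doc.body.innerHTML;
    }
    ```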

    Next, I had a “dumb moment” when I did not realize that, as I’m iterating through the items, I can just say <span v-html="processHtml(item.text)"></span>; that’s all I really needed. Somehow I thought (or should I say “hallucinated”, ba dum tsss) for a moment that v-html is special or something (it is used differently from the most common templating syntax). So I went ahead and asked ChatGPT how to render the processed texts while iterating.

    It came up with a rather contrived solution that involved creating another computed property containing a list of processed texts. I started to integrate it into the existing loop: I would have to add an index and use that index to pull the processed text from the computed property, which already felt a little bit weird.
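
    For comparison, a hedged reconstruction of what that more convoluted shape looks like (component and property names are assumed, not the original code):

    ```vue
    <!-- Reconstruction of the contrived approach: a separate computed property
         holds a parallel array of processed strings, so the loop needs an index
         just to look each one up. Names here are assumed for illustration. -->
    <template>
      <ul>
        <li v-for="(item, index) in items" :key="item.id">
          <span v-html="processedTexts[index]"></span>
        </li>
      </ul>
    </template>

    <script>
    export default {
      props: ["items"],
      methods: {
        // The text-processing helper from the story (body omitted here).
        processHtml(text) {
          return text;
        },
      },
      computed: {
        // Parallel array of processed HTML strings, one per item.
        processedTexts() {
          return this.items.map((item) => this.processHtml(item.text));
        },
      },
    };
    </script>
    ```

    Compared to simply calling processHtml(item.text) inline, this adds an index, a second data structure, and one more place where the template and the data can drift apart.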

    That’s when it struck me: no, no, no, I can just f*ing use the function.

    TL;DR: The point is, while ChatGPT was helpful, I still needed to babysit it. And if I hadn’t snapped out of my lazy moment, or if I simply didn’t know better, I would have ended up with code that is more complex and more surprising, which means harder to reason about for both humans and LLMs. (For humans because it forces you to speculate about the coder’s intent, and for LLMs because it’s less likely to resemble code in their training data.)