New GitHub Copilot Research Finds 'Downward Pressure on Code Quality' - eviltoast

Interesting to see the benefits and drawbacks called out.

  • uthredii@programming.dev · 10 months ago

    In this regard, AI-generated code resembles an itinerant contributor, prone to violate the DRY-ness [don’t repeat yourself] of the repos visited.

    So I guess previously people might first look inside their repo for examples of the code they want to write; if they found an example, they might import it instead of copying and pasting.

    When using LLM-generated code, they (and the LLM) won't be checking the repo for existing code, so it ends up being a copy-pasta soup.
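    A minimal sketch of the duplication pattern being described (all names here are hypothetical, not from the research): the repo already has a helper, and reusing it keeps things DRY, while the copy-paste version re-implements the same logic inline.

```python
# Helper that already exists somewhere in the repo (hypothetical name).
def normalize_email(raw: str) -> str:
    """Lowercase and strip whitespace from an email address."""
    return raw.strip().lower()

# DRY: reuse the helper found by searching the repo first.
def register_user(email: str) -> dict:
    return {"email": normalize_email(email)}

# Copy-pasta: the same logic re-typed inline, as an LLM that never saw
# the repo might generate it. The two copies can now drift apart.
def invite_user(email: str) -> dict:
    return {"email": email.strip().lower()}
```

    Both functions behave the same today, which is exactly the problem: a future fix to `normalize_email` silently misses the inlined copy.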

    • Sentient Loom@sh.itjust.works · 10 months ago

      If you use AI to generate code, that should always be the first draft. You still have to edit it to make sure it’s good.

      • walter_wiggles@lemmy.nz (OP) · 10 months ago

        I totally agree, but I don’t hear any discussion about how to incentivize developers to do it.

        If AI makes creating new code disproportionately easy, then I think DRY and refactoring will fall by the wayside.

          • MNByChoice@midwest.social · 10 months ago

            Code review still exists.

            For now code reviews are done by competent people. What about once

            AI makes creating new code disproportionately easy

            ?

            Edit: Is it clear the quote, plus the items before and after are all one thought? I am hopeful, but not convinced.

            • Sentient Loom@sh.itjust.works · 10 months ago

              Yes, your message is clear.

              To answer your original question, I have no idea what it will look like when software writes and reviews itself. It seems obvious that human understanding of a code base will quickly disappear if this is the process, and at a certain point it will go beyond the capacity of human refactoring.

              My first thought is that a code base will eventually become incoherent and irredeemably buggy. But somebody (probably not an AI, at first) will teach ChatGPT to refactor coherently.

              But the concept of coherence here becomes a major philosophical problem, and it’s very difficult to imagine how to make it practical in the long run.

              I think for now the practical necessity is to put extra emphasis on human peer review and refactoring. I personally haven’t used AI to write code yet.

              My dark side would love to see some greedy corporations wreck their codebases by over-relying on AI to replace their coders. Debugging would become a nightmare, because nobody actually wrote the code, and they'd end up spending more time bug-fixing than they would have spent writing it in the first place.

              Edit: missing word

              • peopleproblems@lemmy.world · 10 months ago

                And, while some of us may be out of a job temporarily, historically, when companies make these big brain decisions, we end up getting to come back and charge 4x what we used to get paid to get it working again.

                When I found out that one of the contractors I worked with was not one of the cheap ones, but had instead been rehired after retiring at a 400% bump, I decided that maybe I needed to understand the business needs better.

        • tatterdemalion@programming.dev · 10 months ago

          Because it will lead to an incomprehensible mess. Ever heard the quote, "Programs must be written for people to read, and only incidentally for machines to execute"? This is well-trodden ground in science fiction. If you have AI writing code that's so lacking in abstraction (because machines require less of it to understand), then humans will become useless at maintaining it. Obviously this is a problem, because it centralizes responsibility for maintenance onto machines that depend on this very code to operate.

          • Sentient Loom@sh.itjust.works · 10 months ago

            Well that means it’s up to us to make it recognize non-DRY code and teach it to refactor while remaining coherent forever and ever, or else we’ll have to parachute into lands of alien code and try to figure out something nobody wrote and nobody understands.

      • hikaru755@feddit.de · 10 months ago

        Yeah, but by generating with AI you're incentivized to skip that initial research stage into your own code base, leading you to completely miss opportunities for consolidation or reuse.

    • Gamma@beehaw.org · 10 months ago

      Makes sense, even if it’s not good practice.

      It is really useful for hobby projects! I needed a recursive function to find a path between two nodes in a graph, and it wrote me something that worked with my data in a few seconds, which saved a bit of time.
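      The kind of helper described can be sketched as a plain recursive depth-first search over an adjacency-list graph (names and graph shape are illustrative, not the commenter's actual code):

```python
def find_path(graph, start, goal, visited=None):
    """Return one path from start to goal as a list of nodes, or None.

    graph is a dict mapping each node to a list of its neighbors.
    """
    if visited is None:
        visited = set()
    if start == goal:
        return [start]
    visited.add(start)  # avoid revisiting nodes (and infinite loops on cycles)
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            rest = find_path(graph, neighbor, goal, visited)
            if rest is not None:
                return [start] + rest
    return None
```

      For example, with `graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}`, `find_path(graph, "a", "d")` returns `["a", "b", "d"]`. Note this finds *a* path, not necessarily the shortest one; breadth-first search would be needed for that.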