Don’t learn to code: Nvidia’s founder Jensen Huang advises a different career path

Don’t learn to code, advises Jensen Huang of Nvidia: thanks to AI, everybody will soon become a capable programmer simply by using human language.

  • TangledHyphae@lemmy.world
    10 months ago

    I use AI to write code for work every day, across many different models and services, including https://ollama.ai on my own hardware. It’s useful when a developer can take the code and refactor it to fit into large codebases (after fixing its inevitable broken code here and there), but it is nowhere close to successfully writing code all on its own. Eventually, maybe, but not anytime soon.
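
    For anyone curious, this is roughly what that looks like against Ollama’s local API; a minimal sketch (the model name is just an example, and the output still needs the review and refactoring I mentioned):

    ```python
    # Minimal sketch: asking a locally served Ollama model to draft a function.
    # Assumes Ollama is running on its default port and that a code model
    # (here "codellama", an example choice) has already been pulled.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "codellama",
            "prompt": "Write a Python function that deduplicates a list while preserving order.",
            "stream": False,  # return one complete JSON object, not a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])  # the model's draft, to be reviewed by a human
    ```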

    • Lmaydev@programming.dev
      10 months ago

      Agreed. I mainly use it for learning.

      Instead of googling and skimming a couple of blogs or Stack Overflow posts, I now just ask the AI. It pulls up the exact info I need and sources it all. And being able to ask follow-up questions is great.

      It’s great for learning new languages and frameworks.

      It’s also very good at writing unit tests (see the sketch below).

      Also for recommending frameworks/software for your use case.

      I don’t see it replacing developers so much as reducing the number of developers needed, like Excel did for office workers.
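
      To illustrate the unit-test point, this is the shape of thing it hands back for a small function. Both the function and the tests here are made-up examples, not output from any specific model:

      ```python
      # Illustrative only: a small made-up function and the kind of pytest
      # tests an LLM typically drafts for it on request.
      import re

      def slugify(title: str) -> str:
          """Lowercase a title and replace runs of non-alphanumerics with hyphens."""
          return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

      def test_slugify_basic():
          assert slugify("Hello, World!") == "hello-world"

      def test_slugify_collapses_separators():
          assert slugify("a  --  b") == "a-b"

      def test_slugify_handles_empty_input():
          assert slugify("") == ""
      ```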

      • TangledHyphae@lemmy.world
        10 months ago

        You just described all of my use cases. I need to get more comfortable with Copilot- and Codeium-style services again; I enjoyed them to some extent 6 months ago. Unfortunately my current employer has to comply with federal government security protocols, and I’m not allowed to ship any code in or out of some dev machines. In lieu of that, I still run LLMs on another machine acting, like you mentioned, as sort of my Stack Overflow replacement. I can describe anything or ask anything I want and immediately get extremely specific custom code examples.
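
        As a rough sketch of that workflow, assuming Ollama’s default chat endpoint (the model name is just an example):

        ```python
        # Minimal sketch of the "Stack Overflow replacement" workflow against a
        # self-hosted model: one question, then a follow-up in the same chat.
        # Assumes Ollama on its default port; the model name is an example.
        import requests

        URL = "http://localhost:11434/api/chat"
        messages = [{"role": "user",
                     "content": "How do I read a CSV with a custom delimiter in Python?"}]

        reply = requests.post(URL, json={"model": "codellama", "messages": messages,
                                         "stream": False}, timeout=120).json()["message"]
        print(reply["content"])

        # A follow-up question keeps the context, like a conversation thread.
        messages += [reply, {"role": "user", "content": "And if the file is gzipped?"}]
        reply = requests.post(URL, json={"model": "codellama", "messages": messages,
                                         "stream": False}, timeout=120).json()["message"]
        print(reply["content"])
        ```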

        I really need to get Codeium or Copilot working again just to see if anything has changed in the models (I’m sure it has).

    • hitmyspot@aussie.zone
      10 months ago

      It can’t yet tell when its output is ridiculous or incorrect for non-coding tasks, but it will get there. Same for coding. It will continue to grow in complexity and ability.

      It will get there, eventually. I don’t think it will be writing complex code any time soon, but I can see it being aware of all the libraries and FOSS that no one person can be across.

      I would foresee learning to code becoming similar to learning to do accounting manually. Yes, you’ll still need to understand it to be a coder, but for the average person who can’t code, AI will do a good enough job, the way we use accounting software now for taxes or budgets that would have been professionally done before. Complex work will be human-done, human-reviewed, or handled by professional coders giving more technical instructions to the AI. For simple coding, like the trivial Python script you might write now, AI will do it (the sketch below gives an idea of that tier).
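
      To be concrete about the tier of task I mean, here’s a made-up example of the kind of throwaway script an AI can already produce on request:

      ```python
      # A made-up example of trivial-task coding: bulk-rename photos in a
      # folder by their modification date. The folder name is hypothetical.
      from datetime import datetime
      from pathlib import Path

      folder = Path("photos")
      for i, photo in enumerate(sorted(folder.glob("*.jpg")), start=1):
          stamp = datetime.fromtimestamp(photo.stat().st_mtime).strftime("%Y-%m-%d")
          photo.rename(folder / f"{stamp}_{i:03d}.jpg")
      ```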

    • Jolan@lemmy.world
      10 months ago

      I think this is going to age really badly. I don’t like LLMs, but I think it will happen soon. People also said that AI as we see it now was decades away, yet we got it quite quickly, so I think it’s a very small step to go from writing fully grammatically correct English to fully correct code. It’s basically just another language the AI has to learn. But what do I know. We’ll just have to wait and see.

      • TangledHyphae@lemmy.world

        I’ve been doing this for over a year now, starting with GPT in 2022, and there have been massive leaps in quality and effectiveness. (Versions are sneaky; even GPT-4 has evolved many times over without people really knowing what’s happening behind the scenes.) The problem remains the “context window.” Claude.ai is over 100k tokens now, I think, but the context still limits how much code an entire ‘session’ can produce within that window. I’m still trying to push every model to its limits, but another big problem in the industry is measuring effectiveness via “perplexity” at a given context length.

        https://pbs.twimg.com/media/GHOz6ohXoAEJOom?format=png&name=small

        The plot linked above shows that as the window grows (proportional to the number of tokens you insert into it, plus every token the model generates), everything the model produces becomes less accurate and more perplexing overall.
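
        For anyone unfamiliar, “perplexity” is just the exponentiated average negative log-probability the model assigns to each token, so higher means the model is more “surprised” by the text. A quick sketch of the measurement, assuming you already have per-token log-probs:

        ```python
        # Minimal sketch of a perplexity measurement, assuming you already have
        # the model's per-token log-probabilities for some evaluation text.
        import math

        def perplexity(token_logprobs: list[float]) -> float:
            """exp of the average negative log-probability per token."""
            return math.exp(-sum(token_logprobs) / len(token_logprobs))

        print(perplexity([-0.5, -0.7, -0.4]))   # ~1.7: model is fairly confident
        print(perplexity([-2.5, -3.0, -2.8]))   # ~16: model is much more perplexed
        ```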

        But you’re right overall that these things will continue to improve; you still need an engineer to actually make the code function in a particular environment, though. I just don’t get the feeling we’ll see that within the next few years, but if it does happen, then every IT worker on earth is effectively useless, along with every desk job known to man, since an LLM would be able to reason about how to automate any task in any language at that point.