Learn AI now or risk losing your job, experts warn

These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

“Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto”, per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

“(Jeff) Macpherson is a director and co-founder at Xagency.AI”, a tech startup which does, uh, lots of stuff with AI (see their wild services page) and which appears to have been announced on LinkedIn two months ago. The founders section lists details beyond J.M.'s “over 7 years in the tech sector” which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

“Illustrator Martin Deschatelets”, whose employment prospects are dimming this year (and who knows a bunch of people in the same situation), and who per LinkedIn has worked on some nifty things.

“Ottawa economist Armine Yalnizyan”, per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the “we” who have to adapt here?

AI is apparently “something that can tell you how many cows are in the world” (J.M.). Detecting a lack of results validation here again.

“At the end of the day that’s what it’s all for. The efficiency, the productivity, to put profit in all of our pockets”, from J.M.

“You now have the opportunity to become a Prompt Engineer”, from J.M. to the author and illustrator. (It’s worth watching the video to listen to this person.)

Me about the article:

I’m feeling that same underwhelming “is this it” bewilderment again.

Me about the video:

Critical thinking and ethics and “how software products work in practice” classes for everybody in this industry, please.

  • zogwarg@awful.systems · 1 year ago

    That’s the dangerous part:

    • The LLM being just about convincing enough
    • The language being unfamiliar

    You have no way of judging how correct or how wrong the output is, and no one to hold responsible or to act as guarantor.

    With the recent release of the HeyGen drag-and-drop video translation and lip-syncing tool, I saw enough people say: “Look, isn’t it amazing, I can speak Italian now”

    No, something makes it look like you can, and you have no way of judging how convincing the illusion is. Even if the output convinces (or successfully bluffs) a native speaker, you still can’t immediately check that the translation is correct. And again, no one to hold accountable.

    • Lauchs@lemmy.world · 1 year ago

      I am talking about coding languages. There are many ways to verify that your solutions are correct.

      • froztbyte@awful.systems · 1 year ago

        We are over half a century into programming computers, and the industry still fights itself over basic testing practices and how to integrate them into the development process.

        The very nature of software correctness is a fuzzy problem (not least because getting from requirements to code often goes awry through imprecise specification).

        Just because some tooling or options exist doesn’t mean the problem is solved.
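
        (A minimal sketch of the point, in C; the midpoint function and its tests are hypothetical, not anyone’s real code. The whole test suite passes, and the function is still broken:)

            #include <assert.h>

            /* hypothetical example: midpoint of two ints */
            static int midpoint(int a, int b) {
                return (a + b) / 2;  /* signed overflow (undefined
                                        behavior) when a + b exceeds
                                        INT_MAX */
            }

            int main(void) {
                /* every test passes... */
                assert(midpoint(2, 4) == 3);
                assert(midpoint(-6, 0) == -3);
                /* ...yet midpoint(2000000000, 2000000000) is undefined
                   behavior; a finite test suite shows the presence of
                   bugs, never their absence */
                return 0;
            }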

        And then people like you hold (and argue for) the magical-thinking belief that slapping LLMs on top of all this shit will tooooooootally work.

        I look forward to charging you money to help you fix your mess later.

        • Steve@awful.systems · 1 year ago

          Genuine Q: Do you think we’ll start to see LLM-friendly languages emerge? Languages that consider the “LLM experience” that fools like this will welcome? Or even a reversion to low-level languages?

          • self@awful.systems · 1 year ago

            anything LLM-friendly is likely to be even more high-level and less flexible than any ordinary language. as of now, LLMs seem to do best on something like Python: there’s a shit ton of it, and plenty of Python programs can handle a little lexical reorganization without catastrophically failing. they tend to get utterly disastrous results for languages like C, though, where mistakes that are seemingly trivial and hard to spot if you don’t know the language can create code that might appear to run fine but has severe security vulnerabilities, or might just silently corrupt data. think of insisting the string “hello world” fits in 11 characters (it takes 12 including the terminating null, which ChatGPT used to consistently forget), fucking up the ordering of statements that allocate or free memory, or misindexing an array in memory, along with hundreds of other trivial instances of undefined behavior and a combinatorial explosion of non-trivial cases. LLMs aren’t even useful to regurgitate toy code for systems languages like C.
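
            (to make the “hello world” case concrete, a minimal hypothetical sketch of my own, not anything ChatGPT actually emitted: the buffer is one byte too small, so the copy is undefined behavior, and the program may still appear to work)

                #include <stdio.h>
                #include <string.h>

                int main(void) {
                    /* strlen("hello world") is 11, but storing it takes
                       12 bytes because of the terminating '\0' */
                    char buf[11];                /* one byte too small */
                    strcpy(buf, "hello world"); /* buffer overflow:
                                                   undefined behavior */
                    printf("%s\n", buf);        /* may still appear to
                                                   work */
                    return 0;
                }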

          • 200fifty@awful.systems · 1 year ago

            The problem is I guess you’d need a significant corpus of human-written stuff in that language to make the LLM work in the first place, right?

            Actually this is something I’ve been thinking about more generally: the “AI makes programmers obsolete” take sort of implies everyone continues to use JavaScript and Python for everything forever and ever (and also, I guess, that those languages never add any new idioms or features).

            Like, I guess now that we have AI, all computer language progress is just supposed to be frozen at September 2021? Where are you gonna get the training data to keep the AI up to date with the latest language developments or libraries?

            • gerikson@awful.systems · 1 year ago

              Correct, it presumes that everyone will be eagerly learning new languages, and new features to existing languages, and writing about them, and answering questions about them, at the same rate as before, despite knowing that their work will be instantly ingested into LLM engines and resold as LLM output. At the same time, the audience for this sort of writing will disappear, because they’re all using LLMs instead of reading articles, blog posts, and Stackoverflow answers.

              It’s almost as if no-one has thought this through[1].

              Relatedly: https://gerikson.com/m/2023/09/index.html#2023-09-27_wednesday_04


              [1] unless the designers of LLMs actually fell for their own hype and believe the LLMs actually think.

              • 200fifty@awful.systems · 1 year ago

                When you put it that way, I can’t help but notice the parallels to Google’s generative AI search feature, which suffers from a similar problem: why would people keep writing posts as source material for your AI if no one is gonna read them other than the AI web scraper?

              • Steve@awful.systems · 1 year ago

                this makes sense. It’s kind of like crypto being deflationary. There is no incentive to make something new just to feed it. Software has eaten the world, and now all it can do is keep eating its own shit over and over.

                • gerikson@awful.systems · 1 year ago

                  Yes, with the difference that crypto was never realistically going to replace normal currency. There’s a real risk that LLM-generated content kills the open web, though. Both by flooding the zone with generated shit, and by destroying the motivation of humans to add to the inputs.

                  • Steve@awful.systems · 1 year ago

                    do you have a rough outline of the steps you see toward the killing of the open web? Do you mean the effect of not realistically being able to stop the scraping of content?

          • froztbyte@awful.systems · 1 year ago

            short answer: unlikely on any nearby time horizon, because there’s a large impedance mismatch between the two applicable things at play. maybe some toy sub-examples can be created, but even that rapidly runs into scaling/scoping issues

            longer answer: I started typing it and in a thought pause I clicked the upvote arrow on your post and now that in-progress reply is gone (thanks lemmy). I’ll write that up in emacs later and then post it here

    • self@awful.systems · 1 year ago

        not if you don’t know the language, and not in any generalized way, thanks to the halting problem.
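
        (a toy illustration of the “no generalized way” part, with a hypothetical function of my own: deciding whether this loop finishes for every input is the open Collatz problem, and the fully general form of the question is the provably undecidable halting problem)

            #include <stdio.h>

            /* does this loop terminate for every n > 0? answering that
               in general is the (open) Collatz conjecture; the general
               form of the question is the halting problem */
            static unsigned long long collatz_steps(unsigned long long n) {
                unsigned long long steps = 0;
                while (n != 1) {
                    n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
                    steps++;
                }
                return steps;
            }

            int main(void) {
                printf("27 reaches 1 in %llu steps\n", collatz_steps(27));
                return 0;
            }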