77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds

The new global study, conducted in partnership with The Upwork Research Institute, interviewed 2,500 global C-suite executives, full-time employees, and freelancers. The results show that optimistic expectations about AI’s impact are not aligning with the reality faced by many employees. The study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it’s also hampering productivity and contributing to employee burnout.

  • barsquid@lemmy.world · +169 · 4 months ago

    Wow shockingly employing a virtual dumbass who is confidently wrong all the time doesn’t help people finish their tasks.

    • Etterra@lemmy.world · +37 · 4 months ago

      It’s like employing a perpetually high idiot, but more productive while also being less useful. Instead of slow medicine you get fast garbage!

    • demizerone@lemmy.world · +10/−31 · 4 months ago

      My dumbass friend, who’s overconfident about how smart he is, is switching to Linux because of open-source AI. I can’t wait to see what he learns.

  • FartsWithAnAccent@fedia.io · +109/−1 · edited · 4 months ago

    They tried implementing AI in a few of our systems and the results were always fucking useless. What we call “AI” can be helpful in some ways, but I’d bet the vast majority of it is bullshit half-assed implementations so companies can claim they’re using “AI”.

    • DragonTypeWyvern@midwest.social · +35/−2 · 4 months ago

      The one thing “AI” has improved in my life has been a banking app search function being slightly better.

      Oh, and a porn game did okay with it as an art generator, but the creator was still strangely lazy about it. You’re telling me you can make infinite free pictures of big tittied goth girls and you only included a few?

      • MindTraveller@lemmy.ca · +31/−1 · 4 months ago

        Generating multiple pictures of the same character is actually pretty hard. For example, let’s say you’re making a visual novel with a bunch of anime girls. You spin up your generative AI, and it gives you a great picture of a girl with a good design in a neutral pose. We’ll call her Alice. Well, now you need a happy Alice, a sad Alice, a horny Alice, an Alice with her face covered with cum, a nude Alice, and a hyper breast expansion Alice. Getting the AI to recreate Alice, who does not exist in the training data, is going to be very difficult even once.

        And all of this is multiplied ten times over if you want granular changes to a character. Let’s say you’re making a fat fetish game and Alice is supposed to gain weight as the player feeds her. Now you need everything I described, at 10 different weights. You’re going to need to be extremely specific with the AI and it’s probably going to produce dozens of incorrect pictures for every time it gets it right. Getting it right might just plain be impossible if the AI doesn’t understand the assignment well enough.

        • This is fine🔥🐶☕🔥@lemmy.world · +8/−1 · 4 months ago

          Generating multiple pictures of the same character is actually pretty hard.

          Not from what I have seen on Civitai. You can train a model on a specific character or person. Same goes for facial expressions.

          Of course you need to generate hundreds of images to get only a few that you might consider acceptable.

        • okwhateverdude@lemmy.world · +7/−2 · 4 months ago

          This is a solvable problem. Just make a LoRA of the Alice character. For modifications to the character, you might also need to make more LoRAs, but again totally doable. Then at runtime, you are just shuffling LoRAs when you need to generate.

          You’re correct that it will struggle to give you exactly what you want, because you need to have some “machine sympathy.” If you think in smaller steps and get the machine to do those smaller, more doable steps, you can eventually accomplish the overall goal. It is the difference between asking a model to write a story and asking it to first generate characters, a scenario, and a plot, then using that as context to write just a small part of the story. The first story will be bland and incoherent after a while. The second, through better context control, will weave you a pretty consistent story.

          These models are not magic (even though it feels like it). That they follow instructions at all is amazing, but they simply will not get the nuance of the overall picture and be able to accomplish it unaided. If you think of them as natural language processors capable of simple, mechanical tasks and drive them mechanistically, you’ll get much better results.
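
          For the curious, a minimal Python sketch of that runtime LoRA shuffling with Hugging Face diffusers; the LoRA files and adapter names here are hypothetical, you’d have to train them yourself:

              # Sketch: swap per-character/per-trait LoRAs at generation time.
              # The .safetensors files are hypothetical LoRAs trained on your own renders.
              import torch
              from diffusers import StableDiffusionXLPipeline

              pipe = StableDiffusionXLPipeline.from_pretrained(
                  "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
              ).to("cuda")

              # Register each LoRA once under its own adapter name.
              pipe.load_lora_weights("loras/alice_base.safetensors", adapter_name="alice")
              pipe.load_lora_weights("loras/alice_weight_5.safetensors", adapter_name="alice_w5")

              # Activate whichever combination the current scene needs.
              pipe.set_adapters(["alice", "alice_w5"], adapter_weights=[1.0, 0.8])
              image = pipe("alice, happy, full body, visual novel sprite").images[0]
              image.save("alice_happy_weight5.png")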

    • speeding_slug@feddit.nl · +9 · 4 months ago

      To not even consider the consequences of deploying systems that may farm your company data in order to train their models “to better serve you”. Like, what the hell guys?

      • FartsWithAnAccent@fedia.io · +38/−1 · 4 months ago

        Looking like they were doing something with AI, no joke.

        One example was “Freddy”, an AI for a ticketing system called Freshdesk: It would try to suggest other tickets it thought were related or helpful but they were, not one fucking time, related or helpful.

        • Hackworth@lemmy.world · +17/−1 · 4 months ago

          Ahh, those things - I’ve seen half a dozen platforms implement some version of that, and they’re always garbage. It’s such a weird choice, too, since we already have semi-useful recommendation systems that run on traditional algorithms.

        • Dave.@aussie.zone · +8 · edited · 4 months ago

          As an Australian I find the name Freddy quite apt then.

          There is an old saying in Aus that runs along the lines of, “even Blind Freddy could see that…”, indicating that the solution is so obvious that even a blind person could see it.

          Having your Freddy be Blind Freddy makes its useless answers completely expected. Maybe that was the devs’ internal name for it and it escaped to marketing, haha.

          • FartsWithAnAccent@fedia.io · +4 · edited · 4 months ago

            I actually ended up becoming blind to Freddy because of how profoundly useless it was: I permanently blocked the webpage elements that showed it from my browser, lol. I think Fresh has since given up.

            Don’t get me wrong, the rest of the service is actually pretty great and I’d recommend Fresh to anyone in search of a decent ticketing system. Freddy sucks though.

        • MentallyExhausted@reddthat.com · +8 · 4 months ago

          That’s pretty funny, since manually searching some keywords can usually provide helpful data. It should be pretty straightforward to automate even without an LLM.
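
          Something like this toy Python sketch (the ticket data is made up) would already beat a recommender that is never once relevant:

              # Toy keyword matcher: rank existing tickets by word overlap with a
              # new one. No LLM involved; the ticket data here is made up.
              def keywords(text: str) -> set[str]:
                  stop = {"the", "a", "is", "to", "on", "and", "of", "in"}
                  return {w for w in text.lower().split() if w not in stop}

              tickets = {
                  101: "Printer on floor 3 is jammed",
                  102: "VPN drops every hour on laptop",
                  103: "Printer driver missing after update",
              }
              new_ticket = "printer jammed again on floor 3"

              ranked = sorted(tickets.items(),
                              key=lambda kv: len(keywords(kv[1]) & keywords(new_ticket)),
                              reverse=True)
              for ticket_id, text in ranked[:2]:  # top two suggestions
                  print(ticket_id, text)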

          • FartsWithAnAccent@fedia.io · +7/−1 · 4 months ago

            Yep, we already wrote out all the documentation for everything too so it’s doubly useless lol. It sucked at pulling relevant KB articles too even though there are fields for everything. A written script for it would have been trivial to make if they wanted to make something helpful, but they really just wanted to get on that AI hype train regardless of usefulness.

        • rottingleaf@lemmy.world · +1 · 4 months ago

          It’s bloody amazing. Here I am, having spent all my childhood reading about the 20/80 rule, critical points, Guderian’s Schwerpunkt (“heavy points”), the Tao Te Ching, Sun Tzu, all that stuff about key decisions made by the human mind being of absolutely overriding importance over what tools can do.

          These morons are sticking “AI” exactly where a human mind is superior to anything else at any realistic scale, and a human mind, had it been applied instead of someone’s butt, would of course have identified that the task at hand has nothing to do with what “AI” can do.

          I mean, half of humanity’s philosophy is about garbage thinking being of negative worth, and non-garbage thinking being precious. In any task. These people are desperately trying to produce garbage thinking with computers as if there weren’t enough of it already.

    • The Menemen!@lemmy.world · +4 · 4 months ago

      It is great for pattern recognition (we use it to recognize damage in pipes) and probably pattern reproduction (never used it for that). Haven’t really seen much other real-life value.

  • Lvxferre@mander.xyz · +87/−3 · 4 months ago

    Large “language” models decreased my workload for translation. There’s a catch though: I choose when to use it, instead of being required to use it even when it doesn’t make sense and/or where I know that the output will be shitty.

    And, if my guess is correct, those 77% are caused by overexcited decision-makers in corporations trying to shove AI into every single step of production.

    • bitfucker@programming.dev · +12/−1 · 4 months ago

      I’ve said this in many forums, yet people can’t accept that the best use case for LLMs is translation, even for a language like Japanese. There’s a limit for sure, but human translation has one too unless you add a lot of extra text to explain the nuances; at that point you need an essay dissecting the entire meaning of something, not just a translation.

      • Lvxferre@mander.xyz · +6 · 4 months ago

        I’ve seen programmers claiming that it helps them out, too. Mostly to give you an idea of how to tackle a problem, instead of copy-pasting the solution (as it’ll likely not work).

        My main use of the system is

        1. Probing vocab to find the right word in a given context.
        2. Fancy conjugation/declension table.
        3. Spell-proofing.

        It works better than going to Wiktionary all the time, or staring at my work until I happen to find some misspelling (like German das vs. dass; since both are legit words, spellcheckers don’t pick it up).

        One thing to watch out for is that the translation will more often than not be tone-deaf, so you’re better off not wasting your time with longer strings unless you’re fine with something really sloppy, or you can provide it more context. The latter, however, takes effort.
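
        For what it’s worth, “providing more context” can be as simple as stuffing it into the prompt. A rough sketch with the OpenAI Python client; the model name and the example strings are placeholders, not a recommendation:

            # Sketch: context-stuffed translation request. Model name and
            # example text are placeholders.
            from openai import OpenAI

            client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

            prompt = (
                "Translate the German below into English.\n"
                "Context: a teasing reply between close friends on a forum; "
                "keep the informal, sarcastic tone.\n\n"
                "German: Na, das hast du ja mal wieder toll hingekriegt."
            )
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            print(resp.choices[0].message.content)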

        • bitfucker@programming.dev · +2 · 4 months ago

          Yeah, for sure, since programming is also a language. But IMHO, for a machine learning model the best way to approach it is not as natural language but as its AST/machine representation rather than text tokens. That way the model understands not only the token patterns but also the structure, since most programming languages are well defined.
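
          Python’s own ast module shows the difference between the two representations on a trivial function:

              # The same code as text vs. as the structured tree a compiler uses.
              import ast

              src = "def add(a, b):\n    return a + b\n"
              print(ast.dump(ast.parse(src), indent=2))
              # Module(body=[FunctionDef(name='add', ...,
              #   body=[Return(value=BinOp(left=Name(id='a'), op=Add(), ...))])])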

          • Lvxferre@mander.xyz · +3 · 4 months ago

            Note that, even if we refer to Java, Python, Rust etc. by the same word “language” as we refer to Mandarin, English, Spanish etc., they’re apples and oranges - one set is unlike the other, even if both have some similarities.

            That’s relevant here, for two major reasons:

            • The best approach to handle one is not the best to handle the other.
            • LLMs aren’t useful for both tasks (translating and programming) because both involve “languages”, but because LLMs are good at retrieving information. As such you should see the same benefit even for tasks involving neither programming languages nor human languages.

            Regarding the first point, I’ll give you an example. You suggested abstract syntax trees for the internal representation of programming code, right? That might work really well for programming, dunno, but for human languages I bet that it would be worse than the current approach. That’s because, for human languages, what matters the most are the semantic and pragmatic layers, and those are a mess - with the meaning of each word in a given utterance being dictated by the other words there.

            • bitfucker@programming.dev · +2 · 4 months ago

              Yeah, that’s my point, ma dude. Current LLM tasks are ill-suited for programming; the only reason it works is sheer coincidence (alright, maybe not sheer coincidence, I know it’s all statistics and so on). The better approach to an LLM for programming is a model that can transform/“translate” the natural language humans use into an AST, the language computers use that is still close to human language. But the problem is that to do such a task, the LLM needs an actual understanding of concepts from the natural language, which is debatable at best.

  • GreatAlbatross@feddit.uk · +78/−1 · 4 months ago

    The workload that’s starting now is spotting bad code written by colleagues using AI, and persuading them to rewrite it.

    “But it works!”

    ‘It pulls in 15 libraries, 2 of which you need to manually install beforehand, to achieve something you can do in 5 lines using this default library’
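
    (A made-up Python example of the kind of thing meant here: fetching some JSON with nothing but the standard library, instead of a pip-installed stack.)

        # Five-ish lines, standard library only; the URL is a placeholder.
        import json
        import urllib.request

        with urllib.request.urlopen("https://api.example.com/status") as resp:
            print(json.load(resp))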

    • JackbyDev@programming.dev · +35 · 4 months ago

      I was trying to find out how to get human-readable timestamps from my shell history. It gave me this crazy script. It worked, but it was super slow. Later I learned you could just do history -i.

      • GreatAlbatross@feddit.uk · +20 · 4 months ago

        Turns out, a lot of the problems in nixland were solved three decades ago with a single flag on the built-in utilities.

        • JackbyDev@programming.dev · +4/−1 · 4 months ago

          Apart from me not reading the manual (or skimming it too quickly), I might have asked the LLM to check the history file rather than the command. Idk. I honestly didn’t know the history command did anything other than just print the history file.

      • trolololol@lemmy.world · +1 · 4 months ago

        I don’t run crazy scripts on my machine. If I don’t understand it, it’s not safe enough.

        That’s how you get pranked and hacked.

    • andallthat@lemmy.world · +14 · 4 months ago

      TBH those same colleagues were probably just copy/pasting code from the first google result or stackoverflow answer, so arguably AI did make them more productive at what they do

      • rozodru@lemmy.world · +4 · 4 months ago

        2012 me feels personally called out by this. fuck 2012 me that lazy fucker. stackoverflow was my “get out of work early and hit the bar” card.

    • ILikeBoobies@lemmy.ca · +7 · 4 months ago

      I asked it to spot a typo in my code. It worked, but it rewrote my classes for each function that called them.

      • morbidcactus@lemmy.ca · +5 · 4 months ago

        I gave it a fair shake after my team members were raving about it saving them time last year. I tried an SFTP function and some Terraform modules, and man, both of them just didn’t work. It did, however, do a really solid job of explaining some data-operation functions I wrote, which I was really happy to see. I do try to add a detail block to my functions and be explicit with typing where appropriate, so that probably helped some, but yeah, I was actually impressed by that. For generation though, maybe it’s better now, but I still prefer to pull up the documentation, as I spent more time debugging the crap it gave me than piecing it together myself would have taken.

        I’d use an LLM tool as an interactive documentation and reverse-engineering aid, though; I personally think that’s where it shines. Otherwise I’m not sold on the “gen AI will somehow fix all your problems” hype train.

        • NιƙƙιDιɱҽʂ@lemmy.world · +5 · 4 months ago

          I think the best current use case for AI when it comes to coding is autocomplete.

          I hate coding without GitHub Copilot now. You’re still in full control of what you’re building; the AI just autocompletes the menial shit you’ve written thousands of times already.

          When it comes to full applications/projects, AI still has some way to go.

          • morbidcactus@lemmy.ca · +1 · 4 months ago

            I can get that for sure. I did see a client using it for debugging, which seemed interesting as well; it made an attempt to narrow down where the error occurred and what actually caused it.

            • NιƙƙιDιɱҽʂ@lemmy.world · +2 · edited · 4 months ago

              I’ll do that too! In the actual code you can just write something like

              // Q: Why isn't this working as expected?
              // A: 
              

              and it’ll auto complete an answer based on the code. It’s not always 100% on point, but it usually leads you in the right direction.

  • Nobody@lemmy.world · +65/−2 · 4 months ago

    You mean the multi-billion dollar, souped-up autocorrect might not actually be able to replace the human workforce? I am shocked, shocked I say!

    Do you think Sam Altman might have… gasp lied to his investors about its capabilities?

    • SlopppyEngineer@lemmy.world · +7 · edited · 4 months ago

      Nooooo. I mean, we have about 80 years of history in AI research, and the field is full of overhyped promises that this particular tech is the holy grail of AI, only to end in disappointment each time, but this time will be different! /s

      • Nobody@lemmy.world · +41/−1 · 4 months ago

        Yeah, OpenAI, ChatGPT, and Sam Altman have no relevance to AI LLMs. No idea what I was thinking.

        • Hackworth@lemmy.world · +6/−7 · 4 months ago

          I prefer Claude, usually, but the article also does not mention LLMs. I use generative audio, image generation, and video generation at work as often if not more than text generators.

          • Nobody@lemmy.world · +12/−1 · edited · 4 months ago

            Good point, but LLMs are both ubiquitous and the public face of “AI.” I think it’s fair to assign them a decent share of the blame for overpromising and underdelivering.

      • FaceDeer@fedia.io · +2/−22 · 4 months ago

        Aha, so this must all be Elon’s fault! And Microsoft!

        There are lots of whipping boys these days that one can leap to criticize and get free upvotes.

  • cheddar@programming.dev · +57/−1 · edited · 4 months ago

    Me: no way, AI is very helpful, and if it isn’t then don’t use it

    created challenges in achieving the expected productivity gains

    achieving the expected productivity gains

    Me: oh, that explains the issue.

    • Bakkoda@sh.itjust.works · +23 · 4 months ago

      It’s hilarious to watch it used well and then human nature just kick in

      We started using some “smart tools” for scheduling manufacturing and it’s honestly been really really great and highlighted some shortcomings that we could easily attack and get easy high reward/low risk CAPAs out of.

      The company decided to continue using the scheduling setup but not to invest in a single opportunity we discovered, including simple people processes. Took exactly zero wins. Fuckin amazing.

      • Croquette@sh.itjust.works · +7 · 4 months ago

        Yeah, but they didn’t have a line for that in their Excel sheet, so how are they supposed to find that money?

        Bean counters hate nothing more than imprecise cost savings. Are they gonna save 100k in the next year? 200k? We can’t have that imprecision now, can we?

      • dejected_warp_core@lemmy.world · +2 · 4 months ago

        Honestly, this sounds like the analysis uncovered some managerial failings and so they buried the results; a cover-up.

        Also, and I have yet to understand this, but selling “people space” solutions to very technically/engineering-inclined management is incredibly hard to do. Almost like there’s a typical blind spot for solving problems outside their area of expertise. I hate generalizing like this but I’ve seen this happen many times, at many workplaces, over many years.

        • Bakkoda@sh.itjust.works · +1 · edited · 4 months ago

          No, I would think you are spot on. I’m constantly told I’m a “type [insert term from the flavor-of-the-month managerial class they just took]” and that my conversations intimidate or emasculate people. They are probably usually correct, but I find it’s usually just an attempt to cover their asses. I’m a contract worker; I was hired for a purpose with a limited time window, and I fuckin deliver results even when they ignore 90% of the analysis. It’s gotta piss them off.

          • dejected_warp_core@lemmy.world · +1 · edited · 4 months ago

            It’s gotta piss them off.

            That’s not unusual, sadly. Sometimes someone brings in a contractor in an attempt to foist change, as they’re not tainted by loyalties or the culture when it comes to saying ugly things. So anger and disruption are the product you’ve actually been hired to deliver; surprise! What pains me the most here is when I see my fellow contractors walk into just such a situation and wind up worse for wear as a result.

            Edit: the key here is to see this coming and devise a communication plan to temper your client’s desire to stir the pot, and get yourself out of the line of fire, so to speak.

  • مهما طال الليل@lemm.ee · +56/−7 · edited · 4 months ago

    The trick is to be the one scamming your management with AI.

    “The model is still training…”

    “We will solve this <unsolvable problem> with Machine Learning”

    “The performance is great on my machine but we still need to optimize it for mobile devices”

    Ever since my Fortune 200 employer did a push for AI, I haven’t worked a day in a week.

  • Sk1ll_Issue@feddit.nl · +31 · 4 months ago

    The study identifies a disconnect between the high expectations of managers and the actual experiences of employees

    Did we really need a study for that?

  • MonkderVierte@lemmy.ml · +28/−2 · 4 months ago

    The study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.

    • Meron35@lemmy.world · +7 · 4 months ago

      The study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.

      FTFY

  • iAvicenna@lemmy.world · +25 · 4 months ago

    because on top of your duties you now have to check whatever the AI is doing in place of the employee it has replaced

  • TrickDacy@lemmy.world · +33/−8 · 4 months ago

    AI is stupidly used a lot, but this seems odd. For me, GitHub Copilot has sped up writing code. Hard to say how much, but it definitely saves me seconds several times per day. It certainly hasn’t added to my workload…

    • Cryophilia@lemmy.world · +34 · 4 months ago

      Probably because the vast majority of the workforce does not work in tech but has had these clunky, failure-prone tools foisted on them by tech. Companies are inserting AI into everything, so what used to be a problem that could be solved in 5 steps now takes 6 steps, with the new step being “figure out how to bypass the AI to get to the actual human who can fix my problem”.

      • jubilationtcornpone@sh.itjust.works · +11 · 4 months ago

        I’ve thought for a long time that there are a ton of legitimate business problems out there that could be solved with software. Not with AI. AI isn’t necessary, or even helpful, in most of these situations. The problem is that creating meaningful solutions requires the people who write the checks to actually understand some of these problems. I can count on one hand the number of business executives I’ve met who were actually capable of that.

    • Cosmicomical@lemmy.world · +16/−1 · 4 months ago

      For anything more than basic autocomplete, Copilot has only given me broken code. Not even subtly broken, just stupidly wrong stuff.

    • HakFoo@lemmy.sdf.org · +16/−1 · 4 months ago

      They’ve got a guy at work whose job title is basically AI Evangelist. This is terrifying in that it’s a financial tech firm handling twelve figures a year of business, the last place where people will put up with “plausible bullshit” in their products.

      I grudgingly installed the Copilot plugin, but I’m not sure what it can do for me better than a snippet library.

      I asked it to generate a test suite for a function, as a rudimentary exercise, so it was able to identify “yes, there are n return values, so write n test cases” and “You’re going to actually have to CALL the function under test”, but was unable to figure out how to build the object being fed in to trigger any of those cases; to do so would require grokking much of the code base. I didn’t need to burn half a barrel of oil for that.

      I’d be hesitant to trust it with “summarize this obtuse spec document” when half the time said documents are self-contradictory or downright wrong. Again, plausible bullshit isn’t suitable.

      Maybe the problem is that I’m too close to the specific problem. AI tooling might be better for open-ended or free-association “why not try glue on pizza” type discussions, but when you already know “send exactly 4-7-Q-unicorn emoji in this field or the transaction is converted from USD to KPW” having to coax the machine to come to that conclusion 100% of the time is harder than just doing it yourself.

      I can see the marketing and sales people love it, maybe customer service too, click one button and take one coherent “here’s why it’s broken” sentence and turn it into 500 words of flowery says-nothing prose, but I demand better from my machine overlords.

      Tell me when Stable Diffusion figures out that “Carrying battleaxe” doesn’t mean “katana randomly jutting out from forearms”, maybe at that point AI will be good enough for code.

      • okwhateverdude@lemmy.world · +3 · edited · 4 months ago

        Maybe the problem is that I’m too close to the specific problem. AI tooling might be better for open-ended or free-association “why not try glue on pizza” type discussions, but when you already know “send exactly 4-7-Q-unicorn emoji in this field or the transaction is converted from USD to KPW” having to coax the machine to come to that conclusion 100% of the time is harder than just doing it yourself.

        I, too, work in fintech. I agree with this analysis. That said, we currently have a large mishmash of regexes doing classification, and they aren’t bulletproof. It would be useful to look into using something like a fine-tuned BERT model to classify the transactions that pass through the regex net without getting classified. And the PoC would be just context-stuffing some examples into a few-shot prompt for an LLM with a constrained grammar (just the classification, plz). Because our finance generalists basically have to do this same process, and it would be nice to augment their productivity with a hint: “The computer thinks it might be this kind of transaction.”
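
        A rough Python sketch of that regex-first, model-fallback flow; the patterns, labels, and zero-shot model are invented placeholders, not our production setup:

            # Regexes classify what they can; a model only suggests labels for
            # the leftovers, for a human to confirm. All data here is made up.
            import re
            from transformers import pipeline

            PATTERNS = {
                "payroll": re.compile(r"\b(payroll|salary|wages)\b", re.I),
                "card_fees": re.compile(r"\b(interchange|card fee)\b", re.I),
            }
            fallback = pipeline("zero-shot-classification",
                                model="facebook/bart-large-mnli")

            def classify(description: str) -> str:
                for label, pattern in PATTERNS.items():
                    if pattern.search(description):
                        return label  # the regex net caught it
                guess = fallback(description, candidate_labels=list(PATTERNS))
                return f"suggested:{guess['labels'][0]}"  # needs human sign-off

            print(classify("ACH transfer - payroll run 2024-06"))
            print(classify("monthly interchange settlement"))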

      • merc@sh.itjust.works · +1 · 4 months ago

        I’d be hesitant to trust it with “summarize this obtuse spec document” when half the time said documents are self-contradictory or downright wrong. Again, plausible bullshit isn’t suitable.

        That’s why I have my doubts when people say it’s saving them a lot of time or effort. I suspect it’s planting bombs that they simply haven’t yet found. Like it generated code and the code seemed to work when they ran it, but it contains a subtle bug that will only be discovered later. And the process of tracking down that bug will completely wreck any gains they got from using the LLM in the first place.

        Same with the people who are actually using it on human languages. Like, I heard a story of a government that was overwhelmed with public comments or something, so they were using an LLM to summarize those so they didn’t have to hire additional workers to read the comments and summarize them. Sure… and maybe it’s relatively close to what people are saying 95% of the time. But 5% of the time it’s going to completely miss a critical detail. So, you go from not having time to read all the public comments so not being sure what people are saying, to having an LLM give you false confidence that you know what people are saying even though the LLM screwed up its summary.

      • rottingleaf@lemmy.world · +1/−2 · 4 months ago
        4 months ago

        Again, plausible bullshit isn’t suitable.

        It is suitable when you’re the one producing the bullshit and you only need it accepted.

        Which is what the people pushing for this are. Their jobs and occupations tolerate mere imitation, so they think that for some reason the same works for airplanes, railroads, computers.

    • ripcord@lemmy.world · +6 · edited · 4 months ago

      I’ll say that so far I’ve been pretty unimpressed by Codeium.

      At the very most it has given me a few minutes total of value in the last 4 months.

      I’ve gotten some benefit from various generic chat LLMs like ChatGPT, but most of that has been somewhat improved versions of the kind of info I was getting from Stack Exchange threads and the like.

      There’s been some mild value in some cases but so far nothing earth shattering or worth a bunch of money.

      • TrickDacy@lemmy.world · +6/−1 · 4 months ago

        I have never heard of Codeium but it says it’s free, which may explain why it sucks. Copilot is excellent. Completely life changing, no. That’s not the goal. The goal is to reduce the manual writing of predictable and boring lines of code and it succeeds at that.

        • rekorse@lemmy.world · +4/−1 · 4 months ago

          Cool, totally worth burning the planet to the ground for it. Also love that we are spending all this time and money to solve the extremely important problem of coding taking slightly too long.

          Think of all the progress being made!

      • jj4211@lemmy.world · +4/−1 · 4 months ago

        I presume it depends on the area and the technologies you are working with. I assume it does better for some popular things that tend to be very verbose and tedious.

        My experience, including with a Copilot trial, has been like yours: a bit underwhelming. But I assume others must be getting benefit.

    • toddestan@lemm.ee · +6/−1 · 4 months ago

      GitHub Copilot is about the only AI tool I’ve used at work so far. I’d say it overall speeds things up, particularly with boilerplate-type code that it can just bang out, reducing a lot of the tedious but not particularly difficult coding. For more complicated things it can also be helpful, but I find it’s also pretty good at suggesting things that look correct at a glance but are actually subtly wrong, leading to either having to carefully double-check what it suggests, or having to fix bugs in code that I wrote but didn’t actually write.

      • okwhateverdude@lemmy.world · +3/−1 · 4 months ago

        Leading to either having to carefully double-check what it suggests, or having to fix bugs in code that I wrote but didn’t actually write.

        100% this. A recent update from JetBrains turned on the AI shitcomplete (I guess my org decided to pay for it). Not only is it slow af, but in trying it, I discovered that I have to fight the suggestions because they are just wrong. And what is terrible is I know my coworkers will definitely use it, and I’ll be stuck fixing their low-skill shit that is now riddled with subtle AI shitcomplete. The tools are simply not ready, and anyone who tells you they are does not have the skill or experience to back up their assertion.

      • TrickDacy@lemmy.world · +1/−1 · 4 months ago

        Every time I’ve discussed this on Lemmy someone says something like this. I haven’t usually had that problem. If something it suggests seems like more than something I can quickly verify is intended, I just ignore it. I don’t know why I am the only person who has good luck with this tech but I certainly do. Maybe it’s just that I don’t expect it to work perfectly. I expect it to be flawed because how could it not be? Every time it saves me from typing three tedious lines of code it feels like a miracle to me.

    • Melvin_Ferd@lemmy.world · +1/−6 · edited · 4 months ago

      The media has been anti-AI from the start. They only write hit pieces on it, and we all rabble-rouse about the headline as if it were fact. It’s the left’s version of articles like “locals report uptick in beach shitting.”

  • alienanimals@lemmy.world · +24 · 4 months ago

    The billionaire owner class continues to treat everyone like shit. They blame AI and the idiots eat it up.

    • kent_eh@lemmy.ca · +18 · 4 months ago

      Except it didn’t make more jobs; it just made more work for the remaining employees who weren’t laid off (because the boss thought the AI could let them have a smaller payroll).

  • Hackworth@lemmy.world · +37/−17 · 4 months ago

    I have the opposite problem. Gen A.I. has tripled my productivity, but the C-suite here is barely catching up to 2005.

        • Flying Squid@lemmy.world · +4/−1 · 4 months ago

          Cool, enjoy your entire industry going under thanks to cheap and free software and executives telling their middle managers to just shoot and cut it on their phone.

          Sincerely,

          A former video editor.

          • Hackworth@lemmy.world · +3 · 4 months ago

            If something can be effectively automated, why would I want to continue to invest energy into doing it manually? That’s literal busy work.

                • Flying Squid@lemmy.world · +4/−2 · 4 months ago

                  Video editing is not busy work. You’re excusing executives telling middle managers to put out inferior videos to save money.

                  You seem to think what I used to do was just cutting and pasting and had nothing to do with things like understanding filmmaking techniques, the psychology of choosing and arranging certain shots, and making do with what you have when you don’t have enough to work with.

                  But they don’t care about that anymore because it costs money. Good luck getting an AI to do that as well as a human any time soon. They don’t care because they save money this way.

          • Hackworth@lemmy.world · +4 · 4 months ago

            “Soup to nuts” just means I am responsible for the entirety of the process, from pre-production to post-production. Sometimes that’s like a dozen roles. Sometimes it’s me.

    • themurphy@lemmy.ml · +21/−17 · 4 months ago

      Same, I’ve automated a lot of my tasks with AI. No way 77% are “hampered” by it.

      • Hackworth@lemmy.world · +30/−2 · 4 months ago

        I dunno, mishandling of AI can be worse than avoiding it entirely. There’s a middle manager here that runs everything her direct-report copywriter sends through ChatGPT, then sends the response back as a revision. She doesn’t add any context to the prompt, say who the audience is, or use the custom GPT that I made and shared. That copywriter is definitely hampered, but it’s not by AI, really, just run-of-the-mill manager PEBKAC.

        • Hackworth@lemmy.world · +17/−16 · edited · 4 months ago

          Voiceover recording, noise reduction, rotoscoping, motion tracking, matte painting, transcription - and there’s a clear path forward to automate rough cuts and integrate all that with digital asset management. I used to do all of those things manually/practically.

          e: I imagine the downvotes coming from the same people that 20 years ago told me digital video would never match the artistry of film.

          • aesthelete@lemmy.world · +15/−5 · edited · 4 months ago

            imagine the downvotes coming from the same people that 20 years ago told me digital video would never match the artistry of film.

            They’re right IMO. Practical effects still look and age better than (IMO very obvious) digital effects. Oh and digital deaging IMO looks like crap.

            But, this will always remain an opinion battle anyway, because quantifying “artistry” is in and of itself a fool’s errand.

            • Hackworth@lemmy.world · +13/−5 · 4 months ago

              Digital video, not digital effects - I mean the guys I went to film school with that refused to touch digital videography.

          • WalnutLum@lemmy.ml · +5 · 4 months ago

            All the models I’ve used that do TTS/RVC and rotoscoping have definitely not produced professional results.

            • Hackworth@lemmy.world · +2/−1 · edited · 4 months ago

              What are you using? Cause if you’re a professional, and this is your experience, I’d think you’d want to ask me what I’m using.

              • WalnutLum@lemmy.ml · +2 · 4 months ago

                Coqui for TTS, RVC UI for matching the TTS to the actor’s intonation, and DWPose -> controlnet applied to SDXL for rotoscoping

                • Hackworth@lemmy.world · +2 · 4 months ago

                  Full open source, nice! I respect the effort that went into that implementation. I pretty much exclusively use 11 Labs for TTS/RVC, turn up the style, turn down the stability, generate a few, and pick the best. I do find that longer generations tend to lose the thread, so it’s better to batch smaller script segments.

                  Unless I misunderstand ya, your controlnet setup is for what would be rigging and animation rather than roto. I do agree that while I enjoy the outputs of pretty much all the automated animators, they’re not ready for prime time yet. Although I’m about to dive into KREA’s new key framing feature and see if that’s any better for that use case.

      • FaceDeer@fedia.io · +8/−4 · 4 months ago

        A lot of people are keen to hear that AI is bad, though, so the clicks go through on articles like this anyway.

        • themurphy@lemmy.ml · +5/−2 · 4 months ago

          I’m not working in tech either. Everyone relying on a computer can use this.

          Also, medicine and radiology are two areas that will benefit from this, especially the patients.

    • jjjalljs@ttrpg.network · +25/−3 · 4 months ago

      I mean if it’s easy you can probably script it with some other tool.

      “I have a list of IDs and need to make them links to our internal tool’s pages” is easy and doesn’t need AI. That’s something a product guy was struggling with and I solved in like 30 seconds with a Google Sheet and concatenation.
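
      (For reference, the no-AI version of that chore in Python; the URL template is made up:)

          # Turn a list of IDs into links with plain string formatting.
          ids = ["1042", "1043", "1051"]
          for item_id in ids:
              print(f"https://tools.example.com/items/{item_id}")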

      • silasmariner@programming.dev · +3/−3 · 4 months ago

        Yeah but the idea of AI in that kind of workflow is so that the product guy can actually do it themselves without asking you and in less than 30 mins

        • jjjalljs@ttrpg.network · +14/−1 · 4 months ago

          Yeah, but that’s like using an entire gasoline-powered car to play a CD.

          Competent product guy should be able to learn some simpler tools like Google sheets.

          • silasmariner@programming.dev · +3/−2 · 4 months ago

            No arguments from me that it’s better if people are just better at their job, and I like to think I’m good at mine too, but let’s be real - a lot of people are out of their depth and I can imagine it can help there. OTOH is it worth the investment in time (from people who could themselves presumably be doing astonishing things) and carbon energy? Probably not. I appreciate that the tech exists and it needs to, but shoehorning it in everywhere is clearly bollocks. I just don’t know yet how people will find it useful and I guess not everyone gets that spending an hour learning to do something that takes 10s when you know how is often better than spending 5 mins making someone or something else do it for you… And TBF to them, they might be right if they only ever do the thing twice.

            • Balder@lemmy.world · +6 · 4 months ago

              I think the actual problem here is that if the product people can’t learn such a simple thing by themselves, they also won’t be able to correctly prompt the LLM to their use case.

              That said, I do think LLMs can boost productivity a lot. I’m learning a new framework, and since there are so many details to learn about it, it’s fast to ask ChatGPT the proper way to do X in this framework, etc. Although that only works because I already studied the foundational concepts of that framework first.

              • silasmariner@programming.dev · +4 · 4 months ago

                I think the actual problem is that they won’t know when they’ve got something that compiles but is wrong… I dunno though. I’ve never seen someone doing this and I can only speculate tbh. I only ever asked ChatGPT a couple of times, as a joke to myself when I got stuck, and it spouted completely useless nonsense both times… Although on one occasion the wrong code it produced looked like it had the pattern of a good idiom behind it and I stole that.

    • hswolf@lemmy.world · +14 · 4 months ago

      It also helps you get a starting point when you don’t know how to ask a search engine the right question.

      But people misinterpret its usefulness and think it can handle complex and context-heavy problems, which most of the time results in hallucinated crap.

    • captainlezbian@lemmy.world · +10 · 4 months ago

      And are those use cases common and publicized? Because I see it being advertised as “improves productivity” for a novel tool with myriad uses, so I expect those trying to sell it to me to give me some vignettes, not just tell my boss it’ll improve my productivity. And if I were in management, I’d want to know how it’ll do that beyond just saying “it’ll assist in easy and menial tasks.” Will it be easier than doing them? Many tools can improve efficiency on a task at a similar time and energy investment to the return. Are those tasks really so common? Will other tools be worse?

    • fine_sandy_bottom@discuss.tchncs.de · +1 · 4 months ago

      Well yes, but it’s not often I encounter an easy or menial task for which AI is the best solution.

      For example, searching documentation is usually more informative than asking a bot trained on said documentation.