I thought it was satire, before I looked at the source. - eviltoast
  • bizarroland@fedia.io
    3 months ago

    Yes and no. They can do the job, but they are too easily tricked and too quick to hallucinate to do it reliably.

    Even compared to a human after 8 hours of continuous customer support, any current LLM model will produce far more errors, of greater variety and risk, than any human who isn’t actively trying to destroy your company.