Amazon builds AI model to optimize packaging
  • Syntha@sh.itjust.works · 7 months ago

    Yeah, well, it’s not the same. Models are wrong all the time; why use a different term at all when it’s just “being wrong”?

    • polygon6121@lemmy.world · 7 months ago

      The model makes decisions thinking it is right, but for whatever reason it can’t see a firetruck or stop sign, or it misidentifies the object… you know, almost like how a hallucinating human would perceive something from external sensory input that is not there.

      I don’t mind giving it another term, but “being wrong” is misleading. You are correct, though, in the sense that it depends on each given case…

      • Syntha@sh.itjust.works · 7 months ago

        No, the model isn’t “thinking”; no model in use today has anything resembling an internal cognitive process. It is making a prediction. A COVID test predicts whether you have the COVID-19 virus inside you or not. If its prediction contradicts your biological state, it is wrong. If an object recognition algorithm does not predict there being a firetruck, how is that not wrong in the same way?
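
        To make that concrete, here’s a minimal sketch in Python (the labels and predictions are made up purely for illustration, not taken from any real system):

            # A prediction either matches the ground truth or it doesn't.
            # The example labels below are hypothetical.
            ground_truth = ["firetruck", "stop sign", "car"]
            predictions  = ["car",       "stop sign", "car"]  # the model misses the firetruck

            for truth, pred in zip(ground_truth, predictions):
                status = "correct" if pred == truth else "wrong"
                print(f"truth={truth:<10} pred={pred:<10} -> {status}")

        On this framing there is no third category: each output is checked against the world, and a mismatch is simply a wrong prediction, whatever we choose to call it.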