It's not wrong though - eviltoast
  • amki@feddit.de
    1 year ago

    Sure, but until I see such a thing I choose not to believe in fairy tales.

    Decompiling arbitrary-architecture machine code is quite a few levels above everything I’ve seen so far, which is generally pretty basic pattern recognition paired with statistics and training reinforcement.

    I’d argue decompiling arbitrary machine code into either another machine code or legible higher-level code is in a whole other league compared to what AI has proven to be capable of.

    Especially because a decompiler that’s only 90% accurate is useless.

    • GBU_28@lemm.ee
      1 year ago

      Again, you aren’t seeing this because these models are being developed for private enterprise purposes.

      Regarding deep machine-code analysis, sure, that’s gonna take work, but the whole hallucination thing is an off-the-shelf, rookie problem these days.

      • Rikudou_Sage@lemmings.world
        1 year ago

        It’s not, though. Hallucinations are inherent to the technology; it’s not a matter of training. Good training can greatly reduce the likelihood, but cannot eliminate it.