Floating-point arithmetic - eviltoast
  • Kajika@lemmy.ml (OP) · 9 months ago

    Took me 2 hours to find out why the final output of a neural network was a bunch of NaNs. This is always very annoying, but I can’t really complain; it makes sense. Just sucks.

    • flying_sheep@lemmy.ml · 9 months ago

      I guess you can always just add an `assert not data.isna().any()` in strategic locations

      • Kajika@lemmy.ml (OP) · 9 months ago

        That could be a nice way. Sadly it was in a C++ code base (using TensorFlow), so no such nice things (it would be slow, too). I skill-issued myself by assuming a struct would be zero-initialized: `MyStruct input;` is not, while `MyStruct input{};` is (that was the fix). Long story.
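        A minimal sketch of the difference described above (this `MyStruct` is a hypothetical stand-in, since the real struct isn’t shown). For an aggregate with no constructors, `MyStruct input;` leaves the members with indeterminate values, while `MyStruct input{};` value-initializes them, which zero-initializes every member:

        ```cpp
        #include <cassert>

        // Hypothetical stand-in for the struct in the story: an aggregate
        // with no constructors, so default-initialization leaves its
        // members indeterminate.
        struct MyStruct {
            float bias;
            float weights[4];
        };

        int main() {
            // MyStruct input;   // default-initialization: members are
            //                   // indeterminate; reading them is UB and can
            //                   // feed garbage (or NaN) into a network.

            MyStruct input{};    // value-initialization: every member is
                                 // zero-initialized.
            assert(input.bias == 0.0f);
            for (float w : input.weights)
                assert(w == 0.0f);
            return 0;
        }
        ```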

        • fkn@lemmy.world · 9 months ago

          I too have forgotten to `memset` my structs in C++ TensorFlow after prototyping in Python.

        • TheFadingOne@feddit.de · 9 months ago (edited)

          If you use GNU libc, the feenableexcept function, which lets you enable trapping on certain floating-point exceptions, can be useful for catching unexpected/unwanted NaNs.
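          A minimal sketch of that approach on Linux/glibc. Unmasking `FE_INVALID` makes any operation that would produce a NaN raise SIGFPE at the offending instruction; the `sigsetjmp` escape below is just one way to observe the trap in a self-contained program (when debugging you would usually let SIGFPE abort the process and inspect the backtrace):

          ```cpp
          #include <fenv.h>    // feenableexcept/fedisableexcept: glibc
                               // extensions (g++ defines _GNU_SOURCE)
          #include <csetjmp>
          #include <csignal>
          #include <cstdio>

          static sigjmp_buf fpe_env;

          extern "C" void on_fpe(int) {
              siglongjmp(fpe_env, 1);  // escape the faulting instruction
          }

          // Returns true if the 0.0/0.0 below was trapped as FE_INVALID.
          bool nan_trap_demo() {
              std::signal(SIGFPE, on_fpe);
              feenableexcept(FE_INVALID);  // trap NaN-producing operations

              volatile double zero = 0.0;
              if (sigsetjmp(fpe_env, 1) == 0) {
                  volatile double bad = zero / zero;  // raises FE_INVALID
                  (void)bad;
                  return false;  // not reached when trapping works
              }
              feclearexcept(FE_INVALID);    // clear the sticky flag
              fedisableexcept(FE_INVALID);  // back to non-trapping default
              return true;
          }

          int main() {
              if (nan_trap_demo())
                  std::puts("trapped: an operation tried to produce a NaN");
              else
                  std::puts("no trap");
          }
          ```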