@JFranek - eviltoast
  • 0 Posts
  • 13 Comments
Joined 1 month ago
Cake day: October 9th, 2024


  • That article gave me whiplash. First part: pretty cool. Second part: deeply questionable.

    For example, these two paragraphs from the sections ‘problem with code’ and ‘magic of data’:

    “Modular and interpretable code” sounds great until you are staring at 100 modules with 100,000 lines of code each and someone is asking you to interpret it.

    Regardless of how complicated your program’s behavior is, if you write it as a neural network, the program remains interpretable. To know what your neural network actually does, just read the dataset

    Well, “just read the dataset bro” sounds great until you are staring at a dataset with 100,000 examples and someone is asking you to interpret it.


  • Yeah, neural network training is notoriously easy to reproduce /s.

    Just a few things can affect the results: source data, data labels, network structure, training parameters, version of the training script, versions of libraries, seed for the random number generator, hardware, operating system.
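    To illustrate just the seeding item from that list, here is a minimal sketch (the `seed_everything` helper is hypothetical, not from the article) of pinning the random number generators in a Python training script. Even with all seeds fixed, the other factors listed above — library versions, hardware, OS — can still change results.

    ```python
    import os
    import random

    def seed_everything(seed: int) -> None:
        """Hypothetical helper: seed the common sources of randomness."""
        os.environ["PYTHONHASHSEED"] = str(seed)  # affects hash randomization in subprocesses
        random.seed(seed)                          # Python stdlib RNG
        # In a real training script you would also seed the libraries in use, e.g.:
        # numpy.random.seed(seed); torch.manual_seed(seed)

    seed_everything(0)
    a = [random.random() for _ in range(3)]
    seed_everything(0)
    b = [random.random() for _ in range(3)]
    assert a == b  # same seed, same draws
    ```

    And this only covers randomness inside one process; it says nothing about nondeterministic GPU kernels or data-loading order.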

    Also, deployment is a whole other can of worms.

    Also, even if you have an open-source script, data, and labels, there’s no guarantee you’ll have useful documentation for any of them.