@zogwarg - eviltoast
  • 5 Posts
  • 237 Comments
Joined 2 years ago
Cake day: July 4th, 2023


  • Screaming at the void towards Chuunibyou (wiki) Eliezer: YOU ARE NOT A NOVEL CHARACTER; THINKING ABOUT WHAT BENEFITS THE NOVELIST vs. THE CHARACTER HAS NO BEARING ON REAL LIFE.

    Sorry for yelling.

    Minor notes:

    But <Employee> thinks I should say it, so I will say it. […] <Employee> asked me to speak them anyways, so I will.

    It’s quite petty of Yud to be so passive-aggressive towards the employee who insisted he at least try to discuss coping, name-dropping him not once but twice (although that is also likely just poor editing).

    “How are you coping with the end of the world?” […Blah…Blah…Spiel about going mad tropes…]

    Yud, when journalists ask you “How are you coping?”, they don’t expect you to be “going mad facing the apocalypse”; that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or barring that, to give a message of hope or of desperation. They are trying to engage with you as a real human being, not as a novel character.

    Alternatively, it’s also a question to gauge how full of shit you may be (by gauging how emotionally invested you are).

    The trope of somebody going insane as the world ends, does not appeal to me as an author, including in my role as the author of my own life. It seems obvious, cliche, predictable, and contrary to the ideals of writing intelligent characters. Nothing about it seems fresh or interesting. It doesn’t tempt me to write, and it doesn’t tempt me to be.

    Emotional turmoil, and how characters cope or fail to cope, makes excellent literature! That all you can think of is “going mad” reflects only your poor imagination as both a writer and a reader.

    I predict, because to them I am the subject of the story and it has not occurred to them that there’s a whole planet out there too to be the story-subject.

    This is only true if they actually accept the premise of what you are trying to sell them.

    […] I was rolling my eyes about how they’d now found a new way of being the story’s subject.

    That is deeply ironic, coming from someone who makes choices based on being the main character of a novel.

    Besides being a thing I can just decide, my decision to stay sane is also something that I implement by not writing an expectation of future insanity into my internal script / pseudo-predictive sort-of-world-model that instead connects to motor output.

    If you are truly doing this, I would say that means you are expecting insanity wayyyyy too much. (Also psychobabble.)

    […Too painful to actually quote psychobabble about getting out of bed in the morning…]

    In which Yud goes into in-depth, self-aggrandizing, nonsensical detail about a very mundane trick for getting out of bed in the morning.


  • A fairly good and nuanced guide. No magic silver-bullet shibboleths for us.

    I particularly like this section:

    Consequently, the LLM tends to omit specific, unusual, nuanced facts (which are statistically rare) and replace them with more generic, positive descriptions (which are statistically common). Thus the highly specific “inventor of the first train-coupling device” might become “a revolutionary titan of industry.” It is like shouting louder and louder that a portrait shows a uniquely important person, while the portrait itself is fading from a sharp photograph into a blurry, generic sketch. The subject becomes simultaneously less specific and more exaggerated.

    I think it’s an excellent summary, and it connects with the “Barnum effect” of LLMs, which makes them appear smarter than they are. It also suggests that it’s not the presence of certain words, but the absence of certain others (and, well, of content) that is a good indicator of LLM-extruded garbage.


  • I’ll gladly endorse most of what the author is saying.

    This isn’t really a debate club, and I’m not really trying to change your mind. I will just end on this note:

    I’ll start with the topline findings, as it were: I think the idea of a so-called “Artificial General Intelligence” is a pipe dream that does not realistically or plausibly extend from any currently existent computer technology. Indeed, my strong suspicion AGI is wholly impossible for computers as we presently understand them.

    Neither the author nor I really suggest that it is impossible for machines to think (indeed, humans are biological machines), only that it is likely (nothing so stark as inherent) that Turing machines cannot. “Computable” in the essay means something specific.

    Simulation != Simulacrum.

    And because I can’t resist, I’ll just clarify that when I said:

    Even if you (or anyone) can’t design a statistical test that can detect the difference in a sequence of heads or tails, that doesn’t mean one doesn’t exist.

    It means that the test does (or could possibly) exist; it’s just not achievable by humans. [Although I will also note that for methods that don’t rely on measuring the physical world (pseudo-random number generators), the tests designed by humans are more than adequate to discriminate the generated list from the real thing.]


  • Even if true, why couldn’t the electrochemical processes be simulated too?

    • You’re missing the argument: even if you can simulate the process of digestion perfectly, no actual digestion takes place in the real world.
    • Even if you simulate biological processes perfectly, no actual biology occurs.
    • The main argument from the author is that trying to divorce intelligence from biological imperatives can be very foolish, which is why they highlight that even a cat is smarter than an LLM.

    But even if it is, it’s “just” a matter of scale.

    • Fundamentally, what the author is saying is that it’s a difference in kind, not a difference in quantity.
    • Nothing actually guarantees that the laws of physics are computable, and nothing guarantees that our best model actually fits reality (aside from being a very good approximation).
    • Even numerically solving the Hamiltonians from quantum mechanics is extremely difficult in practice (see the back-of-the-envelope sketch just after this list).
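
    To make that last bullet concrete, here is a back-of-the-envelope sketch (system sizes chosen arbitrarily by me, not taken from the essay): merely storing the state vector of n two-level quantum systems takes 2^n complex amplitudes, before any time evolution of a Hamiltonian is even attempted.

    ```python
    # Memory needed just to HOLD the state vector of n two-level systems:
    # 2**n complex amplitudes at 16 bytes each (complex128).
    # Actually time-evolving a Hamiltonian on top of this is strictly harder.
    for n in (10, 30, 50, 100):
        amplitudes = 2 ** n
        gigabytes = amplitudes * 16 / 1e9
        print(f"n = {n:3d}: {amplitudes:.2e} amplitudes, {gigabytes:.2e} GB")
    ```

    Around n = 50 this already exceeds the memory of any existing machine, which is why “just simulate the physics” is doing an enormous amount of unexamined work in these arguments.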

    I do know how to write a program that produces indistinguishable results from a real coin for a simulation.

    • Even if you (or anyone) can’t design a statistical test that can detect the difference in a sequence of heads or tails, that doesn’t mean one doesn’t exist.
    • Importantly, you are also restricting yourself to the heads-or-tails sequence, ignoring the coin moving the air, pulling on the planet, and plopping back down into a hand. I challenge you to actually write a program that achieves these things.
    • Also, decent random-number generation is not, properly speaking, achievable by direct computation [unless, again, you simulate physics, but then you still have to choose properly random starting conditions, even if you assume you have a capable simulator]: modern computers use things like component temperature, execution timing, and user interaction to add “entropy” to random-number generation (see the sketch just after this list).
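
    As a toy demonstration of the first and third bullets, here is a minimal sketch (Python; the weak generator and its parameters are my own arbitrary choices, not anything from the original discussion). A simple human-designed test, counting runs of identical outcomes, immediately separates the low-order bit of a classic linear congruential generator from bits drawn out of the OS entropy pool. Better PRNGs defeat this particular test, which is exactly the point: whether a distinguishing test exists is a separate question from whether anyone can find it.

    ```python
    import os

    def lcg_low_bits(n, seed=42, a=1103515245, c=12345, m=2**31):
        """Low-order bit of a classic LCG. With m a power of two and
        a, c both odd, this bit strictly alternates, a known weakness."""
        bits, x = [], seed
        for _ in range(n):
            x = (a * x + c) % m
            bits.append(x & 1)
        return bits

    def urandom_bits(n):
        """Bits drawn from the OS entropy pool (timing jitter, hardware
        noise, interrupts), i.e. not produced by pure computation."""
        data = os.urandom((n + 7) // 8)
        return [(data[i // 8] >> (i % 8)) & 1 for i in range(n)]

    def runs_count(bits):
        """Number of maximal blocks of identical bits: a fair coin gives
        about n/2 runs, a strictly alternating sequence gives n."""
        return 1 + sum(b != prev for prev, b in zip(bits, bits[1:]))

    n = 10_000
    print("LCG low bit:", runs_count(lcg_low_bits(n)))  # ~10000, flunks
    print("os.urandom: ", runs_count(urandom_bits(n)))  # ~5000, coin-like
    ```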

    As a summary,

    • When reducing any problem to a “simpler” one, you have to be careful about what you ignore.
    • The simulation argument is a bit irrelevant, but, as a small aside, it is not guaranteed to be possible in principle, and it is certainly intractable with current physics models/technology.
    • Human intelligence has a lot of externalities and cannot be reduced to pure “functional objects”.
      • If it’s just about input/output, you could be fooled by a tape recorder and a simple filing system, but I think you’ll agree those aren’t intelligent. The output has meaning to you, but it doesn’t have meaning for the tape recorder.


  • That’s because there’s absolutely reams of writing out there about Sonnet 18—it could draw from thousands of student essays and cheap study guides, which allowed it to remain at least vaguely coherent. But when forced away from a topic for which it has ample data to plagiarize, the illusion disintegrates.

    Indeed, any intelligence present is that of the pilfered commons, and that of the reader.

    I had the same thought about the few times LLMs appear to be successful at translation (where proper translation requires understanding). The model isn’t exactly doing nothing, but a lot of the work is done by the reader striving to make sense of what they read; because humans are clever, they can sometimes glimpse the meaning through the filter of the AI mapping one set of words onto another, given enough context. (Until they really can’t, or the subtleties of language completely reverse the meaning when not handled with the proper care.)


  • Some changes to Advent of Code this year: it will only have 12 days of puzzles, and no longer a global leaderboard, according to the FAQ:

    Why did the number of days per event change?

    It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).

    Scaling it down a bit rather than completely burning out is nice, I think.

    What happened to the global leaderboard?

    The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn’t compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I’ve made it so you can share a read-only view of your private leaderboard. Please don’t use this feature or data to create a “new” global leaderboard.)

    While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc?

    If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don’t agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.

    Probably the most positive change here. It’s a bit of a shame we can’t have nice things, as there’s no real way to police stuff like people using AI for leaderboard times. Keeping only private leaderboards, for smaller groups of people that can set expectations, is unfortunately the only pragmatic thing to do.

    Should I use AI to solve Advent of Code puzzles?

    No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.

    It’s nice to know the creator (Eric Wastl) has a good head on his shoulders.