@BrickedKeyboard - eviltoast
  • 1 Post
  • 41 Comments
Joined 1 year ago
Cake day: August 31, 2023

  • now if that isn’t just the adderall talking

    Nail on the head. Especially in internet/‘tech bro’ culture. All my leads at work also have that kind of “extreme OCD” attitude. Sorry if you felt emotionally offended; I didn’t mean it that way.

    The rest of your post is, ironically, very much something Eliezer posits a superintelligence would be able to do. Or something from the anime Death Note. I use a few words or phrases; you analyze the shit out of them, try to extract all the information you can, and conclude all this stuff, like:

    opening gambit

    “amongst friends”

    hiding all sorts of opinions behind a borrowed language

    guff about “discovering reality”

    real demands as “getting with the right programme”,

    allegedly, scoring points “off each other”

    “Off each other” was another weasel phrase

    you know that at least at first blush you weren’t scoring points off anyone

    See, everything you wrote above is a possibly correct interpretation of what I wrote. It’s like English-lit analysis after the author’s dead. Eliezer posits a superintelligence could use this kind of analysis to convince operators with admin authority to break the rules, and L in Death Note uses it to almost catch the killer.

    It’s also all false in this case (which is also why a superintelligence probably can’t actually do this). I’ve been on the internet long enough to know it is almost impossible to convince someone of anything, unless they were already willing and you just link some facts they didn’t know about. So my gambit was actually something very different.

    Do you know how you get people to answer a question on the internet? You post something that’s wrong.* And it clearly worked: there’s more discussion in this thread than in several pages of this entire forum, maybe more than since it was created.

    *Ironically, in this case I posted what I think is the correct answer, but it disagrees with your ontology. If I wanted LessWrongers to comment on my post, I would need a different OP.


  • Next time it would be polite to answer the fucking question.

    Sorry sir:

    *I have to ask, on the matter of (2): why?* I think I answered this.

    What’s being signified when you point to “boomer forums”? That’s an “among friends” usage: you’re free to denigrate the boomer fora here. And then once again you don’t know yet if this is one of those “boomer forums”, or you wouldn’t have to ask.

    What people in their droves are now desperate to ask, I will ask too: which is it dummy? Take the stopper out of your speech hole and tell us how you really feel.

    I am not sure what you are asking here, sir. It’s well known to those in the AI industry that a profound change is upon us: GPT-4 shows generality within its domain, and robotics generality is likely also achievable with a variant of the technique. So the individuals unaware of this tend to be retired people with no survival need to learn new skills, like my boomer relatives. I apologize for using an ageist slur.


  • I agree completely. This is exactly where I break with Eliezer’s model. Yes, obviously an AI system that can self-improve can only do so until either (1) it’s the best algorithm that can run on the server farm, or (2) finding a better algorithm takes more compute than the improvement is worth.

    That’s not a god. Do this in an AI experiment now and it might crap out at double the starting performance or less, and not even be above the SOTA.
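
    To make that concrete, here is a toy model of that saturation (all the numbers are invented for illustration): each round of self-improvement yields a smaller gain and costs more compute to find, so performance stalls quickly.

    def self_improve(budget: float, perf: float = 1.0) -> float:
        gain, cost = 0.5, 1.0
        while cost <= budget:
            budget -= cost  # spend compute searching for a better algorithm
            perf += gain    # apply the improvement found
            gain *= 0.5     # each further improvement is smaller
            cost *= 2.0     # and more expensive to find
        return perf

    print(self_improve(budget=100.0))  # ~1.98: stalls below 2x the starting performance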

    But if robots can build robots, and current AI progress shows a way to do it (a foundation model trained on human tool manipulation), then…

    Genuinely asking: I don’t think it’s “religion” to suggest that a huge speedup in global GDP growth would be a dramatic event.


  • Currently the global economy doubles every 23 years or so. Robots building robots and robot-making equipment can probably double faster than that. It won’t be in a week or a month; energy requirements alone limit how fast it can happen.

    Suppose the doubling time is 5 years, just to put a number on it. Then the economy would be growing roughly 4.6 times faster than it does now (about 14% a year instead of 3%). This continues until the solar system runs out of matter.

    Is this a relevant event? Does it qualify as a singularity? Genuinely asking, how have you “priced in” this possibility in your world view?
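
    The arithmetic, as a quick Python sketch (the 5-year doubling time is the assumption above, not a measurement):

    from math import log

    # A doubling time T implies a continuous growth rate g = ln(2) / T.
    g_now = log(2) / 23   # ~3% per year: today's world economy
    g_fast = log(2) / 5   # ~14% per year: the assumed robots-building-robots regime

    print(round(g_now, 3), round(g_fast, 3), round(g_fast / g_now, 1))  # 0.03 0.139 4.6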


  • I wanted to know what you know that I don’t. If rationalists are all scammers, not genuinely trying to be less wrong in their view of reality (per the name ‘lesswrong’), then what’s your model of reality? What do you know? So far, unfortunately, I haven’t seen anything. Sneer club’s “reality model” seems to be “whatever the mainstream average person knows + 1 physicist”, and it exists to make fun of rationalists’ mistakes while ignoring, I assume, any successes, if there are any.

    Which is fine, I guess? Mainstream knowledge is probably usually correct. It’s just that I already know it; there’s nothing to be learned here.


  • This pattern shows up often when people criticize Tesla or SpaceX. And yeah, if you measure “current reality” against “the promises of their hype man/lead shitposter and internet troll”, absolutely. Tesla will probably never achieve full self-driving using anything like its current approach. But compare Tesla to other automakers, to most automakers that ever existed, or SpaceX to any rocket company since 1970, and there’s no comparison. If you’re going to compare the internet to the pre-internet era, compare it to the BBSes you accessed via modem, or to fax machines and libraries. No comparison.

    Similarly, you should compare GPT-4 and the next large model to be released, Gemini, against all AI software of all time. There’s no comparison.


  • take some time and read this

    I read it. I appreciated the point that human perception of current AI performance can scam us, though this is nothing new. People were fooled by ELIZA.

    It’s a weak argument, though. For causing an AI singularity, functional intelligence is the relevant parameter. Functional intelligence just means “if the machine is given a task, what is the probability it completes the task successfully” (see the sketch below). Theoretically even an infinite Chinese room can have functional intelligence: the machine just looks up the sequence of steps for any given task.

    People have benchmarked GPT-4, and it has general functional intelligence at tasks that can be done on a computer. You can also just pay $20 a month and try it. It’s below human level overall, I think, but still surprisingly strong given that it’s emergent behavior from predicting tokens.
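
    A minimal sketch of that definition in Python (the agent and task list are invented stand-ins, not any real benchmark API):

    import random

    class CoinFlipAgent:
        """Toy agent that succeeds at any task 60% of the time."""
        def attempt(self, task: str) -> bool:
            return random.random() < 0.6

    def functional_intelligence(agent, tasks: list[str], trials: int = 100) -> float:
        """Empirical probability the agent completes a task, averaged over the suite."""
        successes = sum(agent.attempt(t) for t in tasks for _ in range(trials))
        return successes / (len(tasks) * trials)

    print(functional_intelligence(CoinFlipAgent(), ["write an email", "fix a bug"]))  # ~0.6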


  • I appreciated this post because it never occurred to me that the “thumb might be on the scales” for the “rules for discourse” that seem to be the norm around the rat forums. I personally ignore most of it. However, the “ES” (epistemic status) rat phrase is simply saying, “I know we humans are biased observers; this is where I’m coming from.” If the topic were renewable energy and I were the ‘head of extraction at BP’, you could expect that whatever I have to say is probably biased against renewable energy.

    My other thought reading this was: what about the truth? Maybe the mainstream is correct about everything. “Sneer club” seems to be mostly mainstream opinions. That’s fine, I guess, but the mainstream is sometimes wrong about poorly examined issues or near-future events. The collective opinions of everyone don’t really price in things that are about to happen, even when they’re obvious to experts. For example, mainstream opinion on COVID usually lagged several weeks behind Zvi’s posts on LessWrong.

    Where I’m going with this: you can point out bad arguments on my part, but in the end, does truth matter? Are we here to score points off each other, or to share what we think reality is, or will very soon be?