Debunking the Tech Hype Cycle with Dan Olson

This interview is 2 months old, but I haven't seen it discussed so far, and given the news about Reddit's new cryptobeaniebabies, here it is. It's a critique of the tech hype cycle, covering LLMs, VR, the failure of the metaverse, NFTs, and cryptocurrencies, with a refreshing historical awareness of past attempts that failed, like Second Life and earlier VR games. Adam Conover's interviewee is Dan Olson: https://www.vice.com/en/article/m7v5qq/meet-the-guy-who-went-viral-on-youtube-for-explaining-how-nfts-crypto-are-a-poverty-trap

  • Peanut@sopuli.xyz · 1 year ago (edited)
    Replying despite your warning. I won't be offended if you don't read, and the frustration is fair.

    TL;DR: intelligence is weird, complex, and abstract. It is very difficult for us to comprehend the nature of an intelligence alien to our own; the human mind is a very specific combination of different intelligent functions.

    Funny you mention the technology not being an existential threat, as the two researchers I'd mentioned were recently paired at the Munk Debate, arguing against the "existential threat" narrative.

    Getting into the deep end of the topic: I think most people with a decent understanding of these systems would agree they are a form of "intelligence" alien to what most people would recognize as such.

    Technically a calculator can be seen as a very basic computational intelligence, although one very limited in capability or purpose outside of a greater system. LLMs mirror the stochastic word-generation element of our intelligence, along with a lot of weird, neat, amazing things that come with the particular type of intelligent system we've created, but they definitely lack much of what would be needed to mirror our own brand of intelligence. The function is so alien, yet so capable of representing information we are used to, that it is almost impossible not to anthropomorphize.
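    To make "stochastic word generation" concrete, here is a minimal sketch of the sampling step a language model performs for each word: the model assigns a score to every candidate token, the scores are converted to probabilities, and one token is drawn at random. Everything here (the function name, the toy scores, the temperature value) is illustrative, not taken from any real model.

    ```python
    import math
    import random

    def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
        """Draw one token from a softmax distribution over model scores.

        `logits` maps candidate tokens to raw scores; lower temperature
        sharpens the distribution, higher temperature flattens it.
        """
        scaled = {tok: score / temperature for tok, score in logits.items()}
        max_s = max(scaled.values())  # subtract the max for numerical stability
        exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
        total = sum(exps.values())
        probs = {tok: e / total for tok, e in exps.items()}
        # Weighted random draw: the same prompt can yield different outputs.
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # Illustrative scores for the word after "the cat sat on the"
    logits = {"mat": 4.0, "floor": 3.2, "moon": 1.1, "carburetor": -2.0}
    print(sample_next_token(logits))  # usually "mat" or "floor", occasionally not
    ```

    Run it a few times and the output varies; that randomness is the "stochastic" part, and it's a big reason the output reads as something more lively than a calculator's.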

    I'm also excited by the work currently being done to understand our own intelligence.

    But how would you represent a function as complex and abstract as qualia in a system like GPT? If qualia is an emergent experience developed through evolution, reliant on the particular structure and makeup of our brains, you would need more than the aforementioned system at any level of compute. While I don't think the underlying function would be impossible to emulate, I don't think it would come about by upscaling GPT models. We will develop other facsimiles, more aligned with the specific intentions we have for the tools these intelligences are designed and directed to be. I think we can sculpt some useful forms of intelligence out of upscaled and altered generative models, although Yann LeCun might disagree. Either way, there's still a fair way to go, and a lot of really neat developments to expect in the near future. (We just have to make sure the gains aren't hoarded like every other technological gain of the past half century.)