I am extremely curious what the general take around here is on the Singularity

First, let me say that what broke me from the herd at lesswrong was specifically the calls for AI pauses: the idea that ‘rationalists’ are somehow so certain advanced AI will kill everyone in the future (pDoom = 100%!) that they should commit whatever violent act is needed to stop AI from being developed.

The flaw here is that there are 8 billion people alive right now, and we don’t actually know what the future holds. There are ways better AI could help the people living now, possibly saving their lives, and Eliezer Yudkowsky is essentially saying “fuck ’em”. That trade could only be worth it if you actually somehow knew trillions of people were going to exist, had a low future discount rate, and so on. This seems deeply flawed, and seems to be one of the sticking points here.

But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems like the problems current AI can’t solve - robotics, continuous learning, module reuse, the things needed to reach a general level of capability and let AI do many but not all human jobs - look solvable in the near future. I can link DeepMind papers on all of these, published in 2022 or 2023.

And if AI can be general and control robots, then since building robots is a task human technicians and other workers can do, robots could build more robots, and a form of Singularity is possible. Maybe not the breathless utopia of Ray Kurzweil, but a fuckton of robots.

So I was wondering what the people here generally think. There are “boomer” forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as being hypesters who collect 300k to edit javascript and drive Teslas*.

I also have noticed that the whole rationalist schtick of “what is your probability” seems like asking for “joint probabilities”, aka smoke a joint and give a probability.

Here are my questions:

  1. Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans work in?

  2. Do you consider it likely that, before 2040, those domains will include robotics?

  3. If AI systems can control robotics, do you believe a form of Singularity will happen? By this I mean hard exponential growth in the number of robots, scaling past all industry on Earth today by at least 1 order of magnitude, with off-planet mining soon to follow (see the sketch after this list). It does not necessarily mean anything else.

  4. Do you think a mass transition will happen before 2040, where most of the human jobs we have now are replaced by AI systems?

  5. Is AI system design an issue? I hate to say “alignment”, because I think that’s hopeless wankery by non-software-engineers, but given these will be robot-controlling advanced decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?
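
To make question 3 concrete, here is a minimal back-of-the-envelope sketch of what “hard exponential growth” would mean. The starting fleet size and the one-year self-replication time are illustrative assumptions, not claims:

```python
# toy model of question 3: if a robot workforce can replicate itself
# every `doubling_years`, growth is plain compound doubling, and each
# order-of-magnitude jump takes ~3.3 doublings. all constants here are
# illustrative assumptions, not predictions.

start_fleet = 1_000_000   # assumed initial robot count
doubling_years = 1.0      # assumed time for the fleet to replicate itself

for year in range(0, 11, 2):
    fleet = start_fleet * 2 ** (year / doubling_years)
    print(f"year {year:>2}: ~{fleet:,.0f} robots")
```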

*“epistemic status”: I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas…

  • BrickedKeyboard@awful.systems (OP) · 1 year ago

    Do you think the problems you outlined are solvable even in theory, or must humans slog along at the current pace for thousands of years to solve medicine?

    • skillissuer@discuss.tchncs.de · 1 year ago (edited)

      rapid automated drug development != solving medicine. while the first would be a good thing, the two are not remotely similar: the first is partially an engineering problem, the other requires much more theory building

      solving medicine would be more of a problem for biologists, and biology is a few orders of magnitude harder to simulate than chemistry. from my experience with computational chemists, this shit is hard, scales poorly (like n^7), and because of the large search space its predictive power is limited. if you try to get out of the wet lab despite all of this and simulate your way to utopia, you run into rapidly compounding garbage-in-garbage-out issues, and that’s in the fortunate case where you know what you are doing, that is, when you are sure you have the right protein at hand. this is the bigger problem, and it requires lots of advanced work from biologists. sometimes it’s an interaction between two proteins, sometimes you need some unusual cofactor (like cholesterol in the membrane region for MOR, which was discovered fairly recently), some proteins have unknown functions, there are orphan receptors, and some signalling pathways are little known. being wrong about your target is also far from hypothetical, and more likely than you think https://www.science.org/content/blog-post/how-antidepressants-work-last good luck automating any of that
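
      to give a feel for what n^7 scaling means in practice, here’s a toy sketch. the 1-second cost for a 10-atom system is an assumed baseline, not a real benchmark; the point is only that doubling the system size multiplies the cost by 2^7 = 128:

      ```python
      # toy illustration of n^7 scaling (roughly CCSD(T)-like cost growth).
      # the 1 s cost for a 10-atom system is an assumed baseline;
      # only the relative growth matters here.

      BASE_ATOMS = 10
      BASE_SECONDS = 1.0

      for atoms in (10, 20, 50, 100, 1000):
          cost = BASE_SECONDS * (atoms / BASE_ATOMS) ** 7
          years = cost / (86400 * 365)
          print(f"{atoms:>5} atoms: {cost:>16,.0f} s  (~{years:,.2f} years)")
      ```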

      that said, sane drug development has the benefit of providing some new toys for biologists, so that even if a given compound would shred a patient’s liver, it might be fine for some cell assay. some of the time, that makes their work easier

      as a chemist i sometimes say that in some cosmic sense chemistry is solved: when we want to go from point A to point B, we don’t beat the bush wildly; most of the time there’s some clear first guess that works, some of the time. this seems to be a controversial opinion, and even i became less sure of it sometime halfway through my phd, partially because i’ve found counterexamples

      there’s a reason why drug development takes years to decades

      i’m not saying that solving medicine will take thousands of years, whatever that even means. things are moving rapidly, but any advancement that makes them move even faster will come from biologists, not from you or any other AI bros

      • skillissuer@discuss.tchncs.de · 1 year ago

        going off on a tangent with this antidepressant thingy: if this paper holds up and that’s really how things work under the hood, we have a situation where for 40 years people were dead wrong about how antidepressants work, and only now do they know. turns out, all these toys we give to biologists are pretty far from perfect and actually hit more than intended; for example, all antidepressants in clinical use hit some other, now apparently unimportant, target + TrkB. this is more common than you think: some receptors, like sigma, catch just about everything you can throw at them, and there are also orphan receptors with no clear function that maybe catch something, and we have no idea. even such a simple compound as paracetamol works in a formerly unknown way; now we have a pretty good guess that the active agent is really a cannabinoid, and paracetamol is a prodrug to it. then there are very similar receptors that are just a little bit different but do completely different things, and sometimes you can even differentiate the same protein on the basis of whether it is bound to some other protein or not. shit’s complicated but we’re figuring it out

        catching this difference was only possible using tools - biological tools - that were almost unthinkable 20 years ago, and it is far outside that “just think about it really hard and you’ll know for sure” school of thought popular at LW, even if you offload the “thinking” part to chatgpt. my calculus prof used to warn: please don’t invent new mathematics during the exam; maybe some of you can catch up on and surpass 3000 years of mathematical development in a 2h session, but it’s humbler not to do that and to learn what was done in the past beforehand (or something to that effect. it was a decade ago)