funny how we went from the “trust machine” blockchain grift to the “could be an acceptable level of trustable machine” ai grift
Look, I can implement a CEO replacement pretty simply:
Roll d4:
- Give me more stock options.
- Stock buyback to make share price go up with artificial demand.
- Move jobs overseas to lower labor costs.
- Spout nonsense about how adhering to our core values will drive further value to the shareholders.
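The d4 "CEO replacement" above could be sketched in a few lines of Python (purely illustrative; the function and list names are made up for this comment):

```python
import random

# The four stock executive moves from the d4 table above.
CEO_MOVES = [
    "Give me more stock options.",
    "Stock buyback to make share price go up with artificial demand.",
    "Move jobs overseas to lower labor costs.",
    "Spout nonsense about how adhering to our core values "
    "will drive further value to the shareholders.",
]

def ceo_replacement(roll=None):
    """Return the executive decision for a d4 roll (1-4); rolls randomly if none given."""
    if roll is None:
        roll = random.randint(1, 4)
    return CEO_MOVES[roll - 1]
```

No training run required, and it fits in your pocket.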
This is more telling about the order of society than it is about the power of LLMs.
AI doesn’t have to get better, we just have to get collectively worse.
It’s a classic God complex. Whoever thinks they can create something better than a human might as well call themselves a God. That’s bullshit.
Really starting to get a bit sick of Ars Technica. They’re OK for general interest tech stuff, but their editorial line (and some of their commenter base) have been really credulous about AI vendors’ PR.
With Ars, the best option is to stick to general tech, science, public policy and tech culture. I say this as someone who has read them for ~20 years and has been subscribing for ~10+ years.
I’ve seen people criticize Eric Berger for being up Musk’s ass about SpaceX, though I’m simply not that passionate about space stuff anymore. And so far I don’t see them posting anything about the NIH freezeout, even though that surely affects a vast swath of their reader base. Seems odd.
Same. They’ve been a staple in my RSS feed list for so long (and they are one of the few sites where the RSS feed isn’t just the headlines). But recently I’ve been thinking several times already about throwing them out.
Ask an AI to make an image with a word on it, lemme know when it can actually spell the word right. Because right now it spits out the concept of a word, which really isn’t the same thing.
Which AI models, though? Your synthetic text extruder LLMs that can’t actually surpass humans at anything unless you train them specifically to do that, and which are kinda shite even then unless you look at it exactly the right way? Or that fabled brain simulation AI that doesn’t even exist?
Instead, he prefers to describe future AI systems as a “country of geniuses in a data center,” […] [and] that such systems would need to be “smarter than a Nobel Prize winner across most relevant fields.”
Ah, “future” AI systems. As in the ones we haven’t built yet, don’t know how to build, and don’t even know whether we can build them. But let’s just feed more shit into Habsburg GPT in the meantime, maybe one will magically pop out.
“Shortly after 2027” is a fun phrasing. Means “not before 2028”, but mentioning “2027” so it doesn’t seem so far away.
I interpret it as “please bro, keep the bubble going bro, just 3 more years bro, this time for real bro”
Shortly after Musk puts people on Mars.
I recall him promising 2 Starships on Mars by 2024.
4 by 2026, 8 by 2028, 16 after that, etc. (still amazed people don’t call him out on his exponential bullshit)
haha yeah I got very “we just need $29.95 billion bro just $29.95b trust me bro it’ll be so intelligent bro just watch” impression from that as well
Anthropic chief is a fucking idiot, and a liar.
“We’ve recognized that we’ve reached the point as a technological civilization where the idea, there’s huge abundance and huge economic value, but the idea that the way to distribute that value is for humans to produce economic labor, and this is where they feel their sense of self worth,” he added. “Once that idea gets invalidated, we’re all going to have to sit down and figure it out.”
Dario Amodei, the CEO of Anthropic, is one of the last people on earth that you’d want to have this conversation with.
And they’ll use it to make themselves even more wealthy. If something makes work easier, those who oversee it will simply demand more results.