• 0 Posts
  • 4.06K Comments
Joined 3 years ago
Cake day: June 16th, 2023

  • Dating is fine, but if they’re going for a long term commitment, it may be rough to be in your 60s with a partner in their 80s. They have to understand, if they’re theoretically on that path, that their relationship will transform into elder care at some point. Also, before that, the older one will stop keeping up sexually.

    If both see it as a short term fling, it’s probably fine. The 46 year old can probably keep up with a 25 year old in the ways that matter, and may have enough money for some interesting experiences to share.


  • Unfortunately, LLMs tend to be really bad at this. They spit out the kind of code you’d expect from a beginner programmer who leans heavily on Stack Overflow searches.

    In one example I saw, it did some very expensive processing before the check that would determine whether that processing was even applicable, and this was a vibe coded project intended to be an “accelerator”. To the vibe coder’s dismay, even when it “worked”, it was noticeably slower than the thing it was supposed to speed up.

    In pursuit of autonomous development, they tend to stop as soon as the thing barely passes the tests at all. Even after doing the work to give it tests specific enough to let it retry until passing, you can spend thousands of dollars on retry after retry and be lucky to get one barely working pass in the end. Having it iterate for optimization is going to be far more expensive still, especially since it is thoughtlessly trying stuff without any theory of why a change would or wouldn’t be faster.
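    The ordering mistake described above can be sketched like this; all names are hypothetical, a minimal illustration of the anti-pattern rather than the actual project:

```python
import time

def expensive_transform(record):
    # Stand-in for the costly processing (hypothetical).
    time.sleep(0.01)
    return {**record, "transformed": True}

def process_vibe_coded(record):
    # Anti-pattern: pay the cost on every record...
    transformed = expensive_transform(record)
    # ...then check whether the work was even applicable.
    if record.get("needs_transform"):
        return transformed
    return record

def process_fixed(record):
    # Check applicability first; skip the cost when it doesn't apply.
    if record.get("needs_transform"):
        return expensive_transform(record)
    return record
```

    Both versions return the same results; the first just burns the full transform cost even on records that never needed it.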


  • I’ll agree with the assessment: moderately useful, depending on context, but “vibe” coding is a recipe for failure.

    It also tends to neglect identifying a library, instead embedding the code directly, which for one makes me uncomfortable about losing out on external maintenance, and for another goes too far toward lifting someone’s work without attribution.

    So if I want to make a CLI utility, sure, I might prompt up the argv parsing, since it’s tedious and obvious and not going to be knocking off a viable off-the-shelf option. But the tech has to be applied very carefully.
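    For what it’s worth, the kind of tedious-but-obvious argv boilerplate I mean looks something like this (the specific flags here are made up for illustration):

```python
import argparse

def build_parser():
    # Routine flag plumbing: obvious, tedious, and low-risk to generate.
    parser = argparse.ArgumentParser(description="Example CLI utility")
    parser.add_argument("input", help="input file to process")
    parser.add_argument("-o", "--output", default="-",
                        help="output path, '-' for stdout")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="enable chatty logging")
    return parser

args = build_parser().parse_args(["data.txt", "-v"])
print(args.input, args.output, args.verbose)
```

    Nothing here is novel or differentiating, which is exactly why generating it carries little risk compared to the core logic.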


  • It depends on the situation.

    If the situation is that you are playing in a very well trodden area and you can be flexible in accepting the LLM product even when it doesn’t fit what you had in mind, it can likely do “ok”. “Make me a Super Mario Brothers style game”: the output will not be what you would probably have wanted, and further it will be a soul crushingly pointless “game” compared to just playing an existing platformer, but it will crank out something vaguely like you would have guessed. These are the sorts of projects I have generally avoided, because they usually reinvent the wheel for pointless reasons and it’s very unrewarding for me; however, it’s fairly common for big businesses to make stupid internal applications like this. Very depressingly, I expect Steam to be flooded with AI slop just like it has been flooded with stock asset slop.

    If you are making something more novel and/or cannot tolerate deviations from a very specific vision, the LLM becomes that much more pointless.


  • I don’t think a tractor is the right form, but I could imagine a smaller electric drone for mechanical weed control instead of any herbicide. Something small and effective for that purpose but too small and weak to be broadly dangerous.

    Like how little robot lawn mowers are adequate to mow but safe for people, because they weigh less than 20 pounds and have little 1 inch blades that swivel freely and can’t cut anything tougher than grass.

    Tractors are big and potentially dangerous, but for a lot of their tasks, they are only big to have a single human do a lot in a little time. Having dozens of 24/7 drones could do some of those tasks with very meager resources. Tiny equipment, slower movement.

    But you have to live within the capabilities of AI techniques, which are selectively useful: machine vision for flagging likely undesirable plants, maybe; operating equipment fully autonomously, maybe less so.



  • Yeah, that’s the thing where we get into what I call “superstitious prompting”, like when people say “And make sure you don’t make mistakes” or “Include only factual data without hallucinations” and think it works, until it doesn’t.

    It will at least reply in a way that is narratively consistent with being told to do something or other, and will emit words like “Ok, I understand and will promise to only provide fact based feedback”, but it doesn’t “understand” at all. The trick appears to work surprisingly often, because being narratively consistent with the prompt frequently looks exactly like following instructions.

    People get all the more frustrated when their superstitious prompt fails: they told the LLM to do something, or specifically not to do something, it even promised to do exactly as directed, and then it just proceeds to be a normal LLM anyway.


  • Yeah, it’s hard to grasp why online commenters who are fans are fans, but in my real world interactions, I get a better feel for it.

    The people who are all in on the AI, slop and all, are the people I really found annoying to begin with. They tend to think everyone is desperate to hear what they say, that verbosity is king, and they generally don’t really know what they are talking about. They are the sort who would spend a ton of time fretting over some ‘design document’ that, when finally shared, contains absolutely nothing actionable, despite 10 pages’ worth of gorp. Any specific outcome has nothing to do with the document, but they’ll take credit for “thought leadership” if it works, and blame the “inadequate team” if it fails. They are used to, and cherish, verbose yes men, and are accustomed to making vague statements and getting results they can’t judge anyway.

    Or, on the other end, the people who endlessly fell for clickbait, consumers of slop before AI was really a factor in it: the people forwarding those chain letters back in the day.

    The people I have held long respect for tend to range between “too annoying to even deal with” to “it’s a little useful in key circumstances”. I have yet to personally meet someone I had long respected who went all in on AI.

    The insidious thing is that I’m pretty sure those groups both outnumber the skeptics and tend to have more power. Those folks who “thought lead” without actionable direction or even a vague understanding of how the work happens? Those are the ones who got promoted, with the good ones largely overlooked, mainly because at a certain point promotion is more about “professional networking” and making the executives feel good about themselves than it is about good work. Now we are in a position where the people who never “got” the work are telling themselves that LLMs can replace those annoying “nerds” who have leverage over them, and if there’s one thing they can’t stand, it’s people they don’t understand having anything that looks like leverage over them.



  • Sadly, I can’t tell if this is a joke or not, because I have met so many people who seriously believe things like this work. They are the ones who eventually get the most pissed when the LLM messes up on them, because they got the LLM to “promise” not to do the specific thing it ends up doing.

    They generally evolve their superstitious ritual into something else that will eventually fail, like changing the wording, or making the LLM explicitly include a phrase promising quality. They also believe the LLM when it “apologizes”, and think that indicates self reflection and learning. Very few are prepared to accept that the LLM can go off the rails at unpredictable times under unpredictable circumstances, and that its output has to be watched like a hawk unless the outcome really doesn’t matter.


  • It wasn’t going to happen, but if it somehow were going to happen, that possibility went out the window when they killed the kids, after specifically cutting measures intended to prevent precisely that sort of event.

    If they had managed to be super surgical, then maybe they could have gotten some popular support by undermining the regime.

    Like, if a foreign power killed Trump, Hegseth, and Miller, a large chunk of the populace wouldn’t be too torn up over it. Though even then the ride or die MAGA crowd would be apoplectic, and there would be a huge risk of making the US more aggressive overall.




  • I say it’s generally a problem of long narratives, but some genres like comedy can get a pass since they don’t have to rely on growth and progression.

    To the extent a story needs to develop, running a long time is likely to doom it.

    Running a few books or a handful of seasons can work, but if a story has to evolve over decades…



  • Haven’t gotten around to One Piece (that episode count is… daunting), but I think I really know a series is done as soon as it has a ‘tournament arc’: give up all pretense and just have them fight for the sake of fighting.

    And then there’s Bleach, where, oh look, he has a somewhat cool sword, oh, it has a cooler form, oh, there’s an even cooler form, oh, now he has mask powers, but limited, oh wait, we were lying, that wasn’t his real cool sword form… Ugh…



  • I think the real problem is trying to keep a story going too long, and the need to escalate everything constantly serves to ultimately undermine how that progress feels.

    The stories tend to be repetitive: a villain gets a new MacGuffin, the hero has to gain some new capability to overcome it, only for the next villain to have an even bigger MacGuffin; rinse and repeat, with each cycle portrayed as some impossibly large leap over the last. To keep characters going, they time jump, they get cloned, they come back from the dead, they cross over from some alternate universe.

    Basically, most genres of fiction risk overstaying their welcome if you try to make them go on for a long time.


  • jj4211@lemmy.world to Microblog Memes@lemmy.world · Imposter syndrome

    He never said anything about his wife, or whether the $100k is combined household income, or whether he lives in a really nice part of town.

    As long as we are inventing circumstances, he could have an elderly parent who needs a great deal of life care and medical care. Or they are otherwise trying to take care of poor relatives. Lots of circumstances can evaporate $100k despite very reasonable lifestyle choices.

    Coincidentally, I also have some folks in the family living on far less. Their strategy is inherited housing. At least in my rural family, if not for inherited land they would be screwed. Those who didn’t directly inherit land find a vaguely viable trailer and have it towed onto a cousin’s land to plug into their power, well, and septic. For one of the poorer cousins, the strategy for child rearing was “dump the kids on the wealthy cousin” (thankfully not me).

    I just don’t think lamenting that a reasonably modest house for a family and independent living burns through $100k is “performative bullshit”.