@okwhateverdude - eviltoast
  • 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: August 1st, 2023

  • okwhateverdude@lemmy.world to Ask Lemmy@lemmy.world · Is "retard" a slur?
    English · 9 upvotes, 5 downvotes · 2 days ago

    I think “slur” also requires a component of direct offense for it to mean anything. I don’t think it is valid to be offended for somebody else if that somebody else isn’t actually offended. If I make up a slur on the spot denigrating some aspect of your person that you do not find offensive (e.g. Flumplenook, for a person who’s a bit clumsy), is it really a slur?

    So if you call someone retarded, and they do not have the mental faculties to be offended, is it really a slur?

    For slurs to have any meaning, any power, they need to be understood and internalized as offensive.

  • I do not agree with your assessment of “reality.” Only one side is actively against bodily autonomy (especially for women) and wants to look at kids’ genitals and police which toilet a human shits in. Only one side courts the religious ethnostate fascists, whose whole stance is in-group vs. out-group (aka demonization). Pointing these out isn’t demonization; these are facts. If someone likes politicians from that side, the odds are good that the various negative labels they don’t like do indeed apply to them. The late Dr. Bob Altemeyer studied right-wing authoritarianism (both the leaders and the followers) and even developed a rubric that predicts right-wing authoritarian followers. They are dangerous because they eschew reason for vibes and are extremely susceptible to demagogues.

    Give this a read and see if it changes your opinion on “both sides”

    https://theauthoritarians.org/

  • okwhateverdude@lemmy.world to ADHD@lemmy.world · Evening focus
    English · 4 upvotes · 19 days ago

    During my early adult years, when I first moved out on my own and it was just me, I flipped my schedule to sleep from 1700 until whenever I woke up. No alarms. Could sleep in every day because the result was “Oh no, still have many hours until work.” Would work 0700 until 1600. It was amazing. I was so awake and focused on my own stuff. Could practice piano, write poetry, work on open source code during those wee hours. Early morning work was also very productive. Afternoon work time was meh, but that was okay because of how the work was structured. Would bike into the office since it was only about 8 km (5 miles) via residential streets. Did my grocery shopping at a 24-hour market. The laundry room at my apartment complex was always open. It was such a magical time. Lonely, but I would see friends late at night as their shifts ended or the evening was just peaking. Plus all my internet friends on IRC from all over.


  • okwhateverdude@lemmy.world to ADHD@lemmy.world · Evening focus
    English · 8 upvotes · 19 days ago

    What I’ve done before, when I feel like I don’t have time for the stuff I want to do, is shift my hours a little to give myself more time before work. If you take your meds first thing in the morning, you get to enjoy all of that focus for yourself first, and then work gets the dregs (as is proper).

  • This is a solvable problem. Just make a LoRA of the Alice character. For modifications to the character, you might also need to make more LoRAs, but again, totally doable. Then at runtime you just swap LoRAs in and out when you need to generate (rough sketch below).
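
    Purely as illustration, a minimal sketch of that LoRA shuffling, assuming a Hugging Face diffusers pipeline with the PEFT backend; the base model choice, file paths, and adapter names ("alice_base", "alice_longhair") are made-up placeholders, not anything from a real project:

    ```python
    # Hypothetical sketch: swapping character LoRAs at generation time with
    # Hugging Face diffusers (PEFT backend). The base model, file paths, and
    # adapter names ("alice_base", "alice_longhair") are placeholders.
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    # One LoRA per character variant, each registered under its own adapter name.
    pipe.load_lora_weights("loras/alice_base.safetensors", adapter_name="alice_base")
    pipe.load_lora_weights("loras/alice_longhair.safetensors", adapter_name="alice_longhair")

    # At runtime, activate whichever variant the scene calls for, then generate.
    pipe.set_adapters(["alice_longhair"], adapter_weights=[0.9])
    image = pipe("alice reading in a rainy cafe", num_inference_steps=30).images[0]
    image.save("alice_cafe.png")
    ```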

    You’re correct that it will struggle to give you exactly what you want, because you need to have some “machine sympathy.” If you think in smaller steps and get the machine to do those smaller, more doable steps, you can eventually accomplish the overall goal. It is the difference between asking a model to write a story and asking it to first generate characters, a scenario, and a plot, and then using that as context to write just a small part of the story. The first story will be bland and incoherent after a while. The second, through better context control, will weave you a pretty consistent story (sketched below).
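
    A rough sketch of that decomposition, assuming an OpenAI-compatible chat API; the model name, the prompts, and the ask() helper are invented for illustration:

    ```python
    # Hypothetical sketch of the "smaller steps" approach, not a specific product.
    from openai import OpenAI

    client = OpenAI()  # or point base_url at a local inference server

    def ask(prompt: str, context: str = "") -> str:
        """One small, mechanical step: answer a single focused prompt."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You are a careful fiction-writing assistant."},
                {"role": "user", "content": (context + "\n\n" + prompt).strip()},
            ],
            temperature=0.7,
        )
        return resp.choices[0].message.content

    # Build up the context piece by piece instead of asking for a whole story.
    characters = ask("Invent two characters for a short mystery: name, age, one quirk each.")
    scenario = ask("Describe the setting and inciting incident in about 100 words.", characters)
    plot = ask("Outline the plot in five numbered beats.", characters + "\n" + scenario)

    # Only now ask for prose, and only a small slice, with everything else as context.
    scene_one = ask("Write scene 1 (~300 words), covering only beat 1.",
                    characters + "\n" + scenario + "\n" + plot)
    print(scene_one)
    ```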

    These models are not magic (even though it feels like it). That they follow instructions at all is amazing, but they simply will not grasp the nuance of the overall picture and accomplish it unaided. If you think of them as natural language processors capable of simple, mechanical tasks and drive them mechanistically, you’ll get much better results.


  • Maybe the problem is that I’m too close to the specific problem. AI tooling might be better for open-ended or free-association “why not try glue on pizza” type discussions, but when you already know “send exactly 4-7-Q-unicorn emoji in this field or the transaction is converted from USD to KPW” having to coax the machine to come to that conclusion 100% of the time is harder than just doing it yourself.

    I, too, work in fintech. I agree with this analysis. That said, we currently have a large mishmash of regexes doing classification, and they aren’t bulletproof. It would be useful to look at something like a fine-tuned BERT model for classifying the transactions that pass through the regex net without getting classified. And the PoC could be as simple as context-stuffing some examples into a few-shot prompt for an LLM with a constrained grammar (just the classification, plz). Our finance generalists basically have to do this same process by hand, and it would be nice to augment their productivity with a hint: “The computer thinks it might be this kind of transaction.” Rough sketch of the PoC idea below.
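
    To make the shape of that PoC concrete, a hedged sketch, again assuming an OpenAI-compatible API; the label set, example transaction descriptions, and model name are all invented:

    ```python
    # Hypothetical sketch of the PoC: few-shot prompt an LLM to label transactions
    # the regex net missed, with the output checked against a fixed label set.
    # The labels, example descriptions, and model name are all made up.
    from openai import OpenAI

    LABELS = {"payroll", "vendor_payment", "refund", "bank_fee", "unknown"}

    FEW_SHOT = """Classify the transaction into exactly one label from:
    payroll, vendor_payment, refund, bank_fee, unknown.

    Description: "ACH CREDIT GUSTO PAYROLL 0423" -> payroll
    Description: "WIRE OUT ACME SUPPLIES INV-2211" -> vendor_payment
    Description: "MONTHLY MAINTENANCE CHARGE" -> bank_fee

    """

    client = OpenAI()

    def classify(description: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": FEW_SHOT + f'Description: "{description}" ->'}],
            temperature=0,
            max_tokens=5,
        )
        label = resp.choices[0].message.content.strip().lower()
        # Poor man's constrained grammar: anything outside the label set becomes
        # "unknown" so the finance folks only ever see a valid hint.
        return label if label in LABELS else "unknown"

    print(classify("STRIPE TRANSFER REVERSAL 8841"))  # hint might be "refund"
    ```

    A serving stack with real grammar-constrained decoding (or the fine-tuned BERT route) would replace the string check at the end, but the few-shot hint would look roughly like this.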