@0x01 - eviltoast
  • 6 Posts
  • 403 Comments
Joined 2 years ago
Cake day: July 10th, 2023


  • 0x01@lemmy.ml to Fuck The USA@lemmy.ca · The USA you knew is now dead. · 1 day ago

    Now is the time to delete your social media and connect on private communication channels with like-minded people. Use trusted VPNs to connect to sites like Lemmy. The next steps are crystal clear:

    1. It’s now illegal to publicly shit-talk the administration, fuck the First Amendment
    2. The massive budget assigned to ICE gets used to identify users who don’t fall in line
    3. Anyone unpatriotic enough to publicly disavow the Trump regime has their citizenship revoked
    4. ICE concentration camps for those newly non-citizen “immigrants”
    5. Everyone else internalizes the authoritarianism and submits on their own

    When nobody speaks out, it will feel as though there is no hope and you’ll feel alone. Keep safe communication channels open now so you don’t have to fear later.

    VPNs may be made illegal, but there are other ways to anonymize communication.
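
    For instance, a minimal sketch of one such alternative: routing traffic through Tor’s SOCKS proxy rather than a commercial VPN. This assumes a local Tor daemon on its default port 9050 and the requests library installed with SOCKS support (pip install requests[socks]); the target URL is just an example.

    ```python
    # Sketch: send HTTP traffic through a local Tor SOCKS proxy instead of a VPN.
    # Assumes the Tor daemon is running locally on its default SOCKS port (9050).
    import requests

    TOR_PROXY = "socks5h://127.0.0.1:9050"  # socks5h also resolves DNS through Tor

    session = requests.Session()
    session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

    # The remote site only ever sees the Tor exit node's IP address, not yours.
    resp = session.get("https://check.torproject.org/api/ip")
    print(resp.json())  # e.g. {"IsTor": true, "IP": "<exit node address>"}
    ```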

  • I am not an expert in your field, so you’ll know better about the domain-specific ramifications of using LLMs for the tasks you’re asking about.

    That said, one piece of my post that I do think is relevant and important for both your domain and others is the idempotency and privacy of local models.

    Idempotent here means that the model is not liquid (its weights don’t change from one input to the next) and that the entropy is wranglable: the same input reliably produces the same output.
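
    As a concrete illustration, here is a minimal sketch of that kind of fixed-weight, controlled-entropy usage with the Hugging Face transformers library; the checkpoint name (distilgpt2) and the prompt are just placeholders.

    ```python
    # Sketch: "idempotent" local inference, i.e. fixed weights plus controlled entropy,
    # so the same prompt always yields the same completion.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "distilgpt2"  # example small local checkpoint
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    model.eval()  # inference only: the weights never change between inputs

    torch.manual_seed(0)  # pin any residual randomness

    inputs = tokenizer("Local models are private because", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=30, do_sample=False)  # greedy decoding, no sampling entropy
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```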

    Local models are by their very nature not sending your data anywhere; they run your input through your GPU, much like many other programs on your computer. That needs to be qualified: any non-airgapped computer’s information is likely to be leaked at some point in its lifetime, so adding classified information to any system is foolish and short-sighted.
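
    If you want to enforce that locality rather than just trust it, here is a small sketch of what that can look like with transformers’ offline flags (again, distilgpt2 is only an example checkpoint, assumed to already be in the local cache):

    ```python
    # Sketch: forcing a local model to stay local. With these flags set, the libraries
    # refuse to touch the network; inference runs entirely on your own machine.
    import os
    os.environ["HF_HUB_OFFLINE"] = "1"        # no hub downloads
    os.environ["TRANSFORMERS_OFFLINE"] = "1"  # use cached files only

    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="distilgpt2",  # example checkpoint already present in the local cache
        model_kwargs={"local_files_only": True},  # error out rather than phone home
    )
    print(generator("Nothing in this prompt leaves my machine:", max_new_tokens=20)[0]["generated_text"])
    ```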

    If you use ChatGPT for collating private, especially classified, information: OpenAI have explicitly stated that they use ChatGPT prompts for further training, so yes, that information will absolutely leak into future models, and you should expect it to leak in a way that is traceable to you personally.

    To summarize, using local LLMs is somewhat better for tasks like the ones you’re asking about; while the information won’t be shared with any AI company, that does not guarantee safety from traditional snooping. Using remote commercial LLMs, though? Your fears are absolutely justified: anyone inputting classified information into commercial systems like ChatGPT will both leak that information and taint future models with it. That taint isn’t even limited to the one company/model; the practice of distillation means derivative models will also carry that privileged information.

    TL;DR: yes, but less so for local AI models.


  • Obviously this is the fuckai community, so you’ll get lots of agreement here.

    I’m coming from all communities and don’t have the same hate for AI. I’m a professional software dev, and have been for decades.

    I’m of two minds here. On the one hand, you absolutely need to know the fundamentals. You must know how the technology works and what to do when things go wrong, or you’re useless on the job. On the other hand, I don’t demand that the people who work for me use x86 assembly and avoid Stack Overflow; they should use whatever language/mechanism produces the best code in the allotted time. I feel similarly about AI, especially local models that can be used in an idempotent-ish way. It gets a little spooky to rely on companies like Anthropic or OpenAI because they could just straight up turn off the faucet one day.

    Those who use AI to sidestep their own education are doing themselves a disservice, but we can’t put our heads in the sand and pretend the technology doesn’t exist; it will be used professionally going forward regardless of anyone’s feelings.