

it’s such a weird stretch, honestly. songs and conversations are no different from predictive text, it’s just more of it. expecting it to do logic after ingesting more text is like expecting a chicken to lay kinder eggs just because you feed it more.
it’s called a “beat”
sorry, i had to think for a while about this one.
I see what you mean, but is there any evidence that the models are biased in a way that affirms the owners’ world view, if I understood you correctly? I couldn’t find any.
so, this is an interesting point. we know they are biased because we’ve done fairness reviews, and we know that that bias is in line with the bias of silicon valley as a whole. whether that means the bias is a) intentional, b) coincidentally aligned or c) completely random is impossible to tell. and, frankly, not the interesting part. we know there is bias, and we know it aligns.
whether or not the e/acc people at openai actually share the worldview they espouse is also impossible to tell. it could also be just marketing.
I’m as sceptical of the capitalist fuckwits as you seem to be, but their power seems to me to be more political/capitalist through the idea of AGI than through the models themselves.
as long as the product is sold as it is today, i believe it reinforces that power.
Something that simple sequestration could solve. But that’s on the government and the voters.
ESL moment… i don’t really understand what you mean by sequestration here. like, limit who is allowed to use it? i feel like that power lies with the individual user, even though regulation definitely can help.
For me, the issue lies first in the overhyped marketing, which is par for the course for basically anything, unfortunately, as well as the fact that suddenly copyright infringement is fine if you make enough money off of it and lick powerful boots. If it was completely open for everyone, it would be a completely different story IMO.
agreed, which is why i, as an “abolish copyright law” person, am so annoyed to find myself siding with the industry in the ongoing cases against the ai companies. then again, we have “open weight” models that can still be used for the same thing, because the main problem was never copyright itself but the system it exists within.
Also, I do not think that the models were created with the goal of pushing a certain narrative. They lucked into popularity completely unexpectedly, and only then did the vultures start seeing the opportunity. So we will see how it evolves in that regard, but I don’t think this is what we’re seeing currently.
the purpose of a system is what it does. some people with a certain ideology made a thing capable of “expressing itself”, and by virtue of being made by those people it expresses itself in a similar way. whether that is intentional doesn’t really factor into it: as long as the people selling it don’t see a problem with it, it will continue to express itself in that fashion. this connects back to my first point; we know the models have built-in bias, and whether that bias was put there deliberately or crept in as a consequence of ingesting biased data (for which the same rule holds) doesn’t matter. it’s bias all the way down, and not intentionally working against that bias means the status quo will be reinforced.
yup, i was wrong. stats for totals rather than active are hard to find, annoyingly.
the problem with entirely separating the two is that progress and technology can be made with an ideology in mind.
the current wave of language model development is spearheaded by what basically amounts to a cult of tech-priests, going all-in on reaching AGI as fast as possible because they’re fully bought into roko’s basilisk. if your product, built to collect and present information in context, is created by people who want that information to cater to their world view, do you really think the result is going to be an unbiased view of the world? sure, the blueprint for how to make an llm or diffusion model is (probably) unbiased, but when you combine it with data?
as an example, did you know that all the big diffusion models (stable, flux, illustrious etc) use the same version of CLIP, the part responsible for mapping text to features? and that the CLIP part is tailored for and trained on medical information? how might that affect the output? sure you can train your own CLIP, but will you? will anyone?
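(for illustration only: mechanically, swapping the text encoder in a diffusers-style pipeline is only a few lines; the checkpoint id below is hypothetical, and the hard part is training a replacement worth swapping in.)

# a minimal sketch of overriding the text encoder in a stable diffusion
# pipeline. "your-org/your-finetuned-clip" is a hypothetical checkpoint id.
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import StableDiffusionPipeline

text_encoder = CLIPTextModel.from_pretrained("your-org/your-finetuned-clip")
tokenizer = CLIPTokenizer.from_pretrained("your-org/your-finetuned-clip")

# diffusers lets you override pipeline components at load time
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example base model
    text_encoder=text_encoder,
    tokenizer=tokenizer,
)
image = pipe("a bicycle leaning against a brick wall").images[0]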
there’s a reason most nhl players are not from north america.
no, the linked table shows how python also returns the first non-falsey result of an a or b expression rather than just giving a boolean. it’s useful for initialising optional reference args:

def foo(a: list = None):
    a = a or []

works with and as well.
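a quick demo of what those expressions evaluate to (values picked arbitrarily):

print(None or [])   # [] - or returns the last operand if none are truthy
print([1] or [2])   # [1] - first truthy operand wins, [2] never evaluated
print([1] and [2])  # [2] - and returns the last operand it had to evaluate
print(0 and [2])    # 0 - short-circuits on the first falsey operand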
ish. if your boot priority is set to windows first and it decides it needs to repair the bootloader it can wipe other oses from the boot order.
ok, don’t care, not my country. you’re singularly responsible for turning this thread into a you-fest. you just need to not reply.
no matter your stance on the morality of language models, it’s just plain rude to use a machine to generate text meant for people. i would never do that. if i didn’t take the time to write it, why would you take the time to read it?
the output gear in the first picture is a sticker.
on the one hand, this is an ai horde-based bot. the ai horde is just a bunch of users who are letting you run models on their personal machines, which means this is not “big ai” and doesn’t use up massive amounts of resources. it’s basically the “best” way of running stable diffusion at small to medium scale.
on the other, this is still using “mainstream” models like flux, which has been trained on copyrighted works without consent and used shitloads of energy to train. unfortunately models trained on only freely available data just can’t compete.
lemmy is majority anti-ai, but db0 is a big pro-local-ai hub. i don’t think they’re pro-big-ai. so what we’re getting here is a clash between people who feel like any use of ai is immoral due to the inherent infringement and the energy cost, and people who feel like copyright is a broken system anyway and are trying to tackle the energy thing themselves.
it’s a pretty thorny issue with both sides making valid points, and depending on your background you may very well hold all the viewpoints of both sides at the same time.
for a power user, yeah, but this is my grandmother we’re talking about. she only used the program once every six months, when her camera ran out of space and she emptied it onto the computer.
well, i have no evidence of this. however: looking at how auto-generated subtitles are served on youtube right now, they arrive word-by-word from the server, pick up filler words like “uh”, and sometimes pause for several seconds in the middle of sentences. and they’re not sent over a websocket, which means multiple requests over the course of a video. more requests means the server works harder, because it can’t just stream the text the way it streams the video. the only reason to do that, other than incompetence (which would surely have been corrected by now; it’s been like this for years), is if the web backend has to wait for the next word to be generated.
i would love to actually know what’s going on if anyone has any insight.
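for what it’s worth, here’s roughly how the two delivery styles differ; the url is a stand-in, not youtube’s actual caption endpoint:

# hypothetical sketch: one streamed transfer vs per-segment polling.
# CAPTION_URL is a placeholder; youtube's real endpoint and params differ.
import time
import requests

CAPTION_URL = "https://example.com/timedtext"  # hypothetical

# streaming: one request, the server pushes text as soon as it's ready
with requests.get(CAPTION_URL, stream=True, timeout=30) as resp:
    for chunk in resp.iter_content(chunk_size=None):
        print("chunk:", len(chunk), "bytes")

# polling: a fresh request per segment, paying connection and header
# overhead every time - and stalling whenever the next word isn't ready
for seg in range(10):
    t0 = time.time()
    resp = requests.get(CAPTION_URL, params={"seg": seg}, timeout=30)
    print(f"segment {seg}: {time.time() - t0:.3f}s, {len(resp.content)} bytes")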
it’s also not a linux-first phone. idk if they even test that.
explain. edited with explanation. i’ve seen the technology connections video, thanks.
my comment is still about the actual post above, and i was specifically thinking about auto-generated subs rather than, say, movies. apparently that’s not obvious.
that’s me. op is the big image up top.
what you’re describing is known as “expert agencies”. non-elected experts work together to suggest courses of action for the government on their assigned topic. since they are not elected, they cannot make decisions, but they can draft bills for parliament to vote on. they also do studies at the request of other branches.
you may have heard of some of these agencies, like the FDA, EPA, CDC…